Columns: id (int64), url (string), text (string), source (string), categories (list), token_count (int64), subcategories (list)
15,968,804
https://en.wikipedia.org/wiki/Section%20modulus
In solid mechanics and structural engineering, section modulus is a geometric property of a given cross-section used in the design of beams or flexural members. Other geometric properties used in design include: area for tension and shear, radius of gyration for compression, and second moment of area and polar second moment of area for stiffness. Any relationship between these properties is highly dependent on the shape in question. There are two types of section modulus, elastic and plastic: The elastic section modulus is used to calculate a cross-section's resistance to bending within the elastic range, where stress and strain are proportional. The plastic section modulus is used to calculate a cross-section's capacity to resist bending after yielding has occurred across the entire section. It is used for determining the plastic, or full moment, strength and is larger than the elastic section modulus, reflecting the section's strength beyond the elastic range. Equations for the section moduli of common shapes are given below. The section moduli for various profiles are often available as numerical values in tables that list the properties of standard structural shapes. Note: both the elastic and plastic section moduli are different from the first moment of area, which is used to determine how shear forces are distributed.

Notation
Different codes use varying notations for the elastic and plastic section modulus, as illustrated in the table below. The North American notation (S for the elastic and Z for the plastic modulus) is used in this article.

Elastic section modulus
The elastic section modulus is used for general design. It is applicable up to the yield point for most metals and other common materials. It is defined as

    S = I / c

where I is the second moment of area (or area moment of inertia, not to be confused with moment of inertia) and c is the distance from the neutral axis to the most extreme fibre. It is used to determine the yield moment strength of a section,

    My = S σy

where σy is the yield strength of the material. The table below shows formulas for the elastic section modulus for various shapes.

Plastic section modulus
The plastic section modulus is used for materials and structures where limited plastic deformation is acceptable. It represents the section's capacity to resist bending once the material has yielded and entered the plastic range. It is used to determine the plastic, or full, moment strength of a section,

    Mp = Z σy

where σy is the yield strength of the material. Engineers often compare the plastic moment strength against factored applied moments to ensure that the structure can safely endure the required loads without significant or unacceptable permanent deformation. This is an integral part of the limit state design method.

The plastic section modulus depends on the location of the plastic neutral axis (PNA). The PNA is defined as the axis that splits the cross section such that the compression force from the area in compression equals the tension force from the area in tension. For sections with constant, equal compressive and tensile yield strength, the area above and below the PNA will be equal. These areas may differ in composite sections, which have differing material properties, resulting in unequal contributions to the plastic section modulus. The plastic section modulus is calculated as the sum of the areas of the cross section on either side of the PNA, each multiplied by the distance from their respective local centroids to the PNA:

    Z = AC yC + AT yT

where AC is the area in compression, AT is the area in tension, and yC, yT are the distances from the PNA to their centroids.
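To make the formulas concrete, here is a minimal illustrative Python sketch (not part of the original article; the function name and dimensions are chosen arbitrarily) that evaluates both moduli for a solid rectangle of width b and depth h, for which I = b·h³/12 and the PNA coincides with the centroidal axis:

    def rectangle_section_moduli(b, h):
        """Elastic (S) and plastic (Z) section moduli of a solid b x h rectangle."""
        I = b * h**3 / 12.0   # second moment of area about the centroidal axis
        c = h / 2.0           # distance from neutral axis to extreme fibre
        S = I / c             # elastic section modulus, equals b*h**2/6
        # Z = AC*yC + AT*yT: each half-area b*h/2 has its centroid h/4 from the PNA
        Z = 2.0 * (b * h / 2.0) * (h / 4.0)   # equals b*h**2/4
        return S, Z

    S, Z = rectangle_section_moduli(b=100.0, h=200.0)  # e.g. mm
    print(S, Z, Z / S)  # prints 666666.7, 1000000.0, 1.5

The printed ratio Z/S is the shape factor discussed next.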
The plastic section modulus and the elastic section modulus can be related by a shape factor k:

    k = Z / S

This is an indication of a section's capacity beyond the yield strength of the material. The shape factor for a rectangular section is 1.5. The table below shows formulas for the plastic section modulus for various shapes.

Use in structural engineering
In structural engineering, the choice between utilizing the elastic or plastic (full moment) strength of a section is determined by the specific application. Engineers follow relevant codes that dictate whether an elastic or plastic design approach is appropriate, which in turn informs the use of either the elastic or plastic section modulus. While a detailed examination of all relevant codes is beyond the scope of this article, the following observations are noteworthy:
- When assessing the strength of long, slender beams, it is essential to evaluate their capacity to resist lateral torsional buckling in addition to determining their moment capacity based on the section modulus.
- Although T-sections may not be the most efficient choice for resisting bending, they are sometimes selected for their architectural appeal. In such cases, it is crucial to carefully assess their capacity to resist lateral torsional buckling.
- While standard uniform cross-section beams are often used, they may not be optimally utilized when subjected to load moments that vary along their length. For large beams with predictable loading conditions, strategically adjusting the section modulus along the length can significantly enhance efficiency and cost-effectiveness.
- In certain applications, such as cranes and aeronautical or space structures, relying solely on calculations is often deemed insufficient. In these cases, structural testing is conducted to validate the load capacity of the structure.

See also
Beam theory
Buckling
List of area moments of inertia
Second moment of area
Structural testing
Yield strength

References

Structural analysis
Mechanical quantities
Section modulus
[ "Physics", "Mathematics", "Engineering" ]
1,035
[ "Structural engineering", "Mechanical quantities", "Physical quantities", "Quantity", "Structural analysis", "Mechanics", "Mechanical engineering", "Aerospace engineering" ]
9,295,203
https://en.wikipedia.org/wiki/Mayo%E2%80%93Lewis%20equation
The Mayo–Lewis equation or copolymer equation in polymer chemistry describes the distribution of monomers in a copolymer. It was proposed by Frank R. Mayo and Frederick M. Lewis.

The equation considers a monomer mix of two components M1 and M2 and the four different reactions that can take place at the reactive chain end terminating in either monomer (M1* and M2*) with their reaction rate constants k:

    M1* + M1 → M1M1*   (rate constant k11)
    M1* + M2 → M1M2*   (rate constant k12)
    M2* + M2 → M2M2*   (rate constant k22)
    M2* + M1 → M2M1*   (rate constant k21)

The reactivity ratio for each propagating chain end is defined as the ratio of the rate constant for addition of a monomer of the species already at the chain end to the rate constant for addition of the other monomer:

    r1 = k11 / k12,  r2 = k22 / k21

The copolymer equation is then:

    d[M1]/d[M2] = [M1] (r1[M1] + [M2]) / ( [M2] ([M1] + r2[M2]) )

with the concentrations of the components in square brackets. The equation gives the relative instantaneous rates of incorporation of the two monomers.

Equation derivation
Monomer 1 is consumed with reaction rate:

    −d[M1]/dt = k11[M1*][M1] + k21[M2*][M1]

with [M1*] the concentration of all the active chains terminating in monomer 1, summed over chain lengths. [M2*] is defined similarly for monomer 2. Likewise the rate of disappearance for monomer 2 is:

    −d[M2]/dt = k12[M1*][M2] + k22[M2*][M2]

Division of both equations by [M2*] followed by division of the first equation by the second yields:

    d[M1]/d[M2] = ([M1]/[M2]) · ( k11[M1*]/[M2*] + k21 ) / ( k12[M1*]/[M2*] + k22 )

The ratio of active center concentrations can be found using the steady-state approximation, meaning that the concentration of each type of active center remains constant. The rate of formation of active centers of monomer 1 (via M2* + M1 → M2M1*) is equal to the rate of their destruction (via M1* + M2 → M1M2*), so that

    k21[M2*][M1] = k12[M1*][M2]

or

    [M1*]/[M2*] = k21[M1] / (k12[M2])

Substituting this into the ratio of monomer consumption rates yields the Mayo–Lewis equation after rearrangement.

Mole fraction form
It is often useful to alter the copolymer equation by expressing concentrations in terms of mole fractions. Mole fractions of monomers M1 and M2 in the feed are defined as

    f1 = 1 − f2 = [M1] / ([M1] + [M2])

Similarly, F1 and F2 represent the mole fractions of each monomer in the copolymer:

    F1 = 1 − F2 = d[M1] / (d[M1] + d[M2])

These equations can be combined with the Mayo–Lewis equation to give

    F1 = (r1 f1² + f1 f2) / (r1 f1² + 2 f1 f2 + r2 f2²)

This equation gives the composition of copolymer formed at each instant. However, the feed and copolymer compositions can change as polymerization proceeds.

Limiting cases
Reactivity ratios indicate preference for propagation. Large r1 indicates a tendency for M1* to add M1, while small r1 corresponds to a tendency for M1* to add M2. Values of r2 describe the tendency of M2* to add M2 or M1. From the definition of reactivity ratios, several special cases can be derived:

r1, r2 ≫ 1. If both reactivity ratios are very high, the two monomers only react with themselves and not with each other. This leads to a mixture of two homopolymers.

r1, r2 > 1. If both ratios are larger than 1, homopolymerization of each monomer is favored. However, in the event of crosspolymerization adding the other monomer, the chain-end will continue to add the new monomer and form a block copolymer.

r1 ≈ r2 ≈ 1. If both ratios are near 1, a given monomer will add the two monomers with comparable speeds and a statistical or random copolymer is formed.

r1 ≈ r2 ≈ 0. If both values are near 0, the monomers are unable to homopolymerize. Each can add only the other, resulting in an alternating polymer. For example, the copolymerization of maleic anhydride and styrene has reactivity ratios r1 = 0.01 for maleic anhydride and r2 = 0.02 for styrene. Maleic anhydride in fact does not homopolymerize in free radical polymerization, but will form an almost exclusively alternating copolymer with styrene.

In the initial stage of the copolymerization, monomer 1 is incorporated faster and the copolymer is rich in monomer 1. When this monomer gets depleted, more monomer 2 segments are added. This is called composition drift.
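As a small illustrative sketch (not part of the original article; the reactivity ratios are simply taken from the maleic anhydride/styrene example above), the mole-fraction form of the equation is easy to evaluate numerically:

    def instantaneous_composition(f1, r1, r2):
        """Mole fraction F1 of monomer 1 in the copolymer formed at this instant
        (Mayo-Lewis equation in mole-fraction form)."""
        f2 = 1.0 - f1
        return (r1 * f1**2 + f1 * f2) / (r1 * f1**2 + 2.0 * f1 * f2 + r2 * f2**2)

    # r1 = 0.01 (maleic anhydride), r2 = 0.02 (styrene), as quoted above
    for f1 in (0.2, 0.5, 0.8):
        print(f1, round(instantaneous_composition(f1, r1=0.01, r2=0.02), 3))
    # F1 stays close to 0.5 over a wide feed range: an alternating copolymer

The output (F1 ≈ 0.48 to 0.52 for feeds from 20% to 80% monomer 1) illustrates the near-ideal alternation when both reactivity ratios are close to zero.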
When both r1 < 1 and r2 < 1 (or, analogously, both are greater than 1), the system has an azeotrope, where feed and copolymer composition are the same.

Calculation of reactivity ratios
Calculation of reactivity ratios generally involves carrying out several polymerizations at varying monomer ratios. The copolymer composition can be analysed with methods such as proton nuclear magnetic resonance, carbon-13 nuclear magnetic resonance, or Fourier transform infrared spectroscopy. The polymerizations are also carried out at low conversions, so monomer concentrations can be assumed to be constant. With all the other parameters in the copolymer equation known, r1 and r2 can be found.

Curve fitting
One of the simplest methods for finding reactivity ratios is plotting the copolymer equation and using nonlinear least squares analysis to find the r1, r2 pair that gives the best-fit curve. This is preferred, as methods such as Kelen–Tüdős or Fineman–Ross (see below) that involve linearization of the Mayo–Lewis equation introduce bias into the results.

Mayo–Lewis method
The Mayo–Lewis method uses a form of the copolymer equation relating r2 to r1:

    r2 = (f1/f2) [ (F2/F1)(1 + r1 f1/f2) − 1 ]

For each different monomer composition, a line is generated using arbitrary r1 values. The intersection of these lines is the r1, r2 pair for the system. More frequently, the lines do not intersect in a single point, and the area in which most lines intersect can be given as a range of r1 and r2 values.

Fineman–Ross method
Fineman and Ross rearranged the copolymer equation into a linear form:

    G = r1 H − r2

where G = x(y − 1)/y and H = x²/y, with x = [M1]/[M2] the feed ratio and y = d[M1]/d[M2] the instantaneous copolymer ratio. Thus, a plot of G versus H yields a straight line with slope r1 and intercept −r2.

Kelen–Tüdős method
The Fineman–Ross method can be biased towards points at low or high monomer concentration, so Kelen and Tüdős introduced an arbitrary constant,

    α = √(H_max · H_min)

where H_max and H_min are the highest and lowest values of H from the Fineman–Ross method. The data can be plotted in a linear form

    η = (r1 + r2/α) ξ − r2/α

where η = G/(α + H) and ξ = H/(α + H). Plotting η against ξ yields a straight line that gives r1 when ξ = 1 and −r2/α when ξ = 0. This distributes the data more symmetrically and can yield better results.

Q–e scheme
A semi-empirical method for the prediction of reactivity ratios is called the Q–e scheme, which was proposed by Alfrey and Price in 1947. This involves using two parameters for each monomer, Q and e. The reaction of an M1* radical with an M2 monomer is written as

    k12 = P1 Q2 exp(−e1 e2)

while the reaction of an M1* radical with an M1 monomer is written as

    k11 = P1 Q1 exp(−e1 e1)

where P is a proportionality constant, Q is the measure of reactivity of a monomer via resonance stabilization, and e is the measure of polarity of a monomer (molecule or radical) via the effect of functional groups on vinyl groups. Using these definitions, r1 and r2 can be found from the ratio of the terms, for example:

    r1 = k11/k12 = (Q1/Q2) exp(−e1(e1 − e2))

An advantage of this system is that reactivity ratios can be found using tabulated Q–e values of monomers, regardless of what the monomer pair is in the system.

External links
copolymers @zeus.plmsc.psu.edu

References

Polymer chemistry
Equations
Mayo–Lewis equation
[ "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,338
[ "Equations", "Mathematical objects", "Materials science", "Polymer chemistry" ]
9,299,409
https://en.wikipedia.org/wiki/Nucleic%20acid%20thermodynamics
Nucleic acid thermodynamics is the study of how temperature affects the nucleic acid structure of double-stranded DNA (dsDNA). The melting temperature (Tm) is defined as the temperature at which half of the DNA strands are in the random coil or single-stranded (ssDNA) state. Tm depends on the length of the DNA molecule and its specific nucleotide sequence. DNA, when in a state where its two strands are dissociated (i.e., the dsDNA molecule exists as two independent strands), is referred to as having been denatured by the high temperature.

Concepts

Hybridization
Hybridization is the process of establishing a non-covalent, sequence-specific interaction between two or more complementary strands of nucleic acids into a single complex, which in the case of two strands is referred to as a duplex. Oligonucleotides, DNA, or RNA will bind to their complement under normal conditions, so two perfectly complementary strands will bind to each other readily. In order to reduce the diversity and obtain the most energetically preferred complexes, a technique called annealing is used in laboratory practice. However, due to the different molecular geometries of the nucleotides, a single inconsistency between the two strands will make binding between them less energetically favorable. Measuring the effects of base incompatibility, by quantifying the temperature at which two strands anneal, can provide information as to the similarity in base sequence between the two strands being annealed. The complexes may be dissociated by thermal denaturation, also referred to as melting. In the absence of external negative factors, the processes of hybridization and melting may be repeated in succession indefinitely, which lays the ground for the polymerase chain reaction. Most commonly, the pairs of nucleic bases A=T and G≡C are formed, of which the latter is more stable.

Denaturation
DNA denaturation, also called DNA melting, is the process by which double-stranded deoxyribonucleic acid unwinds and separates into single strands through the breaking of hydrophobic stacking attractions between the bases (see hydrophobic effect). Both terms are used to refer to the process as it occurs when a mixture is heated, although "denaturation" can also refer to the separation of DNA strands induced by chemicals like formamide or urea.

The process of DNA denaturation can be used to analyze some aspects of DNA. Because cytosine/guanine base-pairing is generally stronger than adenine/thymine base-pairing, the amount of cytosine and guanine in a genome (called its GC-content) can be estimated by measuring the temperature at which the genomic DNA melts; higher melting temperatures are associated with high GC-content.

DNA denaturation can also be used to detect sequence differences between two different DNA sequences. DNA is heated and denatured into the single-stranded state, and the mixture is cooled to allow strands to rehybridize. Hybrid molecules are formed between similar sequences, and any differences between those sequences will result in a disruption of the base-pairing. On a genomic scale, the method has been used by researchers to estimate the genetic distance between two species, a process known as DNA-DNA hybridization. In the context of a single isolated region of DNA, denaturing gradient gels and temperature gradient gels can be used to detect the presence of small mismatches between two sequences, a process known as temperature gradient gel electrophoresis.
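As an aside, the qualitative link between GC-content and melting temperature can be made concrete for short oligonucleotides with the well-known "2 + 4" Wallace rule of thumb; the rule itself is not stated in this article, so the following Python sketch is purely illustrative:

    def wallace_tm(seq):
        """Rough Tm estimate (deg C) for a short oligo (< ~14 nt): 2*(A+T) + 4*(G+C)."""
        s = seq.upper()
        return 2 * (s.count("A") + s.count("T")) + 4 * (s.count("G") + s.count("C"))

    print(wallace_tm("CGTTGA"), wallace_tm("CGCCGG"))  # 18 vs 24: the GC-rich oligo melts higher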
Methods of DNA analysis based on melting temperature have the disadvantage of being proxies for studying the underlying sequence; DNA sequencing is generally considered a more accurate method.

The process of DNA melting is also used in molecular biology techniques, notably in the polymerase chain reaction. Although the temperature of DNA melting is not diagnostic in the technique, methods for estimating Tm are important for determining the appropriate temperatures to use in a protocol. DNA melting temperatures can also be used as a proxy for equalizing the hybridization strengths of a set of molecules, e.g. the oligonucleotide probes of DNA microarrays.

Annealing
Annealing, in genetics, means for complementary sequences of single-stranded DNA or RNA to pair by hydrogen bonds to form a double-stranded polynucleotide. Before annealing can occur, one of the strands may need to be phosphorylated by an enzyme such as a kinase to allow proper hydrogen bonding to occur. The term annealing is often used to describe the binding of a DNA probe, or the binding of a primer to a DNA strand during a polymerase chain reaction. The term is also often used to describe the reformation (renaturation) of reverse-complementary strands that were separated by heat (thermally denatured). Proteins such as RAD52 can help DNA anneal. DNA strand annealing is a key step in pathways of homologous recombination. In particular, during meiosis, synthesis-dependent strand annealing is a major pathway of homologous recombination.

Stacking
Stacking is the stabilizing interaction between the flat surfaces of adjacent bases. Stacking can happen with any face of the base, that is 5'-5', 3'-3', and vice versa. Stacking in "free" nucleic acid molecules is mainly contributed by intermolecular forces, specifically electrostatic attraction among aromatic rings, a process also known as pi stacking. For biological systems with water as a solvent, the hydrophobic effect contributes and helps in the formation of a helix. Stacking is the main stabilizing factor in the DNA double helix. The contribution of stacking to the free energy of the molecule can be experimentally estimated by observing the bent-stacked equilibrium in nicked DNA. Such stabilization is dependent on the sequence, and its extent varies with salt concentration and temperature.

Thermodynamics of the two-state model
Several formulas are used to calculate Tm values, some more accurate than others in predicting the melting temperatures of DNA duplexes. For DNA oligonucleotides, i.e. short sequences of DNA, the thermodynamics of hybridization can be accurately described as a two-state process. In this approximation one neglects the possibility of intermediate partial binding states in the formation of a double-strand state from two single-stranded oligonucleotides. Under this assumption one can elegantly describe the thermodynamic parameters for forming double-stranded nucleic acid AB from single-stranded nucleic acids A and B:

    AB ↔ A + B

The equilibrium constant for this reaction is

    K = [A][B] / [AB]

According to the van 't Hoff equation, the relation between free energy, ΔG, and K is ΔG° = −RT ln K, where R is the ideal gas constant and T is the absolute temperature of the reaction in kelvins. This gives, for the nucleic acid system,

    ΔG° = −RT ln( [A][B] / [AB] )

The melting temperature, Tm, occurs when half of the double-stranded nucleic acid has dissociated.
If no additional nucleic acids are present, then [A], [B], and [AB] will be equal, and equal to half the initial concentration of double-stranded nucleic acid, [AB]initial. This gives an expression for the melting point of a nucleic acid duplex of

    Tm = ΔG° / ( −R ln([AB]initial / 2) )

Because ΔG° = ΔH° − TΔS°, Tm is also given by

    Tm = ΔH° / ( ΔS° − R ln([AB]initial / 2) )

The terms ΔH° and ΔS° are usually given for the association and not the dissociation reaction (see the nearest-neighbor method for example). This formula then turns into:

    Tm = ΔH° / ( ΔS° + R ln([A]total − [B]total/2) )

where [B]total ≤ [A]total.

As mentioned, this equation is based on the assumption that only two states are involved in melting: the double-stranded state and the random-coil state. However, nucleic acids may melt via several intermediate states. To account for such complicated behavior, the methods of statistical mechanics must be used, which is especially relevant for long sequences.

Estimating thermodynamic properties from nucleic acid sequence
The previous paragraph shows how melting temperature and thermodynamic parameters (ΔG° or ΔH° & ΔS°) are related to each other. From the observation of melting temperatures one can experimentally determine the thermodynamic parameters. Vice versa, and important for applications, when the thermodynamic parameters of a given nucleic acid sequence are known, the melting temperature can be predicted. It turns out that for oligonucleotides, these parameters can be well approximated by the nearest-neighbor model.

Nearest-neighbor method
The interaction between bases on different strands depends somewhat on the neighboring bases. Instead of treating a DNA helix as a string of interactions between base pairs, the nearest-neighbor model treats a DNA helix as a string of interactions between 'neighboring' base pairs. So, for example, the DNA shown below has nearest-neighbor interactions indicated by the arrows.

    ↓ ↓ ↓ ↓ ↓
5' C-G-T-T-G-A 3'
3' G-C-A-A-C-T 5'

The free energy of forming this DNA from the individual strands, ΔG°, is represented (at 37 °C) as

ΔG°37(predicted) = ΔG°37(C/G initiation) + ΔG°37(CG/GC) + ΔG°37(GT/CA) + ΔG°37(TT/AA) + ΔG°37(TG/AC) + ΔG°37(GA/CT) + ΔG°37(A/T initiation)

Except for the C/G initiation term, the first term represents the free energy of the first base pair, CG, in the absence of a nearest neighbor. The second term includes both the free energy of formation of the second base pair, GC, and the stacking interaction between this base pair and the previous base pair. The remaining terms are similarly defined. In general, the free energy of forming a nucleic acid duplex is

    ΔG°total = ΔG°(initiation) + Σi ni ΔG°(i)

where ΔG°(i) represents the free energy associated with one of the ten possible nearest-neighbor nucleotide pairs and ni represents its count in the sequence. Each ΔG° term has enthalpic, ΔH°, and entropic, ΔS°, parameters, so the change in free energy is also given by

    ΔG°total = ΔH°total − T ΔS°total

Values of ΔH° and ΔS° have been determined for the ten possible pairs of interactions. These are given in Table 1, along with the value of ΔG° calculated at 37 °C. Using these values, the value of ΔG°37 for the DNA duplex shown above is calculated to be −22.4 kJ/mol. The experimental value is −21.8 kJ/mol. The parameters associated with the ten groups of neighbors shown in Table 1 are determined from melting points of short oligonucleotide duplexes. It works out that only eight of the ten groups are independent.
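The association form of the two-state melting temperature formula above is straightforward to evaluate; in the Python sketch below, the ΔH° and ΔS° values are merely assumed, order-of-magnitude numbers for a short duplex (in practice they would be summed from nearest-neighbor tables such as Table 1):

    import math

    R = 8.314  # ideal gas constant, J/(mol*K)

    def melting_temperature(dH, dS, a_total, b_total):
        """Two-state Tm (K) from association parameters dH (J/mol) and dS (J/(mol*K));
        a_total, b_total are total strand concentrations (mol/L), b_total <= a_total."""
        return dH / (dS + R * math.log(a_total - b_total / 2.0))

    dH = -300e3  # J/mol: association is exothermic (assumed, illustrative value)
    dS = -800.0  # J/(mol*K): association is entropically unfavorable (assumed value)
    print(melting_temperature(dH, dS, 1e-6, 1e-6) - 273.15)  # roughly 53 deg C

Note that lowering the strand concentration lowers Tm, as the logarithmic term in the denominator predicts.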
The nearest-neighbor model can be extended beyond the Watson–Crick pairs to include parameters for interactions between mismatches and neighboring base pairs. This allows the estimation of the thermodynamic parameters of sequences containing isolated mismatches, like e.g. (arrows indicating the mismatch)

          ↓↓↓
5' G-G-A-C-T-G-A-C-G 3'
3' C-C-T-G-G-C-T-G-C 5'

These parameters have been fitted from melting experiments, and an extension of Table 1 which includes mismatches can be found in the literature.

A more realistic way of modeling the behavior of nucleic acids would seem to be to have parameters that depend on the neighboring groups on both sides of a nucleotide, giving a table with entries like "TCG/AGC". However, this would involve around 32 groups for Watson–Crick pairing and even more for sequences containing mismatches; the number of DNA melting experiments needed to get reliable data for so many groups would be inconveniently high. However, other means exist to access the thermodynamic parameters of nucleic acids: microarray technology allows hybridization monitoring of tens of thousands of sequences in parallel. These data, in combination with molecular adsorption theory, allow the determination of many thermodynamic parameters in a single experiment and make it possible to go beyond the nearest-neighbor model. In general the predictions from the nearest-neighbor method agree reasonably well with experimental results, but some unexpected outlying sequences, calling for further insight, do exist. Finally, we should also mention the increased accuracy provided by single-molecule unzipping assays, which provide a wealth of new insight into the thermodynamics of DNA hybridization and the validity of the nearest-neighbor model.

See also
Melting point
Primer (molecular biology), for calculations of Tm
Base pair
Complementary DNA
Western blot

References

External links
Tm calculations in OligoAnalyzer – Integrated DNA Technologies
DNA thermodynamics calculations – Tm, melting profile, mismatches, free energy calculations
Tm calculation – by bioPHP.org
https://web.archive.org/web/20080516194508/http://www.promega.com/biomath/calc11.htm#disc
Invitrogen Tm calculation
AnnHyb – open source software for Tm calculation using the nearest-neighbour method
Sigma-Aldrich technical notes
Primer3 calculation
"Discovery of the Hybrid Helix and the First DNA-RNA Hybridization" by Alexander Rich
uMelt: Melting Curve Prediction
Tm Tool
Nearest Neighbor Database: provides a description of RNA-RNA interaction nearest neighbor parameters and examples of their use

DNA
Nucleic acids
Molecular biology
Biotechnology
Biochemical engineering
Nucleic acid thermodynamics
[ "Chemistry", "Engineering", "Biology" ]
2,881
[ "Biomolecules by chemical classification", "Biological engineering", "Chemical engineering", "Biochemical engineering", "Biotechnology", "nan", "Molecular biology", "Biochemistry", "Nucleic acids" ]
2,166,569
https://en.wikipedia.org/wiki/Shape%20resonance
In quantum mechanics, a shape resonance is a metastable state in which an electron is trapped due to the shape of a potential barrier. Altunata describes a state as being a shape resonance if "the internal state of the system remains unchanged upon disintegration of the quasi-bound level." A more general discussion of resonances and their taxonomies in molecular systems can be found in the review article by Schulz; see Antonio Bianconi for the discovery of the Fano resonance line-shape and for Majorana's pioneering work in this field, and Combes et al. for a mathematical review.

Quantum mechanics
In quantum mechanics, a shape resonance, in contrast to a Feshbach resonance, is a resonance which is not turned into a bound state if the coupling between some degrees of freedom and the degrees of freedom associated with the fragmentation (reaction coordinates) is set to zero. More simply, the shape resonance total energy is more than the separated fragment energy. Practical implications of this difference for lifetimes and spectral widths are mentioned in works such as Zobel. Related terms include a special kind of shape resonance, the core-excited shape resonance, and trap-induced shape resonance.

Of course in one-dimensional systems, resonances are shape resonances. In a system with more than one degree of freedom, this definition makes sense only if the separable model, which supposes the two groups of degrees of freedom uncoupled, is a meaningful approximation. When the coupling becomes large, the situation is much less clear.

In the case of atomic and molecular electronic structure problems, it is well known that the self-consistent field (SCF) approximation is relevant at least as a starting point of more elaborate methods. The Slater determinants built from SCF orbitals (atomic or molecular orbitals) are shape resonances if only one electronic transition is required to emit one electron. Today, there is some debate about the definition and even existence of the shape resonance in some systems observed with molecular spectroscopy. It has been experimentally observed in the anionic yields from photofragmentation of small molecules to provide details of internal structure.

In nuclear physics the concept of "shape resonance" is described by Amos de-Shalit and Herman Feshbach in their book (Theoretical Nuclear Physics: Nuclear Structure, John Wiley & Sons, New York, 1974, p. 87; https://books.google.com/books?id=A-XvAAAAMAAJ): "It is well known that the scattering from a potential shows characteristic peaks, as a function of energy, for such values of E that make the integral number of wave lengths sit within the potential. The resulting shape resonances are rather broad, their width being of the order of ...."

The shape resonances were observed around the years 1949–54 in nuclear scattering experiments. They indicate broad asymmetric peaks in the scattering cross section of neutrons or protons scattered by nuclei. The name "shape resonance" was introduced to describe the fact that the resonance in the potential scattering for a particle of energy E is controlled by the shape of the nucleus. In fact the shape resonance occurs where an integral number of wavelengths of the particle sit within the potential of the nucleus of radius R. Therefore, measurements of the energies of the shape resonances in neutron-nucleus scattering were used in the years from 1947 to 1954 to measure the radii R of the nuclei with a precision of ±1×10⁻¹³ cm, as can be seen in the chapter "Elastic Cross Sections" of A Textbook in Nuclear Physics by R. D. Evans.

The "shape resonances" are discussed in general introductory academic courses of quantum mechanics in the frame of potential scattering phenomena. The shape resonances arise from the quantum interference between closed and open scattering channels. At the resonance energy a quasi-bound state is degenerate with a continuum. This quantum interference in many-body systems has been described using quantum mechanics by Gregor Wentzel, for the interpretation of the Auger effect; by Ettore Majorana, for dissociation processes and quasi-bound states; by Ugo Fano, for the atomic auto-ionization states in the continuum of the helium atomic spectrum; and by Victor Frederick Weisskopf, J. M. Blatt, and Herman Feshbach for nuclear scattering experiments.

The shape resonances are related to the existence of nearly stable bound states (that is, resonances) of two objects that dramatically influence how those two objects interact when their total energy is near that of the bound state. When the total energy of the objects is close to the energy of the resonance they interact strongly, and their scattering cross-section becomes very large.

A particular type of "shape resonance" occurs in multiband or two-band superconducting heterostructures at the atomic limit, called superstripes, due to quantum interference between a first pairing channel in a first wide band and a second pairing channel in a second band, where the chemical potential is tuned near a Lifshitz transition at the band edge or at topological electronic transitions of the Fermi surface of the "neck-collapsing" or "neck-disrupting" type.

See also
Resonance (particle physics)
Feshbach–Fano partitioning

References

Scattering
Spectroscopy
Shape resonance
[ "Physics", "Chemistry", "Materials_science" ]
1,118
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Scattering", "Particle physics", "Condensed matter physics", "Nuclear physics", "Spectroscopy" ]
2,166,926
https://en.wikipedia.org/wiki/Fano%20resonance
In physics, a Fano resonance is a type of resonant scattering phenomenon that gives rise to an asymmetric line-shape. Interference between a background and a resonant scattering process produces the asymmetric line-shape. It is named after the Italian-American physicist Ugo Fano, who in 1961 gave a theoretical explanation for the scattering line-shape of inelastic scattering of electrons from helium; however, Ettore Majorana was the first to discover this phenomenon. The Fano resonance is a weak-coupling effect, meaning that the decay rate is so high that no hybridization occurs. The coupling modifies the resonance properties, such as spectral position and width, and the line-shape takes on the distinctive asymmetric Fano profile. Because it is a general wave phenomenon, examples can be found across many areas of physics and engineering.

History
The explanation of the Fano line-shape first appeared in the context of inelastic electron scattering by helium and autoionization. The incident electron doubly excites the atom to the 2s2p state, a sort of shape resonance. The doubly excited atom spontaneously decays by ejecting one of the excited electrons. Fano showed that interference between the amplitude to simply scatter the incident electron and the amplitude to scatter via autoionization creates an asymmetric scattering line-shape around the autoionization energy, with a line-width very close to the inverse of the autoionization lifetime.

Explanation
The Fano resonance line-shape is due to interference between two scattering amplitudes, one due to scattering within a continuum of states (the background process) and the second due to an excitation of a discrete state (the resonant process). The energy of the resonant state must lie in the energy range of the continuum (background) states for the effect to occur. Near the resonant energy, the background scattering amplitude typically varies slowly with energy, while the resonant scattering amplitude changes quickly in both magnitude and phase. It is this variation that creates the asymmetric profile. For energies far from the resonant energy, the background scattering process dominates. Within Γ of the resonant energy E₀, the phase of the resonant scattering amplitude changes by π. It is this rapid variation in phase that creates the asymmetric line-shape.

Fano showed that the total scattering cross-section assumes the form

    σ ∝ (q + ε)² / (1 + ε²),  with ε = 2(E − E₀)/Γ

where Γ describes the line width of the resonant energy E₀ and q, the Fano parameter, measures the ratio of resonant scattering to the direct (background) scattering amplitude. This is consistent with the interpretation within the Feshbach–Fano partitioning theory. In the case where the direct scattering amplitude vanishes, the q parameter becomes zero and the Fano formula becomes:

    σ ∝ ε² / (1 + ε²)

Looking at the transmission, 1 − ε²/(1 + ε²) = 1/(1 + ε²), shows that this last expression boils down to the expected Breit–Wigner (Lorentzian) formula: the three-parameter Lorentzian function (note that it is not a density function and does not integrate to 1, as its amplitude is 1 and not 2/(πΓ)).
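The reduced line shape is simple to explore numerically; the following illustrative Python sketch (not part of the original article) evaluates the Fano profile for a few values of q, including the q = 0 limit discussed above:

    def fano_profile(eps, q):
        """Reduced Fano line shape (q + eps)^2 / (1 + eps^2),
        where eps = 2*(E - E0)/Gamma is the reduced energy."""
        return (q + eps) ** 2 / (1.0 + eps ** 2)

    for q in (0.0, 1.0, 5.0):
        print(q, [round(fano_profile(e, q), 2) for e in (-2.0, -1.0, 0.0, 1.0, 2.0)])
    # q = 0 gives a symmetric dip (antiresonance); q = 1 is strongly asymmetric,
    # vanishing at eps = -1 and peaking at eps = 1; large q approaches a symmetric
    # Lorentzian peak centered on the resonance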
Examples
Examples of Fano resonances can be found in atomic physics, nuclear physics, condensed matter physics, electrical circuits, microwave engineering, nonlinear optics, nanophotonics, magnetic metamaterials, and in mechanical waves. Fano resonances can be observed with photoelectron spectroscopy and Raman spectroscopy. The phenomenon can also be observed at visible frequencies using simple glass microspheres, which may allow enhancing the magnetic field of light (which is typically small) by a few orders of magnitude.

See also
Resonance (particle physics)
Core-excited shape resonance
Antiresonance

References

Quantum mechanics
Scattering
Fano resonance
[ "Physics", "Chemistry", "Materials_science" ]
741
[ "Theoretical physics", "Quantum mechanics", "Scattering", "Particle physics", "Condensed matter physics", "Nuclear physics" ]
2,167,227
https://en.wikipedia.org/wiki/Feshbach%E2%80%93Fano%20partitioning
In quantum mechanics, and in particular in scattering theory, the Feshbach–Fano method, named after Herman Feshbach and Ugo Fano, separates (partitions) the resonant and the background components of the wave function, and therefore of the associated quantities like cross sections or phase shifts. This approach allows us to define rigorously the concept of resonance in quantum mechanics.

In general, the partitioning formalism is based on the definition of two complementary projectors P and Q such that

    P + Q = 1

The subspaces onto which P and Q project are sets of states obeying the continuum and the bound-state boundary conditions, respectively. P and Q are interpreted as the projectors on the background and the resonant subspaces, respectively.

The projectors P and Q are not defined within the Feshbach–Fano method itself. This is both its major strength and its major weakness. On the one hand, this makes the method very general; on the other hand, it introduces some arbitrariness which is difficult to control. Some authors first define the P space as an approximation to the background scattering, but most authors first define the Q space as an approximation to the resonance. This step always relies on some physical intuition which is not easy to quantify. In practice, P or Q should be chosen such that the resulting background scattering phase or cross-section depends only slowly on the scattering energy in the neighbourhood of the resonances (this is the so-called flat-continuum hypothesis). If one succeeds in translating the flat-continuum hypothesis into a mathematical form, it is possible to generate a set of equations defining P and Q on a less arbitrary basis.

The aim of the Feshbach–Fano method is to solve the Schrödinger equation governing a scattering process (defined by the Hamiltonian H) in two steps:

First, by solving the scattering problem ruled by the background Hamiltonian PHP. It is often supposed that the solution of this problem is trivial, or at least that it fulfils some standard hypotheses which allow its full resolution to be skipped.

Second, by solving the resonant scattering problem corresponding to the effective complex (energy-dependent) Hamiltonian

    H_eff(E) = QHQ + QHP (E⁺ − PHP)⁻¹ PHQ

whose dimension is equal to the number of interacting resonances and which depends parametrically on the scattering energy E. The resonance parameters E_res and Γ are obtained by solving the implicit equation

    det[ z − H_eff(z) ] = 0

for z in the lower complex plane. The solution z_res = E_res − iΓ/2 is the resonance pole. If z_res is close to the real axis it gives rise to a Breit–Wigner or a Fano profile in the corresponding cross section.

Both resulting T matrices have to be added in order to obtain the T matrix corresponding to the full scattering problem:

    T = T_background + T_resonant

References

See also
Resonance (particle physics)
Resonances in scattering from potentials
Feshbach resonance
Fano resonance

Scattering theory
Feshbach–Fano partitioning
[ "Chemistry" ]
567
[ "Scattering", "Scattering theory" ]
2,167,401
https://en.wikipedia.org/wiki/Longest%20common%20substring
In computer science, a longest common substring of two or more strings is a longest string that is a substring of all of them. There may be more than one longest common substring. Applications include data deduplication and plagiarism detection.

Examples
The picture shows two strings where the problem has multiple solutions. Although the substring occurrences always overlap, it is impossible to obtain a longer common substring by "uniting" them. The strings "ABABC", "BABCA" and "ABCBA" have only one longest common substring, viz. "ABC" of length 3. Other common substrings are "A", "AB", "B", "BA", "BC" and "C".

ABABC
  |||
 BABCA
  |||
  ABCBA

Problem definition
Given two strings, S of length m and T of length n, find a longest string which is a substring of both S and T.

A generalization is the k-common substring problem. Given the set of strings S = {S1, ..., SK}, where |Si| = ni and Σ ni = N, find for each k, 2 ≤ k ≤ K, a longest string which occurs as a substring of at least k strings.

Algorithms
One can find the lengths and starting positions of the longest common substrings of S and T in Θ(n + m) time with the help of a generalized suffix tree. A faster algorithm can be achieved in the word RAM model of computation if the size of the input alphabet is small enough; with packed string inputs, the problem can then be solved in time sublinear in n + m. Solving the problem by dynamic programming costs Θ(nm). The solutions to the generalized problem take Θ(n1 · ... · nK) time (with a correspondingly large table) with dynamic programming, and Θ((n1 + ... + nK) · K) time with a generalized suffix tree.

Suffix tree
The longest common substrings of a set of strings can be found by building a generalized suffix tree for the strings, and then finding the deepest internal nodes which have leaf nodes from all the strings in the subtree below it. The figure on the right is the suffix tree for the strings "ABAB", "BABA" and "ABBA", padded with unique string terminators, to become "ABAB$0", "BABA$1" and "ABBA$2". The nodes representing "A", "B", "AB" and "BA" all have descendant leaves from all of the strings, numbered 0, 1 and 2.

Building the suffix tree takes Θ(N) time (if the size of the alphabet is constant). If the tree is traversed from the bottom up with a bit vector telling which strings are seen below each node, the k-common substring problem can be solved in Θ(NK) time. If the suffix tree is prepared for constant-time lowest common ancestor retrieval, it can be solved in Θ(N) time.

Dynamic programming
The following pseudocode finds the set of longest common substrings between two strings with dynamic programming:

function LongestCommonSubstring(S[1..r], T[1..n])
    L := array(1..r, 1..n)
    z := 0
    ret := {}
    for i := 1..r
        for j := 1..n
            if S[i] = T[j]
                if i = 1 or j = 1
                    L[i, j] := 1
                else
                    L[i, j] := L[i − 1, j − 1] + 1
                if L[i, j] > z
                    z := L[i, j]
                    ret := {S[(i − z + 1)..i]}
                else if L[i, j] = z
                    ret := ret ∪ {S[(i − z + 1)..i]}
            else
                L[i, j] := 0
    return ret

This algorithm runs in Θ(nr) time. The array L stores the length of the longest common suffix of the prefixes S[1..i] and T[1..j] which end at position i and j, respectively. The variable z is used to hold the length of the longest common substring found so far. The set ret is used to hold the set of strings which are of length z. The set ret can be saved efficiently by just storing the index i, which is the last character of the longest common substring (of size z), instead of S[(i − z + 1)..i]. Thus all the longest common substrings would be, for each i in ret, S[(ret[i] − z + 1)..ret[i]].
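For reference, here is a direct, runnable Python port of the pseudocode above (a sketch; it keeps the full table for clarity rather than applying the memory optimizations discussed next):

    def longest_common_substrings(S, T):
        """Return the set of longest common substrings of S and T in O(len(S)*len(T))."""
        r, n = len(S), len(T)
        L = [[0] * (n + 1) for _ in range(r + 1)]  # L[i][j]: longest common suffix of S[:i], T[:j]
        z = 0
        ret = set()
        for i in range(1, r + 1):
            for j in range(1, n + 1):
                if S[i - 1] == T[j - 1]:
                    L[i][j] = L[i - 1][j - 1] + 1
                    if L[i][j] > z:
                        z = L[i][j]
                        ret = {S[i - z:i]}
                    elif L[i][j] == z:
                        ret.add(S[i - z:i])
        return ret

    print(longest_common_substrings("ABABC", "BABCA"))  # {'BABC'}
    # (the three-way longest common substring of ABABC, BABCA and ABCBA is the
    # shorter "ABC", as noted in the Examples section)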
The following tricks can be used to reduce the memory usage of an implementation:
- Keep only the last and current row of the DP table to save memory (Θ(min(r, n)) instead of Θ(nr)).
- The last and current row can be stored on the same 1D array by traversing the inner loop backwards.
- Store only non-zero values in the rows. This can be done using hash tables instead of arrays. This is useful for large alphabets.

See also
Longest palindromic substring
n-gram, all the possible substrings of length n that are contained in a string

References

External links
Dictionary of Algorithms and Data Structures: longest common substring
Perl/XS implementation of the dynamic programming algorithm
Perl/XS implementation of the suffix tree algorithm
Dynamic programming implementations in various languages on wikibooks
Working AS3 implementation of the dynamic programming algorithm
Suffix Tree based C implementation of Longest common substring for two strings

Problems on strings
Dynamic programming
Articles with example pseudocode
Longest common substring
[ "Mathematics" ]
1,134
[ "Problems on strings", "Mathematical problems", "Computational problems" ]
2,168,513
https://en.wikipedia.org/wiki/Vital%20rates
Vital rates refer to how fast vital statistics change in a population (usually measured per 1,000 individuals). There are two categories of vital rates: crude rates and refined rates. Crude rates measure vital statistics for the population as a whole (the overall change in births and deaths per 1,000 individuals). Refined rates measure the change in vital statistics within a specific demographic group (defined by age, sex, race, etc.).

Marriage rates
National marriage rates in the US have fallen by almost 50% since 1972, to six people per 1,000. According to the Iran Index and the National Organization for Civil Registration of Iran, the Iranian divorce rate is at its highest recorded level since 1979; divorce quotas were introduced to curb the rise.

References

Ecology
Vital rates
[ "Biology" ]
147
[ "Ecology" ]
2,168,763
https://en.wikipedia.org/wiki/Reyn
In fluid dynamics, the reyn is a British unit of dynamic viscosity, named in honour of Osborne Reynolds, for whom the Reynolds number is also named.

Conversions
By definition, 1 reyn = 1 lbf·s·in⁻². It follows that the relation between the reyn and the poise is approximately 1 reyn = 6.89476 × 10⁴ P. In SI units, viscosity is expressed in newton-seconds per square metre or, equivalently, in pascal-seconds; the conversion factor is approximately 1 reyn = 6894.76 Pa·s.

References

External links
Reyn: history of the unit

Fluid dynamics
Units of dynamic viscosity
Reyn
[ "Chemistry", "Mathematics", "Engineering" ]
139
[ "Fluid dynamics stubs", "Chemical engineering", "Quantity", "Units of dynamic viscosity", "Piping", "Units of measurement", "Fluid dynamics" ]
2,169,038
https://en.wikipedia.org/wiki/Homochirality
Homochirality is a uniformity of chirality, or handedness. Objects are chiral when they cannot be superposed on their mirror images. For example, the left and right hands of a human are approximately mirror images of each other but are not their own mirror images, so they are chiral. In biology, 19 of the 20 natural amino acids are homochiral, being L-chiral (left-handed), while sugars are D-chiral (right-handed). Homochirality can also refer to enantiopure substances in which all the constituents are the same enantiomer (a right-handed or left-handed version of an atom or molecule), but some sources discourage this use of the term.

It is unclear whether homochirality has a purpose; however, it appears to be a form of information storage. One suggestion is that it reduces entropy barriers in the formation of large organized molecules. It has been experimentally verified that amino acids form large aggregates in larger abundance from enantiopure samples of the amino acid than from racemic (enantiomerically mixed) ones.

It is not clear whether homochirality emerged before or after life, and many mechanisms for its origin have been proposed. Some of these models propose three distinct steps: mirror-symmetry breaking creates a minute enantiomeric imbalance, chiral amplification builds on this imbalance, and chiral transmission is the transfer of chirality from one set of molecules to another.

In biology
Amino acids are the building blocks of peptides and enzymes, while sugar-peptide chains are the backbone of RNA and DNA. In biological organisms, amino acids appear almost exclusively in the left-handed form (L-amino acids) and sugars in the right-handed form (R-sugars). Since the enzymes catalyze reactions, they enforce homochirality on a great variety of other chemicals, including hormones, toxins, fragrances and food flavors. Glycine is achiral; some other non-proteinogenic amino acids are either achiral (such as dimethylglycine) or of the D enantiomeric form.

Biological organisms easily discriminate between molecules with different chiralities. This can affect physiological reactions such as smell and taste. Carvone, a terpenoid found in essential oils, smells like spearmint in its R-form and like caraway in its S-form. Limonene tastes like citrus when right-handed and like pine when left-handed.

Homochirality also affects the response to drugs. Thalidomide, in its right-handed form, alleviates morning sickness; in its left-handed form, it causes birth defects. Unfortunately, even if a pure single-enantiomer dose is administered, some of it can convert to the other form in the patient. Many drugs are available as both a racemic mixture (equal amounts of both chiralities) and an enantiopure drug (only one chirality). Depending on the manufacturing process, enantiopure forms can be more expensive to produce than stereochemical mixtures.

Chiral preferences can also be found at a macroscopic level. Snail shells can be right-turning or left-turning helices, but one form or the other is strongly preferred in a given species. In the edible snail Helix pomatia, only one out of 20,000 is left-helical. The coiling of plants can have a preferred chirality, and even the chewing motion of cows has a 10% excess in one direction.

Origins

Symmetry breaking
Theories for the origin of homochirality in the molecules of life can be classified as deterministic or based on chance, depending on their proposed mechanism.
If there is a relationship between cause and effect (that is, a specific chiral field or influence causing the mirror-symmetry breaking), the theory is classified as deterministic; otherwise it is classified as a theory based on chance (in the sense of randomness) mechanisms.

Another classification of the different theories of the origin of biological homochirality can be made depending on whether life emerged before the enantiodiscrimination step (biotic theories) or afterwards (abiotic theories). Biotic theories claim that homochirality is simply a result of the natural autoamplification process of life: either the formation of life as preferring one chirality or the other was a rare chance event which happened to occur with the chiralities we observe, or all chiralities of life emerged rapidly but, due to catastrophic events and strong competition, the other chiral preferences were wiped out by the preponderance and metabolic enantiomeric enrichment of the 'winning' chirality. If this were the case, remains of the extinct chirality should be found; since this is not the case, biotic theories are nowadays no longer supported. The emergence of chirality consensus as a natural autoamplification process has also been associated with the second law of thermodynamics.

Deterministic theories
Deterministic theories can be divided into two subgroups: if the initial chiral influence took place at a specific location in space or time (averaging zero over large enough areas of observation or periods of time), the theory is classified as local deterministic; if the chiral influence is permanent at the time the chiral selection occurred, then it is classified as universal deterministic. The classification groups for local deterministic theories and theories based on chance mechanisms can overlap: even if an external chiral influence produced the initial chiral imbalance in a deterministic way, the sign of the outcome could be random, since the external chiral influence has its enantiomeric counterpart elsewhere.

In deterministic theories, the enantiomeric imbalance is created due to an external chiral field or influence, and the ultimate sign imprinted in biomolecules would be due to it. Deterministic mechanisms for the production of non-racemic mixtures from racemic starting materials include: asymmetric physical laws, such as the electroweak interaction (via cosmic rays); asymmetric environments, such as those caused by circularly polarized light, quartz crystals, or the Earth's rotation; β-radiolysis; or the magnetochiral effect. The most accepted universal deterministic theory is the electroweak interaction. Once established, chirality would be selected for.

One supposition is that the discovery of an enantiomeric imbalance in molecules in the Murchison meteorite supports an extraterrestrial origin of homochirality: there is evidence for the existence of circularly polarized light originating from Mie scattering on aligned interstellar dust particles, which may trigger the formation of an enantiomeric excess within chiral material in space. Interstellar and near-stellar magnetic fields can align dust particles in this fashion. Another speculation (the Vester–Ulbricht hypothesis) suggests that fundamental chirality of physical processes, such as that of beta decay (see parity violation), leads to slightly different half-lives of biologically relevant molecules.
Chance theories
Chance theories are based on the assumption that "Absolute asymmetric synthesis, i.e., the formation of enantiomerically enriched products from achiral precursors without the intervention of chiral chemical reagents or catalysts, is in practice unavoidable on statistical grounds alone."

Consider the racemic state as a macroscopic property described by a binomial distribution; the experiment of tossing a coin, where the two possible outcomes are the two enantiomers, is a good analogy. The discrete probability distribution P(n) of obtaining n successes out of N Bernoulli trials, where the result of each Bernoulli trial occurs with probability p and the opposite occurs with probability q = 1 − p, is given by:

    P(n) = [N! / (n!(N − n)!)] pⁿ q^(N−n)

The discrete probability distribution of having exactly L molecules of one chirality and D = N − L of the other is given by:

    P(L) = [N! / (L!(N − L)!)] (1/2)^N

since, as in the experiment of tossing a coin, we assume both events (L or D) to be equiprobable, p = q = 1/2. The probability of having exactly the same amount of both enantiomers is inversely proportional to the square root of the total number of molecules N:

    P(L = N/2) ≈ √(2/(πN))

For one mole of a racemic compound, N ≈ 6.02 × 10²³ molecules, this probability becomes P ≈ 10⁻¹². The probability of finding the racemic state is so small that we can consider it negligible.

In this scenario, there is a need to amplify the initial stochastic enantiomeric excess through an efficient mechanism of amplification. The most likely path for this amplification step is asymmetric autocatalysis. An autocatalytic chemical reaction is one in which the reaction product is itself a reactant; in other words, a chemical reaction is autocatalytic if the reaction product is itself the catalyst of the reaction. In asymmetric autocatalysis, the catalyst is a chiral molecule, which means that a chiral molecule is catalysing its own production. An initial enantiomeric excess, such as can be produced by polarized light, then allows the more abundant enantiomer to outcompete the other.

Amplification

Theory
In 1953, Charles Frank proposed a model to demonstrate that homochirality is a consequence of autocatalysis. In his model, the L and D enantiomers of a chiral molecule are autocatalytically produced from an achiral molecule A,

    A + L → 2L
    A + D → 2D

while suppressing each other through a reaction that he called mutual antagonism,

    L + D → ∅

In this model the racemic state is unstable, in the sense that the slightest enantiomeric excess will be amplified to a completely homochiral state. This can be shown by computing the reaction rates from the law of mass action:

    d[L]/dt = k_a[A][L] − k_d[L][D]
    d[D]/dt = k_a[A][D] − k_d[L][D]

where k_a is the rate constant for the autocatalytic reactions, k_d is the rate constant for the mutual antagonism reaction, and the concentration of A is kept constant for simplicity. Dividing each rate by the corresponding concentration and subtracting (the autocatalytic terms cancel) gives

    d/dt ln([L]/[D]) = k_d ([L] − [D])

so the ratio [L]/[D] increases at a more than exponential rate if [L] − [D] is positive (and vice versa). Every starting condition different from [L] = [D] leads to one of the asymptotes [D] → 0 or [L] → 0. Thus the equality of [L] and [D] represents a condition of unstable equilibrium, a result that depends on the presence of the term representing mutual antagonism. By defining the enantiomeric excess as

    η = ([L] − [D]) / ([L] + [D])

the rate of change of the enantiomeric excess can be calculated using the chain rule from the rates of change of the concentrations of the enantiomers L and D. Linear stability analysis of this equation shows that the racemic state η = 0 is unstable. Starting from almost everywhere in the concentration space, the system evolves to a homochiral state.
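A minimal numerical sketch (constructed for illustration only: arbitrary units, k_a = k_d = 1, and [A] held fixed) shows the runaway amplification in Frank's model by integrating the two rate equations with simple Euler steps:

    def frank_model(L, D, A=1.0, ka=1.0, kd=1.0, dt=1e-3, steps=14000):
        """Euler integration of dL/dt = ka*A*L - kd*L*D and dD/dt = ka*A*D - kd*L*D."""
        for _ in range(steps):
            dL = (ka * A * L - kd * L * D) * dt
            dD = (ka * A * D - kd * L * D) * dt
            L, D = L + dL, D + dD
        return L, D

    # Start from a tiny 0.05% enantiomeric excess
    L, D = frank_model(L=0.1001, D=0.1000)
    print((L - D) / (L + D))  # the enantiomeric excess eta has grown to nearly 1

Because [A] is held constant here, the winning enantiomer keeps growing without bound; the point of the sketch is only that η, not the concentrations, converges to a homochiral value.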
It is generally understood that autocatalysis alone does not yield homochirality, and that the presence of the mutually antagonistic relationship between the two enantiomers is necessary for the instability of the racemic mixture. However, recent studies show that homochirality could be achieved from autocatalysis in the absence of the mutually antagonistic relationship, but the underlying mechanism for symmetry-breaking is different.

Experiments
There are several laboratory experiments that demonstrate how a small amount of one enantiomer at the start of a reaction can lead to a large excess of a single enantiomer as the product. For example, the Soai reaction is autocatalytic. If the reaction is started with some of one of the product enantiomers already present, the product acts as an enantioselective catalyst for production of more of that same enantiomer. The initial presence of just 0.2 equivalents of one enantiomer can lead to up to 93% enantiomeric excess of the product. Another study concerns the proline-catalyzed aminoxylation of propionaldehyde by nitrosobenzene. In this system, a small enantiomeric excess of catalyst leads to a large enantiomeric excess of product.

Serine octamer clusters are also contenders. These clusters of 8 serine molecules appear in mass spectrometry with an unusual homochiral preference; however, there is no evidence that such clusters exist under non-ionizing conditions, and amino acid phase behavior is far more prebiotically relevant. The recent observation that partial sublimation of a 10% enantioenriched sample of leucine results in up to 82% enrichment in the sublimate shows that enantioenrichment of amino acids could occur in space. Partial sublimation processes can take place on the surface of meteors, where large variations in temperature exist. This finding may have consequences for the development of the Mars Organic Detector, scheduled for launch in 2013, which aims to recover trace amounts of amino acids from the Mars surface by exactly such a sublimation technique. A high asymmetric amplification of the enantiomeric excess of sugars is also present in the amino acid-catalyzed asymmetric formation of carbohydrates.

One classic study involves an experiment that takes place in the laboratory. When sodium chlorate is allowed to crystallize from water and the collected crystals are examined in a polarimeter, each crystal turns out to be chiral, of either the L form or the D form. In an ordinary experiment the amount of L crystals collected equals the amount of D crystals (corrected for statistical effects). However, when the sodium chlorate solution is stirred during the crystallization process, the crystals are either exclusively L or exclusively D. In 32 consecutive crystallization experiments, 14 experiments delivered D-crystals and the other 18 L-crystals. The explanation for this symmetry breaking is unclear but is related to autocatalysis taking place in the nucleation process. In a related experiment, a continuously stirred crystal suspension of a racemic amino acid derivative results in a 100% crystal phase of one of the enantiomers, because the enantiomeric pair is able to equilibrate in solution (compare with dynamic kinetic resolution).

Transmission
Once a significant enantiomeric enrichment has been produced in a system, the transfer of chirality through the entire system is customary. This last step is known as the chiral transmission step. Many strategies in asymmetric synthesis are built on chiral transmission.
Especially important is the so-called organocatalysis of organic reactions by proline, for example in Mannich reactions. Some proposed models for the transmission of chiral asymmetry are polymerization, epimerization and copolymerization. Optical resolution in racemic amino acids There exists no theory elucidating correlations among L-amino acids. If one takes, for example, alanine, which has a small methyl group, and phenylalanine, which has a larger benzyl group, a simple question is in what respect L-alanine resembles L-phenylalanine more than D-phenylalanine, and what kind of mechanism causes the selection of all L-amino acids, because it might equally have been possible that alanine was L and phenylalanine was D. It was reported in 2004 that excess racemic D,L-asparagine (Asn), which spontaneously forms crystals of either isomer during recrystallization, induces asymmetric resolution of a co-existing racemic amino acid such as arginine (Arg), aspartic acid (Asp), glutamine (Gln), histidine (His), leucine (Leu), methionine (Met), phenylalanine (Phe), serine (Ser), valine (Val), tyrosine (Tyr), and tryptophan (Trp). The enantiomeric excess of these amino acids was correlated almost linearly with that of the inducer, i.e., Asn. When recrystallizations from a mixture of 12 D,L-amino acids (Ala, Asp, Arg, Glu, Gln, His, Leu, Met, Ser, Val, Phe, and Tyr) and excess D,L-Asn were made, all amino acids with the same configuration as Asn were preferentially co-crystallized. Whether the enrichment took place in L- or D-Asn was a matter of chance; however, once the selection was made, the co-existing amino acid with the same configuration at the α-carbon was preferentially involved because of the thermodynamic stability of the crystal formation. The maximal ee was reported to be 100%. Based on these results, it is proposed that a mixture of racemic amino acids causes spontaneous and effective optical resolution, even though asymmetric synthesis of a single amino acid does not occur without the aid of an optically active molecule. This was the first study to explain reasonably, with experimental evidence, the formation of chirality from racemic amino acids. History of term This term was introduced by Kelvin in 1904, the year that he published his Baltimore Lecture of 1884. Kelvin used the term homochirality as a relationship between two molecules, i.e. two molecules are homochiral if they have the same chirality. Recently, however, homochiral has been used in the same sense as enantiomerically pure. This is permitted in some journals (but not encouraged); in these journals its meaning shifts to the preference of a process or system for a single optical isomer in a pair of isomers. See also Chiral life concept - artificially synthesizing a chiral-mirror version of life CIP system Stereochemistry Pfeiffer Effect Unsolved problems in chemistry References Further reading External links Observations Support Homochirality Theory. Photonics TechnologyWorld November 1998. Origins of Homochirality. Conference in Nordita Stockholm, February 2008. Origin of life Stereochemistry Pharmacology
Homochirality
[ "Physics", "Chemistry", "Biology" ]
3,714
[ "Pharmacology", "Origin of life", "Biochemistry", "Stereochemistry", "Chirality", "Space", "Medicinal chemistry", "Asymmetry", "nan", "Spacetime", "Symmetry", "Biological hypotheses" ]
4,107,858
https://en.wikipedia.org/wiki/Semiconductor%20fabrication%20plant
In the microelectronics industry, a semiconductor fabrication plant, also called a fab or a foundry, is a factory where integrated circuits (ICs) are manufactured. The cleanroom is where all fabrication takes place and contains the machinery for integrated circuit production such as steppers and/or scanners for photolithography, etching, cleaning, and doping. All these devices are extremely precise and thus extremely expensive. Prices for pieces of equipment for the processing of 300 mm wafers run upwards of $4,000,000 each, with a few pieces of equipment reaching as high as $340,000,000 (e.g. EUV scanners). A typical fab will have several hundred equipment items. Semiconductor fabrication requires many expensive devices. Estimates put the cost of building a new fab at over one billion U.S. dollars, with values as high as $3–4 billion not being uncommon. For example, TSMC invested $9.3 billion in its Fab15 in Taiwan. The same company's estimates suggest that a future fab might cost $20 billion. A foundry model emerged in the 1990s: companies owning fabs that produced their own designs were known as integrated device manufacturers (IDMs). Companies that outsourced manufacturing of their designs were termed fabless semiconductor companies. Those foundries which did not create their own designs were called pure-play semiconductor foundries. In the cleanroom, the environment is controlled to eliminate all dust, since even a single speck can ruin a microcircuit, which has nanoscale features much smaller than dust particles. The cleanroom must also be damped against vibration to enable nanometer-scale alignment of photolithography machines and must be kept within narrow bands of temperature and humidity. Vibration control may be achieved by using deep piles in the cleanroom's foundation that anchor the cleanroom to the bedrock, careful selection of the construction site, and/or using vibration dampers. Controlling temperature and humidity is critical for minimizing static electricity. Corona discharge sources can also be used to reduce static electricity. Often, a fab will be constructed in the following manner (from top to bottom): the roof, which may contain air handling equipment that draws, purifies and cools outside air; an air plenum for distributing the air to several fan filter units, which also form part of the cleanroom's ceiling; the cleanroom itself, which may or may not have more than one story; a return air plenum; the clean subfab, which may contain support equipment for the machines in the cleanroom such as chemical delivery, purification, recycling and destruction systems; and the ground floor, which may contain electrical equipment. Fabs also often have some office space. History Typically an advance in chip-making technology requires a completely new fab to be built. In the past, the equipment to outfit a fab was not very expensive and there were a huge number of smaller fabs producing chips in small quantities. However, the cost of the most up-to-date equipment has since grown to the point where a new fab can cost several billion dollars. Another side effect of the cost has been the challenge to make use of older fabs. For many companies these older fabs are useful for producing designs for unique markets, such as embedded processors, flash memory, and microcontrollers. However, for companies with more limited product lines, it is often best to either rent out the fab or close it entirely.
This is because upgrading an existing fab to produce devices requiring newer technology tends to cost more than building a completely new fab. There has been a trend to produce ever larger wafers, so each process step is performed on more and more chips at once. The goal is to spread production costs (chemicals, fab time) over a larger number of saleable chips. It is impossible (or at least impracticable) to retrofit machinery to handle larger wafers. This is not to say that foundries using smaller wafers are necessarily obsolete; older foundries can be cheaper to operate, have higher yields for simple chips and still be productive. The industry was aiming to move from the state-of-the-art wafer size of 300 mm (12 in) to 450 mm by 2018. In March 2014, Intel expected 450 mm deployment by 2020. But in 2016, the corresponding joint research efforts were stopped. Additionally, there is a large push to completely automate the production of semiconductor chips from beginning to end. This is often referred to as the "lights-out fab" concept. The International Sematech Manufacturing Initiative (ISMI), an extension of the US consortium SEMATECH, is sponsoring the "300 mm Prime" initiative. An important goal of this initiative is to enable fabs to produce greater quantities of smaller chips as a response to shorter lifecycles seen in consumer electronics. The logic is that such a fab can produce smaller lots more easily and can efficiently switch its production to supply chips for a variety of new electronic devices. Another important goal is to reduce the waiting time between processing steps. See also Foundry model for the business aspects of foundries and fabless companies List of semiconductor fabrication plants Rock's law Semiconductor consolidation Semiconductor device fabrication for the process of manufacturing devices Notes References Handbook of Semiconductor Manufacturing Technology, Second Edition by Robert Doering and Yoshio Nishi (Hardcover – Jul 9, 2007) Semiconductor Manufacturing Technology by Michael Quirk and Julian Serda (paperback – Nov 19, 2000) Fundamentals of Semiconductor Manufacturing and Process Control by Gary S. May and Costas J. Spanos (hardcover – May 22, 2006) The Essential Guide to Semiconductors (Essential Guide Series) by Jim Turley (paperback – Dec 29, 2002) Semiconductor Manufacturing Handbook (McGraw–Hill Handbooks) by Hwaiyu Geng (hardcover – April 27, 2005) Further reading "Chip Makers Watch Their Waste", The Wall Street Journal, July 19, 2007, p.B3 Semiconductor device fabrication Manufacturing plants
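As a rough illustration of the wafer-size economics discussed in the History section above, the sketch below applies a commonly quoted first-order estimate of gross dies per wafer; the 100 mm² die size is an arbitrary assumption, and the formula ignores yield, scribe lines, and edge exclusion.

```python
import math

def gross_dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """First-order estimate: pi*(d/2)^2/S minus an edge-loss term pi*d/sqrt(2S)."""
    d, s = wafer_diameter_mm, die_area_mm2
    return int(math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s))

DIE_AREA = 100.0  # mm^2, illustrative mid-sized logic die
for d in (200, 300, 450):
    print(f"{d} mm wafer: ~{gross_dies_per_wafer(d, DIE_AREA)} dies")
# Moving from 300 mm to 450 mm more than doubles the dies per wafer pass,
# which is why each process step then touches many more saleable chips.
```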
Semiconductor fabrication plant
[ "Materials_science" ]
1,252
[ "Semiconductor device fabrication", "Microtechnology" ]
4,109,042
https://en.wikipedia.org/wiki/Cell%20signaling
In biology, cell signaling (cell signalling in British English) is the process by which a cell interacts with itself, other cells, and the environment. Cell signaling is a fundamental property of all cellular life in prokaryotes and eukaryotes. Typically, the signaling process involves three components: the signal, the receptor, and the effector. In biology, signals are mostly chemical in nature, but can also be physical cues such as pressure, voltage, temperature, or light. Chemical signals are molecules with the ability to bind and activate a specific receptor. These molecules, also referred to as ligands, are chemically diverse, including ions (e.g. Na+, K+, Ca2+, etc.), lipids (e.g. steroids, prostaglandins), peptides (e.g. insulin, ACTH), carbohydrates, glycosylated proteins (proteoglycans), nucleic acids, etc. Peptide and lipid ligands are particularly important, as most hormones belong to these classes of chemicals. Peptides are usually polar, hydrophilic molecules. As such, they are unable to diffuse freely across the lipid bilayer of the plasma membrane, so their action is mediated by a cell membrane bound receptor. On the other hand, liposoluble chemicals such as steroid hormones can diffuse passively across the plasma membrane and interact with intracellular receptors. Cell signaling can occur over short or long distances, and can be further classified as autocrine, intracrine, juxtacrine, paracrine, or endocrine. Autocrine signaling occurs when the chemical signal acts on the same cell that produced the signaling chemical. Intracrine signaling occurs when the chemical signal produced by a cell acts on receptors located in the cytoplasm or nucleus of the same cell. Juxtacrine signaling occurs between physically adjacent cells. Paracrine signaling occurs between nearby cells. Endocrine interaction occurs between distant cells, with the chemical signal usually carried by the blood. Receptors are complex proteins or tightly bound multimers of proteins, located in the plasma membrane or within the interior of the cell such as in the cytoplasm, organelles, and nucleus. Receptors have the ability to detect a signal either by binding to a specific chemical or by undergoing a conformational change when interacting with physical agents. It is the specificity of the chemical interaction between a given ligand and its receptor that confers the ability to trigger a specific cellular response. Receptors can be broadly classified into cell membrane receptors and intracellular receptors. Cell membrane receptors can be further classified into ion channel linked receptors, G-protein coupled receptors, and enzyme linked receptors. Ion channel receptors are large transmembrane proteins with a ligand-activated gate function. When these receptors are activated, they may allow or block passage of specific ions across the cell membrane. Most receptors activated by physical stimuli such as pressure or temperature belong to this category. G-protein coupled receptors are multimeric proteins embedded within the plasma membrane. These receptors have extracellular, transmembrane and intracellular domains. The extracellular domain is responsible for the interaction with a specific ligand. The intracellular domain is responsible for the initiation of a cascade of chemical reactions which ultimately triggers the specific cellular function controlled by the receptor.
Enzyme-linked receptors are transmembrane proteins with an extracellular domain responsible for binding a specific ligand and an intracellular domain with enzymatic or catalytic activity. Upon activation, the enzymatic portion is responsible for promoting specific intracellular chemical reactions. Intracellular receptors have a different mechanism of action. They usually bind to lipid-soluble ligands that diffuse passively through the plasma membrane, such as steroid hormones. These ligands bind to specific cytoplasmic transporters that shuttle the hormone-transporter complex inside the nucleus, where specific genes are activated and the synthesis of specific proteins is promoted. The effector component of the signaling pathway begins with signal transduction. In this process, the signal, by interacting with the receptor, starts a series of molecular events within the cell leading to the final effect of the signaling process. Typically the final effect consists of the activation of an ion channel (ligand-gated ion channel) or the initiation of a second messenger system cascade that propagates the signal through the cell. Second messenger systems can amplify or modulate a signal: activation of a few receptors results in multiple secondary messengers being activated, thereby amplifying the initial signal (the first messenger). The downstream effects of these signaling pathways may include additional enzymatic activities such as proteolytic cleavage, phosphorylation, methylation, and ubiquitinylation. Signaling molecules can be synthesized by various biosynthetic pathways and released through passive or active transport, or even from cell damage. Each cell is programmed to respond to specific extracellular signal molecules; this responsiveness is the basis of development, tissue repair, immunity, and homeostasis. Errors in signaling interactions may cause diseases such as cancer, autoimmunity, and diabetes. Taxonomic range In many small organisms such as bacteria, quorum sensing enables individuals to begin an activity only when the population is sufficiently large. This signaling between cells was first observed in the marine bacterium Aliivibrio fischeri, which produces light when the population is dense enough. The mechanism involves the production and detection of a signaling molecule, and the regulation of gene transcription in response. Quorum sensing operates in both gram-positive and gram-negative bacteria, and both within and between species. In slime molds, individual cells aggregate together to form fruiting bodies and eventually spores, under the influence of a chemical signal, known as an acrasin. The individuals move by chemotaxis, i.e. they are attracted by the chemical gradient. Some species use cyclic AMP as the signal; others such as Polysphondylium violaceum use a dipeptide known as glorin. In plants and animals, signaling between cells occurs either through release into the extracellular space, divided in paracrine signaling (over short distances) and endocrine signaling (over long distances), or by direct contact, known as juxtacrine signaling, such as notch signaling. Autocrine signaling is a special case of paracrine signaling where the secreting cell has the ability to respond to the secreted signaling molecule. Synaptic signaling is a special case of paracrine signaling (for chemical synapses) or juxtacrine signaling (for electrical synapses) between neurons and target cells.
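The distance-based vocabulary introduced above (autocrine, intracrine, juxtacrine, paracrine, endocrine) can be summarized as a small decision procedure. The sketch below is only a didactic toy in Python; the attribute names are invented for illustration, and real classification is not this clean.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    same_cell: bool          # acts on the cell that produced it?
    stays_inside: bool       # never secreted from that cell?
    requires_contact: bool   # needs physically adjacent cells?
    via_bloodstream: bool    # carried to distant targets by the blood?

def classify(s: Signal) -> str:
    # Order of checks mirrors the definitions given in the text above.
    if s.same_cell and s.stays_inside:
        return "intracrine"
    if s.same_cell:
        return "autocrine"
    if s.requires_contact:
        return "juxtacrine"
    if s.via_bloodstream:
        return "endocrine"
    return "paracrine"  # nearby cells reached by short-range diffusion

print(classify(Signal(False, False, False, True)))   # endocrine
print(classify(Signal(True, False, False, False)))   # autocrine
```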
Extracellular signal Synthesis and release Many cell signals are carried by molecules that are released by one cell and move to make contact with another cell. Signaling molecules can belong to several chemical classes: lipids, phospholipids, amino acids, monoamines, proteins, glycoproteins, or gases. Signaling molecules binding surface receptors are generally large and hydrophilic (e.g. TRH, vasopressin, acetylcholine), while those entering the cell are generally small and hydrophobic (e.g. glucocorticoids, thyroid hormones, cholecalciferol, retinoic acid), but important exceptions to both are numerous, and the same molecule can act both via surface receptors or in an intracrine manner to different effects. In animal cells, specialized cells release these hormones and send them through the circulatory system to other parts of the body. They then reach target cells, which can recognize and respond to the hormones and produce a result. This is also known as endocrine signaling. Plant growth regulators, or plant hormones, move through cells or by diffusing through the air as a gas to reach their targets. Hydrogen sulfide is produced in small amounts by some cells of the human body and has a number of biological signaling functions. Only two other such gases are currently known to act as signaling molecules in the human body: nitric oxide and carbon monoxide. Exocytosis Exocytosis is the process by which a cell transports molecules such as neurotransmitters and proteins out of the cell. As an active transport mechanism, exocytosis requires the use of energy to transport material. Exocytosis and its counterpart, endocytosis, the process that brings substances into the cell, are used by all cells because most chemical substances important to them are large polar molecules that cannot pass through the hydrophobic portion of the cell membrane by passive transport. Exocytosis is the process by which a large amount of molecules are released; thus it is a form of bulk transport. Exocytosis occurs via secretory portals at the cell plasma membrane called porosomes. Porosomes are permanent cup-shaped lipoprotein structures at the cell plasma membrane, where secretory vesicles transiently dock and fuse to release intra-vesicular contents from the cell. In exocytosis, membrane-bound secretory vesicles are carried to the cell membrane, where they dock and fuse at porosomes and their contents (i.e., water-soluble molecules) are secreted into the extracellular environment. This secretion is possible because the vesicle transiently fuses with the plasma membrane. In the context of neurotransmission, neurotransmitters are typically released from synaptic vesicles into the synaptic cleft via exocytosis; however, neurotransmitters can also be released via reverse transport through membrane transport proteins. Forms of Cell Signaling Autocrine Autocrine signaling involves a cell secreting a hormone or chemical messenger (called the autocrine agent) that binds to autocrine receptors on that same cell, leading to changes in the cell itself. This can be contrasted with paracrine signaling, intracrine signaling, or classical endocrine signaling. Intracrine In intracrine signaling, the signaling chemicals are produced inside the cell and bind to cytosolic or nuclear receptors without being secreted from the cell; the signal is thus relayed entirely within the cell that produces it.
The intracrine signals not being secreted outside of the cell is what sets apart intracrine signaling from the other cell signaling mechanisms such as autocrine signaling. In both autocrine and intracrine signaling, the signal has an effect on the cell that produced it. Juxtacrine Juxtacrine signaling is a type of cell–cell or cell–extracellular matrix signaling in multicellular organisms that requires close contact. There are three types: A membrane ligand (protein, oligosaccharide, lipid) and a membrane protein of two adjacent cells interact. A communicating junction links the intracellular compartments of two adjacent cells, allowing transit of relatively small molecules. An extracellular matrix glycoprotein and a membrane protein interact. Additionally, in unicellular organisms such as bacteria, juxtacrine signaling means interactions by membrane contact. Juxtacrine signaling has been observed for some growth factors, cytokine and chemokine cellular signals, playing an important role in the immune response. Juxtacrine signaling via direct membrane contacts is also present between neuronal cell bodies and motile processes of microglia, both during development and in the adult brain. Paracrine In paracrine signaling, a cell produces a signal to induce changes in nearby cells, altering the behaviour of those cells. Signaling molecules known as paracrine factors diffuse over a relatively short distance (local action), as opposed to cell signaling by endocrine factors, hormones which travel considerably longer distances via the circulatory system; juxtacrine interactions; and autocrine signaling. Cells that produce paracrine factors secrete them into the immediate extracellular environment. Factors then travel to nearby cells, in which the gradient of factor received determines the outcome. However, the exact distance that paracrine factors can travel is not certain. Paracrine signals such as retinoic acid target only cells in the vicinity of the emitting cell. Neurotransmitters represent another example of a paracrine signal. Some signaling molecules can function as both a hormone and a neurotransmitter. For example, epinephrine and norepinephrine can function as hormones when released from the adrenal gland and transported to the heart by way of the blood stream. Norepinephrine can also be produced by neurons to function as a neurotransmitter within the brain. Estrogen can be released by the ovary and function as a hormone or act locally via paracrine or autocrine signaling. Although paracrine signaling elicits a diverse array of responses in the induced cells, most paracrine factors utilize a relatively streamlined set of receptors and pathways. In fact, different organs in the body - even between different species - are known to utilize similar sets of paracrine factors in differential development. The highly conserved receptors and pathways can be organized into four major families based on similar structures: the fibroblast growth factor (FGF) family, the Hedgehog family, the Wnt family, and the TGF-β superfamily. Binding of a paracrine factor to its respective receptor initiates signal transduction cascades, eliciting different responses. Endocrine Endocrine signals are called hormones. Hormones are produced by endocrine cells and they travel through the blood to reach all parts of the body. Specificity of signaling can be controlled if only some cells can respond to a particular hormone.
Endocrine signaling involves the release of hormones by internal glands of an organism directly into the circulatory system, regulating distant target organs. In vertebrates, the hypothalamus is the neural control center for all endocrine systems. In humans, the major endocrine glands are the thyroid gland and the adrenal glands. The study of the endocrine system and its disorders is known as endocrinology. Receptors Cells receive information from their neighbors through a class of proteins known as receptors. Receptors may bind with some molecules (ligands) or may interact with physical agents like light, temperature, mechanical pressure, etc. Reception occurs when the target cell (any cell with a receptor protein specific to the signal molecule) detects a signal, usually in the form of a small, water-soluble molecule, via binding to a receptor protein on the cell surface; alternatively, once inside the cell, the signaling molecule can bind to intracellular receptors or other elements, or stimulate enzyme activity (e.g. gases), as in intracrine signaling. Signaling molecules interact with a target cell as a ligand to cell surface receptors, and/or by entering into the cell through its membrane or endocytosis for intracrine signaling. This generally results in the activation of second messengers, leading to various physiological effects. In many mammals, early embryo cells exchange signals with cells of the uterus. In the human gastrointestinal tract, bacteria exchange signals with each other and with human epithelial and immune system cells. During mating, some cells of the yeast Saccharomyces cerevisiae send a peptide signal (mating factor pheromones) into their environment. The mating factor peptide may bind to a cell surface receptor on other yeast cells and induce them to prepare for mating. Cell surface receptors Cell surface receptors play an essential role in the biological systems of single- and multi-cellular organisms, and malfunction or damage to these proteins is associated with cancer, heart disease, and asthma. These trans-membrane receptors are able to transmit information from outside the cell to the inside because they change conformation when a specific ligand binds to them. There are three major types: ion channel linked receptors, G protein–coupled receptors, and enzyme-linked receptors. Ion channel linked receptors Ion channel linked receptors are a group of transmembrane ion-channel proteins which open to allow ions such as Na+, K+, Ca2+, and/or Cl− to pass through the membrane in response to the binding of a chemical messenger (i.e. a ligand), such as a neurotransmitter. When a presynaptic neuron is excited, it releases a neurotransmitter from vesicles into the synaptic cleft. The neurotransmitter then binds to receptors located on the postsynaptic neuron. If these receptors are ligand-gated ion channels, a resulting conformational change opens the ion channels, which leads to a flow of ions across the cell membrane. This, in turn, results in either a depolarization, for an excitatory receptor response, or a hyperpolarization, for an inhibitory response. These receptor proteins are typically composed of at least two different domains: a transmembrane domain which includes the ion pore, and an extracellular domain which includes the ligand binding location (an allosteric binding site). This modularity has enabled a 'divide and conquer' approach to finding the structure of the proteins (crystallising each domain separately).
The function of such receptors located at synapses is to convert the chemical signal of presynaptically released neurotransmitter directly and very quickly into a postsynaptic electrical signal. Many LICs are additionally modulated by allosteric ligands, by channel blockers, ions, or the membrane potential. LICs are classified into three superfamilies which lack evolutionary relationship: cys-loop receptors, ionotropic glutamate receptors and ATP-gated channels. G protein–coupled receptors G protein-coupled receptors are a large group of evolutionarily related proteins that are cell surface receptors that detect molecules outside the cell and activate cellular responses. They couple with G proteins and are called seven-transmembrane receptors because they pass through the cell membrane seven times. The G protein acts as a "middle man" transferring the signal from its activated receptor to its target, and therefore indirectly regulates that target protein. Ligands can bind either to the extracellular N-terminus and loops (e.g. glutamate receptors) or to the binding site within transmembrane helices (Rhodopsin-like family). They are all activated by agonists, although a spontaneous auto-activation of an empty receptor can also be observed. G protein-coupled receptors are found only in eukaryotes, including yeast, choanoflagellates, and animals. The ligands that bind and activate these receptors include light-sensitive compounds, odors, pheromones, hormones, and neurotransmitters, and vary in size from small molecules to peptides to large proteins. G protein-coupled receptors are involved in many diseases. There are two principal signal transduction pathways involving the G protein-coupled receptors: the cAMP signal pathway and the phosphatidylinositol signal pathway. When a ligand binds to the GPCR, it causes a conformational change in the GPCR, which allows it to act as a guanine nucleotide exchange factor (GEF). The GPCR can then activate an associated G protein by exchanging the GDP bound to the G protein for a GTP. The G protein's α subunit, together with the bound GTP, can then dissociate from the β and γ subunits to further affect intracellular signaling proteins or target functional proteins directly, depending on the α subunit type (Gαs, Gαi/o, Gαq/11, Gα12/13). G protein-coupled receptors are an important drug target, and approximately 34% of all Food and Drug Administration (FDA) approved drugs target 108 members of this family. The global sales volume for these drugs is estimated to be 180 billion US dollars. It is estimated that GPCRs are targets for about 50% of drugs currently on the market, mainly due to their involvement in signaling pathways related to many diseases, i.e. mental and metabolic disorders (including endocrinological disorders), immunological disorders (including viral infections), cardiovascular and inflammatory diseases, disorders of the senses, and cancer. The long-known association between GPCRs and many endogenous and exogenous substances, resulting in e.g. analgesia, is another dynamically developing field of pharmaceutical research. Enzyme-linked receptors Enzyme-linked receptors (or catalytic receptors) are transmembrane receptors that, upon activation by an extracellular ligand, cause enzymatic activity on the intracellular side. Hence a catalytic receptor is an integral membrane protein possessing both catalytic and receptor functions.
They have two important domains: an extracellular ligand-binding domain and an intracellular domain, which has a catalytic function; and a single transmembrane helix. The signaling molecule binds to the receptor on the outside of the cell and causes a conformational change in the catalytic function located on the receptor inside the cell. Examples of the enzymatic activity include: Receptor tyrosine kinase, as in fibroblast growth factor receptor. Most enzyme-linked receptors are of this type. Serine/threonine-specific protein kinase, as in bone morphogenetic protein Guanylate cyclase, as in atrial natriuretic factor receptor Intracellular receptors Intracellular receptors exist freely in the cytoplasm or nucleus, or can be bound to organelles or membranes. For example, the presence of nuclear and mitochondrial receptors is well documented. The binding of a ligand to the intracellular receptor typically induces a response in the cell. Intracellular receptors often have a level of specificity; this allows the receptors to initiate certain responses when bound to a corresponding ligand. Intracellular receptors typically act on lipid-soluble molecules. The receptors bind to a group of DNA-binding proteins. Upon binding, the receptor-ligand complex translocates to the nucleus, where it can alter patterns of gene expression. Steroid hormone receptor Steroid hormone receptors are found in the nucleus, cytosol, and also on the plasma membrane of target cells. They are generally intracellular receptors (typically cytoplasmic or nuclear) and initiate signal transduction for steroid hormones which lead to changes in gene expression over a time period of hours to days. The best studied steroid hormone receptors are members of the nuclear receptor subfamily 3 (NR3), which includes receptors for estrogen (group NR3A) and 3-ketosteroids (group NR3C). In addition to nuclear receptors, several G protein-coupled receptors and ion channels act as cell surface receptors for certain steroid hormones. Mechanisms of Receptor Down-Regulation Receptor-mediated endocytosis is a common way of turning receptors "off". Endocytic down-regulation is regarded as a means for reducing receptor signaling. The process involves the binding of a ligand to the receptor, which then triggers the formation of coated pits; the coated pits transform into coated vesicles and are transported to the endosome. Receptor phosphorylation is another type of receptor down-regulation. Biochemical changes can reduce receptor affinity for a ligand. Reduced receptor sensitivity results from receptors being occupied for a long time; this leads to receptor adaptation, in which the receptor no longer responds to the signaling molecule. Many receptors have the ability to change in response to ligand concentration. Signal transduction pathways When binding to the signaling molecule, the receptor protein changes in some way and starts the process of transduction, which can occur in a single step or as a series of changes in a sequence of different molecules (called a signal transduction pathway). The molecules that compose these pathways are known as relay molecules. The multistep process of the transduction stage is often composed of the activation of proteins by addition or removal of phosphate groups, or even the release of other small molecules or ions that can act as messengers. The amplification of the signal is one of the benefits of this multistep sequence.
Other benefits include more opportunities for regulation than simpler systems have, and fine-tuning of the response, in both unicellular and multicellular organisms. In some cases, receptor activation caused by ligand binding to a receptor is directly coupled to the cell's response to the ligand. For example, the neurotransmitter GABA can activate a cell surface receptor that is part of an ion channel. GABA binding to a GABAA receptor on a neuron opens a chloride-selective ion channel that is part of the receptor. GABAA receptor activation allows negatively charged chloride ions to move into the neuron, which inhibits the ability of the neuron to produce action potentials. However, for many cell surface receptors, ligand-receptor interactions are not directly linked to the cell's response. The activated receptor must first interact with other proteins inside the cell before the ultimate physiological effect of the ligand on the cell's behavior is produced. Often, the behavior of a chain of several interacting cell proteins is altered following receptor activation. The entire set of cell changes induced by receptor activation is called a signal transduction mechanism or pathway. A more complex signal transduction pathway is the MAPK/ERK pathway, which involves changes of protein–protein interactions inside the cell, induced by an external signal. Many growth factors bind to receptors at the cell surface and stimulate cells to progress through the cell cycle and divide. Several of these receptors are kinases that start to phosphorylate themselves and other proteins when binding to a ligand. This phosphorylation can generate a binding site for a different protein and thus induce protein–protein interaction. In this case, the ligand (called epidermal growth factor, or EGF) binds to the receptor (called EGFR). This activates the receptor to phosphorylate itself. The phosphorylated receptor binds to an adaptor protein (GRB2), which couples the signal to further downstream signaling processes. For example, one of the signal transduction pathways that are activated is called the mitogen-activated protein kinase (MAPK) pathway. The signal transduction component labeled as "MAPK" in the pathway was originally called "ERK," so the pathway is called the MAPK/ERK pathway. The MAPK protein is an enzyme, a protein kinase that can attach phosphate to target proteins such as the transcription factor MYC and, thus, alter gene transcription and, ultimately, cell cycle progression. Many cellular proteins are activated downstream of the growth factor receptors (such as EGFR) that initiate this signal transduction pathway. Some signaling transduction pathways respond differently depending on the amount of signaling received by the cell. For instance, the hedgehog protein activates different genes depending on the amount of hedgehog protein present. Complex multi-component signal transduction pathways provide opportunities for feedback, signal amplification, and interactions inside one cell between multiple signals and signaling pathways. A specific cellular response is the result of the transduced signal in the final stage of cell signaling. This response can essentially be any cellular activity that is present in a body. It can spur the rearrangement of the cytoskeleton, or even catalysis by an enzyme. These three steps of cell signaling all ensure that the right cells are behaving as told, at the right time, and in synchronization with other cells and their own functions within the organism.
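The signal amplification described above can be made concrete with a toy calculation: if each tier of a three-tier kinase cascade (Raf → MEK → ERK in the MAPK/ERK pathway) lets one active kinase activate many substrate molecules, the gain multiplies through the tiers. The per-tier turnover numbers below are illustrative assumptions, not measured values.

```python
def cascade_output(activated_receptors: int, turnovers=(100, 100, 100)) -> int:
    """Multiply the signal through successive kinase tiers."""
    active = activated_receptors
    for tier, k in enumerate(turnovers, start=1):
        active *= k  # each active molecule activates ~k downstream molecules
        print(f"tier {tier}: {active:,} active molecules")
    return active

cascade_output(10)  # 10 bound receptors -> 10,000,000 active molecules at tier 3
```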
Ultimately, the end of a signaling pathway leads to the regulation of a cellular activity. This response can take place in the nucleus or in the cytoplasm of the cell. A majority of signaling pathways control protein synthesis by turning certain genes on and off in the nucleus. In unicellular organisms such as bacteria, signaling can be used to 'activate' peers from a dormant state, enhance virulence, defend against bacteriophages, etc. In quorum sensing, which is also found in social insects, the multiplicity of individual signals has the potential to create a positive feedback loop, generating a coordinated response. In this context, the signaling molecules are called autoinducers. This signaling mechanism may have been involved in the evolution from unicellular to multicellular organisms. Bacteria also use contact-dependent signaling, notably to limit their growth. Signaling molecules used by multicellular organisms are often called pheromones. They can have such purposes as alerting against danger, indicating food supply, or assisting in reproduction. Notch signaling pathway Notch is a cell surface protein that functions as a receptor. Animals have a small set of genes that code for signaling proteins that interact specifically with Notch receptors and stimulate a response in cells that express Notch on their surface. Molecules that activate (or, in some cases, inhibit) receptors can be classified as hormones, neurotransmitters, cytokines, and growth factors, in general called receptor ligands. Ligand-receptor interactions such as the Notch receptor interaction are known to be the main interactions responsible for cell signaling mechanisms and communication. Notch acts as a receptor for ligands that are expressed on adjacent cells. While some receptors are cell-surface proteins, others are found inside cells. For example, estrogen is a hydrophobic molecule that can pass through the lipid bilayer of the membranes. As part of the endocrine system, intracellular estrogen receptors from a variety of cell types can be activated by estrogen produced in the ovaries. In the case of Notch-mediated signaling, the signal transduction mechanism can be relatively simple. As shown in Figure 2, the activation of Notch can cause the Notch protein to be altered by a protease. Part of the Notch protein is released from the cell surface membrane and takes part in gene regulation. Cell signaling research involves studying the spatial and temporal dynamics of both receptors and the components of signaling pathways that are activated by receptors in various cell types. Emerging methods for single-cell mass-spectrometry analysis promise to enable studying signal transduction with single-cell resolution. In Notch signaling, direct contact between cells allows for precise control of cell differentiation during embryonic development. In the worm Caenorhabditis elegans, two cells of the developing gonad each have an equal chance of terminally differentiating or becoming a uterine precursor cell that continues to divide. The choice of which cell continues to divide is controlled by competition of cell surface signals. One cell will happen to produce more of a cell surface protein that activates the Notch receptor on the adjacent cell. This activates a feedback loop or system that reduces Notch expression in the cell that will differentiate and increases Notch on the surface of the cell that continues as a stem cell.
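The two-cell competition just described can be reproduced by a minimal lateral-inhibition model in the style of Collier et al., in which each cell's Notch activity is driven by its neighbour's Delta ligand while high Notch represses the cell's own Delta. The equations and parameter values below are a standard textbook caricature, not a model of any specific experiment.

```python
from scipy.integrate import solve_ivp

# Two-cell lateral inhibition: n_i = Notch activity, d_i = Delta level.
def rhs(t, y, a=0.01, b=100.0):
    n1, d1, n2, d2 = y
    f = lambda x: x**2 / (a + x**2)   # activation by the neighbour's Delta
    g = lambda x: 1 / (1 + b * x**2)  # repression of Delta by own Notch
    return [f(d2) - n1, g(n1) - d1,
            f(d1) - n2, g(n2) - d2]

y0 = [0.2, 0.201, 0.2, 0.2]  # nearly symmetric initial condition
sol = solve_ivp(rhs, (0.0, 60.0), y0, rtol=1e-8)
n1, d1, n2, d2 = sol.y[:, -1]
print(f"cell 1: Notch={n1:.3f}, Delta={d1:.3f}")
print(f"cell 2: Notch={n2:.3f}, Delta={d2:.3f}")
# The tiny initial bias is amplified: one cell ends Notch-high/Delta-low and
# the other Notch-low/Delta-high, mirroring the C. elegans example above.
```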
See also Scaffold protein Biosemiotics Molecular cellular cognition Crosstalk (biology) Bacterial outer membrane vesicles Membrane vesicle trafficking Host–pathogen interaction Retinoic acid JAK-STAT signaling pathway Imd pathway Localisation signal Oscillation Protein dynamics Systems biology Lipid signaling Redox signaling Signaling cascade Cell Signaling Technology – an antibody development and production company Netpath – a curated resource of signal transduction pathways in humans Synthetic Biology Open Language Nanoscale networking – leveraging biological signaling to construct ad hoc in vivo communication networks Soliton model in neuroscience – physical communication via sound waves in membranes Temporal feedback References Further reading "The Inside Story of Cell Communication". learn.genetics.utah.edu. Retrieved 2018-10-20. "When Cell Communication Goes Wrong". learn.genetics.utah.edu. Retrieved 2018-10-24. External links NCI-Nature Pathway Interaction Database: authoritative information about signaling pathways in human cells. Signaling Pathways Project: cell signaling hypothesis generation knowledgebase constructed using biocurated archived transcriptomic and ChIP-Seq datasets Cell biology Cell communication Systems biology Human female endocrine system
Cell signaling
[ "Biology" ]
6,575
[ "Cell communication", "Cell biology", "Cellular processes", "Systems biology" ]
4,110,552
https://en.wikipedia.org/wiki/Lippmann%E2%80%93Schwinger%20equation
The Lippmann–Schwinger equation (named after Bernard Lippmann and Julian Schwinger) is one of the most used equations to describe particle collisions – or, more precisely, scattering – in quantum mechanics. It may be used in scattering of molecules, atoms, neutrons, photons or any other particles and is important mainly in atomic, molecular, and optical physics, nuclear physics and particle physics, but also for seismic scattering problems in geophysics. It relates the scattered wave function with the interaction that produces the scattering (the scattering potential) and therefore allows calculation of the relevant experimental parameters (scattering amplitude and cross sections). The most fundamental equation to describe any quantum phenomenon, including scattering, is the Schrödinger equation. In physical problems, this differential equation must be solved with the input of an additional set of initial and/or boundary conditions for the specific physical system studied. The Lippmann–Schwinger equation is equivalent to the Schrödinger equation plus the typical boundary conditions for scattering problems. In order to embed the boundary conditions, the Lippmann–Schwinger equation must be written as an integral equation. For scattering problems, the Lippmann–Schwinger equation is often more convenient than the original Schrödinger equation. The Lippmann–Schwinger equation's general form is (in reality, two equations are shown below, one for the $+$ sign and the other for the $-$ sign): $|\psi^{\pm}\rangle = |\phi\rangle + \frac{1}{E - H_0 \pm i\epsilon} V |\psi^{\pm}\rangle$. The potential energy $V$ describes the interaction between the two colliding systems. The Hamiltonian $H_0$ describes the situation in which the two systems are infinitely far apart and do not interact. Its eigenfunctions are $|\phi\rangle$ and its eigenvalues are the energies $E$. Finally, $i\epsilon$ is a mathematical technicality necessary for the calculation of the integrals needed to solve the equation. It is a consequence of causality, ensuring that scattered waves consist only of outgoing waves. This is made rigorous by the limiting absorption principle. Usage The Lippmann–Schwinger equation is useful in a very large number of situations involving two-body scattering. For three or more colliding bodies it does not work well because of mathematical limitations; Faddeev equations may be used instead. However, there are approximations that can reduce a many-body problem to a set of two-body problems in a variety of cases. For example, in a collision between electrons and molecules, there may be tens or hundreds of particles involved. But the phenomenon may be reduced to a two-body problem by describing all the molecule constituent particle potentials together with a pseudopotential. In these cases, the Lippmann–Schwinger equations may be used. Of course, a main motivation of these approaches is also the possibility of doing the calculations with much lower computational effort. Derivation We will assume that the Hamiltonian may be written as $H = H_0 + V$, where $H_0$ is the free Hamiltonian (or more generally, a Hamiltonian with known eigenvectors). For example, in nonrelativistic quantum mechanics $H_0$ may be $H_0 = \frac{p^2}{2m}$. Intuitively $V$ is the interaction energy of the system. Let there be an eigenstate of $H_0$: $H_0 |\phi\rangle = E |\phi\rangle$. Now if we add the interaction $V$ into the mix, the Schrödinger equation reads $(H_0 + V) |\psi\rangle = E |\psi\rangle$. Now consider the Hellmann–Feynman theorem, which requires the energy eigenvalues of the Hamiltonian to change continuously with continuous changes in the Hamiltonian. Therefore, we wish that $|\psi\rangle \to |\phi\rangle$ as $V \to 0$. A naive solution to this equation would be $|\psi\rangle = |\phi\rangle + \frac{1}{E - H_0} V |\psi\rangle$, where the notation $\frac{1}{A}$ denotes the inverse of $A$.
However $E - H_0$ is singular, since $E$ is an eigenvalue of $H_0$. As is described below, this singularity is eliminated in two distinct ways by making the denominator slightly complex, as $\frac{1}{E - H_0 \pm i\epsilon}$: By insertion of a complete set of free particle states, the Schrödinger equation is turned into an integral equation. The "in" and "out" states are assumed to form bases too, in the distant past and distant future respectively having the appearance of free particle states, but being eigenfunctions of the complete Hamiltonian. Thus endowing them with an index, the equation becomes $|\psi^{\pm}\rangle = |\phi\rangle + \frac{1}{E - H_0 \pm i\epsilon} V |\psi^{\pm}\rangle$. Methods of solution From the mathematical point of view the Lippmann–Schwinger equation in coordinate representation is an integral equation of Fredholm type. It can be solved by discretization. Since it is equivalent to the differential time-independent Schrödinger equation with appropriate boundary conditions, it can also be solved by numerical methods for differential equations. In the case of the spherically symmetric potential it is usually solved by partial wave analysis. For high energies and/or weak potential it can also be solved perturbatively by means of the Born series. A method convenient also in the case of many-body physics, as in the description of atomic, nuclear or molecular collisions, is the R-matrix method of Wigner and Eisenbud. Another class of methods is based on separable expansion of the potential or the Green's operator, like the method of continued fractions of Horáček and Sasakawa. A very important class of methods is based on variational principles, for example the Schwinger–Lanczos method, combining the variational principle of Schwinger with the Lanczos algorithm. Interpretation as in and out states The S-matrix paradigm In the S-matrix formulation of particle physics, which was pioneered by John Archibald Wheeler among others, all physical processes are modeled according to the following paradigm. One begins with a non-interacting multiparticle state in the distant past. Non-interacting does not mean that all of the forces have been turned off, in which case for example protons would fall apart, but rather that there exists an interaction-free Hamiltonian $H_0$, for which the bound states have the same energy level spectrum as the actual Hamiltonian $H$. This initial state is referred to as the in state. Intuitively, it consists of elementary particles or bound states that are sufficiently well separated that their interactions with each other are ignored. The idea is that whatever physical process one is trying to study may be modeled as a scattering process of these well separated bound states. This process is described by the full Hamiltonian $H$, but once it's over, all of the new elementary particles and new bound states separate again and one finds a new noninteracting state called the out state. The S-matrix is more symmetric under relativity than the Hamiltonian, because it does not require a choice of time slices to define. This paradigm allows one to calculate the probabilities of all of the processes that we have observed in 70 years of particle collider experiments with remarkable accuracy. But many interesting physical phenomena do not obviously fit into this paradigm. For example, if one wishes to consider the dynamics inside of a neutron star, sometimes one wants to know more than what it will finally decay into. In other words, one may be interested in measurements that are not in the asymptotic future. Sometimes an asymptotic past or future is not even available.
For example, it is very possible that there is no past before the Big Bang. In the 1960s, the S-matrix paradigm was elevated by many physicists to a fundamental law of nature. In S-matrix theory, it was stated that any quantity that one could measure should be found in the S-matrix for some process. This idea was inspired by the physical interpretation that S-matrix techniques could give to Feynman diagrams restricted to the mass-shell, and led to the construction of dual resonance models. But it was very controversial, because it denied the validity of quantum field theory based on local fields and Hamiltonians. The connection to Lippmann–Schwinger Intuitively, the slightly deformed eigenfunctions $\psi^{\pm}$ of the full Hamiltonian $H$ are the in and out states. The $\phi$ are noninteracting states that resemble the in and out states in the infinite past and infinite future. Creating wavepackets This intuitive picture is not quite right, because $\psi^{\pm}$ is an eigenfunction of the Hamiltonian and so at different times only differs by a phase. Thus, in particular, the physical state does not evolve and so it cannot become noninteracting. This problem is easily circumvented by assembling $\phi$ and $\psi^{\pm}$ into wavepackets with some distribution of energies over a characteristic scale $\Delta E$. The uncertainty principle now allows the interactions of the asymptotic states to occur over a timescale $\hbar/\Delta E$, and in particular it is no longer inconceivable that the interactions may turn off outside of this interval. The following argument suggests that this is indeed the case. Plugging the Lippmann–Schwinger equations into the definitions of the $\psi^{\pm}$ and $\phi$ wavepackets we see that, at a given time, the difference between the $\psi^{\pm}$ and $\phi$ wavepackets is given by an integral over the energy $E$. A contour integral This integral may be evaluated by defining the wave function over the complex $E$ plane and closing the $E$ contour using a semicircle on which the wavefunctions vanish. The integral over the closed contour may then be evaluated, using the Cauchy integral theorem, as a sum of the residues at the various poles. We will now argue that the residues of the $\psi^{\pm}$ wavepackets approach those of the $\phi$ wavepackets in the limits $t \to \mp\infty$, and so the corresponding wavepackets are equal at temporal infinity. In fact, for very positive times $t$ the factor $e^{-iEt}$ in a Schrödinger picture state forces one to close the contour on the lower half-plane. The pole in $\epsilon$ from the Lippmann–Schwinger equation reflects the time-uncertainty of the interaction, while that in the wavepacket's weight function reflects the duration of the interaction. Both of these varieties of poles occur at finite imaginary energies and so are suppressed at very large times. The pole in the energy difference in the denominator is on the upper half-plane in the case of $\psi^{-}$, and so does not lie inside the integral contour and does not contribute to the integral. The remainder is equal to the $\phi$ wavepacket. Thus, at very late times $\psi^{-} = \phi$, identifying $\psi^{-}$ as the asymptotic noninteracting out state. Similarly one may integrate the wavepacket corresponding to $\psi^{+}$ at very negative times. In this case the contour needs to be closed over the upper half-plane, which therefore misses the energy pole of $\psi^{+}$, which is in the lower half-plane. One then finds that the $\psi^{+}$ and $\phi$ wavepackets are equal in the asymptotic past, identifying $\psi^{+}$ as the asymptotic noninteracting in state. The complex denominator of Lippmann–Schwinger This identification of the $\psi^{\pm}$'s as asymptotic states is the justification for the $\pm i\epsilon$ in the denominator of the Lippmann–Schwinger equations.
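The role of the $\pm i\epsilon$ prescription in the contour argument above can be checked symbolically. The following sketch (Python with sympy; $E_0$ and $\epsilon$ are symbolic placeholders) computes the residue of a Schrödinger-picture factor at a pole pushed into the lower half-plane, exhibiting the $e^{-\epsilon t}$ damping that suppresses such contributions at large times.

```python
import sympy as sp

E = sp.symbols("E")
t, eps = sp.symbols("t epsilon", positive=True)
E0 = sp.symbols("E_0", real=True)

# Pole shifted below the real axis, as produced by the +i*eps resolvent;
# for t > 0 the factor exp(-I*E*t) forces the contour into the lower half-plane.
integrand = sp.exp(-sp.I * E * t) / (E - E0 + sp.I * eps)
res = sp.residue(integrand, E, E0 - sp.I * eps)
print(sp.simplify(res))  # exp(-I*E_0*t)*exp(-eps*t): damped as t grows
```

The Born series mentioned under "Methods of solution" can also be exercised numerically. For a spherically symmetric potential the first Born term is $f(q) = -\frac{2m}{\hbar^2 q} \int_0^\infty r\, V(r) \sin(qr)\, dr$, and for a Yukawa potential $V(r) = V_0 e^{-\mu r}/r$ it has the closed form $-\frac{2m}{\hbar^2}\frac{V_0}{q^2+\mu^2}$, which gives a check on the quadrature. This is a sketch in units with $2m/\hbar^2 = 1$; the values of $V_0$ and $\mu$ are arbitrary choices.

```python
import numpy as np
from scipy.integrate import quad

V0, mu = -1.0, 1.0  # Yukawa strength and screening parameter (illustrative)

def born_numeric(q: float) -> float:
    integrand = lambda r: r * (V0 * np.exp(-mu * r) / r) * np.sin(q * r)
    value, _ = quad(integrand, 0.0, np.inf)
    return -value / q

def born_analytic(q: float) -> float:
    return -V0 / (q**2 + mu**2)

for q in (0.5, 1.0, 2.0):
    print(f"q={q}: numeric={born_numeric(q):+.6f}  analytic={born_analytic(q):+.6f}")
```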
A formula for the S-matrix The S-matrix $S$ is defined to be the inner product $S_{ab} = \langle \psi_a^{-} | \psi_b^{+} \rangle$ of the $a$th and $b$th Heisenberg picture asymptotic states. One may obtain a formula relating the S-matrix to the potential $V$ using the above contour integral strategy, but this time switching the roles of $\psi^{+}$ and $\psi^{-}$. As a result, the contour now does pick up the energy pole. This can be related to the $\phi$'s if one uses the S-matrix to swap the two $\psi$'s. Identifying the coefficients of the $\phi$'s on both sides of the equation one finds the desired formula relating $S$ to the potential: $S_{ab} = \delta(a - b) - 2\pi i\, \delta(E_a - E_b)\, \langle \phi_a | V | \psi_b^{+} \rangle$. In the Born approximation, corresponding to first order perturbation theory, one replaces this last $\psi^{+}$ with the corresponding eigenfunction $\phi$ of the free Hamiltonian $H_0$, yielding $S_{ab} = \delta(a - b) - 2\pi i\, \delta(E_a - E_b)\, \langle \phi_a | V | \phi_b \rangle$, which expresses the S-matrix entirely in terms of $V$ and free Hamiltonian eigenfunctions. These formulas may in turn be used to calculate the reaction rate of the process $b \to a$, which is equal to $|S_{ab} - \delta(a - b)|^2$. Homogenization With the use of Green's function, the Lippmann–Schwinger equation has counterparts in homogenization theory (e.g. mechanics, conductivity, permittivity). See also Bethe–Salpeter equation References Bibliography Original publications Scattering
Lippmann–Schwinger equation
[ "Physics", "Chemistry", "Materials_science" ]
2,434
[ "Nuclear physics", "Scattering", "Condensed matter physics", "Particle physics" ]
4,110,937
https://en.wikipedia.org/wiki/Penrose%20interpretation
The Penrose interpretation is a speculation by Roger Penrose about the relationship between quantum mechanics and general relativity. Penrose proposes that a quantum state remains in superposition until the difference of space-time curvature attains a significant level. Overview Penrose's idea is inspired by quantum gravity, because it uses both the physical constants ħ and G. It is an alternative to the Copenhagen interpretation, which posits that superposition fails when an observation is made (but that it is non-objective in nature), and the many-worlds interpretation, which states that alternative outcomes of a superposition are equally "real," while their mutual decoherence precludes subsequent observable interactions. Penrose's idea is a type of objective collapse theory. For these theories, the wavefunction is a physical wave, which experiences wave function collapse as a physical process, with observers not having any special role. Penrose theorises that the wave function cannot be sustained in superposition beyond a certain energy difference between the quantum states. He gives an approximate value for this difference: a Planck mass worth of matter, which he calls the "'one-graviton' level". He then hypothesizes that this energy difference causes the wave function to collapse to a single state, with a probability based on its amplitude in the original wave function, a procedure derived from standard quantum mechanics. Penrose's "'one-graviton' level" criterion forms the basis of his prediction, providing an objective criterion for wave function collapse. Despite the difficulties of specifying this in a rigorous way, he proposes that the basis states into which the collapse takes place are mathematically described by the stationary solutions of the Schrödinger–Newton equation. Recent theoretical work indicates an increasingly deep inter-relation between quantum mechanics and gravitation. Physical consequences Accepting that wavefunctions are physically real, Penrose believes that matter can exist in more than one place at one time. In his opinion, a macroscopic system, like a human being, cannot exist in more than one place for a measurable time, as the corresponding energy difference is very large. A microscopic system, like an electron, can exist in more than one location significantly longer (thousands of years), until its space-time curvature separation reaches the collapse threshold. In Einstein's theory, any object that has mass causes a warp in the structure of space and time around it. This warping produces the effect we experience as gravity. Penrose points out that tiny objects, such as dust specks, atoms and electrons, produce space-time warps as well. Ignoring these warps is where most physicists go awry. If a dust speck is in two locations at the same time, each one should create its own distortions in space-time, yielding two superposed gravitational fields. According to Penrose's theory, it takes energy to sustain these dual fields. The stability of a system depends on the amount of energy involved: the higher the energy required to sustain a system, the less stable it is. Over time, an unstable system tends to settle back to its simplest, lowest-energy state: in this case, one object in one location producing one gravitational field. If Penrose is right, gravity yanks objects back into a single location, without any need to invoke observers or parallel universes.
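An order-of-magnitude version of this argument is easy to write down. In the Diósi–Penrose spirit one takes the collapse time as τ ~ ħ/E_G, with E_G the gravitational self-energy of the difference between the superposed mass distributions, crudely G·m²/r for a displacement of about the object's own size r. The sketch below uses that crude estimate; the size scales chosen for the electron and the dust grain are loose assumptions, and different choices shift the answers by many orders of magnitude.

```python
HBAR = 1.054571817e-34  # J*s
G = 6.67430e-11         # m^3 kg^-1 s^-2

def collapse_time_s(mass_kg: float, size_m: float) -> float:
    """tau ~ hbar / E_G with E_G ~ G m^2 / r (order of magnitude only)."""
    return HBAR / (G * mass_kg**2 / size_m)

# Electron, using its classical radius as a (very debatable) size scale:
print(f"electron:   ~{collapse_time_s(9.11e-31, 2.82e-15):.1e} s")
# A ~1 micrometre dust grain of ~1e-15 kg comes out at about a second,
# the timescale quoted for the mirror superposition in the FELIX proposal below.
print(f"dust grain: ~{collapse_time_s(1e-15, 1e-6):.1e} s")
```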
Penrose speculates that the transition between macroscopic and quantum states begins at the scale of dust particles (whose mass is close to a Planck mass). He has proposed an experiment to test this theory, called FELIX (free-orbit experiment with laser interferometry X-rays), in which an X-ray laser in space is directed toward a tiny mirror and split by a beam splitter from tens of thousands of miles away, after which the photons are directed toward other mirrors and reflected back. One photon will strike the tiny mirror while moving toward another mirror and will move the tiny mirror back as it returns, so that, according to conventional quantum theories, the tiny mirror can exist in superposition for a significant period of time. This would prevent any photons from reaching the detector. If Penrose's hypothesis is correct, the mirror's superposition will collapse to one location in about a second, allowing half the photons to reach the detector. However, because this experiment would be difficult to arrange, a table-top version that uses optical cavities to trap the photons long enough to achieve the desired delay has been proposed instead. See also Diósi–Penrose model Interpretations of quantum mechanics Orchestrated objective reduction Gravitational decoherence Schrödinger–Newton equation Stochastic quantum mechanics Relevant books by Roger Penrose The Emperor's New Mind The Road to Reality Shadows of the Mind References External links Molecules – Quantum Interpretations QM – the Penrose Interpretation (Internet Archive) Roger Penrose discusses his experiment on the BBC (25 minutes in) Quantum measurement Interpretations of quantum mechanics
Penrose interpretation
[ "Physics" ]
1,012
[ "Interpretations of quantum mechanics", "Quantum measurement", "Quantum mechanics" ]
4,112,456
https://en.wikipedia.org/wiki/Cefradine
Cefradine (INN) or cephradine (BAN) is a first-generation cephalosporin antibiotic. Indications Respiratory tract infections (such as tonsillitis, pharyngitis, and lobar pneumonia) caused by group A beta-hemolytic streptococci and S. pneumoniae (formerly D. pneumoniae). Otitis media caused by group A beta-hemolytic streptococci, S. pneumoniae, H. influenzae, and staphylococci. Skin and skin structure infections caused by staphylococci (penicillin-susceptible and penicillin-resistant) and beta-hemolytic streptococci. Urinary tract infections, including prostatitis, caused by E. coli, P. mirabilis and Klebsiella species. Formulations Cefradine is distributed in the form of capsules containing 250 mg or 500 mg, as a syrup containing 250 mg/5 ml, or in vials for injection containing 500 mg or 1 g. It is not approved by the FDA for use in the United States. Synthesis Birch reduction of D-α-phenylglycine led to diene (2). This was N-protected using tert-butoxycarbonyl azide and activated for amide formation via the mixed anhydride method using isobutyl chloroformate to give 3. Mixed anhydride 3 reacted readily with 7-aminodesacetoxycephalosporanic acid to give, after deblocking, cephradine (5). Production names The antibiotic is produced under many brand names across the world. Bangladesh: Ancef, Ancef forte, Aphrin, Avlosef, Cefadin, Cephadin, Cephran, Cephran-DS, Cusef, Cusef DS, Dicef, Dicef forte, Dolocef, Efrad, Elocef, Extracef, Extracef-DS, Intracef, Kefdrin, Lebac, Lebac Forte, Medicef, Mega-Cef, Megacin, Polycef, Procef, Procef forte, Rocef, Rocef Forte DS, Sefin, Sefin DS, Sefnin, Sefrad, Sefrad DS, Sefril, Sefril-DS, Sefro, Sefro-HS, Sephar, Sephar-DS, Septa, Sinaceph, SK-Cef, Sk-Cef DS, Supracef and Supracef-F, Torped, Ultrasef, Vecef, Vecef-DS, Velogen, Velox China: Cefradine, Cephradine, Kebili, Saifuding, Shen You, Taididing, Velosef, Xianyi, and Xindadelei Colombia: Cefagram, Cefrakov, Cefranil, Cefrex, and Kliacef Egypt: Cefadrin, Cefadrine, Cephradine, Cephraforte, Farcosef, Fortecef, Mepadrin, Ultracef, and Velosef France: Dexef Hong Kong: Cefradine and ChinaQualisef-250 Indonesia: Dynacef, Velodine, and Velodrom Lebanon: Eskacef, Julphacef, and Velosef Lithuania: Tafril Myanmar: Sinaceph Oman: Ceframed, Eskasef, Omadine, and Velocef Pakistan: Abidine, Ada-Cef, Ag-cef, Aksosef, Amspor, Anasef, Antimic, Atcosef, Bactocef, Biocef, Biodine, Velora, Velosef Peru: Abiocef, Cefradinal, Cefradur, Cefrid, Terbodina II, Velocef, Velomicin Philippines: Altozef, Racep, Senadex, Solphride, Yudinef, Zefadin, Zefradil, and Zolicef Poland: Tafril Portugal: Cefalmin, Cefradur South Africa: Cefril A South Korea: Cefradine and Tricef Taiwan: Cefadin, Cefamid, Cefin, Cekodin, Cephradine, Ceponin, Lacef, Licef-A, Lisacef, Lofadine, Recef, S-60, Sefree, Sephros, Topcef, Tydine, Unifradine, and U-Save UK: Cefradune (Kent) Vietnam: Eurosefro and Incef See also Cephapirin Cephacetrile Cefamandole Ampicillin (has the same chemical formula) Notes References Cephalosporin antibiotics Enantiopure drugs
Cefradine
[ "Chemistry" ]
1,053
[ "Stereochemistry", "Enantiopure drugs" ]
5,482,655
https://en.wikipedia.org/wiki/Refactorable%20number
A refactorable number or tau number is an integer n that is divisible by the count of its divisors, or to put it algebraically, n is such that τ(n) divides n. The first few refactorable numbers are 1, 2, 8, 9, 12, 18, 24, 36, 40, 56, 60, 72, 80, 84, 88, 96, 104, 108, 128, 132, 136, 152, 156, 180, 184, 204, 225, 228, 232, 240, 248, 252, 276, 288, 296, ... For example, 18 has 6 divisors (1 and 18, 2 and 9, 3 and 6) and is divisible by 6. There are infinitely many refactorable numbers. Properties Cooper and Kennedy proved that refactorable numbers have natural density zero. Zelinsky proved that no three consecutive integers can all be refactorable. Colton proved that no refactorable number is perfect. The equation gcd(n, x) = τ(n) has solutions only if n is a refactorable number, where gcd is the greatest common divisor function. Let Q(x) be the number of refactorable numbers which are at most x. The problem of determining an asymptotic for Q(x) is open; Spiro has proven partial results in this direction. There are still unsolved problems regarding refactorable numbers. Colton asked if there are arbitrarily large n such that both n and n + 1 are refactorable. Zelinsky posed a further open question about the existence of refactorable numbers satisfying additional constraints. History First defined by Curtis Cooper and Robert E. Kennedy, who showed that the tau numbers have natural density zero, they were later rediscovered by Simon Colton using a computer program he wrote ("HR") which invents and judges definitions from a variety of areas of mathematics such as number theory and graph theory. Colton called such numbers "refactorable". While computer programs had discovered proofs before, this discovery was one of the first times that a computer program had discovered a new or previously obscure idea. Colton proved many results about refactorable numbers, showing that there were infinitely many and proving a variety of congruence restrictions on their distribution. Colton was only later alerted that Kennedy and Cooper had previously investigated the topic. See also Divisor function References Integer sequences
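The definition is easy to test directly; the following Python sketch (a brute-force check, fine for small n) reproduces the list above.

# n is refactorable when tau(n), the number of divisors of n, divides n.
def tau(n):
    """Count the divisors of n (brute force)."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def is_refactorable(n):
    return n % tau(n) == 0

print([n for n in range(1, 300) if is_refactorable(n)])
# -> [1, 2, 8, 9, 12, 18, 24, 36, 40, 56, 60, 72, 80, 84, 88, 96, ...]

# 18 has 6 divisors (1, 2, 3, 6, 9, 18) and 18 % 6 == 0:
assert tau(18) == 6 and is_refactorable(18)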
Refactorable number
[ "Mathematics" ]
487
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Mathematical objects", "Combinatorics", "Numbers", "Number theory" ]
5,485,741
https://en.wikipedia.org/wiki/Home%20modifications
Home modifications are defined as environmental interventions aiming to support activity performance in the home. More specifically, home modifications often are changes made to the home environment to help people with functional disability or impairment to be more independent and safe in their own homes and to reduce any risk of injury to themselves or their caregivers. Examples of home modifications include installing ramps and rails, altering kitchen and bathroom areas (relocating switches and lowering bench heights), and installing emergency alarms. There are a number of words in common use that are confused with home modifications. For example, home modification is more than home improvement, home renovation or remodelling; those three terms refer to changes made to housing for purposes other than disability. Other more closely related terms that are interchanged with home modifications in the literature include combinations of home/housing/environment/residential coupled with modification/adaptation/intervention. Each of these terms has a nuanced meaning, analysed in further detail by Bridge (2006). However, these terms have in common a context of housing, health and care for people with disability or impairment. Home modifications directly impact the accessibility to and within a home and are considered one aspect of the complex relationship between housing and health. Home modifications are also the subject of ongoing public health and economics research because of their potential to support aging in place and to substitute for caregiving. In addition to aging in place and caregiving, there is some evidence that home modifications can reduce falls in the home. References Accessible building Architectural design Caregiving
Home modifications
[ "Engineering" ]
300
[ "Accessible building", "Design", "Architectural design", "Architecture" ]
5,489,107
https://en.wikipedia.org/wiki/Coated%20paper
Coated paper (also known as enamel paper, gloss paper, and thin paper) is paper that has been coated by a mixture of materials or a polymer to impart certain qualities to the paper, including weight, surface gloss, smoothness, or reduced ink absorbency. Various materials, including kaolinite, calcium carbonate, bentonite, and talc, can be used to coat paper for high-quality printing used in the packaging industry and in magazines. The chalk or china clay is bound to the paper with synthetic binders, such as styrene-butadiene latexes, and natural organic binders such as starch. The coating formulation may also contain chemical additives such as dispersants, resins, or polyethylene to give water resistance and wet strength to the paper, or to protect against ultraviolet radiation. Coated papers have traditionally been used for printing magazines. Varieties Machine-finished coated paper Machine-finished coated paper (MFC) has a basis weight of 48–80 g/m2. These papers have good surface properties, high print gloss and adequate sheet stiffness. MFC papers are made of 60–85% groundwood or thermomechanical pulp (TMP) and 15–40% chemical pulp with a total pigment content of 20–30%. The paper can be soft nip calendered or supercalendered. These are often used in paperbacks. Coated fine paper Coated fine papers or woodfree coated papers (WFC) are primarily produced for offset printing: Standard coated fine papers This paper quality is normally used for advertising materials, books, annual reports and high-quality catalogs. Grammage ranges from 90–170 g/m2 and ISO brightness between 80–96%. The fibre furnish consists of more than 90% chemical pulp. Total pigment content is in the range 30–45%, where calcium carbonate and clay are the most common. Low coat weight papers These paper grades have lower coat weights than the standard WFC (3–14 g/m2/side) and the grammage and pigment content are also generally lower, 55–135 g/m2 and 20–35% respectively. Art papers Art papers are one of the highest-quality printing papers and are used for illustrated books, calendars and brochures. The grammage varies from 100 to 230 g/m2. These paper grades are triple coated with 20–40 g/m2/side and have a matte or glossy finish. Higher qualities often contain cotton. Plastic coatings Plastic-coated paper includes several types of paper coating: polyethylene or polyolefin extrusion coating, silicone, and wax coating, used to make paper cups and photographic paper. Biopolymer coatings are available as more sustainable alternatives to common petrochemical coatings like low-density polyethylene (LDPE) or mylar. Plastic-coated paper is most used in the food and drink packaging industry. The plastic is used to improve functions such as water resistance, tear strength, abrasion resistance, ability to be heat sealed, etc. Some papers are laminated by heat or adhesive to a plastic film to provide barrier properties in use. Other papers are coated with a melted plastic layer: curtain coating is one common method. Most plastic coatings in the packaging industry are polyethylene (LDPE) and, to a much lesser degree, PET. Liquid packaging board cartons typically contain 74% paper, 22% plastic and 4% aluminum. Frozen food cartons are usually made up of an 80% paper and 20% plastic combination.
The most notable applications for plastic-coated paper are single-use (disposable) food packaging: Liquid packaging board for milk and juice folding cartons Hot and cold paper cups Paper plates Frozen food containers Plastic-lined paper bags Take-out containers Waterproof paper (also multi-use) Heat sealable paper Barrier packaging Plastic coatings or layers usually make paper recycling more difficult. Some plastic laminations can be separated from the paper during the recycling process, allowing the film to be filtered out. If the coated paper is shredded prior to recycling, the degree of separation depends on the particular process. Some plastic coatings are water dispersible to aid recycling and repulping. Special recycling processes are available to help separate plastics. Some plastic-coated papers are incinerated for heat or landfilled rather than recycled. Most plastic-coated papers are not suited to composting, but they do variously end up in compost bins, sometimes even legally so. In this case, the remains of the non-biodegradable plastic components add to the global microplastics waste problem. Others Printed papers commonly have a top coat of a protective polymer to seal the print, provide scuff resistance, and sometimes gloss. Some coatings are processed by UV curing for stability. A release liner is a paper (or film) sheet used to prevent a sticky surface from adhering. It is coated on one or both sides with a release agent. Heat-printed papers such as receipts are coated with a chemical mixture, which often contains estrogenic or carcinogenic substances such as bisphenol A (BPA). It is possible to check whether a piece of paper is thermographically coated, as it will turn black from friction or heat (see Thermal paper). Paper labels are often coated with adhesive (pressure sensitive or gummed) on one side and coated with printing or graphics on the other. See also Printing References Further reading Soroka, W, "Fundamentals of Packaging Technology", IoPP, 2002, Yam, K. L., "Encyclopedia of Packaging Technology", John Wiley & Sons, 2009, External links Chemical processes Composite materials Environmental impact of products Packaging materials Paper Papermaking Plastics and the environment
Coated paper
[ "Physics", "Chemistry" ]
1,216
[ "Composite materials", "Chemical processes", "Materials", "nan", "Chemical process engineering", "Matter" ]
5,491,440
https://en.wikipedia.org/wiki/Multi-scale%20approaches
The scale space representation of a signal obtained by Gaussian smoothing satisfies a number of special properties, scale-space axioms, which make it into a special form of multi-scale representation. There are, however, also other types of "multi-scale approaches" in the areas of computer vision, image processing and signal processing, in particular the notion of wavelets. The purpose of this article is to describe a few of these approaches: Scale-space theory for one-dimensional signals For one-dimensional signals, there exists quite a well-developed theory for continuous and discrete kernels that guarantee that new local extrema or zero-crossings cannot be created by a convolution operation. For continuous signals, it holds that all scale-space kernels can be decomposed into the following sets of primitive smoothing kernels: the Gaussian kernel g(x; t) = (1/√(2πt)) exp(−x²/(2t)), where t > 0, truncated exponential kernels (filters with one real pole in the s-plane): h(x) = a exp(−ax) if x ≥ 0 and 0 otherwise, where a > 0, as well as the mirrored kernels h(x) = b exp(bx) if x ≤ 0 and 0 otherwise, where b > 0, translations, rescalings. For discrete signals, we can, up to trivial translations and rescalings, decompose any discrete scale-space kernel into the following primitive operations: the discrete Gaussian kernel T(n; t) = exp(−t) I_n(t), where t > 0 and the I_n are the modified Bessel functions of integer order, generalized binomial kernels corresponding to linear smoothing of the form f_out(x) = p f_in(x) + q f_in(x − 1), where p, q > 0, first-order recursive filters corresponding to linear smoothing in which each output sample is a weighted average of the current input sample and the previous output sample, the one-sided Poisson kernel p(n; λ) = exp(−λ) λ^n/n! for n ≥ 0, where λ > 0. From this classification, it is apparent that if we require a continuous semi-group structure, there are only three classes of scale-space kernels with a continuous scale parameter; the Gaussian kernel which forms the scale-space of continuous signals, the discrete Gaussian kernel which forms the scale-space of discrete signals and the time-causal Poisson kernel that forms a temporal scale-space over discrete time. If we on the other hand sacrifice the continuous semi-group structure, there are more options: For discrete signals, the use of generalized binomial kernels provides a formal basis for defining the smoothing operation in a pyramid. For temporal data, the one-sided truncated exponential kernels and the first-order recursive filters provide a way to define time-causal scale-spaces that allow for efficient numerical implementation and respect causality over time without access to the future. The first-order recursive filters also provide a framework for defining recursive approximations to the Gaussian kernel that in a weaker sense preserve some of the scale-space properties. See also Scale space Scale space implementation Scale-space segmentation References Image processing Computer vision
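As a concrete illustration of two of the discrete kernels above, the following Python sketch (assuming NumPy and SciPy are available) evaluates the discrete Gaussian kernel through exponentially scaled Bessel functions and applies repeated two-tap binomial smoothing; the parameter values are arbitrary, and the two-tap pass introduces a half-sample shift that a symmetric three-tap variant would avoid.

import numpy as np
from scipy.special import ive  # ive(n, t) = exp(-t) * I_n(t) for real t > 0

def discrete_gaussian_kernel(t, radius):
    """Discrete Gaussian T(n; t) = exp(-t) I_n(t) for n = -radius..radius."""
    n = np.arange(-radius, radius + 1)
    return ive(n, t)  # the exp(-t) factor is built into ive

def binomial_smooth(signal, p=0.5, steps=4):
    """Iterate the two-tap kernel [1-p, p]; repeated passes give a binomial kernel."""
    out = np.asarray(signal, dtype=float)
    for _ in range(steps):
        out = np.convolve(out, [1.0 - p, p], mode="same")
    return out

x = np.zeros(21); x[10] = 1.0                      # unit impulse
print(discrete_gaussian_kernel(t=2.0, radius=4))   # truncated kernel, sums to ~1
print(binomial_smooth(x))                          # binomial approximation to it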
Multi-scale approaches
[ "Engineering" ]
543
[ "Artificial intelligence engineering", "Packaging machinery", "Computer vision" ]
45,162
https://en.wikipedia.org/wiki/Peridot
Peridot, sometimes called chrysolite, is a yellow-green transparent variety of olivine. Peridot is one of the few gemstones that occur in only one color. Peridot can be found in mafic and ultramafic rocks occurring in lava and in peridotite xenoliths of the mantle. The gem occurs in silica-deficient rocks such as volcanic basalt and in pallasitic meteorites. Along with diamonds, peridot is one of only two gems observed to be formed not in Earth's crust, but in the molten rock of the upper mantle. Gem-quality peridot is rare on Earth's surface due to its susceptibility to alteration during its movement from deep within the mantle and to weathering at the surface. Peridot has a chemical formula of (Mg, Fe)2SiO4. Peridot is one of the birthstones for the month of August. Etymology The origin of the name peridot is uncertain. The Oxford English Dictionary suggests an alteration of an Anglo–Norman word for a kind of opal, rather than an Arabic word meaning "gemstone". The Middle English Dictionary's entry on peridot includes several spelling variations; other variants substitute the letter y for the i used here. The earliest use of the word in English is possibly in the 1705 register of the St. Albans Abbey. The entry is in Latin, with the translation to English listed as peridot. It records that on his death in 1245, Bishop John bequeathed various items, including peridot gems, to the Abbey. Appearance Peridot is one of the few gemstones that occur in only one color: an olive-green. The intensity and tint of the green, however, depends on the percentage of iron in the crystal structure, so the color of individual peridot gems can vary from yellow, to olive, to brownish-green. In rare cases, peridot may have a medium-dark toned, pure green with no secondary yellow hue or brown mask. Lighter-colored gems are due to lower iron concentrations. Mineral properties Crystal structure The molecular structure of peridot consists of isomorphic olivine, silicate, magnesium and iron in an orthorhombic crystal system. In an alternative view, the atomic structure can be described as a hexagonal, close-packed array of oxygen ions with half of the octahedral sites occupied by magnesium or iron ions and one-eighth of the tetrahedral sites occupied by silicon ions. Surface property Oxidation of peridot does not occur at natural surface temperature and pressure but begins to occur slowly at elevated temperatures, with rates increasing with temperature. The oxidation of the olivine occurs by an initial breakdown of the fayalite component, and subsequent reaction with the forsterite component, to give magnetite and orthopyroxene. Occurrence Geologically Olivine, of which peridot is a type, is a common mineral in mafic and ultramafic rocks, often found in lava and in peridotite xenoliths of the mantle, which lava carries to the surface; however, gem-quality peridot occurs in only a fraction of these settings. Peridots can also be found in meteorites. Peridots can be differentiated by size and composition. A peridot formed as a result of volcanic activity tends to contain higher concentrations of lithium, nickel and zinc than those found in meteorites. Olivine is an abundant mineral, but gem-quality peridot is rather rare due to its chemical instability on Earth's surface. Olivine is usually found as small grains and tends to exist in a heavily weathered state, unsuitable for decorative use. Large crystals of forsterite, the variety most often used to cut peridot gems, are rare; as a result, peridot is considered to be precious.
In the ancient world, mining of peridot, then called topazios, began about 300 BC on St. John's Island in the Red Sea. The principal source of peridot olivine today is the San Carlos Apache Indian Reservation in Arizona. It is also mined at another location in Arizona, and in Arkansas, Hawaii, Nevada, and New Mexico at Kilbourne Hole, in the US; and in Australia, Brazil, China, Egypt, Kenya, Mexico, Myanmar (Burma), Norway, Pakistan, Saudi Arabia, South Africa, Sri Lanka, and Tanzania. In meteorites Peridot crystals have been collected from some pallasite meteorites. The most commonly studied pallasitic peridot belongs to the Indonesian Jepara meteorite, but others exist, such as the Brenham, Esquel, Fukang, and Imilac meteorites. Pallasitic (extraterrestrial) peridot differs chemically from its earthbound counterpart in that pallasitic peridot lacks nickel. Gemology Orthorhombic minerals, like peridot, have biaxial birefringence defined by three principal axes, with refractive indices α, β and γ. Refractive index readings of faceted gems can range around α = 1.651, β = 1.668, and γ = 1.689, with a biaxial positive birefringence of 0.037–0.038. With decreasing magnesium and increasing iron concentration, the specific gravity, color darkness and refractive indices increase, and the β value shifts toward the γ index. Increasing iron concentration ultimately forms the iron-rich end-member of the olivine solid solution series, fayalite. A study of Chinese peridot gem samples determined the hydrostatic specific gravity to be 3.36. The visible-light spectroscopy of the same Chinese peridot samples showed absorption bands between 481.0 and 493.0 nm, with the strongest absorption at 492.0 nm. The largest cut peridot olivine is a specimen in the gem collection of the Smithsonian Museum in Washington, D.C. Inclusions are common in peridot crystals but their presence depends on the location where the crystal was found and the geological conditions that led to its crystallization. Primary negative crystals – rounded gas bubbles – form in situ with peridot, and are common in Hawaiian peridots. Secondary negative crystals form in peridot fractures. "Lily pad" cleavages are often seen in San Carlos peridots, and are a type of secondary negative crystal. They can easily be seen under reflected light as circular discs surrounding a negative crystal. Silky and rod-like inclusions are common in Pakistani peridots. The most common mineral inclusion in peridot is the chromium-rich mineral chromite. Magnesium-rich minerals also can exist in the form of pyrope and magnesiochromite. These two types of mineral inclusions are typically surrounded by "lily-pad" cleavages. Biotite flakes appear flat, brown, translucent, and tabular. Cultural history Peridot has been prized since the earliest civilizations for its claimed protective powers to drive away fears and nightmares, according to superstitions. There is a superstition that it carries the gift of "inner radiance", sharpening the mind and opening it to new levels of awareness and growth, helping one to recognize and realize one's destiny and spiritual purpose. (There is no scientific evidence for any such claims.) Peridot olivine is the birthstone for the month of August. Peridot has often been mistaken for emerald beryl and other green gems. Noted gemologist G.F. Kunz discussed the confusion between beryl and peridot in many church treasures, most notably the "Three Magi treasure" in the Dom of Cologne, Germany.
Gallery Footnotes References External links Ganoksin Mineralminers USGS peridot data Emporia Edu Florida State University – Peridot Gemstones Silicate minerals
Peridot
[ "Physics" ]
1,634
[ "Materials", "Gemstones", "Matter" ]
45,165
https://en.wikipedia.org/wiki/Orthoclase
Orthoclase, or orthoclase feldspar (endmember formula KAlSi3O8), is an important tectosilicate mineral which forms igneous rock. The name is from the Ancient Greek for "straight fracture", because its two cleavage planes are at right angles to each other. It is a type of potassium feldspar, also known as K-feldspar. The gem known as moonstone (see below) is largely composed of orthoclase. Formation and subtypes Orthoclase is a common constituent of most granites and other felsic igneous rocks and often forms huge crystals and masses in pegmatite. Typically, the pure potassium endmember of orthoclase forms a solid solution with albite, the sodium endmember (NaAlSi3O8) of plagioclase. While slowly cooling within the earth, sodium-rich albite lamellae form by exsolution, enriching the remaining orthoclase with potassium. The resulting intergrowth of the two feldspars is called perthite. The higher-temperature polymorph of KAlSi3O8 is sanidine. Sanidine is common in rapidly cooled volcanic rocks such as obsidian and felsic pyroclastic rocks, and is notably found in trachytes of the Drachenfels, Germany. The lower-temperature polymorph of KAlSi3O8 is microcline. Adularia is a low-temperature form of either microcline or orthoclase originally reported from the low-temperature hydrothermal deposits in the Adula Alps of Switzerland. It was first described by Ermenegildo Pini in 1781. The optical effect of adularescence in moonstone is typically due to adularia. The largest documented single crystal of orthoclase was found in the Ural Mountains in Russia. It measured around 10 × 10 × 0.4 m and weighed around 100 tonnes. Applications Together with the other potassium feldspars, orthoclase is a common raw material for the manufacture of some glasses and some ceramics such as porcelain, and as a constituent of scouring powder. Some intergrowths of orthoclase and albite have an attractive pale luster and are called moonstone when used in jewelry. Most moonstones are translucent and white, although grey and peach-colored varieties also occur. In gemology, their luster is called adularescence and is typically described as creamy or silvery white with a "billowy" quality. It is the state gem of Florida. The gemstone commonly called rainbow moonstone is more properly a colorless form of labradorite and can be distinguished from "true" moonstone by its greater transparency and play of color, although their value and durability do not greatly differ. Orthoclase is one of the ten defining minerals of the Mohs scale of mineral hardness, on which it is listed as having a hardness of 6. The discovery by NASA's Curiosity rover of high levels of orthoclase in Martian sandstones suggested that some Martian rocks may have experienced complex geological processing, such as repeated melting. See also List of minerals Schiller, optical effect References Potassium minerals Aluminium minerals Tectosilicates Monoclinic minerals Minerals in space group 12 Feldspar Gemstones
Orthoclase
[ "Physics" ]
692
[ "Materials", "Gemstones", "Matter" ]
45,177
https://en.wikipedia.org/wiki/Negative%20binomial%20distribution
In probability theory and statistics, the negative binomial distribution is a discrete probability distribution that models the number of failures in a sequence of independent and identically distributed Bernoulli trials before a specified/constant/fixed number of successes occurs. For example, we can define rolling a 6 on a die as a success, and rolling any other number as a failure, and ask how many failure rolls will occur before we see the third success (r = 3). In such a case, the probability distribution of the number of failures that appear will be a negative binomial distribution. An alternative formulation is to model the number of total trials (instead of the number of failures). In fact, for a specified (non-random) number of successes (r), the number of failures (n − r) is random because the number of total trials (n) is random. For example, we could use the negative binomial distribution to model the number of days n (random) a certain machine works (specified by r) before it breaks down. The Pascal distribution (after Blaise Pascal) and Polya distribution (for George Pólya) are special cases of the negative binomial distribution. A convention among engineers, climatologists, and others is to use "negative binomial" or "Pascal" for the case of an integer-valued stopping-time parameter (r) and use "Polya" for the real-valued case. For occurrences of associated discrete events, like tornado outbreaks, the Polya distributions can be used to give more accurate models than the Poisson distribution by allowing the mean and variance to be different, unlike the Poisson. The negative binomial distribution has a variance μ/p, with the distribution becoming identical to Poisson in the limit p → 1 for a given mean μ (i.e. when the failures are increasingly rare). This can make the distribution a useful overdispersed alternative to the Poisson distribution, for example for a robust modification of Poisson regression. In epidemiology, it has been used to model disease transmission for infectious diseases where the likely number of onward infections may vary considerably from individual to individual and from setting to setting. More generally, it may be appropriate where events have positively correlated occurrences causing a larger variance than if the occurrences were independent, due to a positive covariance term. The term "negative binomial" is likely due to the fact that a certain binomial coefficient that appears in the formula for the probability mass function of the distribution can be written more simply with negative numbers. Definitions Imagine a sequence of independent Bernoulli trials: each trial has two potential outcomes called "success" and "failure." In each trial the probability of success is p and of failure is 1 − p. We observe this sequence until a predefined number r of successes occurs. Then the random number of observed failures, X, follows the negative binomial (or Pascal) distribution: X ~ NB(r, p). Probability mass function The probability mass function of the negative binomial distribution is f(k; r, p) = Pr(X = k) = C(k + r − 1, k) p^r (1 − p)^k, where r is the number of successes, k is the number of failures, and p is the probability of success on each trial. Here, the quantity in parentheses is the binomial coefficient, and is equal to C(k + r − 1, k) = (k + r − 1)! / (k! (r − 1)!) = Γ(k + r) / (k! Γ(r)). Note that Γ(r) is the Gamma function. There are k failures chosen from k + r − 1 trials rather than k + r because the last of the k + r trials is by definition a success.
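A quick numerical check of the mass function (and of the beta-function expression for the cumulative distribution function given in the next section) can be written with SciPy, whose nbinom follows the same convention used here: X counts failures before the r-th success with success probability p. The example parameters are arbitrary.

from math import comb
from scipy.special import betainc   # regularized incomplete beta I_x(a, b)
from scipy.stats import nbinom

def nb_pmf(k, r, p):
    """P(X = k) = C(k + r - 1, k) * p**r * (1 - p)**k."""
    return comb(k + r - 1, k) * p**r * (1 - p)**k

r, p = 3, 1 / 6   # e.g. failures before the third 6 when rolling a fair die
for k in range(6):
    assert abs(nb_pmf(k, r, p) - nbinom.pmf(k, r, p)) < 1e-12

k = 10
print(nbinom.cdf(k, r, p), betainc(r, k + 1, p))   # the two values agree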
The binomial coefficient above can alternatively be written in the following manner, explaining the name "negative binomial": C(k + r − 1, k) = (−1)^k C(−r, k). Note that by this expression and the binomial series, for every 0 ≤ p < 1, Σ_{k=0}^∞ C(k + r − 1, k) (1 − p)^k = (1 − (1 − p))^{−r} = p^{−r}, hence the terms of the probability mass function indeed add up to one. To understand the above definition of the probability mass function, note that the probability for every specific sequence of r successes and k failures is p^r (1 − p)^k, because the outcomes of the k + r trials are supposed to happen independently. Since the rth success always comes last, it remains to choose the k trials with failures out of the remaining k + r − 1 trials. The above binomial coefficient, due to its combinatorial interpretation, gives precisely the number of all these sequences of length k + r − 1. Cumulative distribution function The cumulative distribution function can be expressed in terms of the regularized incomplete beta function: F(k; r, p) = Pr(X ≤ k) = I_p(r, k + 1). (This formula uses the same parameterization as above, with r the number of successes and p the probability of success.) It can also be expressed in terms of the cumulative distribution function of the binomial distribution: F(k; r, p) = F_binomial(k; n = k + r, 1 − p). Alternative formulations Some sources may define the negative binomial distribution slightly differently from the primary one here. The most common variations are where the random variable X is counting different things: the number of failures given r successes, the number of trials given r successes, the number of successes given r failures, or the number of trials given r failures. Each of the four definitions of the negative binomial distribution can be expressed in slightly different but equivalent ways. The first alternative formulation is simply an equivalent form of the binomial coefficient, that is: C(k + r − 1, k) = C(k + r − 1, r − 1). The second alternate formulation somewhat simplifies the expression by recognizing that the total number of trials is simply the number of successes and failures, that is: n = k + r. These second formulations may be more intuitive to understand, however they are perhaps less practical as they have more terms. The definition where X is the number of n trials that occur for a given number of r successes is similar to the primary definition, except that the number of trials is given instead of the number of failures. This adds r to the value of the random variable, shifting its support and mean. The definition where X is the number of k successes (or n trials) that occur for a given number of r failures is similar to the primary definition used in this article, except that numbers of failures and successes are switched when considering what is being counted and what is given. Note however, that p still refers to the probability of "success". The definition of the negative binomial distribution can be extended to the case where the parameter r can take on a positive real value. Although it is impossible to visualize a non-integer number of "failures", we can still formally define the distribution through its probability mass function. The problem of extending the definition to real-valued (positive) r boils down to extending the binomial coefficient to its real-valued counterpart, based on the gamma function: C(k + r − 1, k) = Γ(k + r) / (k! Γ(r)). After substituting this expression in the original definition, we say that X has a negative binomial (or Pólya) distribution if it has a probability mass function: f(k; r, p) = Pr(X = k) = (Γ(k + r) / (k! Γ(r))) p^r (1 − p)^k. Here r is a real, positive number. In negative binomial regression, the distribution is specified in terms of its mean, m = r(1 − p)/p, which is then related to explanatory variables as in linear regression or other generalized linear models. From the expression for the mean m, one can derive p = r/(r + m) and 1 − p = m/(r + m).
Then, substituting these expressions in the one for the probability mass function when r is real-valued, yields this parametrization of the probability mass function in terms of m: Pr(X = k) = (Γ(r + k) / (k! Γ(r))) (r/(r + m))^r (m/(r + m))^k. The variance can then be written as m + m²/r. Some authors prefer to set α = 1/r, and express the variance as m + αm². In this context, and depending on the author, either the parameter r or its reciprocal α is referred to as the "dispersion parameter", "shape parameter" or "clustering coefficient", or the "heterogeneity" or "aggregation" parameter. The term "aggregation" is particularly used in ecology when describing counts of individual organisms. Decrease of the aggregation parameter r towards zero corresponds to increasing aggregation of the organisms; increase of r towards infinity corresponds to absence of aggregation, as can be described by Poisson regression. Alternative parameterizations Sometimes the distribution is parameterized in terms of its mean μ and variance σ²: p = μ/σ² and r = μ²/(σ² − μ). Another popular parameterization uses r and the failure odds β = (1 − p)/p, so that p = 1/(1 + β) and the mean is rβ. Examples Length of hospital stay Hospital length of stay is an example of real-world data that can be modelled well with a negative binomial distribution via negative binomial regression. Selling candy Pat Collis is required to sell candy bars to raise money for the 6th grade field trip. Pat is (somewhat harshly) not supposed to return home until five candy bars have been sold. So the child goes door to door, selling candy bars. At each house, there is a 0.6 probability of selling one candy bar and a 0.4 probability of selling nothing. What's the probability of selling the last candy bar at the nth house? Successfully selling candy enough times is what defines our stopping criterion (as opposed to failing to sell it), so k in this case represents the number of failures and r represents the number of successes. Recall that the NB(r, p) distribution describes the probability of k failures and r successes in k + r Bernoulli(p) trials with success on the last trial. Selling five candy bars means getting five successes. The number of trials (i.e. houses) this takes is therefore k + 5 = n. The random variable we are interested in is the number of houses, so we substitute k = n − 5 into a NB(5, 0.6) mass function and obtain the following mass function of the distribution of houses (for n ≥ 5): f(n) = C(n − 1, 4) (0.6)^5 (0.4)^(n − 5). What's the probability that Pat finishes on the tenth house? f(10) = C(9, 4) (0.6)^5 (0.4)^5 ≈ 0.10033. What's the probability that Pat finishes on or before reaching the eighth house? To finish on or before the eighth house, Pat must finish at the fifth, sixth, seventh, or eighth house. Sum those probabilities: f(5) + f(6) + f(7) + f(8) = (0.6)^5 (1 + 5(0.4) + 15(0.4)² + 35(0.4)³) ≈ 0.59409. What's the probability that Pat exhausts all 30 houses that happen to stand in the neighborhood? This can be expressed as the probability that Pat does not finish on the fifth through the thirtieth house: 1 − Σ_{j=5}^{30} f(j) ≈ 1.8 × 10⁻⁷. Because of the rather high probability that Pat will sell to each house (60 percent), the probability of her not fulfilling her quest is vanishingly slim. Properties Expectation The expected total number of trials needed to see r successes is r/p. Thus, the expected number of failures would be this value, minus the successes: r/p − r = r(1 − p)/p. Expectation of failures The expected total number of failures in a negative binomial distribution with parameters (r, p) is r(1 − p)/p. To see this, imagine an experiment simulating the negative binomial is performed many times. That is, a set of trials is performed until r successes are obtained, then another set of trials, and then another etc. Write down the number of trials performed in each experiment: a, b, c, …, and set a + b + c + … = N.
Now we would expect about Np successes in total. Say the experiment was performed n times. Then there are nr successes in total. So we would expect Np = nr, so N/n = r/p. See that N/n is just the average number of trials per experiment. That is what we mean by "expectation". The average number of failures per experiment is N/n − r = r/p − r = r(1 − p)/p. This agrees with the mean given earlier. A rigorous derivation can be done by representing the negative binomial distribution as the sum of waiting times. Let X_r ~ NB(r, p), with the convention that X_r represents the number of failures observed before r successes with the probability of success being p. And let X_r = Y_1 + Y_2 + … + Y_r, where each Y_i ~ Geom(p) represents the number of failures before seeing a success. We can think of Y_i as the waiting time (number of failures) between the (i − 1)th and ith success. Thus the mean is E[X_r] = E[Y_1] + … + E[Y_r] = r(1 − p)/p, which follows from the fact E[Y_i] = (1 − p)/p. Variance When counting the number of failures before the r-th success, the variance is r(1 − p)/p². When counting the number of successes before the r-th failure, as in alternative formulation (3) above, the variance is rp/(1 − p)². Relation to the binomial theorem Suppose Y is a random variable with a binomial distribution with parameters n and p. Assume p + q = 1, with p, q ≥ 0. Then 1 = (p + q)^n. Using Newton's binomial theorem, this can equally be written as: (p + q)^n = Σ_{k=0}^∞ C(n, k) p^k q^(n − k), in which the upper bound of summation is infinite. In this case, the binomial coefficient C(n, k) is defined when n is a real number, instead of just a positive integer. But in our case of the binomial distribution it is zero when k > n. We can then say, for example, 1 = (p + q)^8.3 = Σ_{k=0}^∞ C(8.3, k) p^k q^(8.3 − k). Now suppose r > 0 and we use a negative exponent: 1 = p^r p^{−r} = p^r (1 − q)^{−r} = p^r Σ_{k=0}^∞ C(−r, k) (−q)^k = Σ_{k=0}^∞ C(k + r − 1, k) p^r q^k. Then all of the terms are positive, and the term C(k + r − 1, k) p^r q^k is just the probability that the number of failures before the rth success is equal to k, provided r is an integer. (If r is a negative non-integer, so that the exponent is a positive non-integer, then some of the terms in the sum above are negative, so we do not have a probability distribution on the set of all nonnegative integers.) Now we also allow non-integer values of r. Then we have a proper negative binomial distribution, which is a generalization of the Pascal distribution, which coincides with the Pascal distribution when r happens to be a positive integer. Recall from above that the sum of independent negative-binomially distributed random variables r1 and r2 with the same value for parameter p is negative-binomially distributed with the same p but with r-value r1 + r2. This property persists when the definition is thus generalized, and affords a quick way to see that the negative binomial distribution is infinitely divisible. Recurrence relations The following recurrence relation holds for the probability mass function: (k + 1) Pr(X = k + 1) = (1 − p)(k + r) Pr(X = k), with Pr(X = 0) = p^r; analogous recurrences hold for the moments and the cumulants. Related distributions The geometric distribution (on { 0, 1, 2, 3, ... }) is a special case of the negative binomial distribution, with Geom(p) = NB(1, p). The negative binomial distribution is a special case of the discrete phase-type distribution. The negative binomial distribution is a special case of discrete compound Poisson distribution. Poisson distribution Consider a sequence of negative binomial random variables where the stopping parameter r goes to infinity, while the probability p of success in each trial goes to one, in such a way as to keep the mean of the distribution (i.e. the expected number of failures) constant.
Denoting this mean as λ, the parameter p will be p = r/(r + λ). Under this parametrization the probability mass function will be f(k; r, λ) = (λ^k / k!) · (Γ(r + k) / (Γ(r) (r + λ)^k)) · (1 / (1 + λ/r)^r). Now if we consider the limit as r → ∞, the second factor will converge to one, and the third to the exponent function: lim_{r→∞} f(k; r, λ) = (λ^k / k!) e^{−λ}, which is the mass function of a Poisson-distributed random variable with expected value λ. In other words, the alternatively parameterized negative binomial distribution converges to the Poisson distribution and r controls the deviation from the Poisson. This makes the negative binomial distribution suitable as a robust alternative to the Poisson, which approaches the Poisson for large r, but which has larger variance than the Poisson for small r. Gamma–Poisson mixture The negative binomial distribution also arises as a continuous mixture of Poisson distributions (i.e. a compound probability distribution) where the mixing distribution of the Poisson rate is a gamma distribution. That is, we can view the negative binomial as a Poisson(λ) distribution, where λ is itself a random variable, distributed as a gamma distribution with shape r and scale θ = (1 − p)/p or correspondingly rate β = p/(1 − p). To display the intuition behind this statement, consider two independent Poisson processes, "Success" and "Failure", with intensities p and 1 − p. Together, the Success and Failure processes are equivalent to a single Poisson process of intensity 1, where an occurrence of the process is a success if a corresponding independent coin toss comes up heads with probability p; otherwise, it is a failure. If r is a counting number, the coin tosses show that the count of successes before the rth failure follows a negative binomial distribution with parameters r and p. The count is also, however, the count of the Success Poisson process at the random time T of the rth occurrence in the Failure Poisson process. The Success count follows a Poisson distribution with mean pT, where T is the waiting time for r occurrences in a Poisson process of intensity 1 − p, i.e., T is gamma-distributed with shape parameter r and intensity 1 − p. Thus, the negative binomial distribution is equivalent to a Poisson distribution with mean pT, where the random variate T is gamma-distributed with shape parameter r and intensity 1 − p. The preceding paragraph follows, because λ = pT is gamma-distributed with shape parameter r and intensity (1 − p)/p. The following formal derivation (which does not depend on r being a counting number) confirms the intuition. Because of this, the negative binomial distribution is also known as the gamma–Poisson (mixture) distribution. The negative binomial distribution was originally derived as a limiting case of the gamma-Poisson distribution. Distribution of a sum of geometrically distributed random variables If Yr is a random variable following the negative binomial distribution with parameters r and p, and support {0, 1, 2, ...}, then Yr is a sum of r independent variables following the geometric distribution (on {0, 1, 2, ...}) with parameter p. As a result of the central limit theorem, Yr (properly scaled and shifted) is therefore approximately normal for sufficiently large r. Furthermore, if Bs+r is a random variable following the binomial distribution with parameters s + r and p, then Pr(Yr ≤ s) = Pr(Bs+r ≥ r). In this sense, the negative binomial distribution is the "inverse" of the binomial distribution. The sum of independent negative-binomially distributed random variables r1 and r2 with the same value for parameter p is negative-binomially distributed with the same p but with r-value r1 + r2.
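The gamma–Poisson mixture can be checked by simulation; the Python sketch below (assuming NumPy and SciPy) draws the Poisson rate from a Gamma(shape = r, scale = (1 − p)/p) distribution and compares the resulting counts with the exact NB(r, p) mass function. Sample size and parameters are arbitrary.

import numpy as np
from scipy.stats import nbinom

rng = np.random.default_rng(0)
r, p, N = 4, 0.35, 200_000

# Poisson rate mixed over a gamma distribution with shape r, scale (1-p)/p:
lam = rng.gamma(shape=r, scale=(1 - p) / p, size=N)
x = rng.poisson(lam)

for k in range(6):
    # empirical frequency vs exact NB(r, p) pmf -- these should agree closely
    print(k, (x == k).mean(), nbinom.pmf(k, r, p))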
The negative binomial distribution is infinitely divisible, i.e., if Y has a negative binomial distribution, then for any positive integer n, there exist independent identically distributed random variables Y1, ..., Yn whose sum has the same distribution that Y has. Representation as compound Poisson distribution The negative binomial distribution NB(r,p) can be represented as a compound Poisson distribution: Let (Y_n) denote a sequence of independent and identically distributed random variables, each one having the logarithmic series distribution Log(p), with probability mass function f(k) = −p^k / (k ln(1 − p)) for k ≥ 1. Let N be a random variable, independent of the sequence, and suppose that N has a Poisson distribution with mean λ = −r ln(1 − p). Then the random sum X = Y_1 + … + Y_N is NB(r,p)-distributed. To prove this, we calculate the probability generating function GX of X, which is the composition of the probability generating functions GN and GY1. Using GN(z) = exp(λ(z − 1)) and GY1(z) = ln(1 − pz)/ln(1 − p), we obtain GX(z) = exp(λ(GY1(z) − 1)) = ((1 − p)/(1 − pz))^r, which is the probability generating function of the NB(r,p) distribution (in the convention where p denotes the failure probability of the underlying Bernoulli trials). The following four distributions are related to the number of successes in a sequence of draws: with replacement, a fixed number of draws gives the binomial distribution and a fixed number of successes gives the negative binomial distribution; without replacement, the corresponding distributions are the hypergeometric and the negative hypergeometric distribution. (a,b,0) class of distributions The negative binomial, along with the Poisson and binomial distributions, is a member of the (a,b,0) class of distributions. All three of these distributions are special cases of the Panjer distribution. They are also members of a natural exponential family. Statistical inference Parameter estimation MVUE for p Suppose p is unknown and an experiment is conducted where it is decided ahead of time that sampling will continue until r successes are found. A sufficient statistic for the experiment is k, the number of failures. In estimating p, the minimum variance unbiased estimator is p̂ = (r − 1)/(r + k − 1). Maximum likelihood estimation When r is known, the maximum likelihood estimate of p is p̂ = r/(r + k), but this is a biased estimate. Its inverse, (r + k)/r, is an unbiased estimate of 1/p, however. When r is unknown, the maximum likelihood estimator for p and r together only exists for samples for which the sample variance is larger than the sample mean. The likelihood function for N iid observations (k1, ..., kN) is L(r, p) = Π_{i=1}^N f(k_i; r, p), from which we calculate the log-likelihood function ℓ(r, p) = Σ_{i=1}^N [ln Γ(k_i + r) − ln(k_i!) − ln Γ(r)] + Nr ln p + Σ_{i=1}^N k_i ln(1 − p). To find the maximum we take the partial derivatives with respect to r and p and set them equal to zero: ∂ℓ/∂p = Nr/p − Σ k_i/(1 − p) = 0 and ∂ℓ/∂r = Σ ψ(k_i + r) − Nψ(r) + N ln p = 0, where ψ is the digamma function. Solving the first equation for p gives: p = Nr/(Nr + Σ k_i). Substituting this in the second equation gives: Σ ψ(k_i + r) − Nψ(r) + N ln(r/(r + Σ k_i/N)) = 0. This equation cannot be solved for r in closed form. If a numerical solution is desired, an iterative technique such as Newton's method can be used. Alternatively, the expectation–maximization algorithm can be used. Occurrence and applications Waiting time in a Bernoulli process For the special case where r is an integer, the negative binomial distribution is known as the Pascal distribution. It is the probability distribution of a certain number of failures and successes in a series of independent and identically distributed Bernoulli trials. For k + r Bernoulli trials with success probability p, the negative binomial gives the probability of k successes and r failures, with a failure on the last trial. In other words, the negative binomial distribution is the probability distribution of the number of successes before the rth failure in a Bernoulli process, with probability p of successes on each trial. A Bernoulli process is a discrete time process, and so the number of trials, failures, and successes are integers. Consider the following example. Suppose we repeatedly throw a die, and consider a 1 to be a failure.
The probability of success on each trial is 5/6. The number of successes before the third failure belongs to the infinite set { 0, 1, 2, 3, ... }. That number of successes is a negative-binomially distributed random variable. When r = 1 we get the probability distribution of the number of successes before the first failure (i.e. the probability of the first failure occurring on the (k + 1)st trial), which is a geometric distribution: f(k) = p^k (1 − p). Overdispersed Poisson The negative binomial distribution, especially in its alternative parameterization described above, can be used as an alternative to the Poisson distribution. It is especially useful for discrete data over an unbounded positive range whose sample variance exceeds the sample mean. In such cases, the observations are overdispersed with respect to a Poisson distribution, for which the mean is equal to the variance. Hence a Poisson distribution is not an appropriate model. Since the negative binomial distribution has one more parameter than the Poisson, the second parameter can be used to adjust the variance independently of the mean. See Cumulants of some discrete probability distributions. An application of this is to annual counts of tropical cyclones in the North Atlantic or to monthly to 6-monthly counts of wintertime extratropical cyclones over Europe, for which the variance is greater than the mean. In the case of modest overdispersion, this may produce substantially similar results to an overdispersed Poisson distribution. Negative binomial modeling is widely employed in ecology and biodiversity research for analyzing count data where overdispersion is very common. This is because overdispersion is indicative of biological aggregation, such as species or communities forming clusters. Ignoring overdispersion can lead to significantly inflated model parameters, resulting in misleading statistical inferences. The negative binomial distribution effectively addresses overdispersed counts by permitting the variance to vary quadratically with the mean. An additional dispersion parameter governs the slope of the quadratic term, determining the severity of overdispersion. The model's quadratic mean-variance relationship proves to be a realistic approach for handling overdispersion, as supported by empirical evidence from many studies. Overall, the NB model offers two attractive features: (1) the convenient interpretation of the dispersion parameter as an index of clustering or aggregation, and (2) its tractable form, featuring a closed expression for the probability mass function. In genetics, the negative binomial distribution is commonly used to model data in the form of discrete sequence read counts from high-throughput RNA and DNA sequencing experiments. In epidemiology of infectious diseases, the negative binomial has been used as a better option than the Poisson distribution to model overdispersed counts of secondary infections from one infected case (super-spreading events). Multiplicity observations (physics) The negative binomial distribution has been the most effective statistical model for a broad range of multiplicity observations in particle collision experiments (see the references for an overview), and is argued to be a scale-invariant property of matter, providing the best fit for astronomical observations, where it predicts the number of galaxies in a region of space.
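A small simulation makes the overdispersion discussion above concrete: negative binomial counts with mean m show a sample variance close to m + m²/r, while Poisson counts with the same mean show a variance close to m. The sketch below uses SciPy with arbitrary parameters.

import numpy as np
from scipy.stats import nbinom, poisson

rng = np.random.default_rng(1)
r, p = 2, 0.25
nb_sample = nbinom.rvs(r, p, size=100_000, random_state=rng)
po_sample = poisson.rvs(nb_sample.mean(), size=100_000, random_state=rng)

m = nb_sample.mean()
print("NB      mean, var:", m, nb_sample.var())                 # var ~ m + m**2 / r
print("Poisson mean, var:", po_sample.mean(), po_sample.var())  # var ~ mean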
The phenomenological justification for the effectiveness of the negative binomial distribution in these particle-physics and astronomical contexts remained unknown for fifty years after the first such observations in 1973. In 2023, a proof from first principles was eventually demonstrated by Scott V. Tezlaf, who showed that the negative binomial distribution emerges from symmetries in the dynamical equations of a canonical ensemble of particles in Minkowski space. Roughly, given an expected number of trials and an expected number of successes, an isomorphic set of equations can be identified with the parameters of a relativistic current density of a canonical ensemble of massive particles, involving the rest density, the relativistic mean square density, the relativistic mean square current density, and the mean square speed of the particle ensemble relative to the speed of light; this identification establishes a bijective map between the parameters of the distribution and those of the physical ensemble. A rigorous alternative proof of the above correspondence has also been demonstrated through quantum mechanics via the Feynman path integral. History This distribution was first studied in 1713 by Pierre Remond de Montmort in his Essay d'analyse sur les jeux de hazard, as the distribution of the number of trials required in an experiment to obtain a given number of successes. It had previously been mentioned by Pascal. See also Coupon collector's problem Beta negative binomial distribution Extended negative binomial distribution Negative multinomial distribution Binomial distribution Poisson distribution Compound Poisson distribution Exponential family Negative binomial regression Vector generalized linear model References Discrete distributions Exponential family distributions Compound probability distributions Factorial and binomial topics Infinitely divisible probability distributions
Negative binomial distribution
[ "Mathematics" ]
5,340
[ "Factorial and binomial topics", "Combinatorics" ]
45,194
https://en.wikipedia.org/wiki/Lp%20space
In mathematics, the L^p spaces are function spaces defined using a natural generalization of the p-norm for finite-dimensional vector spaces. They are sometimes called Lebesgue spaces, named after Henri Lebesgue, although according to the Bourbaki group they were first introduced by Frigyes Riesz. L^p spaces form an important class of Banach spaces in functional analysis, and of topological vector spaces. Because of their key role in the mathematical analysis of measure and probability spaces, Lebesgue spaces are used also in the theoretical discussion of problems in physics, statistics, economics, finance, engineering, and other disciplines. Preliminaries The p-norm in finite dimensions The Euclidean length of a vector x = (x_1, x_2, …, x_n) in the n-dimensional real vector space R^n is given by the Euclidean norm: ||x||_2 = (x_1² + x_2² + ⋯ + x_n²)^(1/2). The Euclidean distance between two points x and y is the length ||x − y||_2 of the straight line between the two points. In many situations, the Euclidean distance is appropriate for capturing the actual distances in a given space. In contrast, consider taxi drivers in a grid street plan who should measure distance not in terms of the length of the straight line to their destination, but in terms of the rectilinear distance, which takes into account that streets are either orthogonal or parallel to each other. The class of p-norms generalizes these two examples and has an abundance of applications in many parts of mathematics, physics, and computer science. For a real number p ≥ 1, the p-norm or L^p-norm of x is defined by ||x||_p = (|x_1|^p + |x_2|^p + ⋯ + |x_n|^p)^(1/p). The absolute value bars can be dropped when p is a rational number with an even numerator in its reduced form, and x is drawn from the set of real numbers, or one of its subsets. The Euclidean norm from above falls into this class and is the 2-norm, and the 1-norm is the norm that corresponds to the rectilinear distance. The L^∞-norm or maximum norm (or uniform norm) is the limit of the L^p-norms for p → ∞, given by: ||x||_∞ = max{|x_1|, |x_2|, …, |x_n|}. For all p ≥ 1, the p-norms and maximum norm satisfy the properties of a "length function" (or norm), that is: only the zero vector has zero length, the length of the vector is positive homogeneous with respect to multiplication by a scalar (positive homogeneity), and the length of the sum of two vectors is no larger than the sum of lengths of the vectors (triangle inequality). Abstractly speaking, this means that R^n together with the p-norm is a normed vector space. Moreover, it turns out that this space is complete, thus making it a Banach space. Relations between p-norms The grid distance or rectilinear distance (sometimes called the "Manhattan distance") between two points is never shorter than the length of the line segment between them (the Euclidean or "as the crow flies" distance). Formally, this means that the Euclidean norm of any vector is bounded by its 1-norm: ||x||_2 ≤ ||x||_1. This fact generalizes to p-norms in that the p-norm ||x||_p of any given vector x does not grow with p: ||x||_{p+a} ≤ ||x||_p for any vector x and real numbers p ≥ 1 and a ≥ 0. For the opposite direction, the following relation between the 1-norm and the 2-norm is known: ||x||_1 ≤ √n ||x||_2. This inequality depends on the dimension n of the underlying vector space and follows directly from the Cauchy–Schwarz inequality. In general, for vectors in C^n where 0 < r < p: ||x||_p ≤ ||x||_r ≤ n^(1/r − 1/p) ||x||_p. This is a consequence of Hölder's inequality. When 0 < p < 1 In R^n for n > 1, the formula ||x||_p = (|x_1|^p + |x_2|^p + ⋯ + |x_n|^p)^(1/p) defines an absolutely homogeneous function for 0 < p < 1; however, the resulting function does not define a norm, because it is not subadditive. On the other hand, the formula |x_1|^p + |x_2|^p + ⋯ + |x_n|^p defines a subadditive function at the cost of losing absolute homogeneity.
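The finite-dimensional norms and the inequalities just stated are easy to verify numerically; the following Python sketch uses numpy.linalg.norm (ord = p gives the p-norm, ord = inf the maximum norm) on an arbitrary example vector.

import numpy as np

x = np.array([3.0, -4.0, 1.0])
for p in (1, 2, 4, np.inf):
    print(p, np.linalg.norm(x, ord=p))   # the p-norm is non-increasing in p

n = len(x)
one, two = np.linalg.norm(x, 1), np.linalg.norm(x, 2)
assert two <= one <= np.sqrt(n) * two    # ||x||_2 <= ||x||_1 <= sqrt(n) ||x||_2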
The formula does define an F-norm, though, which is homogeneous of degree Hence, the function defines a metric. The metric space is denoted by Although the -unit ball around the origin in this metric is "concave", the topology defined on by the metric is the usual vector space topology of hence is a locally convex topological vector space. Beyond this qualitative statement, a quantitative way to measure the lack of convexity of is to denote by the smallest constant such that the scalar multiple of the -unit ball contains the convex hull of which is equal to The fact that for fixed we have shows that the infinite-dimensional sequence space defined below, is no longer locally convex. When There is one norm and another function called the "norm" (with quotation marks). The mathematical definition of the norm was established by Banach's Theory of Linear Operations. The space of sequences has a complete metric topology provided by the F-norm on the product metric: The -normed space is studied in functional analysis, probability theory, and harmonic analysis. Another function, called the "norm" by David Donoho—whose quotation marks warn that this function is not a proper norm—is the number of non-zero entries of the vector Many authors abuse terminology by omitting the quotation marks. So defined, the zero "norm" of is equal to This is not a norm because it is not homogeneous. For example, scaling the vector by a positive constant does not change the "norm". Despite these defects as a mathematical norm, the non-zero counting "norm" has uses in scientific computing, information theory, and statistics, notably in compressed sensing in signal processing and computational harmonic analysis. Despite not being a norm, the associated metric, known as Hamming distance, is a valid distance, since homogeneity is not required for distances. spaces and sequence spaces The -norm can be extended to vectors that have an infinite number of components (sequences), which yields the space This contains as special cases: the space of sequences whose series are absolutely convergent, the space of square-summable sequences, which is a Hilbert space, and the space of bounded sequences. The space of sequences has a natural vector space structure by applying addition and scalar multiplication coordinate by coordinate. Explicitly, the vector sum and the scalar action for infinite sequences of real (or complex) numbers are given by: Define the -norm: Here, a complication arises, namely that the series on the right is not always convergent, so for example, the sequence made up of only ones, will have an infinite -norm for The space is then defined as the set of all infinite sequences of real (or complex) numbers such that the -norm is finite. One can check that as increases, the set grows larger. For example, the sequence is not in but it is in for as the series diverges for (the harmonic series), but is convergent for One also defines the -norm using the supremum: and the corresponding space of all bounded sequences. It turns out that if the right-hand side is finite, or the left-hand side is infinite. Thus, we will consider spaces for The -norm thus defined on is indeed a norm, and together with this norm is a Banach space. General ℓp-space In complete analogy to the preceding definition one can define the space over a general index set (and ) as where convergence on the right means that only countably many summands are nonzero (see also Unconditional convergence). With the norm the space becomes a Banach space.
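Returning to the counting "norm" and Hamming distance described earlier, both can be sketched in a few lines of Python; the helper names are illustrative assumptions, not a standard library interface:

```python
# Sketch: Donoho's zero "norm" (number of non-zero entries) and the
# associated Hamming distance. The "norm" fails homogeneity, but the
# distance is still a valid metric.

def zero_norm(x):
    """Count the non-zero entries of x."""
    return sum(1 for v in x if v != 0)

def hamming(x, y):
    """Number of coordinates at which x and y differ."""
    return sum(1 for a, b in zip(x, y) if a != b)

x = [0, 3, 0, -1, 2]
print(zero_norm(x))                      # 3
print(zero_norm([10 * v for v in x]))    # still 3: scaling changes nothing,
                                         # so the "norm" is not homogeneous
print(hamming([1, 0, 1, 1], [1, 1, 0, 1]))  # 2
```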
In the case where the index set is finite with elements, the general construction yields with the -norm defined above. If is countably infinite, this is exactly the sequence space defined above. For uncountable sets this is a non-separable Banach space which can be seen as the locally convex direct limit of -sequence spaces. For the -norm is even induced by a canonical inner product called the , which means that holds for all vectors This inner product can be expressed in terms of the norm by using the polarization identity. On it can be defined by Now consider the case Define where for all The index set can be turned into a measure space by giving it the discrete σ-algebra and the counting measure. Then the space is just a special case of the more general -space (defined below). Lp spaces and Lebesgue integrals An space may be defined as a space of measurable functions for which the -th power of the absolute value is Lebesgue integrable, where functions which agree almost everywhere are identified. More generally, let be a measure space and When , consider the set of all measurable functions from to or whose absolute value raised to the -th power has a finite integral, or in symbols: To define the set for recall that two functions and defined on are said to be , written , if the set is measurable and has measure zero. Similarly, a measurable function (and its absolute value) is (or ) by a real number written , if the (necessarily) measurable set has measure zero. The space is the set of all measurable functions that are bounded almost everywhere (by some real ) and is defined as the infimum of these bounds: When then this is the same as the essential supremum of the absolute value of : For example, if is a measurable function that is equal to almost everywhere then for every and thus for all For every positive the value under of a measurable function and its absolute value are always the same (that is, for all ) and so a measurable function belongs to if and only if its absolute value does. Because of this, many formulas involving -norms are stated only for non-negative real-valued functions. Consider for example the identity which holds whenever is measurable, is real, and (here when ). The non-negativity requirement can be removed by substituting in for which gives Note in particular that when is finite then the formula relates the -norm to the -norm. Seminormed space of -th power integrable functions Each set of functions forms a vector space when addition and scalar multiplication are defined pointwise. That the sum of two -th power integrable functions and is again -th power integrable follows from although it is also a consequence of Minkowski's inequality which establishes that satisfies the triangle inequality for (the triangle inequality does not hold for ). That is closed under scalar multiplication is due to being absolutely homogeneous, which means that for every scalar and every function Absolute homogeneity, the triangle inequality, and non-negativity are the defining properties of a seminorm. Thus is a seminorm and the set of -th power integrable functions together with the function defines a seminormed vector space. In general, the seminorm is not a norm because there might exist measurable functions that satisfy but are not equal to ( is a norm if and only if no such exists). Zero sets of -seminorms If is measurable and equals a.e. then for all positive On the other hand, if is a measurable function for which there exists some such that then almost everywhere.
When is finite then this follows from the case and the formula mentioned above. Thus if is positive and is any measurable function, then if and only if almost everywhere. Since the right hand side ( a.e.) does not mention it follows that all have the same zero set (it does not depend on ). So denote this common set by This set is a vector subspace of for every positive Quotient vector space Like every seminorm, the seminorm induces a norm (defined shortly) on the canonical quotient vector space of by its vector subspace This normed quotient space is called and it is the subject of this article. We begin by defining the quotient vector space. Given any the coset consists of all measurable functions that are equal to almost everywhere. The set of all cosets, typically denoted by forms a vector space with origin when vector addition and scalar multiplication are defined by and This particular quotient vector space will be denoted by Two cosets are equal if and only if (or equivalently, ), which happens if and only if almost everywhere; if this is the case then and are identified in the quotient space. Hence, strictly speaking consists of equivalence classes of functions. Given any the value of the seminorm on the coset is constant and equal to , that is: The map is a norm on called the . The value of a coset is independent of the particular function that was chosen to represent the coset, meaning that if is any coset then for every (since for every ). The Lebesgue space The normed vector space is called or the of -th power integrable functions and it is a Banach space for every (meaning that it is a complete metric space, a result that is sometimes called the Riesz–Fischer theorem). When the underlying measure space is understood then is often abbreviated or even just Depending on the author, the subscript notation might denote either or If the seminorm on happens to be a norm (which happens if and only if ) then the normed space will be linearly isometrically isomorphic to the normed quotient space via the canonical map (since ); in other words, they will be, up to a linear isometry, the same normed space and so they may both be called " space". The above definitions generalize to Bochner spaces. In general, this process cannot be reversed: there is no consistent way to define a "canonical" representative of each coset of in For however, there is a theory of lifts enabling such recovery. Special cases For the spaces are a special case of spaces, when are the natural numbers and is the counting measure. More generally, if one considers any set with the counting measure, the resulting space is denoted For example, is the space of all sequences indexed by the integers, and when defining the -norm on such a space, one sums over all the integers. The space where is the set with elements, is with its -norm as defined above. Similar to spaces, is the only Hilbert space among spaces. In the complex case, the inner product on is defined by Functions in are sometimes called square-integrable functions, quadratically integrable functions or square-summable functions, but sometimes these terms are reserved for functions that are square-integrable in some other sense, such as in the sense of a Riemann integral.
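As a small numerical illustration of the inner-product structure singled out here, the following sketch checks the real polarization identity mentioned earlier, which recovers the inner product from the norm, on finite sequences standing in for elements of ℓ²; a sketch only, under the assumption of real scalars:

```python
# Sketch: verify the real polarization identity
#   <x, y> = ( ||x + y||^2 - ||x - y||^2 ) / 4
# on finite real sequences, treating them as elements of l2.
import math

def norm2(x):
    return math.sqrt(sum(v * v for v in x))

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

x = [1.0, 2.0, -1.0]
y = [0.5, -1.0, 3.0]

lhs = inner(x, y)
rhs = (norm2([a + b for a, b in zip(x, y)]) ** 2 -
       norm2([a - b for a, b in zip(x, y)]) ** 2) / 4
assert math.isclose(lhs, rhs)
print(lhs, rhs)  # both -4.5
```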
Like any Hilbert space, every space is linearly isometric to a suitable where the cardinality of the set is the cardinality of an arbitrary basis for this particular space. If we use complex-valued functions, the space is a commutative C*-algebra with pointwise multiplication and conjugation. For many measure spaces, including all sigma-finite ones, it is in fact a commutative von Neumann algebra. An element of defines a bounded operator on any space by multiplication. When If then can be defined as above, that is: In this case, however, the -norm does not satisfy the triangle inequality and defines only a quasi-norm. The inequality valid for implies that and so the function is a metric on The resulting metric space is complete. In this setting satisfies a reverse Minkowski inequality, that is for This result may be used to prove Clarkson's inequalities, which are in turn used to establish the uniform convexity of the spaces for . The space for is an F-space: it admits a complete translation-invariant metric with respect to which the vector space operations are continuous. It is the prototypical example of an F-space that, for most reasonable measure spaces, is not locally convex: in or every open convex set containing the function is unbounded for the -quasi-norm; therefore, the vector does not possess a fundamental system of convex neighborhoods. Specifically, this is true if the measure space contains an infinite family of disjoint measurable sets of finite positive measure. The only nonempty convex open set in is the entire space. Consequently, there are no nonzero continuous linear functionals on : the continuous dual space is the zero space. In the case of the counting measure on the natural numbers (i.e. ), the bounded linear functionals on are exactly those that are bounded on , i.e., those given by sequences in Although does contain non-trivial convex open sets, it fails to have enough of them to give a base for the topology. Having no linear functionals is highly undesirable for the purposes of doing analysis. In case of the Lebesgue measure on rather than work with for it is common to work with the Hardy space whenever possible, as this has quite a few linear functionals: enough to distinguish points from one another. However, the Hahn–Banach theorem still fails in for . Properties Hölder's inequality Suppose satisfy . If and then and This inequality, called Hölder's inequality, is in some sense optimal since if and is a measurable function such that where the supremum is taken over the closed unit ball of then and Atomic decomposition If then every non-negative has an , meaning that there exist a sequence of non-negative real numbers and a sequence of non-negative functions called , whose supports are pairwise disjoint sets of measure such that and for every integer and and where moreover, the sequence of functions depends only on (it is independent of ). These inequalities guarantee that for all integers while the supports of being pairwise disjoint implies Dual spaces The dual space of for has a natural isomorphism with where is such that . This isomorphism associates with the functional defined by for every is a well defined continuous linear mapping which is an isometry by the extremal case of Hölder's inequality. If is a -finite measure space one can use the Radon–Nikodym theorem to show that any can be expressed this way, i.e., is an isometric isomorphism of Banach spaces. Hence, it is usual to say simply that is the continuous dual space of For the space is reflexive.
Let be as above and let be the corresponding linear isometry. Consider the map from to obtained by composing with the transpose (or adjoint) of the inverse of This map coincides with the canonical embedding of into its bidual. Moreover, the map is onto, as composition of two onto isometries, and this proves reflexivity. If the measure on is sigma-finite, then the dual of is isometrically isomorphic to (more precisely, the map corresponding to is an isometry from onto ). The dual of is subtler. Elements of can be identified with bounded signed finitely additive measures on that are absolutely continuous with respect to . See ba space for more details. If we assume the axiom of choice, this space is much bigger than except in some trivial cases. However, Saharon Shelah proved that there are relatively consistent extensions of Zermelo–Fraenkel set theory (ZF + DC + "Every subset of the real numbers has the Baire property") in which the dual of is Embeddings Colloquially, if then contains functions that are more locally singular, while elements of can be more spread out. Consider the Lebesgue measure on the half line A continuous function in might blow up near but must decay sufficiently fast toward infinity. On the other hand, continuous functions in need not decay at all but no blow-up is allowed. More formally, suppose that , then: if and only if does not contain sets of finite but arbitrarily large measure (e.g. any finite measure). if and only if does not contain sets of non-zero but arbitrarily small measure (e.g. the counting measure). Neither condition holds for the Lebesgue measure on the real line, while both conditions hold for the counting measure on any finite set. As a consequence of the closed graph theorem, the embedding is continuous, i.e., the identity operator is a bounded linear map from to in the first case and to in the second. Indeed, if the domain has finite measure, one can make the following explicit calculation using Hölder's inequality leading to The constant appearing in the above inequality is optimal, in the sense that the operator norm of the identity is precisely the case of equality being achieved exactly when -almost-everywhere. Dense subspaces Let and be a measure space and consider an integrable simple function on given by where are scalars, has finite measure and is the indicator function of the set for By construction of the integral, the vector space of integrable simple functions is dense in More can be said when is a normal topological space and its Borel σ-algebra. Suppose is an open set with Then for every Borel set contained in there exist a closed set and an open set such that for every . Subsequently, there exists a Urysohn function on that is on and on with If can be covered by an increasing sequence of open sets that have finite measure, then the space of –integrable continuous functions is dense in More precisely, one can use bounded continuous functions that vanish outside one of the open sets This applies in particular when and when is the Lebesgue measure. For example, the space of continuous and compactly supported functions as well as the space of integrable step functions are dense in . Closed subspaces Suppose . If is a probability space and is a closed subspace of then is finite-dimensional. It is crucial that the vector space be a subset of since it is possible to construct an infinite-dimensional closed vector subspace of which lies in , taking the Lebesgue measure on the circle group divided by as the probability measure.
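Before turning to applications, both Hölder's inequality and the finite-measure embedding above can be spot-checked numerically. The sketch below uses a five-point probability space with uniform weights; the exponent choices (p = q = 2 for Hölder, so the Cauchy–Schwarz case, and the pair 2 < 4 for the embedding) are assumptions made purely for illustration:

```python
# Sketch: check Hoelder's inequality ||f g||_1 <= ||f||_2 ||g||_2 and the
# embedding ||f||_2 <= ||f||_4 on a probability space (total measure 1,
# so the embedding constant mu^(1/p - 1/q) equals 1).

def lp_norm(f, p, w):
    """Weighted L^p norm: (sum_i w_i |f_i|^p)^(1/p)."""
    return sum(wi * abs(v) ** p for v, wi in zip(f, w)) ** (1.0 / p)

n = 5
w = [1.0 / n] * n                      # uniform probability weights
f = [1.0, -2.0, 0.5, 3.0, -1.0]
g = [2.0, 1.0, -1.5, 0.5, 2.5]

fg = [a * b for a, b in zip(f, g)]
assert lp_norm(fg, 1, w) <= lp_norm(f, 2, w) * lp_norm(g, 2, w)  # Hoelder
assert lp_norm(f, 2, w) <= lp_norm(f, 4, w)                      # embedding
print("both inequalities hold")
```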
Applications Statistics In statistics, measures of central tendency and statistical dispersion, such as the mean, median, and standard deviation, can be defined in terms of metrics, and measures of central tendency can be characterized as solutions to variational problems. In penalized regression, "L1 penalty" and "L2 penalty" refer to penalizing either the norm of a solution's vector of parameter values (i.e. the sum of its absolute values), or its squared norm (its Euclidean length). Techniques which use an L1 penalty, like LASSO, encourage sparse solutions (where many of the parameters are zero). Elastic net regularization uses a penalty term that is a combination of the norm and the squared norm of the parameter vector. Hausdorff–Young inequality The Fourier transform for the real line (or, for periodic functions, see Fourier series), maps to (or to ) respectively, where and This is a consequence of the Riesz–Thorin interpolation theorem, and is made precise with the Hausdorff–Young inequality. By contrast, if the Fourier transform does not map into Hilbert spaces Hilbert spaces are central to many applications, from quantum mechanics to stochastic calculus. The spaces and are both Hilbert spaces. In fact, by choosing a Hilbert basis i.e., a maximal orthonormal subset of or any Hilbert space, one sees that every Hilbert space is isometrically isomorphic to (same as above), i.e., a Hilbert space of type Generalizations and extensions Weak Let be a measure space, and a measurable function with real or complex values on The distribution function of is defined for by If is in for some with then by Markov's inequality, A function is said to be in the space weak , or if there is a constant such that, for all The best constant for this inequality is the -norm of and is denoted by The weak coincide with the Lorentz spaces so this notation is also used to denote them. The -norm is not a true norm, since the triangle inequality fails to hold. Nevertheless, for in and in particular In fact, one has and raising to power and taking the supremum in one has Under the convention that two functions are equal if they are equal almost everywhere, the spaces are complete. For any the expression is comparable to the -norm. Further, in the case this expression defines a norm if Hence for the weak spaces are Banach spaces. A major result that uses the -spaces is the Marcinkiewicz interpolation theorem, which has broad applications to harmonic analysis and the study of singular integrals. Weighted spaces As before, consider a measure space Let be a measurable function. The -weighted space is defined as where means the measure defined by or, in terms of the Radon–Nikodym derivative, the norm for is explicitly As -spaces, the weighted spaces have nothing special, since is equal to But they are the natural framework for several results in harmonic analysis; they appear for example in the Muckenhoupt theorem: for the classical Hilbert transform is defined on where denotes the unit circle and the Lebesgue measure; the (nonlinear) Hardy–Littlewood maximal operator is bounded on . Muckenhoupt's theorem describes weights such that the Hilbert transform remains bounded on and the maximal operator on spaces on manifolds One may also define spaces on a manifold, called the intrinsic spaces of the manifold, using densities. Vector-valued spaces Given a measure space and a locally convex space (here assumed to be complete), it is possible to define spaces of -integrable -valued functions on in a number of ways.
One way is to define the spaces of Bochner integrable and Pettis integrable functions, and then endow them with locally convex TVS-topologies that are (each in their own way) a natural generalization of the usual topology. Another way involves topological tensor products of with Elements of the vector space are finite sums of simple tensors where each simple tensor may be identified with the function that sends This tensor product is then endowed with a locally convex topology that turns it into a topological tensor product, the most common of which are the projective tensor product, denoted by and the injective tensor product, denoted by In general, neither of these spaces is complete so their completions are constructed, which are respectively denoted by and (this is analogous to how the space of scalar-valued simple functions on when seminormed by any is not complete so a completion is constructed which, after being quotiented by is isometrically isomorphic to the Banach space ). Alexander Grothendieck showed that when is a nuclear space (a concept he introduced), then these two constructions are, respectively, canonically TVS-isomorphic with the spaces of Bochner and Pettis integral functions mentioned earlier; in short, they are indistinguishable. space of measurable functions The vector space of (equivalence classes of) measurable functions on is denoted . By definition, it contains all the and is equipped with the topology of convergence in measure. When is a probability measure (i.e., ), this mode of convergence is named convergence in probability. The space is always a topological abelian group but is only a topological vector space if This is because scalar multiplication is continuous if and only if If is -finite then the weaker topology of local convergence in measure is an F-space, i.e. a completely metrizable topological vector space. Moreover, this topology is isometric to global convergence in measure for a suitable choice of probability measure The description is easier when is finite. If is a finite measure on the function admits for the convergence in measure the following fundamental system of neighborhoods The topology can be defined by any metric of the form where is bounded continuous concave and non-decreasing on with and when (for example, ). Such a metric is called a Lévy metric for Under this metric the space is complete. However, as mentioned above, scalar multiplication is continuous with respect to this metric only if . To see this, consider the Lebesgue measurable function defined by . Then clearly . The space is in general not locally bounded, and not locally convex. For the infinite Lebesgue measure on the definition of the fundamental system of neighborhoods could be modified as follows The resulting space , with the topology of local convergence in measure, is isomorphic to the space for any positive –integrable density See also Notes References External links Proof that Lp spaces are complete Banach spaces Function spaces Mathematical series Measure theory Normed spaces Lp spaces
Lp space
[ "Mathematics" ]
5,796
[ "Sequences and series", "Function spaces", "Series (mathematics)", "Vector spaces", "Mathematical structures", "Calculus", "Space (mathematics)" ]
45,196
https://en.wikipedia.org/wiki/Injective%20function
In mathematics, an injective function (also known as injection, or one-to-one function) is a function that maps distinct elements of its domain to distinct elements of its codomain; that is, implies (equivalently by contraposition, implies ). In other words, every element of the function's codomain is the image of at most one element of its domain. The term one-to-one function must not be confused with one-to-one correspondence, which refers to bijective functions: functions such that each element in the codomain is an image of exactly one element in the domain. A homomorphism between algebraic structures is a function that is compatible with the operations of the structures. For all common algebraic structures, and, in particular for vector spaces, an injective homomorphism is also called a monomorphism. However, in the more general context of category theory, the definition of a monomorphism differs from that of an injective homomorphism. It is thus a theorem that they are equivalent for algebraic structures; see Monomorphism for more details. A function that is not injective is sometimes called many-to-one. Definition Let be a function whose domain is a set The function is said to be injective provided that for all and in if then ; that is, implies Equivalently, if then in the contrapositive statement. Symbolically, which is logically equivalent to the contrapositive. An injective function (or, more generally, a monomorphism) is often denoted by using the specialized arrows ↣ or ↪ (for example, or ), although some authors specifically reserve ↪ for an inclusion map. Examples For visual examples, readers are directed to the gallery section. For any set and any subset the inclusion map (which sends any element to itself) is injective. In particular, the identity function is always injective (and in fact bijective). If the domain of a function is the empty set, then the function is the empty function, which is injective. If the domain of a function has one element (that is, it is a singleton set), then the function is always injective. The function defined by is injective. The function defined by is not injective, because (for example) . However, if is redefined so that its domain is the non-negative real numbers [0,+∞), then is injective. The exponential function defined by is injective (but not surjective, as no real value maps to a negative number). The natural logarithm function defined by is injective. The function defined by is not injective, since, for example, . More generally, when and are both the real line then an injective function is one whose graph is never intersected by any horizontal line more than once. This principle is referred to as the horizontal line test. Injections can be undone Functions with left inverses are always injections. That is, given if there is a function such that for every , , then is injective. In this case, is called a retraction of Conversely, is called a section of Conversely, every injection with a non-empty domain has a left inverse. It can be defined by choosing an element in the domain of and setting to the unique element of the pre-image (if it is non-empty) or to (otherwise). The left inverse is not necessarily an inverse of because the composition in the other order, may differ from the identity on In other words, an injective function can be "reversed" by a left inverse, but is not necessarily invertible, which requires that the function is bijective.
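For a function given on a finite domain, the definition above can be checked directly by looking for two distinct inputs with the same image. A minimal sketch (the helper name is an assumption made for illustration):

```python
# Sketch: decide injectivity of a finite-domain function by checking
# whether any two distinct inputs share an image.

def is_injective(f, domain):
    """Return True if f maps distinct elements of domain to distinct values."""
    seen = {}
    for x in domain:
        y = f(x)
        if y in seen and seen[y] != x:
            return False      # two distinct inputs hit the same image
        seen[y] = x
    return True

xs = range(-3, 4)
print(is_injective(lambda x: x ** 3, xs))  # True:  x -> x^3 is injective
print(is_injective(lambda x: x ** 2, xs))  # False: (-1)^2 == 1^2
```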
Injections may be made invertible In fact, to turn an injective function into a bijective (hence invertible) function, it suffices to replace its codomain by its actual image That is, let such that for all ; then is bijective. Indeed, can be factored as where is the inclusion function from into More generally, injective partial functions are called partial bijections. Other properties If and are both injective then is injective. If is injective, then is injective (but need not be). is injective if and only if, given any functions whenever then In other words, injective functions are precisely the monomorphisms in the category Set of sets. If is injective and is a subset of then Thus, can be recovered from its image If is injective and and are both subsets of then Every function can be decomposed as for a suitable injection and surjection This decomposition is unique up to isomorphism, and may be thought of as the inclusion function of the range of as a subset of the codomain of If is an injective function, then has at least as many elements as in the sense of cardinal numbers. In particular, if, in addition, there is an injection from to then and have the same cardinal number. (This is known as the Cantor–Bernstein–Schroeder theorem.) If both and are finite with the same number of elements, then is injective if and only if is surjective (in which case is bijective). An injective function which is a homomorphism between two algebraic structures is an embedding. Unlike surjectivity, which is a relation between the graph of a function and its codomain, injectivity is a property of the graph of the function alone; that is, whether a function is injective can be decided by only considering the graph (and not the codomain) of Proving that functions are injective A proof that a function is injective depends on how the function is presented and what properties the function holds. For functions that are given by some formula there is a basic idea. We use the definition of injectivity, namely that if then Here is an example: Proof: Let Suppose So implies which implies Therefore, it follows from the definition that is injective. There are multiple other methods of proving that a function is injective. For example, in calculus if is a differentiable function defined on some interval, then it is sufficient to show that the derivative is always positive or always negative on that interval. In linear algebra, if is a linear transformation it is sufficient to show that the kernel of contains only the zero vector. If is a function with finite domain it is sufficient to look through the list of images of each domain element and check that no image occurs twice on the list. A graphical approach for a real-valued function of a real variable is the horizontal line test. If every horizontal line intersects the curve of in at most one point, then is injective or one-to-one. Gallery See also Notes References External links Earliest Uses of Some of the Words of Mathematics: entry on Injection, Surjection and Bijection has the history of Injection and related terms. Khan Academy – Surjective (onto) and Injective (one-to-one) functions: Introduction to surjective and injective functions Functions and mappings Basic concepts in set theory Types of functions
Injective function
[ "Mathematics" ]
1,491
[ "Mathematical analysis", "Functions and mappings", "Mathematical objects", "Basic concepts in set theory", "Mathematical relations", "Types of functions" ]
45,200
https://en.wikipedia.org/wiki/Universal%20algebra
Universal algebra (sometimes called general algebra) is the field of mathematics that studies algebraic structures themselves, not examples ("models") of algebraic structures. For instance, rather than take particular groups as the object of study, in universal algebra one takes the class of groups as an object of study. Basic idea In universal algebra, an algebra (or algebraic structure) is a set A together with a collection of operations on A. Arity An n-ary operation on A is a function that takes n elements of A and returns a single element of A. Thus, a 0-ary operation (or nullary operation) can be represented simply as an element of A, or a constant, often denoted by a letter like a. A 1-ary operation (or unary operation) is simply a function from A to A, often denoted by a symbol placed in front of its argument, like ~x. A 2-ary operation (or binary operation) is often denoted by a symbol placed between its arguments (also called infix notation), like x ∗ y. Operations of higher or unspecified arity are usually denoted by function symbols, with the arguments placed in parentheses and separated by commas, like f(x,y,z) or f(x1,...,xn). One way of talking about an algebra, then, is by referring to it as an algebra of a certain type , where is an ordered sequence of natural numbers representing the arity of the operations of the algebra. However, some researchers also allow infinitary operations, such as where J is an infinite index set, which is an operation in the algebraic theory of complete lattices. Equations After the operations have been specified, the nature of the algebra is further defined by axioms, which in universal algebra often take the form of identities, or equational laws. An example is the associative axiom for a binary operation, which is given by the equation x ∗ (y ∗ z) = (x ∗ y) ∗ z. The axiom is intended to hold for all elements x, y, and z of the set A. Varieties A collection of algebraic structures defined by identities is called a variety or equational class. Restricting one's study to varieties rules out: quantification, including universal quantification (∀) except before an equation, and existential quantification (∃) logical connectives other than conjunction (∧) relations other than equality, in particular inequalities, both and order relations The study of equational classes can be seen as a special branch of model theory, typically dealing with structures having operations only (i.e. the type can have symbols for functions but not for relations other than equality), and in which the language used to talk about these structures uses equations only. Not all algebraic structures in a wider sense fall into this scope. For example, ordered groups involve an ordering relation, so would not fall within this scope. The class of fields is not an equational class because there is no type (or "signature") in which all field laws can be written as equations (inverses of elements are defined for all non-zero elements in a field, so inversion cannot be added to the type). One advantage of this restriction is that the structures studied in universal algebra can be defined in any category that has finite products. For example, a topological group is just a group in the category of topological spaces. Examples Most of the usual algebraic systems of mathematics are examples of varieties, but not always in an obvious way, since the usual definitions often involve quantification or inequalities. Groups As an example, consider the definition of a group. 
Usually a group is defined in terms of a single binary operation ∗, subject to the axioms: Associativity (as in the previous section): x ∗ (y ∗ z)  =  (x ∗ y) ∗ z;   formally: ∀x,y,z. x∗(y∗z)=(x∗y)∗z. Identity element: There exists an element e such that for each element x, one has e ∗ x  =  x  =  x ∗ e;   formally: ∃e ∀x. e∗x=x=x∗e. Inverse element: The identity element is easily seen to be unique, and is usually denoted by e. Then for each x, there exists an element i such that x ∗ i  =  e  =  i ∗ x;   formally: ∀x ∃i. x∗i=e=i∗x. (Some authors also use the "closure" axiom that x ∗ y belongs to A whenever x and y do, but here this is already implied by calling ∗ a binary operation.) This definition of a group does not immediately fit the point of view of universal algebra, because the axioms of the identity element and inversion are not stated purely in terms of equational laws which hold universally "for all ..." elements, but also involve the existential quantifier "there exists ...". The group axioms can be phrased as universally quantified equations by specifying, in addition to the binary operation ∗, a nullary operation e and a unary operation ~, with ~x usually written as x−1. The axioms become: Associativity: . Identity element: ;   formally: . Inverse element: ;   formally: . To summarize, the usual definition has: a single binary operation (signature (2)) 1 equational law (associativity) 2 quantified laws (identity and inverse) while the universal algebra definition has: 3 operations: one binary, one unary, and one nullary (signature (2,1,0)) 3 equational laws (associativity, identity, and inverse) no quantified laws (except outermost universal quantifiers, which are allowed in varieties) A key point is that the extra operations do not add information, but follow uniquely from the usual definition of a group. Although the usual definition did not uniquely specify the identity element e, an easy exercise shows that it is unique, as is the inverse of each element. The universal algebra point of view is well adapted to category theory. For example, when defining a group object in category theory, where the object in question may not be a set, one must use equational laws (which make sense in general categories), rather than quantified laws (which refer to individual elements). Further, the inverse and identity are specified as morphisms in the category. For example, in a topological group, the inverse must not only exist element-wise, but must give a continuous mapping (a morphism). Some authors also require the identity map to be a closed inclusion (a cofibration). Other examples Most algebraic structures are examples of universal algebras. Rings, semigroups, quasigroups, groupoids, magmas, loops, and others. Vector spaces over a fixed field and modules over a fixed ring are universal algebras. These have a binary addition and a family of unary scalar multiplication operators, one for each element of the field or ring. Examples of relational algebras include semilattices, lattices, and Boolean algebras. Basic constructions We assume that the type, , has been fixed. Then there are three basic constructions in universal algebra: homomorphic image, subalgebra, and product. A homomorphism between two algebras A and B is a function from the set A to the set B such that, for every operation fA of A and corresponding fB of B (of arity, say, n), h(fA(x1, ..., xn)) = fB(h(x1), ..., h(xn)).
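This homomorphism condition can be tested exhaustively for small finite algebras. The sketch below treats the integers mod m as algebras of the group signature (2, 1, 0) discussed above — a binary +, a unary negation ~, and a constant e — and checks that reduction mod 3 is a homomorphism from Z6 to Z3; the representation of an algebra as a dict of operations is an illustrative assumption, not a standard convention:

```python
# Sketch: exhaustively verify h(f_A(x1,...,xn)) = f_B(h(x1),...,h(xn))
# for the algebras Z_m of signature (2, 1, 0).
from itertools import product

def make_group(n):
    """Integers mod n as an algebra: (carrier, operations)."""
    ops = {
        "+": lambda x, y: (x + y) % n,   # binary operation
        "~": lambda x: (-x) % n,         # unary operation (negation)
        "e": 0,                          # nullary operation (a constant)
    }
    return list(range(n)), ops

def is_homomorphism(h, A, B):
    carrier_a, ops_a = A
    _, ops_b = B
    for x, y in product(carrier_a, repeat=2):
        if h(ops_a["+"](x, y)) != ops_b["+"](h(x), h(y)):
            return False
    for x in carrier_a:
        if h(ops_a["~"](x)) != ops_b["~"](h(x)):
            return False
    return h(ops_a["e"]) == ops_b["e"]

Z6, Z3 = make_group(6), make_group(3)
print(is_homomorphism(lambda x: x % 3, Z6, Z3))           # True
print(is_homomorphism(lambda x: x % 3, make_group(4), Z3))  # False:
# 4 is not a multiple of 3, so reduction mod 3 breaks the + condition
```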
(Sometimes the subscripts on f are taken off when it is clear from context which algebra the function is from.) For example, if e is a constant (nullary operation), then h(eA) = eB. If ~ is a unary operation, then h(~x) = ~h(x). If ∗ is a binary operation, then h(x ∗ y) = h(x) ∗ h(y). And so on. A few of the things that can be done with homomorphisms, as well as definitions of certain special kinds of homomorphisms, are listed under Homomorphism. In particular, we can take the homomorphic image of an algebra, h(A). A subalgebra of A is a subset of A that is closed under all the operations of A. A product of some set of algebraic structures is the cartesian product of the sets with the operations defined coordinatewise. Some basic theorems The isomorphism theorems, which encompass the isomorphism theorems of groups, rings, modules, etc. Birkhoff's HSP Theorem, which states that a class of algebras is a variety if and only if it is closed under homomorphic images, subalgebras, and arbitrary direct products. Motivations and applications In addition to its unifying approach, universal algebra also gives deep theorems and important examples and counterexamples. It provides a useful framework for those who intend to start the study of new classes of algebras. It can enable the use of methods invented for some particular classes of algebras to other classes of algebras, by recasting the methods in terms of universal algebra (if possible), and then interpreting these as applied to other classes. It has also provided conceptual clarification; as J.D.H. Smith puts it, "What looks messy and complicated in a particular framework may turn out to be simple and obvious in the proper general one." In particular, universal algebra can be applied to the study of monoids, rings, and lattices. Before universal algebra came along, many theorems (most notably the isomorphism theorems) were proved separately in all of these classes, but with universal algebra, they can be proven once and for all for every kind of algebraic system. The 1956 paper by Higgins referenced below has been well followed up for its framework for a range of particular algebraic systems, while his 1963 paper is notable for its discussion of algebras with operations which are only partially defined, typical examples for this being categories and groupoids. This leads on to the subject of higher-dimensional algebra which can be defined as the study of algebraic theories with partial operations whose domains are defined under geometric conditions. Notable examples of these are various forms of higher-dimensional categories and groupoids. Constraint satisfaction problem Universal algebra provides a natural language for the constraint satisfaction problem (CSP). CSP refers to an important class of computational problems where, given a relational algebra A and an existential sentence over this algebra, the question is to find out whether can be satisfied in A. The algebra A is often fixed, so that CSPA refers to the problem whose instance is only the existential sentence . It is proved that every computational problem can be formulated as CSPA for some algebra A. For example, the n-coloring problem can be stated as CSP of the algebra , i.e. an algebra with n elements and a single relation, inequality. The dichotomy conjecture (proved in April 2017) states that if A is a finite algebra, then CSPA is either P or NP-complete. Generalizations Universal algebra has also been studied using the techniques of category theory. 
In the category-theoretic approach, instead of writing a list of operations and equations obeyed by those operations, one can describe an algebraic structure using categories of a special sort, known as Lawvere theories or more generally algebraic theories. Alternatively, one can describe algebraic structures using monads. The two approaches are closely related, with each having their own advantages. In particular, every Lawvere theory gives a monad on the category of sets, while any "finitary" monad on the category of sets arises from a Lawvere theory. However, a monad describes algebraic structures within one particular category (for example the category of sets), while algebraic theories describe structure within any of a large class of categories (namely those having finite products). A more recent development in category theory is operad theory – an operad is a set of operations, similar to a universal algebra, but restricted in that equations are only allowed between expressions with the variables, with no duplication or omission of variables allowed. Thus, rings can be described as the so-called "algebras" of some operad, but not groups, since the inverse law g g−1 = 1 duplicates the variable g on the left side and omits it on the right side. At first this may seem to be a troublesome restriction, but the payoff is that operads have certain advantages: for example, one can hybridize the concepts of ring and vector space to obtain the concept of associative algebra, but one cannot form a similar hybrid of the concepts of group and vector space. Another development is partial algebra where the operators can be partial functions. Certain partial functions can also be handled by a generalization of Lawvere theories known as "essentially algebraic theories". Another generalization of universal algebra is model theory, which is sometimes described as "universal algebra + logic". History In Alfred North Whitehead's book A Treatise on Universal Algebra, published in 1898, the term universal algebra had essentially the same meaning that it has today. Whitehead credits William Rowan Hamilton and Augustus De Morgan as originators of the subject matter, and James Joseph Sylvester with coining the term itself. At the time structures such as Lie algebras and hyperbolic quaternions drew attention to the need to expand algebraic structures beyond the associatively multiplicative class. In a review Alexander Macfarlane wrote: "The main idea of the work is not unification of the several methods, nor generalization of ordinary algebra so as to include them, but rather the comparative study of their several structures." At the time George Boole's algebra of logic made a strong counterpoint to ordinary number algebra, so the term "universal" served to calm strained sensibilities. Whitehead's early work sought to unify quaternions (due to Hamilton), Grassmann's Ausdehnungslehre, and Boole's algebra of logic. Whitehead wrote in his book: "Such algebras have an intrinsic value for separate detailed study; also they are worthy of comparative study, for the sake of the light thereby thrown on the general theory of symbolic reasoning, and on algebraic symbolism in particular. The comparative study necessarily presupposes some previous separate study, comparison being impossible without knowledge." Whitehead, however, had no results of a general nature. Work on the subject was minimal until the early 1930s, when Garrett Birkhoff and Øystein Ore began publishing on universal algebras.
Developments in metamathematics and category theory in the 1940s and 1950s furthered the field, particularly the work of Abraham Robinson, Alfred Tarski, Andrzej Mostowski, and their students. In the period between 1935 and 1950, most papers were written along the lines suggested by Birkhoff's papers, dealing with free algebras, congruence and subalgebra lattices, and homomorphism theorems. Although the development of mathematical logic had made applications to algebra possible, they came about slowly; results published by Anatoly Maltsev in the 1940s went unnoticed because of the war. Tarski's lecture at the 1950 International Congress of Mathematicians in Cambridge ushered in a new period in which model-theoretic aspects were developed, mainly by Tarski himself, as well as C.C. Chang, Leon Henkin, Bjarni Jónsson, Roger Lyndon, and others. In the late 1950s, Edward Marczewski emphasized the importance of free algebras, leading to the publication of more than 50 papers on the algebraic theory of free algebras by Marczewski himself, together with Jan Mycielski, Władysław Narkiewicz, Witold Nitka, J. Płonka, S. Świerczkowski, K. Urbanik, and others. Starting with William Lawvere's thesis in 1963, techniques from category theory have become important in universal algebra. See also Equational logic Graph algebra Term algebra Clone Universal algebraic geometry Simple algebra (universal algebra) Footnotes References . Burris, Stanley N., and H.P. Sankappanavar, 1981. A Course in Universal Algebra Springer-Verlag. Free online edition. (First published in 1965 by Harper & Row) Freese, Ralph, and Ralph McKenzie, 1987. Commutator Theory for Congruence Modular Varieties, 1st ed. London Mathematical Society Lecture Note Series, 125. Cambridge Univ. Press. . Free online second edition. Hobby, David, and Ralph McKenzie, 1988. The Structure of Finite Algebras American Mathematical Society. . Free online edition. Jipsen, Peter, and Henry Rose, 1992. Varieties of Lattices, Lecture Notes in Mathematics 1533. Springer Verlag. . Free online edition. Pigozzi, Don. General Theory of Algebras. Free online edition. (Mainly of historical interest.) External links Algebra Universalis—a journal dedicated to Universal Algebra.
Universal algebra
[ "Mathematics" ]
3,556
[ "Fields of abstract algebra", "Universal algebra" ]
45,207
https://en.wikipedia.org/wiki/Communications%20satellite
A communications satellite is an artificial satellite that relays and amplifies radio telecommunication signals via a transponder; it creates a communication channel between a source transmitter and a receiver at different locations on Earth. Communications satellites are used for television, telephone, radio, internet, and military applications. Many communications satellites are in geostationary orbit above the equator, so that the satellite appears stationary at the same point in the sky; therefore the satellite dish antennas of ground stations can be aimed permanently at that spot and do not have to move to track the satellite. Others form satellite constellations in low Earth orbit, where antennas on the ground have to follow the position of the satellites and switch between satellites frequently. The radio waves used for telecommunications links travel by line of sight and so are obstructed by the curve of the Earth. The purpose of communications satellites is to relay the signal around the curve of the Earth allowing communication between widely separated geographical points. Communications satellites use a wide range of radio and microwave frequencies. To avoid signal interference, international organizations have regulations for which frequency ranges or "bands" certain organizations are allowed to use. This allocation of bands minimizes the risk of signal interference. History Origins In October 1945, Arthur C. Clarke published an article titled "Extraterrestrial Relays" in the British magazine Wireless World. The article described the fundamentals behind the deployment of artificial satellites in geostationary orbits to relay radio signals. Because of this, Arthur C. Clarke is often quoted as being the inventor of the concept of the communications satellite, and the term 'Clarke Belt' is employed as a description of the orbit. The first artificial Earth satellite was Sputnik 1, which was put into orbit by the Soviet Union on 4 October 1957. It was developed by Mikhail Tikhonravov and Sergey Korolev, building on work by Konstantin Tsiolkovsky. Sputnik 1 was equipped with an on-board radio transmitter that worked on two frequencies of 20.005 and 40.002 MHz, or 7 and 15 meters wavelength. The satellite was not placed in orbit to send data from one point on Earth to another, but the radio transmitter was meant to study the properties of radio wave distribution throughout the ionosphere. The launch of Sputnik 1 was a major step in the exploration of space and rocket development, and marks the beginning of the Space Age. Early active and passive satellite experiments There are two major classes of communications satellites, passive and active. Passive satellites only reflect the signal coming from the source, toward the direction of the receiver. With passive satellites, the reflected signal is not amplified at the satellite, and only a small amount of the transmitted energy actually reaches the receiver. Since the satellite is so far above Earth, the radio signal is attenuated due to free-space path loss, so the signal received on Earth is very weak. Active satellites, on the other hand, amplify the received signal before retransmitting it to the receiver on the ground. Passive satellites were the first communications satellites, but are little used now. Work that was begun in the field of electrical intelligence gathering at the United States Naval Research Laboratory in 1951 led to a project named Communication Moon Relay. 
Military planners had long shown considerable interest in secure and reliable communications lines as a tactical necessity, and the ultimate goal of this project was the creation of the longest communications circuit in human history, with the Moon, Earth's natural satellite, acting as a passive relay. After achieving the first transoceanic communication between Washington, D.C., and Hawaii on 23 January 1956, this system was publicly inaugurated and put into formal production in January 1960. The first satellite purpose-built to actively relay communications was Project SCORE, led by Advanced Research Projects Agency (ARPA) and launched on 18 December 1958, which used a tape recorder to carry a stored voice message, as well as to receive, store, and retransmit messages. It was used to send a Christmas greeting to the world from U.S. President Dwight D. Eisenhower. The satellite also executed several realtime transmissions before the non-rechargeable batteries failed on 30 December 1958 after eight hours of actual operation. The direct successor to SCORE was another ARPA-led project called Courier. Courier 1B was launched on 4 October 1960 to explore whether it would be possible to establish a global military communications network by using "delayed repeater" satellites, which receive and store information until commanded to rebroadcast them. After 17 days, a command system failure ended communications from the satellite. NASA's satellite applications program launched the first artificial satellite used for passive relay communications in Echo 1 on 12 August 1960. Echo 1 was an aluminized balloon satellite acting as a passive reflector of microwave signals. Communication signals were bounced off the satellite from one point on Earth to another. This experiment sought to establish the feasibility of worldwide broadcasts of telephone, radio, and television signals. More firsts and further experiments Telstar was the first active, direct relay communications commercial satellite and marked the first transatlantic transmission of television signals. Belonging to AT&T as part of a multi-national agreement between AT&T, Bell Telephone Laboratories, NASA, the British General Post Office, and the French National PTT (Post Office) to develop satellite communications, it was launched by NASA from Cape Canaveral on 10 July 1962, in the first privately sponsored space launch. Another passive relay experiment primarily intended for military communications purposes was Project West Ford, which was led by Massachusetts Institute of Technology's Lincoln Laboratory. After an initial failure in 1961, a launch on 9 May 1963 dispersed 350 million copper needle dipoles to create a passive reflecting belt. Even though only about half of the dipoles properly separated from each other, the project was able to successfully experiment and communicate using frequencies in the SHF X band spectrum. An immediate antecedent of the geostationary satellites was the Hughes Aircraft Company's Syncom 2, launched on 26 July 1963. Syncom 2 was the first communications satellite in a geosynchronous orbit. It revolved around the Earth once per day at constant speed, but because it still had north–south motion, special equipment was needed to track it. Its successor, Syncom 3, launched on 19 August 1964, was the first geostationary communications satellite. Syncom 3 obtained a geosynchronous orbit, without a north–south motion, making it appear from the ground as a stationary object in the sky.
A direct extension of the passive experiments of Project West Ford was the Lincoln Experimental Satellite program, also conducted by the Lincoln Laboratory on behalf of the United States Department of Defense. The LES-1 active communications satellite was launched on 11 February 1965 to explore the feasibility of active solid-state X band long-range military communications. A total of nine satellites were launched between 1965 and 1976 as part of this series. International commercial satellite projects In the United States, 1962 saw the creation of the Communications Satellite Corporation (COMSAT) private corporation, which was subject to instruction by the US Government on matters of national policy. Over the next two years, international negotiations led to the Intelsat Agreements, which in turn led to the launch of Intelsat 1, also known as Early Bird, on 6 April 1965, and which was the first commercial communications satellite to be placed in geosynchronous orbit. Subsequent Intelsat launches in the 1960s provided multi-destination service and video, audio, and data service to ships at sea (Intelsat 2 in 1966–67), and the completion of a fully global network with Intelsat 3 in 1969–70. By the 1980s, with significant expansions in commercial satellite capacity, Intelsat was on its way to become part of the competitive private telecommunications industry, and had started to get competition from the likes of PanAmSat in the United States, which, ironically, was then bought by its archrival in 2005. When Intelsat was launched, the United States was the only launch source outside of the Soviet Union, who did not participate in the Intelsat agreements. The Soviet Union launched its first communications satellite on 23 April 1965 as part of the Molniya program. This program was also unique at the time for its use of what then became known as the Molniya orbit, which describes a highly elliptical orbit, with two high apogees daily over the northern hemisphere. This orbit provides a long dwell time over Russian territory as well as over Canada at higher latitudes than geostationary orbits over the equator. In the 2020s, the popularity of low Earth orbit satellite internet constellations providing relatively low-cost internet services led to reducing demand for new geostationary orbit communications satellites. Satellite orbits Communications satellites usually have one of three primary types of orbit, while other orbital classifications are used to further specify orbital details. MEO and LEO are non-geostationary orbits (NGSO). Geostationary satellites have a geostationary orbit (GEO), which is about 35,786 km (22,236 mi) from Earth's surface. This orbit has the special characteristic that the apparent position of the satellite in the sky when viewed by a ground observer does not change, the satellite appears to "stand still" in the sky. This is because the satellite's orbital period is the same as the rotation rate of the Earth. The advantage of this orbit is that ground antennas do not have to track the satellite across the sky, they can be fixed to point at the location in the sky the satellite appears. Medium Earth orbit (MEO) satellites are closer to Earth. Orbital altitudes range from about 2,000 to 35,786 km (1,243 to 22,236 mi) above Earth. The region below medium orbits is referred to as low Earth orbit (LEO), and is about 160 to 2,000 km (99 to 1,243 mi) above Earth.
Because satellites in MEO and LEO orbit the Earth faster, they do not remain visible in the sky to a fixed point on Earth continually like a geostationary satellite, but appear to a ground observer to cross the sky and "set" when they go behind the Earth beyond the visible horizon. Therefore, providing continuous communications capability with these lower orbits requires a larger number of satellites, so that one of them will always be visible in the sky for transmission of communication signals. However, due to their closer distance to the Earth, LEO or MEO satellites can communicate with the ground with reduced latency and at lower power than would be required from a geosynchronous orbit. Low Earth orbit (LEO) A low Earth orbit (LEO) typically is a circular orbit about 160 to 2,000 km (99 to 1,243 mi) above the Earth's surface and, correspondingly, has a period (time to revolve around the Earth) of about 90 minutes. Because of their low altitude, these satellites are only visible from within a radius of roughly 1,000 km (620 mi) from the sub-satellite point. In addition, satellites in low Earth orbit change their position relative to the ground quickly. So even for local applications, many satellites are needed if the mission requires uninterrupted connectivity. Low-Earth-orbiting satellites are less expensive to launch into orbit than geostationary satellites and, due to their proximity to the ground, do not require as high a signal strength (signal strength falls off as the square of the distance from the source, so the effect is considerable). Thus there is a trade-off between the number of satellites and their cost. In addition, there are important differences in the onboard and ground equipment needed to support the two types of missions. Satellite constellation A group of satellites working in concert is known as a satellite constellation. Two such constellations, intended to provide satellite phone and low-speed data services, primarily to remote areas, are the Iridium and Globalstar systems. The Iridium system has 66 satellites, whose 86.4° orbital inclination and inter-satellite links provide service availability over the entire surface of Earth. Starlink is a satellite internet constellation operated by SpaceX that aims for global satellite Internet access coverage. It is also possible to offer discontinuous coverage using a low-Earth-orbit satellite capable of storing data received while passing over one part of Earth and transmitting it later while passing over another part. This is the case with the CASCADE system of Canada's CASSIOPE communications satellite. Another system using this store-and-forward method is Orbcomm. Medium Earth orbit (MEO) A medium Earth orbit satellite is one in orbit somewhere between about 2,000 and 35,786 km (1,243 and 22,236 mi) above the Earth's surface. MEO satellites are similar to LEO satellites in functionality. MEO satellites are visible for much longer periods of time than LEO satellites, usually between 2 and 8 hours. MEO satellites have a larger coverage area than LEO satellites. A MEO satellite's longer duration of visibility and wider footprint mean fewer satellites are needed in a MEO network than in a LEO network. One disadvantage is that a MEO satellite's distance gives it a longer time delay and weaker signal than a LEO satellite, although these limitations are not as severe as those of a GEO satellite. Like LEOs, these satellites do not maintain a stationary distance from the Earth. This is in contrast to the geostationary orbit, where satellites are always about 35,786 km (22,236 mi) from Earth.
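The latency and signal-strength comparisons above follow from simple geometry: one-way propagation delay is distance divided by the speed of light, and free-space signal power falls off as the square of the distance. A rough Python sketch (the LEO altitude is a representative value, the MEO figure is the O3b altitude discussed below, and the distances used are straight-up nadir ranges, ignoring slant range and atmospheric effects):

```python
C = 299_792_458.0  # speed of light, m/s

def one_way_delay_ms(altitude_km: float) -> float:
    """Propagation delay straight up to a satellite directly overhead."""
    return altitude_km * 1e3 / C * 1e3

ALTITUDES_KM = {"LEO": 550, "MEO": 8_063, "GEO": 35_786}
geo = ALTITUDES_KM["GEO"]
for name, alt in ALTITUDES_KM.items():
    # Received power for identical transmitters and antennas scales as the
    # inverse square of distance, so relative to GEO it is (geo/alt)^2.
    print(f"{name}: {one_way_delay_ms(alt):6.1f} ms one-way, "
          f"{(geo / alt) ** 2:7.1f}x received power vs GEO")
```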
Typically the orbit of a medium Earth orbit satellite is about 16,000 km (10,000 mi) above Earth. In various patterns, these satellites make the trip around Earth in anywhere from 2 to 8 hours. Examples of MEO In 1962, the communications satellite Telstar was launched. It was a medium Earth orbit satellite designed to help facilitate high-speed telephone signals. Although it was the first practical way to transmit signals over the horizon, its major drawback was soon realised. Because its orbital period of about 2.5 hours did not match the Earth's rotational period of 24 hours, continuous coverage was impossible. It was apparent that multiple MEOs needed to be used in order to provide continuous coverage. In 2013, the first four of a constellation of 20 MEO satellites were launched. The O3b satellites provide broadband internet services, in particular to remote locations and maritime and in-flight use, and orbit at an altitude of 8,063 km (5,010 mi). Geostationary orbit (GEO) To an observer on Earth, a satellite in a geostationary orbit appears motionless, in a fixed position in the sky. This is because it revolves around the Earth at Earth's own angular velocity (one revolution per sidereal day, in an equatorial orbit). A geostationary orbit is useful for communications because ground antennas can be aimed at the satellite without having to track the satellite's motion. This is relatively inexpensive. In applications that require many ground antennas, such as DirecTV distribution, the savings in ground equipment can more than outweigh the cost and complexity of placing a satellite into orbit. Examples of GEO The first geostationary satellite was Syncom 3, launched on 19 August 1964, and used for communication across the Pacific starting with television coverage of the 1964 Summer Olympics. Shortly after Syncom 3, Intelsat I, also known as Early Bird, was launched on 6 April 1965 and placed in orbit at 28° west longitude. It was the first geostationary satellite for telecommunications over the Atlantic Ocean. On 9 November 1972, Canada's first geostationary satellite serving the continent, Anik A1, was launched by Telesat Canada, with the United States following suit with the launch of Westar 1 by Western Union on 13 April 1974. On 30 May 1974, the first geostationary communications satellite in the world to be three-axis stabilized was launched: the experimental satellite ATS-6, built for NASA. After the launches of the Telstar through Westar 1 satellites, RCA Americom (later GE Americom, now SES) launched Satcom 1 in 1975. It was Satcom 1 that was instrumental in helping early cable TV channels such as WTBS (now TBS), HBO, CBN (now Freeform) and The Weather Channel become successful, because these channels distributed their programming to all of the local cable TV headends using the satellite. Additionally, it was the first satellite used by broadcast television networks in the United States, like ABC, NBC, and CBS, to distribute programming to their local affiliate stations. Satcom 1 was widely used because it had twice the communications capacity of the competing Westar 1 in America (24 transponders as opposed to the 12 of Westar 1), resulting in lower transponder-usage costs. Satellites in later decades tended to have even higher transponder numbers. By 2000, Hughes Space and Communications (now Boeing Satellite Development Center) had built nearly 40 percent of the more than one hundred satellites in service worldwide.
Other major satellite manufacturers include Space Systems/Loral, Orbital Sciences Corporation with the Star Bus series, Indian Space Research Organisation, Lockheed Martin (owner of the former RCA Astro Electronics/GE Astro Space business), Northrop Grumman, Alcatel Space, now Thales Alenia Space, with the Spacebus series, and Astrium. Molniya orbit Geostationary satellites must operate above the equator and therefore appear lower on the horizon as the receiver gets farther from the equator. This causes problems for extreme northerly latitudes, affecting connectivity and causing multipath interference (caused by signals reflecting off the ground into the ground antenna). For areas close to the North (and South) Pole, a geostationary satellite may appear below the horizon; Molniya orbit satellites, launched mainly in Russia, offer an appealing alternative in such cases. The Molniya orbit is highly inclined, guaranteeing good elevation over selected positions during the northern portion of the orbit. (Elevation is the extent of the satellite's position above the horizon: a satellite at the horizon has zero elevation, and a satellite directly overhead has an elevation of 90 degrees.) The Molniya orbit is designed so that the satellite spends the great majority of its time over the far northern latitudes, during which its ground footprint moves only slightly. Its period is one half day, so that the satellite is available for operation over the targeted region for six to nine hours every second revolution. In this way a constellation of three Molniya satellites (plus in-orbit spares) can provide uninterrupted coverage. The first satellite of the Molniya series was launched on 23 April 1965 and was used for experimental transmission of TV signals from a Moscow uplink station to downlink stations located in Siberia and the Russian Far East, in Norilsk, Khabarovsk, Magadan and Vladivostok. In November 1967 Soviet engineers created Orbita, a unique national satellite television network based on Molniya satellites. Polar orbit In the United States, the National Polar-orbiting Operational Environmental Satellite System (NPOESS) was established in 1994 to consolidate the polar satellite operations of NASA (National Aeronautics and Space Administration) and NOAA (National Oceanic and Atmospheric Administration). NPOESS manages a number of satellites for various purposes; for example, METSAT, for meteorological satellites; EUMETSAT, the European branch of the program; and METOP, for meteorological operations. These orbits are Sun-synchronous, meaning that they cross the equator at the same local time each day. For example, the satellites in the NPOESS (civilian) orbit will cross the equator, going from south to north, at 1:30 P.M., 5:30 P.M., and 9:30 P.M. Beyond geostationary orbit There are plans and initiatives to bring dedicated communications satellites beyond geostationary orbit. NASA proposed LunaNet as a data network aiming to provide a "Lunar Internet" for cis-lunar spacecraft and installations. The Moonlight Initiative is an equivalent ESA project, stated to be compatible with LunaNet and to provide navigation services for the lunar surface. Both programmes envisage constellations of several satellites in various orbits around the Moon, and other orbits are also planned to be used.
Positions at the Earth–Moon libration points have also been proposed for communications satellites covering the Moon, much as communications satellites in geosynchronous orbit cover the Earth. Dedicated communications satellites in orbits around Mars, supporting missions on the surface and in other orbits, have also been considered, such as the Mars Telecommunications Orbiter. Structure Communications satellites are usually composed of the following subsystems: A communications payload, normally composed of transponders, antennas, amplifiers and switching systems Engines used to bring the satellite to its desired orbit A station-keeping, tracking and stabilization subsystem used to keep the satellite in the right orbit, with its antennas pointed in the right direction and its power system pointed towards the Sun A power subsystem, used to power the satellite systems, normally composed of solar cells and batteries that maintain power during solar eclipse A command and control subsystem, which maintains communications with ground control stations. The ground control Earth stations monitor the satellite's performance and control its functionality during various phases of its life-cycle. The bandwidth available from a satellite depends upon the number of transponders provided by the satellite. Each service (TV, voice, Internet, radio) requires a different amount of bandwidth for transmission. Sizing these requirements is known as link budgeting, and a network simulator can be used to arrive at exact values; a simplified numerical sketch is given below. Frequency allocation for satellite systems Allocating frequencies to satellite services is a complicated process which requires international coordination and planning. This is carried out under the auspices of the International Telecommunication Union (ITU). To facilitate frequency planning, the world is divided into three regions: Region 1: Europe, Africa, the Middle East, what was formerly the Soviet Union, and Mongolia Region 2: North and South America and Greenland Region 3: Asia (excluding Region 1 areas), Australia, and the southwest Pacific Within these regions, frequency bands are allocated to various satellite services, although a given service may be allocated different frequency bands in different regions. Some of the services provided by satellites are: Fixed satellite service (FSS) Broadcasting satellite service (BSS) Mobile-satellite service Radionavigation-satellite service Meteorological-satellite service Applications Telephony The first and historically most important application for communications satellites was intercontinental long-distance telephony. The fixed Public Switched Telephone Network relays telephone calls from land-line telephones to an Earth station, where they are then transmitted to a geostationary satellite. The downlink follows an analogous path. Improvements in submarine communications cables through the use of fiber optics caused some decline in the use of satellites for fixed telephony in the late 20th century. Satellite communications are still used in many applications today. Remote islands such as Ascension Island, Saint Helena, Diego Garcia, and Easter Island, where no submarine cables are in service, need satellite telephones. There are also regions of some continents and countries where landline telecommunications are rare to nonexistent, for example large regions of South America, Africa, Canada, China, Russia, and Australia. Satellite communications also provide connection to the edges of Antarctica and Greenland.
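As a minimal illustration of the link budgeting mentioned above, the core of a downlink budget is a handful of additions and subtractions in decibels. All figures in this Python sketch are hypothetical, chosen only to show the shape of the calculation; real budgets add many more terms (pointing loss, rain fade, implementation margins):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

eirp_dbw = 52.0          # satellite EIRP (hypothetical Ku-band transponder)
freq_hz = 12.0e9         # downlink frequency
distance_m = 38_000e3    # slant range to a GEO satellite (varies with elevation)
gt_dbk = 20.0            # ground terminal figure of merit G/T, dB/K
boltzmann_dbw = -228.6   # Boltzmann's constant, dBW/(K*Hz)

cn0_dbhz = eirp_dbw - fspl_db(distance_m, freq_hz) + gt_dbk - boltzmann_dbw
print(f"Path loss: {fspl_db(distance_m, freq_hz):.1f} dB")  # ~205.6 dB
print(f"C/N0:      {cn0_dbhz:.1f} dB-Hz")                   # ~95.0 dB-Hz
# The achievable data rate then follows from C/N0 and the required Eb/N0
# of the chosen modulation and coding scheme.
```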
Other land uses for satellite phones include rigs at sea, backup for hospitals, military use, and recreation. Ships at sea, as well as planes, often use satellite phones. Satellite phone systems can be implemented by a number of means. On a large scale, often there will be a local telephone system in an isolated area with a link to the telephone system in a mainland area. There are also services that will patch a radio signal to a telephone system. In such cases, almost any type of satellite can be used. Satellite phones connect directly to a constellation of either geostationary or low-Earth-orbit satellites. Calls are then forwarded to a satellite teleport connected to the Public Switched Telephone Network. Television As television became the main market, its demand for simultaneous delivery of relatively few signals of large bandwidth to many receivers proved a more precise match for the capabilities of geosynchronous comsats. Two satellite types are used for North American television and radio: direct broadcast satellite (DBS) and Fixed Service Satellite (FSS). The definitions of FSS and DBS satellites outside of North America, especially in Europe, are a bit more ambiguous. Most satellites used for direct-to-home television in Europe have the same high power output as DBS-class satellites in North America, but use the same linear polarization as FSS-class satellites. Examples of these are the Astra, Eutelsat, and Hotbird spacecraft in orbit over the European continent. Because of this, the terms FSS and DBS are used mostly in North America and are uncommon in Europe. Fixed Service Satellites use the C band and the lower portions of the Ku band. They are normally used for broadcast feeds to and from television networks and local affiliate stations (such as program feeds for network and syndicated programming, live shots, and backhauls), as well as for distance learning by schools and universities, business television (BTV), videoconferencing, and general commercial telecommunications. FSS satellites are also used to distribute national cable channels to cable television headends. Free-to-air satellite TV channels are also usually distributed on FSS satellites in the Ku band. The Intelsat Americas 5, Galaxy 10R and AMC 3 satellites over North America provide a quite large amount of FTA channels on their Ku band transponders. The American Dish Network DBS service has also used FSS technology for its programming packages requiring the SuperDish antenna, because Dish Network needed more capacity to carry local television stations per the FCC's "must-carry" regulations, and more bandwidth to carry HDTV channels. A direct broadcast satellite is a communications satellite that transmits to small DBS satellite dishes (usually 18 to 24 inches or 45 to 60 cm in diameter). Direct broadcast satellites generally operate in the upper portion of the microwave Ku band. DBS technology is used for direct-to-home (DTH) satellite TV services, such as DirecTV, DISH Network and Orby TV in the United States, Bell Satellite TV and Shaw Direct in Canada, Freesat and Sky in the UK, Ireland, and New Zealand, and DSTV in South Africa. Operating at lower frequency and lower power than DBS, FSS satellites require a much larger dish for reception (3 to 8 feet (1 to 2.5 m) in diameter for Ku band, and 12 feet (3.6 m) or larger for C band).
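The dish sizes quoted above follow from basic antenna physics: for a given gain, the required dish diameter grows with the wavelength, so the lower-frequency C band needs a larger reflector than Ku band, and the lower power of FSS satellites pushes the required gain, and hence dish size, up further. A rough Python sketch (the frequencies and the 60% aperture efficiency are typical assumed values, not taken from any standard):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def dish_gain_dbi(diameter_m: float, freq_hz: float, efficiency: float = 0.6) -> float:
    """Approximate boresight gain of a parabolic dish antenna."""
    wavelength = C / freq_hz
    return 10 * math.log10(efficiency * (math.pi * diameter_m / wavelength) ** 2)

# A 45 cm DBS dish at ~12.5 GHz versus a 3 m C-band dish at ~4 GHz:
print(f"45 cm dish @ 12.5 GHz: {dish_gain_dbi(0.45, 12.5e9):.1f} dBi")  # ~33 dBi
print(f" 3 m dish @  4.0 GHz: {dish_gain_dbi(3.00, 4.0e9):.1f} dBi")   # ~40 dBi
# Shrinking the C-band dish to 45 cm at the same frequency would cost
# roughly 16 dB of gain, hence the large size difference in practice.
```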
FSS satellites use linear polarization for each of the transponders' RF input and output (as opposed to the circular polarization used by DBS satellites), but this is a minor technical difference that users do not notice. FSS satellite technology was also originally used for DTH satellite TV from the late 1970s to the early 1990s in the United States in the form of TVRO (Television Receive Only) receivers and dishes. It was also used in its Ku band form for the now-defunct Primestar satellite TV service. Some satellites have been launched that have transponders in the Ka band, such as DirecTV's SPACEWAY-1 satellite and Anik F2. NASA and ISRO have also launched experimental satellites carrying Ka band beacons. Some manufacturers have also introduced special antennas for mobile reception of DBS television. Using Global Positioning System (GPS) technology as a reference, these antennas automatically re-aim to the satellite no matter where or how the vehicle (on which the antenna is mounted) is situated. These mobile satellite antennas are popular with some recreational vehicle owners. Such mobile DBS antennas are also used by JetBlue Airways for DirecTV (supplied by LiveTV, a subsidiary of JetBlue), which passengers can view on board on LCD screens mounted in the seats. Radio broadcasting Satellite radio offers audio broadcast services in some countries, notably the United States. Mobile services allow listeners to roam a continent, listening to the same audio programming anywhere. A satellite radio or subscription radio (SR) service is a digital radio signal broadcast by a communications satellite, which covers a much wider geographical range than terrestrial radio signals. Amateur radio Amateur radio operators have access to amateur satellites, which have been designed specifically to carry amateur radio traffic. Most such satellites operate as spaceborne repeaters, and are generally accessed by amateurs equipped with UHF or VHF radio equipment and highly directional antennas such as Yagis or dish antennas. Due to launch costs, most current amateur satellites are launched into fairly low Earth orbits, and are designed to deal with only a limited number of brief contacts at any given time. Some satellites also provide data-forwarding services using X.25 or similar protocols. Internet access Since the 1990s, satellite communication technology has been used as a means to connect to the Internet via broadband data connections. This can be very useful for users who are located in remote areas and cannot access a broadband connection, or who require high availability of services. Military Communications satellites are used for military communications applications, such as Global Command and Control Systems. Examples of military systems that use communication satellites are the MILSTAR, the DSCS, and the FLTSATCOM of the United States, NATO satellites, United Kingdom satellites (for instance Skynet), and satellites of the former Soviet Union. India has launched its first military communications satellite, GSAT-7; its transponders operate in the UHF, S, C and Ku bands. Typically military satellites operate in the UHF, SHF (also known as X-band) or EHF (also known as Ka band) frequency bands. Data collection Near-ground in situ environmental monitoring equipment (such as tide gauges, weather stations, weather buoys, and radiosondes) may use satellites for one-way data transmission or two-way telemetry and telecontrol.
It may be based on a secondary payload of a weather satellite (as in the case of GOES and METEOSAT and others in the Argos system) or on dedicated satellites (such as SCD). The data rate is typically much lower than in satellite Internet access. See also Commercialization of space High-altitude platform station History of telecommunication Inter-satellite communications satellite List of communication satellite companies List of communications satellite firsts NewSpace Reconnaissance satellite Relay (disambiguation) Satcom On The Move Satellite data unit Satellite delay Satellite modem Satellite space segment Space pollution Traveling-wave tube References Notes Citations Further reading Slotten, Hugh R. Beyond Sputnik and the Space Race: The Origins of Global Satellite Communications (Johns Hopkins University Press, 2022); online review External links Satellite Industry Association Communications satellites short history by David J. Whalen Beyond The Ionosphere: Fifty Years of Satellite Communication (NASA SP-4217, 1997) Satellite broadcasting Satellites by type Telecommunications-related introductions in 1962 Wireless communication systems
Communications satellite
[ "Technology", "Engineering" ]
6,247
[ "Telecommunications engineering", "Wireless communication systems", "Satellite broadcasting" ]
45,259
https://en.wikipedia.org/wiki/Biosafety
Biosafety is the prevention of large-scale loss of biological integrity, focusing both on ecology and human health. These prevention mechanisms include the conduct of regular reviews of biosafety in laboratory settings, as well as strict guidelines to follow. Biosafety is used to protect against harmful incidents. Many laboratories handling biohazards employ an ongoing risk management assessment and enforcement process for biosafety. Failures to follow such protocols can lead to increased risk of exposure to biohazards or pathogens. Human error and poor technique contribute to unnecessary exposure and compromise the best safeguards set into place for protection. The international Cartagena Protocol on Biosafety deals primarily with the agricultural definition, but many advocacy groups seek to expand it to include post-genetic threats: new molecules, artificial life forms, and even robots which may compete directly in the natural food chain. Biosafety in agriculture, chemistry, medicine, exobiology and beyond will likely require the application of the precautionary principle, and a new definition focused on the biological nature of the threatened organism rather than the nature of the threat. When biological warfare or new, currently hypothetical, threats (e.g., robots or new artificial bacteria) are considered, biosafety precautions are generally not sufficient. The new field of biosecurity addresses these complex threats. Biosafety level refers to the stringency of biocontainment precautions deemed necessary by the Centers for Disease Control and Prevention (CDC) for laboratory work with infectious materials. Typically, institutions that experiment with or create potentially harmful biological material will have a committee or board of supervisors that is in charge of the institution's biosafety. They create and monitor the biosafety standards that must be met by labs in order to prevent the accidental release of potentially destructive biological material. (Note that in the US, several groups are involved, and efforts are being made to improve processes for government-run labs, but there is no unifying regulatory authority for all labs.) Biosafety is related to several fields: In ecology (referring to imported life forms from beyond ecoregion borders), In agriculture (reducing the risk of alien viral or transgenic genes, genetic engineering or prions such as BSE/"MadCow", reducing the risk of food bacterial contamination) In medicine (referring to organs or tissues from biological origin, or genetic therapy products, or viruses; levels of lab containment protocols measured as 1, 2, 3, 4 in rising order of danger), In chemistry (e.g., nitrates in water, PCB levels affecting fertility) In exobiology (e.g., NASA's policy for containing alien microbes that may exist on space samples. See planetary protection and interplanetary contamination), and In synthetic biology (referring to the risks associated with this type of lab practice) Hazards Chemical hazards typically found in laboratory settings include carcinogens, toxins, irritants, corrosives, and sensitizers. Biological hazards include viruses, bacteria, fungi, prions, and biologically derived toxins, which may be present in body fluids and tissue, cell culture specimens, and laboratory animals. Routes of exposure for chemical and biological hazards include inhalation, ingestion, skin contact, and eye contact. Physical hazards include ergonomic hazards, ionizing and non-ionizing radiation, and noise hazards.
Additional safety hazards include burns and cuts from autoclaves, injuries from centrifuges, compressed gas leaks, cold burns from cryogens, electrical hazards, fires, injuries from machinery, and falls. In synthetic biology A complete understanding of the experimental risks associated with synthetic biology is helping to improve the knowledge and effectiveness of biosafety. With the potential future creation of man-made unicellular organisms, some are beginning to consider the effect that these organisms will have on biomass already present. Scientists estimate that within the next few decades, organism design will be sophisticated enough to accomplish tasks such as creating biofuels and lowering the levels of harmful substances in the atmosphere. Scientists who favor the development of synthetic biology claim that the use of biosafety mechanisms such as suicide genes and nutrient dependencies will ensure the organisms cannot survive outside of the lab setting in which they were originally created. Organizations like the ETC Group argue that regulations should control the creation of organisms that could potentially harm existing life. They also argue that the development of these organisms will simply shift the consumption of petroleum to the utilization of biomass in order to create energy. These organisms can harm existing life by affecting the prey/predator food chain and reproduction between species, as well as by competing with other species (threatening species at risk, or acting as an invasive species). Synthetic vaccines are now being produced in the lab. These have caused a lot of excitement in the pharmaceutical industry, as they will be cheaper to produce, allow quicker production, and enhance the knowledge of virology and immunology. In medicine, healthcare settings and laboratories Biosafety, in medicine and health care settings, specifically refers to proper handling of organs or tissues from biological origin, genetic therapy products, and viruses with respect to the environment, to ensure the safety of health care workers, researchers, lab staff, patients, and the general public. Laboratories are assigned a biosafety level numbered 1 through 4 based on their potential biohazard risk level. The employing authority, through the laboratory director, is responsible for ensuring that there is adequate surveillance of the health of laboratory personnel. The objective of such surveillance is to monitor for occupationally acquired diseases. The World Health Organization identifies human error and poor technique as the primary causes of mishandling of biohazardous materials. Biosafety is also becoming a global concern and requires multilevel resources and international collaboration to monitor, prevent and correct accidents arising from unintended and malicious release, and also to prevent bioterrorists from getting their hands on biological samples with which to create biological weapons of mass destruction. Even people outside the health sector need to be involved: in the case of the Ebola outbreak, the impact on businesses and travel led the private sector and international banks together to pledge more than $2 billion to combat the epidemic. The Bureau of International Security and Nonproliferation (ISN) is responsible for managing a broad range of U.S. nonproliferation policies, programs, agreements, and initiatives, and biological weapons are among its concerns. Biosafety has its risks and benefits.
All stakeholders must try to find a balance between the cost-effectiveness of safety measures and the use of evidence-based safety practices and recommendations, measure the outcomes, and consistently reevaluate the potential benefits that biosafety represents for human health. Biosafety level designations are based on a composite of the design features, construction, containment facilities, equipment, practices and operational procedures required for working with agents from the various risk groups. Classification of biohazardous materials is subjective, and the risk assessment is determined by the individuals most familiar with the specific characteristics of the organism. There are several factors taken into account when assessing an organism and the classification process. Risk Group 1: (no or low individual and community risk) A microorganism that is unlikely to cause human or animal disease. Risk Group 2: (moderate individual risk, low community risk) A pathogen that can cause human or animal disease but is unlikely to be a serious hazard to laboratory workers, the community, livestock or the environment. Laboratory exposures may cause serious infection, but effective treatment and preventive measures are available and the risk of spread of infection is limited. Risk Group 3: (high individual risk, low community risk) A pathogen that usually causes serious human or animal disease but does not ordinarily spread from one infected individual to another. Effective treatment and preventive measures are available. Risk Group 4: (high individual and community risk) A pathogen that usually causes serious human or animal disease and that can be readily transmitted from one individual to another, directly or indirectly. Effective treatment and preventive measures are not usually available. See the World Health Organization Biosafety Laboratory Guidelines (4th edition, 2020). Investigations have shown that there are hundreds of unreported biosafety accidents, with laboratories self-policing the handling of biohazardous materials and underreporting incidents. Poor record keeping, improper disposal, and mishandling of biohazardous materials result in increased risks of biochemical contamination for both the public and the environment. Along with the precautions taken during the handling of biohazardous materials, the World Health Organization recommends: Staff training should always include information on safe methods for highly hazardous procedures that are commonly encountered by all laboratory personnel, and which involve: Inhalation risks (e.g., aerosol production) when using loops, streaking agar plates, pipetting, making smears, opening cultures, taking blood/serum samples, centrifuging, etc. Ingestion risks when handling specimens, smears and cultures Risks of percutaneous exposures when using syringes and needles Bites and scratches when handling animals Handling of blood and other potentially hazardous pathological materials Decontamination and disposal of infectious material. Biosafety management in the laboratory First, the laboratory director, who holds immediate responsibility for the laboratory, is tasked with ensuring the development and adoption of a biosafety management plan as well as a safety or operations manual. Second, the laboratory supervisor, who reports to the laboratory director, is responsible for organizing regular training sessions on laboratory safety.
Third, personnel must be informed about any special hazards and be required to review the safety or operations manual and adhere to established practices and procedures. The laboratory supervisor is responsible for ensuring that all personnel have a clear understanding of these guidelines, and a copy of the safety or operations manual should be readily available within the laboratory. Finally, adequate medical assessment, monitoring, and treatment must be made available to all personnel when needed, and comprehensive medical records should be maintained. Policy and practice in the United States Legal information In June 2009, the Trans-Federal Task Force on Optimizing Biosafety and Biocontainment Oversight recommended the formation of an agency to coordinate high-safety-risk-level labs (3 and 4), and voluntary, non-punitive measures for incident reporting. However, it is unclear what changes may or may not have been implemented following their recommendations. United States Code of Federal Regulations The United States Code of Federal Regulations is the codification, or collection, of laws specific to a jurisdiction that represent broad areas subject to federal regulation. Title 42 of the Code of Federal Regulations addresses laws concerning public health issues, including biosafety, which can be found under the citation 42 CFR 73 to 42 CFR 73.21 by accessing the US Code of Federal Regulations (CFR) website. Title 42 Section 73 of the CFR addresses specific aspects of biosafety, including occupational safety and health, transportation of biohazardous materials and safety plans for laboratories using potential biohazards. While biocontainment is defined in the Biosafety in Microbiological and Biomedical Laboratories and Primary Containment for Biohazards: Selection, Installation and Use of Biosafety Cabinets manuals available at the Centers for Disease Control and Prevention website, much of the design, implementation and monitoring of protocols is left up to state and local authorities. The United States CFR states "An individual or entity required to register [as a user of biological agents] must develop and implement a written biosafety plan that is commensurate with the risk of the select agent or toxin", which is followed by three recommended sources for laboratory reference: The CDC/NIH publication, "Biosafety in Microbiological and Biomedical Laboratories." The Occupational Safety and Health Administration (OSHA) regulations in 29 CFR parts 1910.1200 and 1910.1450. The "NIH Guidelines for Research Involving Recombinant DNA Molecules" (NIH Guidelines). While the needs of biocontainment and biosafety measures clearly vary across government, academic and private industry laboratories, biological agents pose similar risks independent of their locale. Laws relating to biosafety are not easily accessible, and there are few federal regulations readily available for a potential trainee to reference outside of the publications recommended in 42 CFR 73.12. Therefore, training is the responsibility of lab employers and is not consistent across various laboratory types, thereby increasing the risk of accidental release of biological hazards that pose serious health threats to humans, animals and the ecosystem as a whole. Agency guidance Many government agencies have made guidelines and recommendations in an effort to increase biosafety measures across laboratories in the United States.
Agencies involved in producing policies surrounding biosafety within a hospital, pharmacy or clinical research laboratory include the CDC, FDA, USDA, DHHS, DoT, EPA and potentially other local organizations, including public health departments. The federal government does set some standards and recommendations for states, most of which fall under the Occupational Safety and Health Act of 1970, but currently there is no single federal regulating agency directly responsible for ensuring the safety of biohazardous handling, storage, identification, clean-up and disposal. In addition to the CDC, the Environmental Protection Agency has some of the most accessible information on ecological impacts of biohazards, how to handle spills, reporting guidelines and proper disposal of agents dangerous to the environment. Many of these agencies have their own manuals and guidance documents relating to training and certain aspects of biosafety directly tied to their agency's scope, including transportation, storage and handling of blood-borne pathogens (OSHA, IATA). The American Biological Safety Association (ABSA) has a list of such agencies and links to their websites, along with links to publications and guidance documents to assist in risk assessment, lab design and adherence to laboratory exposure control plans. Many of these agencies were members of the 2009 Task Force on Biosafety. There was also the formation of a Blue Ribbon Study Panel on Biodefense, but this is more concerned with national defense programs and biosecurity. Ultimately, states and local governments, as well as private industry labs, are left to make the final determinations for their own biosafety programs, which vary widely in scope and enforcement across the United States. Not all state programs address biosafety from all necessary perspectives, which should not just include personal safety, but also emphasize a full understanding among laboratory personnel of quality control and assurance, the potential environmental impacts of exposure, and general public safety. Toby Ord puts into question whether the current international conventions regarding biotechnology research and development regulation, and self-regulation by biotechnology companies and the scientific community, are adequate. State occupational safety plans are often focused on transportation, disposal, and risk assessment, allowing caveats for safety audits, but ultimately leave the training in the hands of the employer. Twenty-two states have occupational safety plans approved by OSHA that are audited annually for effectiveness. These plans apply to private and public sector workers, and not necessarily state/government workers, and not all specifically have a comprehensive program for all aspects of biohazard management from start to finish. Sometimes biohazard management plans are limited only to workers in transportation-specific job titles. The enforcement of and training on such regulations can vary from lab to lab based on the state's plans for occupational health and safety. With the exception of DoD lab personnel, CDC lab personnel, first responders, and DoT employees, enforcement of training is inconsistent, and while training is required to be done, specifics on the breadth and frequency of refresher training do not seem consistent from state to state; penalties may never be assessed without larger regulating bodies being aware of non-compliance, and enforcement is limited.
Medical waste management in the United States Medical waste management was identified as an issue in the 1980s, with the Medical Waste Tracking Act of 1988 becoming the new standard in biohazard waste disposal. Although the federal government, EPA and DOT provide some oversight of regulated medical waste storage, transportation, and disposal, the majority of biohazard medical waste is regulated at the state level. Each state is responsible for the regulation and management of its own biohazardous waste, and states vary in their regulatory processes. Record keeping of biohazardous waste also varies between states. Medical healthcare centers, hospitals, veterinary clinics, clinical laboratories and other facilities generate over one million tons of waste each year. Although the majority of this waste is as harmless as common household waste, as much as 15 percent of it poses a potential infection hazard, according to the Environmental Protection Agency (EPA). Medical waste is required to be rendered non-infectious before it can be disposed of. There are several different methods to treat and dispose of biohazardous waste. In the United States, the primary methods for treatment and disposal of biohazard, medical and sharps waste include: Incineration Microwave Autoclaves Mechanical/Chemical Disinfection Irradiation Different forms of biohazardous waste require different treatments for proper waste management, determined largely by each state's regulations. Incidents of non-compliance and reform efforts The United States government has made it clear that biosafety is to be taken very seriously. In 2014, incidents with anthrax and Ebola pathogens in CDC laboratories prompted the CDC director Tom Frieden to issue a moratorium on research with these types of select agents. An investigation concluded that there was a lack of adherence to safety protocols and "inadequate safeguards" in place. This indicated a lack of proper training, or of reinforcement of training and supervision on a regular basis, for lab personnel. Following these incidents, the CDC established an External Laboratory Safety Workgroup (ELSW), and suggestions have been made to reform the effectiveness of the Federal Select Agent Program. The White House issued a report on national biosafety priorities in 2015, outlining next steps for a national biosafety and security program, and addressed biological safety needs for health research, national defense, and public safety. In 2016, the Association of Public Health Laboratories (APHL) had a presentation at their annual meeting focused on improving biosafety culture. The same year, the UPMC Center for Health Security issued a case study report including reviews of ten different nations' current biosafety regulations, including the United States. Their goal was to "provide a foundation for identifying national-level biosafety norms and enable initial assessment of biosafety priorities necessary for developing effective national biosafety regulation and oversight." See also Biological hazard Cartagena Protocol on Biosafety Centers for Disease Control European BioSafety Association Interplanetary contamination Quarantine References External links WHO Biosafety Manual CDC Biosafety pages International Centre for Genetic Engineering and Biotechnology (ICGEB): Biosafety pages Greenpeace safe trade campaign American Biological Safety Association Biosafety in Microbiological and Biomedical Laboratories Genetic engineering Bioethics Safety Biological hazards
Biosafety
[ "Chemistry", "Technology", "Engineering", "Biology" ]
3,913
[ "Bioethics", "Biological engineering", "Genetic engineering", "Ethics of science and technology", "Molecular biology" ]
45,307
https://en.wikipedia.org/wiki/Atomic%20electron%20transition
In atomic physics and chemistry, an atomic electron transition (also called an atomic transition, quantum jump, or quantum leap) is an electron changing from one energy level to another within an atom or artificial atom. The time scale of a quantum jump has not been measured experimentally. However, the Franck–Condon principle binds the upper limit of this parameter to the order of attoseconds. Electrons jumping to energy levels of smaller n emit electromagnetic radiation in the form of a photon. Electrons can also absorb passing photons, which drives a quantum jump to a level of higher n. The larger the energy separation between the electron's initial and final state, the shorter the photons' wavelength. History Danish physicist Niels Bohr first theorized that electrons can perform quantum jumps in 1913. Soon after, James Franck and Gustav Ludwig Hertz proved experimentally that atoms have quantized energy states. The observability of quantum jumps was predicted by Hans Dehmelt in 1975, and they were first observed using trapped ions of barium at the University of Hamburg and mercury at NIST in 1986. Theory An atom interacts with the oscillating electric field $\mathbf{E}(t) = E_0 \cos(\omega t)\,\hat{\boldsymbol{\epsilon}}$, with amplitude $E_0$, angular frequency $\omega$, and polarization vector $\hat{\boldsymbol{\epsilon}}$. Note that the actual phase is $(\omega t - \mathbf{k}\cdot\mathbf{r})$. However, in many cases, the variation of $\mathbf{k}\cdot\mathbf{r}$ is small over the atom (or equivalently, the radiation wavelength is much greater than the size of an atom) and this term can be ignored. This is called the dipole approximation. The atom can also interact with the oscillating magnetic field produced by the radiation, although much more weakly. The Hamiltonian for the electric dipole interaction, analogous to the energy of a classical dipole in an electric field, is $H_I = e\,\mathbf{r}\cdot\mathbf{E}(t)$. The stimulated transition rate can be calculated using time-dependent perturbation theory; however, the result can be summarized using Fermi's golden rule: $\Gamma_{i \to f} = \frac{2\pi}{\hbar} \left|\langle f | H_I | i \rangle\right|^2 \rho(E_f)$. The dipole matrix element can be decomposed into the product of the radial integral and the angular integral. The angular integral is zero unless the selection rules for the atomic transition are satisfied. Recent discoveries In 2019, it was demonstrated in an experiment with a superconducting artificial atom consisting of two strongly-hybridized transmon qubits placed inside a readout resonator cavity at 15 mK, that the evolution of some jumps is continuous, coherent, deterministic, and reversible. On the other hand, other quantum jumps are inherently unpredictable. See also Burst noise Ensemble interpretation Fluorescence Glowing pickle demonstration Molecular electronic transition, for molecules Phosphorescence Quantum jump Spontaneous emission Stimulated emission References External links Part 2 "There are no quantum jumps, nor are there particles!" by H. D. Zeh, Physics Letters A172, 189 (1993). "Surface plasmon at a metal-dielectric interface with an epsilon-near-zero transition layer" by Kevin Roccapriore et al., Physical Review B 103, L161404 (2021). Atomic physics Electron states
Atomic electron transition
[ "Physics", "Chemistry" ]
612
[ "Electron", "Quantum mechanics", "Atomic physics", " molecular", "Atomic", "Electron states", " and optical physics" ]
45,317
https://en.wikipedia.org/wiki/Flange
A flange is a protruded ridge, lip or rim, either external or internal, that serves to increase strength (as the flange of a steel beam such as an I-beam or a T-beam); for easy attachment/transfer of contact force with another object (as the flange on the end of a pipe, steam cylinder, etc., or on the lens mount of a camera); or for stabilizing and guiding the movements of a machine or its parts (as the inside flange of a rail car or tram wheel, which keeps the wheels from running off the rails). Flanges are often attached using bolts in the pattern of a bolt circle. Flanges play a pivotal role in piping systems by allowing easy access for maintenance, inspection, and modification. They provide a means to connect or disconnect pipes and equipment without the need for welding, which simplifies installation and reduces downtime during repairs or upgrades. Additionally, flanges facilitate the alignment of pipes, ensuring a proper fit and minimizing stress on the system. Plumbing or piping A flange can also be a plate or ring that forms a rim at the end of a pipe when fastened to the pipe (for example, a closet flange). A blind flange is a plate for covering or closing the end of a pipe. A flange joint is a connection of pipes, where the connecting pieces have flanges by which the parts are bolted together. Although the word 'flange' generally refers to the actual raised rim or lip of a fitting, many flanged plumbing fittings are themselves known as flanges. Common flanges used in plumbing are the Surrey flange or Danzey flange, York flange, Sussex flange and Essex flange. Surrey and York flanges fit to the top of the hot water tank, allowing all the water to be taken without disturbance to the tank. They are often used to ensure an even flow of water to showers. An Essex flange requires a hole to be drilled in the side of the tank. There is also a Warix flange, which is the same as a York flange except that the shower output is on the top of the flange and the vent on the side. The York and Warix flanges have female adapters so that they fit onto a male tank, whereas the Surrey flange connects to a female tank. A closet flange provides the mount for a toilet. Pipe flanges Piping components can be bolted together between flanges. Flanges are used to connect pipes with each other, to valves, to fittings, and to specialty items such as strainers and pressure vessels. A cover plate can be connected to create a "blind flange". Flanges are joined by bolting, and sealing is often completed with the use of gaskets or other methods. Mechanical means to mitigate the effects of leaks, like spray guards or specific spray flanges, may be included. Industries where flammable, volatile, toxic or corrosive substances are being processed have greater need of special protection at flanged connections. Flange guards can provide that added level of protection to ensure safety. There are many different flange standards to be found worldwide. To allow easy functionality and interchangeability, these are designed to have standardised dimensions. Common world standards include ASA/ASME (USA), PN/DIN (European), BS10 (British/Australian), and JIS/KS (Japanese/Korean). In the USA, the standard is ASME B16.5 (ANSI stopped publishing B16.5 in 1996). ASME B16.5 covers flanges up to 24 inches in size and up to a pressure rating of Class 2500. Flanges larger than 24 inches are covered in ASME B16.47.
In most cases, standards are interchangeable, as most local standards have been aligned to ISO standards; however, some local standards still differ: an ASME flange, for example, will not mate against an ISO flange. Further, many of the flanges in each standard are divided into "pressure classes", allowing flanges of the same size to take different pressure ratings. Again these are not generally interchangeable (e.g. an ASME 150 will not mate with an ASME 300). These pressure classes also have differing pressure and temperature ratings for different materials. Unique pressure classes for piping can also be developed for a process plant or power generating station; these may be specific to the corporation, engineering procurement and construction (EPC) contractor, or the process plant owner. The ASME pressure classes for flat-face flanges are Class 125 and Class 250. The classes for ring-joint, tongue-and-groove, and raised-face flanges are Class 150, Class 300, Class 400 (unusual), Class 600, Class 900, Class 1500, and Class 2500. The flange faces are also made to standardized dimensions and are typically "flat face", "raised face", "tongue and groove", or "ring joint" styles, although other obscure styles are possible. Flange designs are available as "weld neck", "slip-on", "lap joint", "socket weld", "threaded", and also "blind". Types of flanges Flanges come in various types, each designed to meet specific requirements based on factors such as pressure, temperature, and application. Some common types include: Weld Neck Flanges: Weld neck flanges feature a long tapered hub that provides reinforcement to the connection, making them suitable for high-pressure and high-temperature applications. Slip-On Flanges: Slip-on flanges have a slightly larger diameter than the pipe they connect to and are slipped over the pipe before welding. They are commonly used in low-pressure and non-critical applications. Socket Weld Flanges: Socket weld flanges have a recessed area (socket) into which the pipe end fits, allowing for fillet welding. They are suitable for small-bore piping systems and applications with moderate pressure and temperature requirements. Blind Flanges: Blind flanges are solid discs used to close the end of a piping system or vessel. They are often used for pressure testing or as a permanent seal when a pipe end needs to be closed off. ASME standards (U.S.) Pipe flanges are typically made to standards called out by ASME B16.5 or ASME B16.47, and MSS SP-44. They are typically made from forged materials and have machined surfaces. ASME B16.5 refers to nominal pipe sizes (NPS) from 1/2" to 24". B16.47 covers NPSs from 26" to 60". Each specification further delineates flanges into pressure classes: 150, 300, 400, 600, 900, 1500 and 2500 for B16.5, while B16.47 delineates its flanges into pressure classes 75, 150, 300, 400, 600, 900. However, these classes do not correspond to maximum pressures in psi. Instead, the maximum pressure depends on the material of the flange and the temperature. For example, the maximum pressure for a Class 150 flange is 285 psi, and for a Class 300 flange it is 740 psi (both are for ASTM A105 carbon steel and temperatures below 100 °F). The gasket type and bolt type are generally specified by the standard(s); however, sometimes the standards refer to the ASME Boiler and Pressure Vessel Code (B&PVC) for details (see ASME Code Section VIII Division 1 – Appendix 2). These flanges are recognized by ASME pipe codes such as ASME B31.1 Power Piping and ASME B31.3 Process Piping.
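Because a class number is not itself a pressure, rating lookups are table-driven. A minimal Python sketch of such a lookup, using only the two ambient-temperature ratings for ASTM A105 quoted above (a fuller table would have to come from ASME B16.5 itself):

```python
# Maximum allowable working pressure (psi) at <= 100 °F for ASTM A105
# carbon steel flanges, per the two data points quoted in the text.
# A real application would carry the full ASME B16.5 pressure-temperature
# tables, indexed by material group and temperature.
A105_RATINGS_PSI = {150: 285, 300: 740}

def max_pressure_psi(flange_class: int) -> int:
    """Look up the tabulated rating for a given pressure class."""
    try:
        return A105_RATINGS_PSI[flange_class]
    except KeyError:
        raise ValueError(f"No rating tabulated here for Class {flange_class}")

print(max_pressure_psi(150))  # 285
print(max_pressure_psi(300))  # 740
```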
Materials for flanges are usually under ASME designation: SA-105 (Specification for Carbon Steel Forgings for Piping Applications), SA-266 (Specification for Carbon Steel Forgings for Pressure Vessel Components), or SA-182 (Specification for Forged or Rolled Alloy-Steel Pipe Flanges, Forged Fittings, and Valves and Parts for High-Temperature Service). In addition, there are many "industry standard" flanges that in some circumstances may be used on ASME work. The product range includes SORF, SOFF, BLRF, BLFF, WNRF (XS, XXS, STD and Schedule 20, 40, 80), WNFF (XS, XXS, STD and Schedule 20, 40, 80), SWRF (XS and STD), SWFF (XS and STD), Threaded RF, Threaded FF and LJ, with sizes from 1/2" to 16". The bolting material used for flange connections is stud bolts mated with two nuts (and washers when required). In petrochemical industries, ASTM A193 B7 and ASTM A193 B16 stud bolts are used, as these have high tensile strength. European dimensions (EN / DIN) Hygienic flange STC DIN 11853-2 Most countries in Europe mainly install flanges according to standard DIN EN 1092-1 (forged stainless steel or steel flanges). Similar to the ASME flange standard, the EN 1092-1 standard has the basic flange forms, such as weld neck flange, blind flange, lapped flange, threaded flange (thread ISO 7-1 instead of NPT), weld-on collar, pressed collars, and adapter flanges such as flange coupling GD press fittings. The different forms of flanges within EN 1092-1 (European Norm/Euronorm) are indicated within the flange name through a type number. Similar to ASME flanges, EN 1092-1 steel and stainless steel flanges have several different versions of raised or non-raised faces, and the seal is indicated by the form of the flange face. Furthermore, for sanitary applications such as in the food and beverage and pharmaceutical industries, sanitary flanges according to DIN 11853-2 STC are utilized. The primary distinction between sanitary flanges according to DIN 11853-2 and DIN/EN flanges lies in the restricted dead-room and the interior polishing according to hygienic levels of H1 to H4. Usually flange suppliers that stock flanges to DIN EN 1092-1 do not stock sanitary flanges, as the storage requirements are different; sanitary flanges are more delicate and need to stay clean. Usually the O-rings, according to DIN 11853, are made of FPM or EPDM. Other countries Flanges in the rest of the world are manufactured according to the ISO standards for materials, pressure ratings, etc., to which local standards including DIN, BS, and others have been aligned. Compact flanges As the size of a compact flange increases, it becomes relatively heavier and more complex, resulting in high procurement, installation and maintenance costs. Large flange diameters in particular are difficult to work with, and inevitably require more space and have a more challenging handling and installation procedure, particularly on remote installations such as oil rigs. The design of the flange face includes two independent seals. The first seal is created by the application of seal seating stress at the flange heel, but it is not straightforward to ensure the function of this seal. Theoretically, the heel contact will be maintained for pressure values up to 1.8 times the flange rating at room temperature. Theoretically, the flange also remains in contact along its outer circumference at the flange faces for all allowable load levels that it is designed for. The main seal is the IX seal ring.
The seal ring force is provided by the elastic stored energy in the stressed seal ring. Any heel leakage exposes the inside of the seal ring to internal pressure, intensifying the sealing action. This, however, requires the IX ring to be retained in its theoretical location in the ring groove, which is difficult to ensure and verify during installation. The design aims at preventing exposure to oxygen and other corrosive agents, and thus prevents corrosion of the flange faces, the stressed length of the bolts, and the seal ring. This, however, depends on the outer dust rim remaining in satisfactory contact and on the contained fluid not being corrosive if it leaks into the bolt-circle void. Applications of compact flanges The initial cost of the theoretically higher-performance compact flange is inevitably higher than that of a regular flange, due to the closer tolerances and significantly more sophisticated design and installation requirements. By way of example, compact flanges are often used across the following applications: subsea oil and gas or riser, cold work and cryogenics, gas injection, high temperature, and nuclear applications. Train wheels Most trains and trams stay on their tracks primarily due to the conical geometry of their wheels. They also have a flange on one side to keep the wheels, and hence the train, running on the rails when the limits of the geometry-based alignment are reached, either due to some emergency or defect, or simply because the curve radius is so small that self-steering normally provided by the coned wheel tread is no longer effective. Vacuum flanges A vacuum flange is a flange at the end of a tube used to connect vacuum chambers, tubing and vacuum pumps to each other. Microwave In microwave telecommunications, a flange is a type of cable joint that allows different types of waveguide to connect. Several different microwave RF flange types exist, such as CAR, CBR, OPC, PAR, PBJ, PBR, PDR, UAR, UBR, UDR, icp and UPX. Ski boots Ski boots use flanges at the toe or heel to connect to the binding of the ski. The size and shape for flanges on alpine skiing boots is standardized in ISO 5355. Traditional telemark and cross country boots use the 75 mm Nordic Norm, but the toe flange is informally known as the "duckbill". New cross country bindings eliminate the flange entirely and use a steel bar embedded within the sole instead. See also Casing head Closet flange Victaulic Swivel References Further reading ASME B16.5: Standard Pipe Flanges up to and including 24 inches nominal ASME B16.47: Standard Pipe Flanges above 24 inches ASME Section II (Materials), Part A – Ferrous Material Specifications Piping Plumbing Mechanical engineering Structural engineering Train wheels
Flange
[ "Physics", "Chemistry", "Engineering" ]
3,076
[ "Structural engineering", "Applied and interdisciplinary physics", "Building engineering", "Chemical engineering", "Plumbing", "Construction", "Civil engineering", "Mechanical engineering", "Piping" ]
45,337
https://en.wikipedia.org/wiki/Nash%20equilibrium
In game theory, the Nash equilibrium is the most commonly used solution concept for non-cooperative games. A Nash equilibrium is a situation where no player could gain by changing their own strategy (holding all other players' strategies fixed). The idea of Nash equilibrium dates back to the time of Cournot, who in 1838 applied it to his model of competition in an oligopoly. If each player has chosen a strategy (an action plan based on what has happened so far in the game) and no one can increase their own expected payoff by changing their strategy while the other players keep theirs unchanged, then the current set of strategy choices constitutes a Nash equilibrium. If two players Alice and Bob choose strategies A and B, (A, B) is a Nash equilibrium if Alice has no other strategy available that does better than A at maximizing her payoff in response to Bob choosing B, and Bob has no other strategy available that does better than B at maximizing his payoff in response to Alice choosing A. In a game in which Carol and Dan are also players, (A, B, C, D) is a Nash equilibrium if A is Alice's best response to (B, C, D), B is Bob's best response to (A, C, D), and so forth. Nash showed that there is a Nash equilibrium, possibly in mixed strategies, for every finite game. Applications Game theorists use Nash equilibrium to analyze the outcome of the strategic interaction of several decision makers. In a strategic interaction, the outcome for each decision-maker depends on the decisions of the others as well as their own. The simple insight underlying Nash's idea is that one cannot predict the choices of multiple decision makers if one analyzes those decisions in isolation. Instead, one must ask what each player would do taking into account what that player expects the others to do. Nash equilibrium requires that the choices be consistent: no player wishes to undo their decision given what the others are deciding. The concept has been used to analyze hostile situations such as wars and arms races (see prisoner's dilemma), and also how conflict may be mitigated by repeated interaction (see tit-for-tat). It has also been used to study to what extent people with different preferences can cooperate (see battle of the sexes), and whether they will take risks to achieve a cooperative outcome (see stag hunt). It has been used to study the adoption of technical standards, and also the occurrence of bank runs and currency crises (see coordination game). Other applications include traffic flow (see Wardrop's principle), how to organize auctions (see auction theory), the outcome of efforts exerted by multiple parties in the education process, regulatory legislation such as environmental regulations (see tragedy of the commons), natural resource management, analysing strategies in marketing, penalty kicks in football (see matching pennies), robot navigation in crowds, energy systems, transportation systems, evacuation problems and wireless communications. History Nash equilibrium is named after American mathematician John Forbes Nash Jr. The same idea was used in a particular application in 1838 by Antoine Augustin Cournot in his theory of oligopoly. In Cournot's theory, each of several firms chooses how much output to produce to maximize its profit. The best output for one firm depends on the outputs of the others. A Cournot equilibrium occurs when each firm's output maximizes its profits given the output of the other firms, which is a pure-strategy Nash equilibrium.
Cournot also introduced the concept of best response dynamics in his analysis of the stability of equilibrium. Cournot did not use the idea in any other applications, however, or define it generally. The modern concept of Nash equilibrium is instead defined in terms of mixed strategies, where players choose a probability distribution over possible pure strategies (which might put 100% of the probability on one pure strategy; such pure strategies are a subset of mixed strategies). The concept of a mixed-strategy equilibrium was introduced by John von Neumann and Oskar Morgenstern in their 1944 book The Theory of Games and Economic Behavior, but their analysis was restricted to the special case of zero-sum games. They showed that a mixed-strategy Nash equilibrium will exist for any zero-sum game with a finite set of actions. The contribution of Nash in his 1951 article "Non-Cooperative Games" was to define a mixed-strategy Nash equilibrium for any game with a finite set of actions and prove that at least one (mixed-strategy) Nash equilibrium must exist in such a game. The key to Nash's ability to prove existence far more generally than von Neumann lay in his definition of equilibrium. According to Nash, "an equilibrium point is an n-tuple such that each player's mixed strategy maximizes [their] payoff if the strategies of the others are held fixed. Thus each player's strategy is optimal against those of the others." Putting the problem in this framework allowed Nash to employ the Kakutani fixed-point theorem in his 1950 paper to prove existence of equilibria. His 1951 paper used the simpler Brouwer fixed-point theorem for the same purpose. Game theorists have discovered that in some circumstances Nash equilibrium makes invalid predictions or fails to make a unique prediction. They have proposed many solution concepts ('refinements' of Nash equilibria) designed to rule out implausible Nash equilibria. One particularly important issue is that some Nash equilibria may be based on threats that are not 'credible'. In 1965 Reinhard Selten proposed subgame perfect equilibrium as a refinement that eliminates equilibria which depend on non-credible threats. Other extensions of the Nash equilibrium concept have addressed what happens if a game is repeated, or what happens if a game is played in the absence of complete information. However, subsequent refinements and extensions of Nash equilibrium share the main insight on which Nash's concept rests: the equilibrium is a set of strategies such that each player's strategy is optimal given the choices of the others. Definitions Nash equilibrium A strategy profile is a set of strategies, one for each player. Informally, a strategy profile is a Nash equilibrium if no player can do better by unilaterally changing their strategy. To see what this means, imagine that each player is told the strategies of the others. Suppose then that each player asks themselves: "Knowing the strategies of the other players, and treating the strategies of the other players as set in stone, can I benefit by changing my strategy?" If any player could answer "Yes", then that set of strategies is not a Nash equilibrium. But if every player prefers not to switch (or is indifferent between switching and not) then the strategy profile is a Nash equilibrium. Thus, each strategy in a Nash equilibrium is a best response to the other players' strategies in that equilibrium. Formally, let $S_i$ be the set of all possible strategies for player $i$, where $i = 1, \ldots, N$.
Let $s^* = (s_i^*, s_{-i}^*)$ be a strategy profile, a set consisting of one strategy for each player, where $s_{-i}^*$ denotes the strategies of all the players except $i$. Let $u_i(s_i, s_{-i})$ be player $i$'s payoff as a function of the strategies. The strategy profile $s^*$ is a Nash equilibrium if $u_i(s_i^*, s_{-i}^*) \geq u_i(s_i, s_{-i}^*)$ for all $s_i \in S_i$. A game can have more than one Nash equilibrium. Even if the equilibrium is unique, it might be weak: a player might be indifferent among several strategies given the other players' choices. It is unique and called a strict Nash equilibrium if the inequality is strict, so one strategy is the unique best response: $u_i(s_i^*, s_{-i}^*) > u_i(s_i, s_{-i}^*)$ for all $s_i \in S_i$ with $s_i \neq s_i^*$. The strategy set $S_i$ can be different for different players, and its elements can be a variety of mathematical objects. Most simply, a player might choose between two strategies, e.g. $S_i = \{\text{Yes}, \text{No}\}$. Or the strategy set might be a finite set of conditional strategies responding to other players. Or it might be an infinite set, a continuum or unbounded, e.g. $S_i = \{\text{Price}\}$ such that Price is a non-negative real number. Nash's existence proofs assume a finite strategy set, but the concept of Nash equilibrium does not require it. Variants Pure/mixed equilibrium A game can have a pure-strategy or a mixed-strategy Nash equilibrium. In the latter, not every player always plays the same strategy. Instead, there is a probability distribution over different strategies. Strict/non-strict equilibrium Suppose that in the Nash equilibrium, each player asks themselves: "Knowing the strategies of the other players, and treating the strategies of the other players as set in stone, would I suffer a loss by changing my strategy?" If every player's answer is "Yes", then the equilibrium is classified as a strict Nash equilibrium. If instead, for some player, there is exact equality between the strategy in Nash equilibrium and some other strategy that gives exactly the same payout (i.e. the player is indifferent between switching and not), then the equilibrium is classified as a weak or non-strict Nash equilibrium. Equilibria for coalitions The Nash equilibrium defines stability only in terms of individual player deviations. In cooperative games such a concept is not convincing enough. Strong Nash equilibrium allows for deviations by every conceivable coalition. Formally, a strong Nash equilibrium is a Nash equilibrium in which no coalition, taking the actions of its complements as given, can cooperatively deviate in a way that benefits all of its members. However, the strong Nash concept is sometimes perceived as too "strong" in that the environment allows for unlimited private communication. In fact, strong Nash equilibrium has to be Pareto efficient. As a result of these requirements, strong Nash is too rare to be useful in many branches of game theory. However, in games such as elections with many more players than possible outcomes, it can be more common than a stable equilibrium. A refined Nash equilibrium known as coalition-proof Nash equilibrium (CPNE) occurs when players cannot do better even if they are allowed to communicate and make "self-enforcing" agreements to deviate. Every correlated strategy supported by iterated strict dominance and on the Pareto frontier is a CPNE. Further, it is possible for a game to have a Nash equilibrium that is resilient against coalitions less than a specified size, k. CPNE is related to the theory of the core.
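To make the definition concrete, the following minimal Python sketch (not part of the original article) checks whether a pure-strategy profile is a Nash equilibrium in a finite game by testing every unilateral deviation. The payoff representation, function names, and the prisoner's dilemma payoff values used as an example are illustrative assumptions.

```python
from itertools import product

# Hedged sketch: test the Nash equilibrium condition for a finite game.
# A game is a dict mapping each full strategy profile (one strategy per
# player, as a tuple) to a tuple of payoffs, one per player.

def is_nash_equilibrium(payoffs, strategy_sets, profile):
    """Return True if no player can gain by a unilateral deviation."""
    for i, _ in enumerate(profile):
        current = payoffs[profile][i]
        for alt in strategy_sets[i]:
            deviated = profile[:i] + (alt,) + profile[i + 1:]
            if payoffs[deviated][i] > current:  # profitable deviation found
                return False
    return True

# Example: a prisoner's dilemma (C = cooperate, D = defect).
pd = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
sets_ = [("C", "D"), ("C", "D")]
print([p for p in product(*sets_) if is_nash_equilibrium(pd, sets_, p)])
# [('D', 'D')]
```

As expected, mutual defection is the only profile from which neither player can profitably deviate alone.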
Existence Nash's existence theorem Nash proved that if mixed strategies (where a player chooses probabilities of using various pure strategies) are allowed, then every game with a finite number of players in which each player can choose from finitely many pure strategies has at least one Nash equilibrium, which might be a pure strategy for each player or might be a probability distribution over strategies for each player. Nash equilibria need not exist if the set of choices is infinite and non-compact. For example: A game where two players simultaneously name a number and the player naming the larger number wins does not have a Nash equilibrium, as the set of choices is not compact because it is unbounded. Each of two players chooses a real number strictly less than 5 and the winner is whoever has the biggest number; no biggest number strictly less than 5 exists (if the number could equal 5, the Nash equilibrium would have both players choosing 5 and tying the game). Here, the set of choices is not compact because it is not closed. However, a Nash equilibrium exists if the set of choices is compact with each player's payoff continuous in the strategies of all the players. Rosen's existence theorem Rosen extended Nash's existence theorem in several ways. He considers an n-player game, in which the strategy of each player $i$ is a vector $s_i$ in the Euclidean space $\mathbb{R}^{m_i}$. Denote $m := m_1 + \cdots + m_n$, so a strategy-tuple is a vector in $\mathbb{R}^m$. Part of the definition of a game is a subset $S$ of $\mathbb{R}^m$ such that the strategy-tuple must be in $S$. This means that the actions of players may potentially be constrained based on actions of other players. A common special case of the model is when $S$ is a Cartesian product of convex sets $S_1, \ldots, S_n$, such that the strategy of player $i$ must be in $S_i$. This represents the case that the actions of each player $i$ are constrained independently of other players' actions. If the following conditions hold: $S$ is convex, closed, and bounded; each payoff function $u_i$ is continuous in the strategies of all players, and concave in $s_i$ for every fixed value of $s_{-i}$; then a Nash equilibrium exists. The proof uses the Kakutani fixed-point theorem. Rosen also proves that, under certain technical conditions which include strict concavity, the equilibrium is unique. Nash's result refers to the special case in which each $S_i$ is a simplex (representing all possible mixtures of pure strategies), and the payoff functions of all players are bilinear functions of the strategies. Rationality The Nash equilibrium may sometimes appear non-rational from a third-person perspective. This is because a Nash equilibrium is not necessarily Pareto optimal. Nash equilibrium may also have non-rational consequences in sequential games because players may "threaten" each other with threats they would not actually carry out. For such games the subgame perfect Nash equilibrium may be more meaningful as a tool of analysis. Examples Coordination game The coordination game is a classic two-player, two-strategy game, as shown in the example payoff matrix to the right. There are two pure-strategy equilibria, (A,A) with payoff 4 for each player and (B,B) with payoff 2 for each. The combination (B,B) is a Nash equilibrium because if either player unilaterally changes their strategy from B to A, their payoff will fall from 2 to 1. A famous example of a coordination game is the stag hunt. Two players may choose to hunt a stag or a rabbit, the stag providing more meat (4 utility units, 2 for each player) than the rabbit (1 utility unit).
The caveat is that the stag must be cooperatively hunted, so if one player attempts to hunt the stag, while the other hunts the rabbit, the stag hunter will totally fail, for a payoff of 0, whereas the rabbit hunter will succeed, for a payoff of 1. The game has two equilibria, (stag, stag) and (rabbit, rabbit), because a player's optimal strategy depends on their expectation of what the other player will do. If one hunter trusts that the other will hunt the stag, they should hunt the stag; however if they think the other will hunt the rabbit, they too will hunt the rabbit. This game is used as an analogy for social cooperation, since much of the benefit that people gain in society depends upon people cooperating and implicitly trusting one another to act in a manner corresponding with cooperation. Driving on a road against an oncoming car, and having to choose either to swerve on the left or to swerve on the right of the road, is also a coordination game. For example, with payoffs 10 meaning no crash and 0 meaning a crash, the coordination game can be defined with the following payoff matrix: In this case there are two pure-strategy Nash equilibria, when both choose to either drive on the left or on the right. If we admit mixed strategies (where a pure strategy is chosen at random, subject to some fixed probability), then there are three Nash equilibria for the same case: two we have seen from the pure-strategy form, where the probabilities are (0%, 100%) for player one, (0%, 100%) for player two; and (100%, 0%) for player one, (100%, 0%) for player two respectively. We add another where the probabilities for each player are (50%, 50%). Network traffic An application of Nash equilibria is in determining the expected flow of traffic in a network. Consider the graph on the right. If we assume that there are $x$ "cars" traveling from A to D, what is the expected distribution of traffic in the network? This situation can be modeled as a "game", where every traveler has a choice of 3 strategies and where each strategy is a route from A to D (one of ABD, ABCD, or ACD). The "payoff" of each strategy is the travel time of each route. In the graph on the right, a car travelling via ABD experiences travel time of $(1 + \frac{x}{100}) + 2$, where $x$ is the number of cars traveling on edge AB. Thus, payoffs for any given strategy depend on the choices of the other players, as is usual. However, the goal, in this case, is to minimize travel time, not maximize it. Equilibrium will occur when the time on all paths is exactly the same. When that happens, no single driver has any incentive to switch routes, since it can only add to their travel time. For the graph on the right, if, for example, 100 cars are travelling from A to D, then equilibrium will occur when 25 drivers travel via ABD, 50 via ABCD, and 25 via ACD. Every driver now has a total travel time of 3.75 (to see this, a total of 75 cars take the AB edge, and likewise, 75 cars take the CD edge). Notice that this distribution is not, actually, socially optimal. If the 100 cars agreed that 50 travel via ABD and the other 50 through ACD, then travel time for any single car would actually be 3.5, which is less than 3.75. This is also the Nash equilibrium if the path between B and C is removed, which means that adding another possible route can decrease the efficiency of the system, a phenomenon known as Braess's paradox. Competition game This can be illustrated by a two-player game in which both players simultaneously choose an integer from 0 to 3 and they both win the smaller of the two numbers in points.
In addition, if one player chooses a larger number than the other, then they have to give up two points to the other. This game has a unique pure-strategy Nash equilibrium: both players choosing 0 (highlighted in light red). Any other strategy can be improved by a player switching their number to one less than that of the other player. In the adjacent table, if the game begins at the green square, it is in player 1's interest to move to the purple square and it is in player 2's interest to move to the blue square. Although it would not fit the definition of a competition game, if the game is modified so that the two players win the named amount if they both choose the same number, and otherwise win nothing, then there are 4 Nash equilibria: (0,0), (1,1), (2,2), and (3,3). Nash equilibria in a payoff matrix There is an easy numerical way to identify Nash equilibria on a payoff matrix. It is especially helpful in two-person games where players have more than two strategies. In this case formal analysis may become too long. This rule does not apply to the case where mixed (stochastic) strategies are of interest. The rule goes as follows: if the first payoff number, in the payoff pair of the cell, is the maximum of the column of the cell, and if the second number is the maximum of the row of the cell, then the cell represents a Nash equilibrium. We can apply this rule to a 3×3 matrix: Using the rule, we can very quickly (much faster than with formal analysis) see that the Nash equilibria cells are (B,A), (A,B), and (C,C). Indeed, for cell (B,A), 40 is the maximum of the first column and 25 is the maximum of the second row. For (A,B), 25 is the maximum of the second column and 40 is the maximum of the first row; the same applies for cell (C,C). For other cells, either one or both of the duplet members are not the maximum of the corresponding rows and columns. This said, the actual mechanics of finding equilibrium cells are straightforward: find the maximum of a column and check whether the second member of the pair is the maximum of the row. If these conditions are met, the cell represents a Nash equilibrium. Check all columns this way to find all NE cells. An N×N matrix may have between 0 and N×N pure-strategy Nash equilibria. Stability The concept of stability, useful in the analysis of many kinds of equilibria, can also be applied to Nash equilibria. A Nash equilibrium for a mixed-strategy game is stable if a small change (specifically, an infinitesimal change) in probabilities for one player leads to a situation where two conditions hold: the player who did not change has no better strategy in the new circumstance; the player who did change is now playing with a strictly worse strategy. If these cases are both met, then a player with the small change in their mixed strategy will return immediately to the Nash equilibrium. The equilibrium is said to be stable. If condition one does not hold then the equilibrium is unstable. If only condition one holds then there are likely to be an infinite number of optimal strategies for the player who changed. In the "driving game" example above there are both stable and unstable equilibria. The equilibria involving mixed strategies with 100% probabilities are stable. If either player changes their probabilities slightly, they will both be at a disadvantage, and their opponent will have no reason to change their strategy in turn. The (50%,50%) equilibrium is unstable.
If either player changes their probabilities (which would neither benefit nor damage the expectation of the player who did the change, if the other player's mixed strategy is still (50%,50%)), then the other player immediately has a better strategy at either (0%, 100%) or (100%, 0%). Stability is crucial in practical applications of Nash equilibria, since the mixed strategy of each player is not perfectly known, but has to be inferred from the statistical distribution of their actions in the game. In this case unstable equilibria are very unlikely to arise in practice, since any minute change in the proportions of each strategy seen will lead to a change in strategy and the breakdown of the equilibrium. Finally, in the 1980s, building in great depth on such ideas, Mertens-stable equilibria were introduced as a solution concept. Mertens stable equilibria satisfy both forward induction and backward induction. In a game theory context, stable equilibria now usually refer to Mertens stable equilibria. Occurrence If a game has a unique Nash equilibrium and is played among players under certain conditions, then the NE strategy set will be adopted. Sufficient conditions to guarantee that the Nash equilibrium is played are: The players will all do their utmost to maximize their expected payoff as described by the game. The players are flawless in execution. The players have sufficient intelligence to deduce the solution. The players know the planned equilibrium strategy of all of the other players. The players believe that a deviation in their own strategy will not cause deviations by any other players. There is common knowledge that all players meet these conditions, including this one. So, not only must each player know the other players meet the conditions, but also they must know that they all know that they meet them, and know that they know that they know that they meet them, and so on. Where the conditions are not met Examples of game theory problems in which these conditions are not met: The first condition is not met if the game does not correctly describe the quantities a player wishes to maximize. In this case there is no particular reason for that player to adopt an equilibrium strategy. For instance, the prisoner's dilemma is not a dilemma if either player is happy to be jailed indefinitely. Intentional or accidental imperfection in execution. For example, a computer capable of flawless logical play facing a second flawless computer will result in equilibrium. Introduction of imperfection will lead to its disruption either through loss to the player who makes the mistake, or through negation of the common knowledge criterion leading to possible victory for the player. (An example would be a player suddenly putting the car into reverse in the game of chicken, ensuring a no-loss no-win scenario). In many cases, the third condition is not met because, even though the equilibrium must exist, it is unknown due to the complexity of the game, for instance in Chinese chess. Or, if known, it may not be known to all players, as when playing tic-tac-toe with a small child who desperately wants to win (meeting the other criteria). The criterion of common knowledge may not be met even if all players do, in fact, meet all the other criteria. Players wrongly distrusting each other's rationality may adopt counter-strategies to expected irrational play on their opponents' behalf. This is a major consideration in "chicken" or an arms race, for example. Where the conditions are met In his Ph.D.
dissertation, John Nash proposed two interpretations of his equilibrium concept, with the objective of showing how equilibrium points can be connected with observable phenomena. This idea was formalized by R. Aumann and A. Brandenburger (1995, "Epistemic Conditions for Nash Equilibrium", Econometrica, 63, 1161–1180), who interpreted each player's mixed strategy as a conjecture about the behaviour of other players and showed that if the game and the rationality of players are mutually known and these conjectures are commonly known, then the conjectures must be a Nash equilibrium (a common prior assumption is needed for this result in general, but not in the case of two players; in this case, the conjectures need only be mutually known). A second interpretation, which Nash referred to as the mass-action interpretation, is less demanding on players. For a formal result along these lines, see Kuhn, H., et al., 1996, "The Work of John Nash in Game Theory", Journal of Economic Theory, 69, 153–185. Due to the limited conditions in which NE can actually be observed, they are rarely treated as a guide to day-to-day behaviour, or observed in practice in human negotiations. However, as a theoretical concept in economics and evolutionary biology, the NE has explanatory power. The payoff in economics is utility (or sometimes money), and in evolutionary biology is gene transmission; both are the fundamental bottom line of survival. Researchers who apply game theory in these fields claim that strategies failing to maximize these for whatever reason will be competed out of the market or environment, which are ascribed the ability to test all strategies. This conclusion is drawn from the "stability" theory above. In these situations the assumption that the strategy observed is actually a NE has often been borne out by research. NE and non-credible threats The Nash equilibrium is a superset of the subgame perfect Nash equilibrium. The subgame perfect equilibrium, in addition to the Nash equilibrium, requires that the strategy also is a Nash equilibrium in every subgame of that game. This eliminates all non-credible threats, that is, strategies that contain non-rational moves in order to make the counter-player change their strategy. The image to the right shows a simple sequential game that illustrates the issue with subgame imperfect Nash equilibria. In this game player one chooses left (L) or right (R), which is followed by player two being called upon to be kind (K) or unkind (U) to player one. However, player two only stands to gain from being unkind if player one goes left. If player one goes right, the rational player two would de facto be kind to them in that subgame. However, the non-credible threat of being unkind at 2(2) is still part of the blue (L, (U,U)) Nash equilibrium. Therefore, if rational behavior can be expected by both parties, the subgame perfect Nash equilibrium may be a more meaningful solution concept when such dynamic inconsistencies arise. Proof of existence Proof using the Kakutani fixed-point theorem Nash's original proof (in his thesis) used Brouwer's fixed-point theorem (e.g., see below for a variant). This section presents a simpler proof via the Kakutani fixed-point theorem, following Nash's 1950 paper (he credits David Gale with the observation that such a simplification is possible). To prove the existence of a Nash equilibrium, let $r_i(\sigma_{-i})$ be the best response of player $i$ to the strategies of all other players.
Here, $\sigma \in \Sigma$, where $\Sigma = \Sigma_1 \times \cdots \times \Sigma_N$, is a mixed-strategy profile in the set of all mixed strategies and $u_i$ is the payoff function for player $i$. Define a set-valued function $r\colon \Sigma \to 2^{\Sigma}$ such that $r(\sigma) = r_1(\sigma_{-1}) \times \cdots \times r_N(\sigma_{-N})$. The existence of a Nash equilibrium is equivalent to $r$ having a fixed point. Kakutani's fixed point theorem guarantees the existence of a fixed point if the following four conditions are satisfied: $\Sigma$ is compact, convex, and nonempty; $r(\sigma)$ is nonempty; $r(\sigma)$ is upper hemicontinuous; $r(\sigma)$ is convex. Condition 1 is satisfied from the fact that $\Sigma$ is a product of simplices and thus compact. Convexity follows from players' ability to mix strategies, and $\Sigma$ is nonempty as long as players have strategies. Conditions 2 and 3 are satisfied by way of Berge's maximum theorem: because $u_i$ is continuous and $\Sigma$ is compact, $r(\sigma)$ is non-empty and upper hemicontinuous. Condition 4 is satisfied as a result of mixed strategies: suppose $\sigma', \sigma'' \in r(\sigma)$; then $\lambda \sigma' + (1 - \lambda)\sigma'' \in r(\sigma)$ for $\lambda \in [0, 1]$, i.e. if two strategies maximize payoffs, then a mix between the two strategies will yield the same payoff. Therefore, there exists a fixed point in $r$ and a Nash equilibrium. When Nash made this point to John von Neumann in 1949, von Neumann famously dismissed it with the words, "That's trivial, you know. That's just a fixed-point theorem." (See Nasar, 1998, p. 94.) Alternate proof using the Brouwer fixed-point theorem We have a game $G = (N, A, u)$ where $N$ is the number of players and $A = A_1 \times \cdots \times A_N$ is the action set for the players. All of the action sets $A_i$ are finite. Let $\Delta = \Delta_1 \times \cdots \times \Delta_N$ denote the set of mixed strategies for the players. The finiteness of the $A_i$ ensures the compactness of $\Delta$. We can now define the gain functions. For a mixed strategy $\sigma \in \Delta$, we let the gain for player $i$ on action $a \in A_i$ be $\text{Gain}_i(\sigma, a) = \max\{0,\ u_i(a, \sigma_{-i}) - u_i(\sigma)\}$. The gain function represents the benefit a player gets by unilaterally changing their strategy. We now define $g = (g_1, \ldots, g_N)$, where $g_i(\sigma)(a) = \sigma_i(a) + \text{Gain}_i(\sigma, a)$ for $\sigma \in \Delta$ and $a \in A_i$. We see that $\sum_{a \in A_i} g_i(\sigma)(a) = 1 + \sum_{a \in A_i} \text{Gain}_i(\sigma, a) > 0$. Next we define $f\colon \Delta \to \Delta$ by $f(\sigma) = \hat{\sigma}$, where $\hat{\sigma}_i(a) = \frac{g_i(\sigma)(a)}{\sum_{b \in A_i} g_i(\sigma)(b)}$. It is easy to see that each $\hat{\sigma}_i$ is a valid mixed strategy in $\Delta_i$. It is also easy to check that each $\hat{\sigma}_i(a)$ is a continuous function of $\sigma$, and hence $f$ is a continuous function. As the cross product of a finite number of compact convex sets, $\Delta$ is also compact and convex. Applying the Brouwer fixed point theorem to $f$ and $\Delta$ we conclude that $f$ has a fixed point in $\Delta$, call it $\sigma^*$. We claim that $\sigma^*$ is a Nash equilibrium in $G$. For this purpose, it suffices to show that $\text{Gain}_i(\sigma^*, a) = 0$ for all players $i$ and all actions $a \in A_i$. This simply states that each player gains no benefit by unilaterally changing their strategy, which is exactly the necessary condition for a Nash equilibrium. Now assume that the gains are not all zero. Therefore, there exist a player $i$ and an action $a \in A_i$ such that $\text{Gain}_i(\sigma^*, a) > 0$. Then $\sum_{b \in A_i} g_i(\sigma^*)(b) = 1 + \sum_{b \in A_i} \text{Gain}_i(\sigma^*, b) > 1$. So let $C = \sum_{b \in A_i} g_i(\sigma^*)(b) > 1$. Also we shall denote $\text{Gain}_i(\sigma^*, \cdot)$ as the gain vector indexed by actions in $A_i$. Since $\sigma^*$ is the fixed point we have $\sigma^*_i = \hat{\sigma}^*_i = \frac{\sigma^*_i + \text{Gain}_i(\sigma^*, \cdot)}{C}$, hence $(C - 1)\,\sigma^*_i = \text{Gain}_i(\sigma^*, \cdot)$. Since $C > 1$ we have that $\sigma^*_i$ is some positive scaling of the vector $\text{Gain}_i(\sigma^*, \cdot)$. Now we claim that $\sigma^*_i(a)\,\big(u_i(a, \sigma^*_{-i}) - u_i(\sigma^*)\big) = \sigma^*_i(a)\,\text{Gain}_i(\sigma^*, a)$ for all $a \in A_i$. To see this, first note that if $\text{Gain}_i(\sigma^*, a) > 0$ then this is true by definition of the gain function. Now assume that $\text{Gain}_i(\sigma^*, a) = 0$. By our previous statements we have that $\sigma^*_i(a) = \frac{\text{Gain}_i(\sigma^*, a)}{C - 1} = 0$, and so the left term is zero, giving us that the entire expression is $0$ as needed. So we finally have that $0 = u_i(\sigma^*) - u_i(\sigma^*) = \sum_{a \in A_i} \sigma^*_i(a)\,\big(u_i(a, \sigma^*_{-i}) - u_i(\sigma^*)\big) = \sum_{a \in A_i} \sigma^*_i(a)\,\text{Gain}_i(\sigma^*, a) = \frac{1}{C - 1}\sum_{a \in A_i} \text{Gain}_i(\sigma^*, a)^2 > 0$, where the last inequality follows since $\text{Gain}_i(\sigma^*, \cdot)$ is a non-zero vector. But this is a clear contradiction, so all the gains must indeed be zero. Therefore, $\sigma^*$ is a Nash equilibrium for $G$ as needed. Computing Nash equilibria If a player A has a dominant strategy $s_A$ then there exists a Nash equilibrium in which A plays $s_A$. In the case of two players A and B, there exists a Nash equilibrium in which A plays $s_A$ and B plays a best response to $s_A$. If $s_A$ is a strictly dominant strategy, A plays $s_A$ in all Nash equilibria. If both A and B have strictly dominant strategies, there exists a unique Nash equilibrium in which each plays their strictly dominant strategy.
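The payoff-matrix rule described in the examples section is easy to mechanize for computing all pure-strategy equilibria of a two-player game. The following minimal Python sketch (not part of the original article) enumerates the cells in which each player's payoff is maximal against the other's choice; the matrix layout is an assumption, and the coordination-game payoffs used in the example fill in unstated off-diagonal values consistently with the text.

```python
# Hedged sketch: enumerate pure-strategy Nash equilibria of a bimatrix game.
# payoff_1[r][c] and payoff_2[r][c] are the row and column players' payoffs
# when the row player picks r and the column player picks c. A cell is an
# equilibrium iff payoff_1 is maximal in its column and payoff_2 in its row.

def pure_nash_equilibria(payoff_1, payoff_2):
    n_rows, n_cols = len(payoff_1), len(payoff_1[0])
    cells = []
    for r in range(n_rows):
        for c in range(n_cols):
            best_row = all(payoff_1[r][c] >= payoff_1[i][c] for i in range(n_rows))
            best_col = all(payoff_2[r][c] >= payoff_2[r][j] for j in range(n_cols))
            if best_row and best_col:
                cells.append((r, c))
    return cells

# Example: the coordination game from the text (A = index 0, B = index 1),
# with payoff 4 each at (A,A), 2 each at (B,B), and 1 for mismatches.
p1 = [[4, 1], [1, 2]]
p2 = [[4, 1], [1, 2]]
print(pure_nash_equilibria(p1, p2))  # [(0, 0), (1, 1)]
```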
In games with mixed-strategy Nash equilibria, the probability of a player choosing any particular pure strategy can be computed by assigning a variable to each strategy that represents a fixed probability for choosing that strategy. In order for a player to be willing to randomize, their expected payoff for each pure strategy should be the same. In addition, the sum of the probabilities for each strategy of a particular player should be 1. This creates a system of equations from which the probabilities of choosing each strategy can be derived. Examples In the matching pennies game, player A loses a point to B if A and B play the same strategy and wins a point from B if they play different strategies. To compute the mixed-strategy Nash equilibrium, assign A the probability $p$ of playing H and $1 - p$ of playing T, and assign B the probability $q$ of playing H and $1 - q$ of playing T. A is willing to randomize only if both pure strategies give the same expected payoff, i.e. $-q + (1 - q) = q - (1 - q)$, which gives $q = \tfrac{1}{2}$; by symmetry, $p = \tfrac{1}{2}$. Thus, a mixed-strategy Nash equilibrium in this game is for each player to randomly choose H or T with $p = \tfrac{1}{2}$ and $q = \tfrac{1}{2}$. Oddness of equilibrium points In 1971, Robert Wilson came up with the "oddness theorem", which says that "almost all" finite games have a finite and odd number of Nash equilibria. In 1993, Harsanyi published an alternative proof of the result. "Almost all" here means that any game with an infinite or even number of equilibria is very special in the sense that if its payoffs were even slightly randomly perturbed, with probability one it would have an odd number of equilibria instead. The prisoner's dilemma, for example, has one equilibrium, while the battle of the sexes has three: two pure and one mixed, and this remains true even if the payoffs change slightly. The free money game is an example of a "special" game with an even number of equilibria. In it, two players have to both vote "yes" rather than "no" to get a reward and the votes are simultaneous. There are two pure-strategy Nash equilibria, (yes, yes) and (no, no), and no mixed strategy equilibria, because the strategy "yes" weakly dominates "no". "Yes" is as good as "no" regardless of the other player's action, but if there is any chance the other player chooses "yes" then "yes" is the best reply. Under a small random perturbation of the payoffs, however, the probability that any two payoffs would remain tied, whether at 0 or some other number, is vanishingly small, and the game would have either one or three equilibria instead. See also Notes References Bibliography Game theory textbooks Dixit, Avinash, Susan Skeath and David Reiley. Games of Strategy. W.W. Norton & Company. (Third edition in 2009.) An undergraduate text. Suitable for undergraduate and business students. Fudenberg, Drew and Jean Tirole (1991) Game Theory, MIT Press. Lucid and detailed introduction to game theory in an explicitly economic context. Morgenstern, Oskar and John von Neumann (1947) The Theory of Games and Economic Behavior, Princeton University Press. A modern introduction at the graduate level. A comprehensive reference from a computational perspective; see Chapter 3. Downloadable free online. Original Nash papers Nash, John (1950) "Equilibrium points in n-person games" Proceedings of the National Academy of Sciences 36(1):48-49. Nash, John (1951) "Non-Cooperative Games" The Annals of Mathematics 54(2):286-295. Other references Mehlmann, A. (2000) The Game's Afoot! Game Theory in Myth and Paradox, American Mathematical Society. Nasar, Sylvia (1998), A Beautiful Mind, Simon & Schuster.
Aviad Rubinstein (2019), Hardness of Approximation Between P and NP, ACM, ISBN 978-1-947487-23-9, DOI: https://doi.org/10.1145/3241304. Explains why computing a Nash equilibrium is a hard computational problem. External links Complete Proof of Existence of Nash Equilibria Simplified Form and Related Results Game theory equilibrium concepts Fixed points (mathematics) 1951 in economic history
Nash equilibrium
[ "Mathematics" ]
7,562
[ "Mathematical analysis", "Fixed points (mathematics)", "Game theory", "Topology", "Game theory equilibrium concepts", "Dynamical systems" ]
45,705
https://en.wikipedia.org/wiki/Inverse%20transform%20sampling
Inverse transform sampling (also known as inversion sampling, the inverse probability integral transform, the inverse transformation method, or the Smirnov transform) is a basic method for pseudo-random number sampling, i.e., for generating sample numbers at random from any probability distribution given its cumulative distribution function. Inverse transformation sampling takes uniform samples of a number $u$ between 0 and 1, interpreted as a probability, and then returns the smallest number $x$ such that $F(x) \geq u$, where $F$ is the cumulative distribution function of the random variable. For example, imagine that $F$ is the cumulative distribution function of the standard normal distribution, with mean zero and standard deviation one. The table below shows samples taken from the uniform distribution and their representation on the standard normal distribution. We are randomly choosing a proportion of the area under the curve and returning the number in the domain such that exactly this proportion of the area occurs to the left of that number. Intuitively, we are unlikely to choose a number in the far ends of the tails, because there is very little area in them: doing so would require choosing a number very close to zero or one. Computationally, this method involves computing the quantile function of the distribution; in other words, computing the cumulative distribution function (CDF) of the distribution (which maps a number in the domain to a probability between 0 and 1) and then inverting that function. This is the source of the term "inverse" or "inversion" in most of the names for this method. Note that for a discrete distribution, computing the CDF is not in general too difficult: we simply add up the individual probabilities for the various points of the distribution. For a continuous distribution, however, we need to integrate the probability density function (PDF) of the distribution, which is impossible to do analytically for most distributions (including the normal distribution). As a result, this method may be computationally inefficient for many distributions and other methods are preferred; however, it is a useful method for building more generally applicable samplers such as those based on rejection sampling. For the normal distribution, the lack of an analytical expression for the corresponding quantile function means that other methods (e.g. the Box–Muller transform) may be preferred computationally. It is often the case that, even for simple distributions, the inverse transform sampling method can be improved on: see, for example, the ziggurat algorithm and rejection sampling. On the other hand, it is possible to approximate the quantile function of the normal distribution extremely accurately using moderate-degree polynomials, and in fact the method of doing this is fast enough that inversion sampling is now the default method for sampling from a normal distribution in the statistical package R. Formal statement For any random variable $X$, the random variable $F_X^{-1}(U)$ has the same distribution as $X$, where $F_X^{-1}$ is the generalized inverse of the cumulative distribution function $F_X$ of $X$ and $U$ is uniform on $[0, 1]$. For continuous random variables, the inverse probability integral transform is indeed the inverse of the probability integral transform, which states that for a continuous random variable $X$ with cumulative distribution function $F_X$, the random variable $U = F_X(X)$ is uniform on $[0, 1]$. Intuition From $U \sim \mathrm{Unif}[0, 1]$, we want to generate $X$ with CDF $F_X$. We assume $F_X$ to be a continuous, strictly increasing function, which provides good intuition.
We want to see if we can find some strictly monotone transformation $T\colon [0, 1] \to \mathbb{R}$ such that $T(U) \overset{d}{=} X$. We will have $F_X(x) = \Pr(X \leq x) = \Pr(T(U) \leq x) = \Pr(U \leq T^{-1}(x)) = T^{-1}(x)$, where the last step used that $\Pr(U \leq y) = y$ when $U$ is uniform on $[0, 1]$. So we got $F_X$ to be the inverse function of $T$, or, equivalently, $T = F_X^{-1}$. Therefore, we can generate $X$ from $F_X^{-1}(U)$. The method The problem that the inverse transform sampling method solves is as follows: Let $X$ be a random variable whose distribution can be described by the cumulative distribution function $F_X$. We want to generate values of $X$ which are distributed according to this distribution. The inverse transform sampling method works as follows: Generate a random number $u$ from the standard uniform distribution in the interval $[0, 1]$, i.e. from $U \sim \mathrm{Unif}[0, 1]$. Find the generalized inverse of the desired CDF, i.e. $F_X^{-1}$. Compute $X = F_X^{-1}(u)$. The computed random variable $F_X^{-1}(U)$ has distribution $F_X$ and thereby the same law as $X$. Expressed differently, given a cumulative distribution function $F$ and a uniform variable $U \in [0, 1]$, the random variable $X = F^{-1}(U)$ has the distribution $F$. In the continuous case, a treatment of such inverse functions as objects satisfying differential equations can be given. Some such differential equations admit explicit power series solutions, despite their non-linearity. Examples As an example, suppose we have a random variable $U \sim \mathrm{Unif}(0, 1)$ and a target cumulative distribution function $F$. In order to perform an inversion we want to solve $u = F(x)$ for $x$, i.e. compute $x = F^{-1}(u)$. From here we would perform steps one, two and three. As another example, we use the exponential distribution with $F_X(x) = 1 - e^{-\lambda x}$ for $x \geq 0$ (and 0 otherwise). By solving $y = F(x)$ we obtain the inverse function $F^{-1}(y) = -\tfrac{1}{\lambda}\ln(1 - y)$. It means that if we draw some $y_0$ from $U \sim \mathrm{Unif}(0, 1)$ and compute $x_0 = F^{-1}(y_0) = -\tfrac{1}{\lambda}\ln(1 - y_0)$, this $x_0$ has exponential distribution. The idea is illustrated in the accompanying graph. Note that the distribution does not change if we start with $1 - y$ instead of $y$. For computational purposes, it therefore suffices to generate random numbers $y$ in $[0, 1]$ and then simply calculate $x = -\tfrac{1}{\lambda}\ln(y)$. Proof of correctness Let $F$ be a cumulative distribution function, and let $F^{-1}$ be its generalized inverse function (using the infimum because CDFs are weakly monotonic and right-continuous): $F^{-1}(u) = \inf\{x \in \mathbb{R} \mid F(x) \geq u\}$ for $u \in [0, 1]$. Claim: If $U$ is a uniform random variable on $[0, 1]$ then $F^{-1}(U)$ has $F$ as its CDF. Proof: By the definition of the generalized inverse and the right-continuity of $F$, the events $\{F^{-1}(U) \leq x\}$ and $\{U \leq F(x)\}$ coincide, so $\Pr(F^{-1}(U) \leq x) = \Pr(U \leq F(x)) = F(x)$, since $U$ is uniform on $[0, 1]$. Truncated distribution Inverse transform sampling can be simply extended to cases of truncated distributions on the interval $(a, b]$ without the cost of rejection sampling: the same algorithm can be followed, but instead of generating a random number $u$ uniformly distributed between 0 and 1, generate $u$ uniformly distributed between $F(a)$ and $F(b)$, and then again take $F^{-1}(u)$. Reduction of the number of inversions In order to obtain a large number of samples, one needs to perform the same number of inversions of the distribution. One possible way to reduce the number of inversions while obtaining a large number of samples is the application of the so-called Stochastic Collocation Monte Carlo sampler (SCMC sampler) within a polynomial chaos expansion framework. This allows us to generate any number of Monte Carlo samples with only a few inversions of the original distribution with independent samples of a variable for which the inversions are analytically available, for example the standard normal variable. Software implementations There are software implementations available for applying the inverse sampling method by using numerical approximations of the inverse in the case that it is not available in closed form. For example, an approximation of the inverse can be computed if the user provides some information about the distributions such as the PDF or the CDF.
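For a distribution whose inverse CDF is available in closed form, no library is needed at all. The following minimal Python sketch (not part of the original article) implements the exponential example above using only the standard library; the function name is an assumption for illustration.

```python
import math
import random

# Hedged sketch: inverse transform sampling for the exponential distribution.
# The CDF is F(x) = 1 - exp(-lam * x), so the inverse is
# F^{-1}(y) = -ln(1 - y) / lam; as noted in the text, using y in place of
# 1 - y gives the same distribution.

def sample_exponential(lam: float) -> float:
    """Draw one Exp(lam) sample via inverse transform sampling."""
    y = random.random()              # uniform on [0, 1)
    return -math.log(1.0 - y) / lam  # apply the inverse CDF

samples = [sample_exponential(2.0) for _ in range(100_000)]
print(sum(samples) / len(samples))   # should be close to 1/lam = 0.5
```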
C library UNU.RAN R library Runuran Python subpackage sampling in scipy.stats See also Probability integral transform Copula, defined by means of probability integral transform. Quantile function, for the explicit construction of inverse CDFs. Inverse distribution function for a precise mathematical definition for distributions with discrete components. Rejection sampling is another common technique to generate random variates that does not rely on inversion of the CDF. References Monte Carlo methods Non-uniform random numbers
Inverse transform sampling
[ "Physics" ]
1,378
[ "Monte Carlo methods", "Computational physics" ]
45,708
https://en.wikipedia.org/wiki/Coordinate%20covalent%20bond
In coordination chemistry, a coordinate covalent bond, also known as a dative bond, dipolar bond, or coordinate bond, is a kind of two-center, two-electron covalent bond in which the two electrons derive from the same atom. The bonding of metal ions to ligands involves this kind of interaction. This type of interaction is central to Lewis acid–base theory. Coordinate bonds are commonly found in coordination compounds. Examples Coordinate covalent bonding is ubiquitous. In all metal aquo-complexes [M(H2O)n]m+, the bonding between water and the metal cation is described as a coordinate covalent bond. Metal-ligand interactions in most organometallic compounds and most coordination compounds are described similarly. The term dipolar bond is used in organic chemistry for compounds such as amine oxides, for which the electronic structure can be described in terms of the basic amine donating two electrons to an oxygen atom: R3N → O. The arrow → indicates that both electrons in the bond originate from the amine moiety. In a standard covalent bond each atom contributes one electron. Therefore, an alternative description is that the amine gives away one electron to the oxygen atom, which is then used, with the remaining unpaired electron on the nitrogen atom, to form a standard covalent bond. The process of transferring the electron from nitrogen to oxygen creates formal charges, so the electronic structure may also be depicted as R3N+–O−. This electronic structure has an electric dipole, hence the name dipolar bond. In reality, the atoms carry partial charges; the more electronegative atom of the two involved in the bond will usually carry a partial negative charge. One exception to this is carbon monoxide. In this case, the carbon atom carries the partial negative charge although it is less electronegative than oxygen. An example of a dative covalent bond is provided by the interaction between a molecule of ammonia, a Lewis base with a lone pair of electrons on the nitrogen atom, and boron trifluoride, a Lewis acid by virtue of the boron atom having an incomplete octet of electrons. In forming the adduct H3N → BF3, the boron atom attains an octet configuration. The electronic structure of a coordination complex can be described in terms of the set of ligands each donating a pair of electrons to a metal centre. For example, in hexamminecobalt(III) chloride, each ammonia ligand donates its lone pair of electrons to the cobalt(III) ion. In this case, the bonds formed are described as coordinate bonds. In the Covalent Bond Classification (CBC) method, ligands that form coordinate covalent bonds with a central atom are classed as L-type, while those that form normal covalent bonds are classed as X-type. Comparison with other electron-sharing modes In all cases, the bond, whether dative or "normal" electron-sharing, is a covalent bond. In common usage, the prefix dipolar, dative or coordinate merely serves to indicate the origin of the electrons used in creating the bond. For example, F3B ← O(C2H5)2 ("boron trifluoride (diethyl) etherate") is prepared from BF3 and :O(C2H5)2, as opposed to the radical species [•BF3]– and [•O(C2H5)2]+. The dative bond is also a convenience in terms of notation, as formal charges are avoided: we can write D: + []A ⇌ D → A rather than D+–A– (here : and [] represent the lone-pair and empty orbital on the electron-pair donor D and acceptor A, respectively).
The notation is sometimes used even when the Lewis acid-base reaction involved is only notional (e.g., the sulfoxide R2S → O is rarely if ever made by reacting the sulfide R2S with atomic oxygen O). Thus, most chemists do not make any claim with respect to the properties of the bond when choosing one notation over the other (formal charges vs. arrow bond). It is generally true, however, that bonds depicted this way are polar covalent, sometimes strongly so, and some authors claim that there are genuine differences in the properties of a dative bond and an electron-sharing bond and suggest that showing a dative bond is more appropriate in particular situations. As far back as 1989, Haaland characterized dative bonds as bonds that are (i) weak and long; (ii) with only a small degree of charge-transfer taking place during bond formation; and (iii) whose preferred mode of dissociation in the gas phase (or low-ε inert solvent) is heterolytic rather than homolytic. The ammonia-borane adduct (H3N → BH3) is given as a classic example: the bond is weak, with a dissociation energy of 31 kcal/mol (cf. 90 kcal/mol for ethane), and long, at 166 pm (cf. 153 pm for ethane), and the molecule possesses a dipole moment of 5.2 D that implies a transfer of only 0.2 e− from nitrogen to boron. The heterolytic dissociation of H3N → BH3 is estimated to require 27 kcal/mol, confirming that heterolysis into ammonia and borane is more favorable than homolysis into radical cation and radical anion. However, aside from clear-cut examples, there is considerable dispute as to when a particular compound qualifies and, thus, the overall prevalence of dative bonding (with respect to an author's preferred definition). Computational chemists have suggested quantitative criteria to distinguish between the two "types" of bonding. Some non-obvious examples where dative bonding is claimed to be important include carbon suboxide (O≡C → C(0) ← C≡O), tetraaminoallenes (described using dative bond language as "carbodicarbenes"; (R2N)2C → C(0) ← C(NR2)2), the Ramirez carbodiphosphorane (Ph3P → C(0) ← PPh3), and the bis(triphenylphosphine)iminium cation (Ph3P → N+ ← PPh3), all of which exhibit considerably bent equilibrium geometries, though with a shallow barrier to bending. Simple application of the normal rules for drawing Lewis structures by maximizing bonding (using electron-sharing bonds) and minimizing formal charges would predict heterocumulene structures, and therefore linear geometries, for each of these compounds. Thus, these molecules are claimed to be better modeled as coordination complexes of :C: (carbon(0) or carbone) or :N:+ (mononitrogen cation) with CO, PPh3, or N-heterocyclic carbenes as ligands, the lone pairs on the central atom accounting for the bent geometry. However, the usefulness of this view is disputed. References Chemical bonding Acid–base chemistry Coordination chemistry
Coordinate covalent bond
[ "Physics", "Chemistry", "Materials_science" ]
1,490
[ "Acid–base chemistry", "Coordination chemistry", "Equilibrium chemistry", "Condensed matter physics", "nan", "Chemical bonding" ]
45,783
https://en.wikipedia.org/wiki/Genotype%E2%80%93phenotype%20distinction
The genotype–phenotype distinction is drawn in genetics. The "genotype" is an organism's full hereditary information. The "phenotype" is an organism's actual observed properties, such as morphology, development, or behavior. This distinction is fundamental in the study of inheritance of traits and their evolution. Overview The terms "genotype" and "phenotype" were created by Wilhelm Johannsen in 1911, although the meaning of the terms and the significance of the distinction have evolved since they were introduced. It is the organism's physical properties that directly determine its chances of survival and reproductive output, but the inheritance of physical properties is dependent on the inheritance of genes. Therefore, understanding the theory of evolution via natural selection requires understanding the genotype–phenotype distinction. The genes contribute to a trait, and the phenotype is the observable manifestation of the genes (and therefore of the genotype that affects the trait). If a white mouse had recessive genes that caused the genes responsible for color to be inactive, its genotype would be responsible for its phenotype (the white color). The mapping of a set of genotypes to a set of phenotypes is sometimes referred to as the genotype–phenotype map. An organism's genotype is a major (the largest by far for morphology) influencing factor in the development of its phenotype, but it is not the only one. Even two organisms with identical genotypes may differ in their phenotypes, due to phenotypic plasticity. To what extent a particular genotype influences a phenotype depends on the relative dominance, penetrance, and expressivity of the alleles in question. One experiences this in everyday life with monozygous (i.e. identical) twins. Identical twins share the same genotype, since their genomes are identical; but they never have the same phenotype, although their phenotypes may be very similar. This is apparent in the fact that close relations can always tell them apart, even though others might not be able to see the subtle differences. Further, identical twins can be distinguished by their fingerprints, which are never completely identical. Phenotypic plasticity The concept of phenotypic plasticity defines the degree to which an organism's phenotype is determined by its genotype. A high level of plasticity means that environmental factors have a strong influence on the particular phenotype that develops. If there is little plasticity, the phenotype of an organism can be reliably predicted from knowledge of the genotype, regardless of environmental peculiarities during development. An example of high plasticity can be observed in larval newts: when these larvae sense the presence of predators such as dragonflies, they develop larger heads and tails relative to their body size and display darker pigmentation. Larvae with these traits have a higher chance of survival when exposed to the predators, but grow more slowly than other phenotypes. Genetic canalization In contrast to phenotypic plasticity, the concept of genetic canalization addresses the extent to which an organism's phenotype allows conclusions about its genotype. A phenotype is said to be canalized if mutations (changes in the genome) do not noticeably affect the physical properties of the organism. This means that a canalized phenotype may form from a large variety of different genotypes, in which case it is not possible to exactly predict the genotype from knowledge of the phenotype (i.e.
the genotype–phenotype map is not invertible). If canalization is not present, small changes in the genome have an immediate effect on the phenotype that develops. Importance to evolutionary biology According to Lewontin, the theoretical task for population genetics is a process in two spaces: a "genotypic space" and a "phenotypic space". The challenge of a complete theory of population genetics is to provide a set of laws that predictably map a population of genotypes (G1) to a phenotype space (P1), where selection takes place, and another set of laws that map the resulting population (P2) back to genotype space (G2) where Mendelian genetics can predict the next generation of genotypes, thus completing the cycle. Visualizing the transformation schematically: G1 → (T1) → P1 → (T2) → P2 → (T3) → G2 → (T4) → G1' → ... (adapted from Lewontin 1974, p. 12). T1 represents the genetic and epigenetic laws, the aspects of functional biology, or development, that transform a genotype into a phenotype. This is the "genotype–phenotype map". T2 is the transformation due to natural selection, T3 are the epigenetic relations that predict genotypes based on the selected phenotypes, and finally T4 the rules of Mendelian genetics. In practice, there are two bodies of evolutionary theory that exist in parallel: traditional population genetics operating in the genotype space and the biometric theory used in plant and animal breeding, operating in phenotype space. The missing part is the mapping between the genotype and phenotype space. This leads to a "sleight of hand" (as Lewontin terms it) whereby variables in the equations of one domain are considered parameters or constants, where, in a full treatment, they would be transformed themselves by the evolutionary process and are functions of the state variables in the other domain. The "sleight of hand" is assuming that the mapping is known. Proceeding as if it is understood is enough to analyze many cases of interest. For example, if the phenotype is almost one-to-one with genotype (sickle-cell disease) or the time-scale is sufficiently short, the "constants" can be treated as such; however, there are also many situations where that assumption does not hold. References External links Stanford Encyclopedia of Philosophy entry "Wilhelm Johannsen's Genotype-Phenotype Distinction" at the Embryo Project Encyclopedia Genetics
Genotype–phenotype distinction
[ "Biology" ]
1,271
[ "Genetics" ]
45,784
https://en.wikipedia.org/wiki/Biomimetics
Biomimetics or biomimicry is the emulation of the models, systems, and elements of nature for the purpose of solving complex human problems. The terms "biomimetics" and "biomimicry" are derived from the Ancient Greek βίος (bios), life, and μίμησις (mīmēsis), imitation, from μιμεῖσθαι (mīmeisthai), to imitate, from μῖμος (mimos), actor. A closely related field is bionics.

Nature has evolved over the roughly 3.8 billion years since life is estimated to have appeared on the Earth. It has evolved species with high performance using commonly found materials. Surfaces of solids interact with other surfaces and with the environment, and these interactions give rise to the properties of materials. Biological materials are highly organized from the molecular to the nano-, micro-, and macroscales, often in a hierarchical manner with intricate nanoarchitecture that ultimately makes up a myriad of different functional elements. Properties of materials and surfaces result from a complex interplay between surface structure and morphology and physical and chemical properties. Many materials, surfaces, and objects in general provide multifunctionality. Various materials, structures, and devices have been fabricated for commercial interest by engineers, material scientists, chemists, and biologists, and for beauty, structure, and design by artists and architects. Nature has solved engineering problems such as self-healing abilities, environmental exposure tolerance and resistance, hydrophobicity, self-assembly, and harnessing solar energy. The economic impact of bioinspired materials and surfaces is significant, on the order of several hundred billion dollars per year worldwide.

History

One of the early examples of biomimicry was the study of birds to enable human flight. Although never successful in creating a "flying machine", Leonardo da Vinci (1452–1519) was a keen observer of the anatomy and flight of birds, and made numerous notes and sketches on his observations as well as sketches of "flying machines". The Wright Brothers, who succeeded in flying the first powered heavier-than-air aircraft in 1903, allegedly derived inspiration from observations of pigeons in flight.

During the 1950s the American biophysicist and polymath Otto Schmitt developed the concept of "biomimetics". During his doctoral research he developed the Schmitt trigger by studying the nerves in squid, attempting to engineer a device that replicated the biological system of nerve propagation. He continued to focus on devices that mimic natural systems, and by 1957 he had perceived a converse to the standard view of biophysics at that time, a view he would come to call biomimetics. In 1960 Jack E. Steele coined a similar term, bionics, at Wright-Patterson Air Force Base in Dayton, Ohio, where Otto Schmitt also worked. Steele defined bionics as "the science of systems which have some function copied from nature, or which represent characteristics of natural systems or their analogues". Schmitt elaborated on the concept at a later meeting in 1963. In 1969, Schmitt used the term "biomimetic" in the title of one of his papers, and by 1974 it had found its way into Webster's Dictionary. Bionics had entered the same dictionary earlier, in 1960, as "a science concerned with the application of data about the functioning of biological systems to the solution of engineering problems". Bionic took on a different connotation when Martin Caidin referenced Jack Steele and his work in the novel Cyborg, which later resulted in the 1974 television series The Six Million Dollar Man and its spin-offs.
The term bionic then became associated with "the use of electronically operated artificial body parts" and "having ordinary human powers increased by or as if by the aid of such devices". Because the term bionic took on the implication of supernatural strength, the scientific community in English-speaking countries largely abandoned it.

The term biomimicry appeared as early as 1982. Biomimicry was popularized by scientist and author Janine Benyus in her 1997 book Biomimicry: Innovation Inspired by Nature. Biomimicry is defined in the book as a "new science that studies nature's models and then imitates or takes inspiration from these designs and processes to solve human problems". Benyus suggests looking to Nature as a "Model, Measure, and Mentor" and emphasizes sustainability as an objective of biomimicry. A more recent example is the "managemANT" approach described by Johannes-Paul Fladerer and Ernst Kurzmann. This term (a combination of the words "management" and "ant") describes the use of the behavioural strategies of ants in economic and management strategies. The potential long-term impacts of biomimicry were quantified in a 2013 Fermanian Business & Economic Institute report commissioned by the San Diego Zoo, which demonstrated the potential economic and environmental benefits of biomimicry.

Bio-inspired technologies

Biomimetics could in principle be applied in many fields. Because of the diversity and complexity of biological systems, the number of features that might be imitated is large. Biomimetic applications are at various stages of development, from technologies that might become commercially usable to prototypes. Murray's law, which in conventional form determines the optimum diameter of blood vessels, has been re-derived to provide simple equations for the pipe or tube diameter which gives a minimum-mass engineering system (a short numerical sketch appears below).

Locomotion

Aircraft wing design and flight techniques are being inspired by birds and bats. The aerodynamic, streamlined nose of the Japanese 500 Series Shinkansen high-speed train was modelled on the beak of the kingfisher. Biorobots based on the physiology and methods of locomotion of animals include BionicKangaroo, which moves like a kangaroo, saving energy from one jump and transferring it to its next jump; Kamigami Robots, a children's toy, which mimic cockroach locomotion to run quickly and efficiently over indoor and outdoor surfaces; and Pleobot, a shrimp-inspired robot used to study metachronal swimming and the ecological impacts of this propulsive gait on the environment.

Biomimetic flying robots (BFRs)

BFRs take inspiration from flying mammals, birds, or insects. BFRs can have flapping wings, which generate both lift and thrust, or they can be propeller actuated. BFRs with flapping wings have increased stroke efficiency, increased maneuverability, and reduced energy consumption in comparison to propeller-actuated BFRs. Mammal- and bird-inspired BFRs share similar flight characteristics and design considerations. For instance, both mammal- and bird-inspired BFRs minimize edge fluttering and pressure-induced wingtip curl by increasing the rigidity of the wing edge and wingtips. Mammal- and insect-inspired BFRs can be impact resistant, making them useful in cluttered environments.
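As a rough numerical illustration of Murray's law mentioned above, the conventional cube-law form states that the cube of a parent vessel's diameter equals the sum of the cubes of its daughters' diameters. The diameters below are made up for the example.

```python
def murray_parent_diameter(daughter_diameters):
    """Murray's law in conventional form: the cube of a parent vessel's
    diameter equals the sum of the cubes of its daughters' diameters."""
    return sum(d ** 3 for d in daughter_diameters) ** (1 / 3)

# A parent vessel feeding two equal daughter branches of 1.0 mm:
print(murray_parent_diameter([1.0, 1.0]))  # ~1.26 mm
```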
Mammal-inspired BFRs typically take inspiration from bats, but the flying squirrel has also inspired a prototype. Examples of bat-inspired BFRs include Bat Bot and the DALER. Mammal-inspired BFRs can be designed to be multi-modal; therefore, they are capable of both flight and terrestrial movement. To reduce the impact of landing, shock absorbers can be implemented along the wings. Alternatively, the BFR can pitch up and increase the amount of drag it experiences. By increasing the drag force, the BFR decelerates and minimizes the impact upon grounding. Different land gait patterns can also be implemented.

Bird-inspired BFRs can take inspiration from raptors, gulls, and everything in between. Bird-inspired BFRs can be feathered to increase the angle-of-attack range over which the prototype can operate before stalling. The wings of bird-inspired BFRs allow for in-plane deformation, and the in-plane wing deformation can be adjusted to maximize flight efficiency depending on the flight gait. An example of a raptor-inspired BFR is the prototype by Savastano et al., which has fully deformable flapping wings and is capable of carrying a payload of up to 0.8 kg while performing a parabolic climb, steep descent, and rapid recovery. The gull-inspired prototype by Grant et al. accurately mimics the elbow and wrist rotation of gulls; the authors find that lift generation is maximized when the elbow and wrist deformations are opposite but equal.

Insect-inspired BFRs typically take inspiration from beetles or dragonflies. An example of a beetle-inspired BFR is the prototype by Phan and Park, and a dragonfly-inspired BFR is the prototype by Hu et al. The flapping frequency of insect-inspired BFRs is much higher than that of other BFRs; this is because of the aerodynamics of insect flight. Insect-inspired BFRs are much smaller than those inspired by mammals or birds, so they are more suitable for dense environments. The prototype by Phan and Park took inspiration from the rhinoceros beetle, so it can successfully continue flight even after a collision by deforming its hindwings.

Biomimetic architecture

Living beings have adapted to a constantly changing environment during evolution through mutation, recombination, and selection. The core idea of the biomimetic philosophy is that nature's inhabitants, including animals, plants, and microbes, have the most experience in solving problems and have already found the most appropriate ways to last on planet Earth. Similarly, biomimetic architecture seeks solutions for building sustainability present in nature. While nature serves as a model, there are few examples of biomimetic architecture that aim to be nature-positive.

The 21st century has seen widespread waste of energy due to inefficient building designs, in addition to the over-utilization of energy during the operational phase of a building's life cycle. In parallel, recent advancements in fabrication techniques, computational imaging, and simulation tools have opened up new possibilities to mimic nature across different architectural scales. As a result, there has been rapid growth in devising innovative design approaches and solutions to counter energy problems. Biomimetic architecture is one of these multi-disciplinary approaches to sustainable design that follows a set of principles rather than stylistic codes, going beyond using nature as inspiration for the aesthetic components of built form and instead seeking to use nature to solve problems of the building's functioning and to save energy.
Characteristics

The term biomimetic architecture refers to the study and application of construction principles which are found in natural environments and species, and are translated into the design of sustainable solutions for architecture. Biomimetic architecture uses nature as a model, measure, and mentor to provide architectural solutions across scales which are inspired by natural organisms that have solved similar problems in nature. Using nature as a measure refers to using an ecological standard for measuring the sustainability and efficiency of man-made innovations, while the term mentor refers to learning from natural principles and using biology as an inspirational source.

Biomorphic architecture, also referred to as bio-decoration, on the other hand, refers to the use of formal and geometric elements found in nature as a source of inspiration for the aesthetic properties of designed architecture, and may not necessarily have non-physical or economic functions. A historic example of biomorphic architecture dates back to Egyptian, Greek and Roman cultures, which used tree and plant forms in the ornamentation of structural columns.

Procedures

Within biomimetic architecture, two basic procedures can be identified, namely, the bottom-up approach (biology push) and the top-down approach (technology pull). The boundary between the two approaches is blurry, with the possibility of transition between the two, depending on each individual case. Biomimetic architecture is typically carried out in interdisciplinary teams in which biologists and other natural scientists work in collaboration with engineers, material scientists, architects, designers, mathematicians and computer scientists. In the bottom-up approach, the starting point is a new result from basic biological research promising for biomimetic implementation, for example, developing a biomimetic material system after the quantitative analysis of the mechanical, physical, and chemical properties of a biological system. In the top-down approach, biomimetic innovations are sought for developments that have already been successfully established on the market. The cooperation focuses on the improvement or further development of an existing product.

Examples

Researchers studied the termite's ability to maintain virtually constant temperature and humidity in their termite mounds in Africa despite wide swings in outside temperature. Researchers initially scanned a termite mound and created 3-D images of the mound structure, which revealed construction that could influence human building design. The Eastgate Centre, a mid-rise office complex in Harare, Zimbabwe, stays cool via a passive cooling architecture that uses only 10% of the energy of a conventional building of the same size.

Researchers at the Sapienza University of Rome were inspired by the natural ventilation in termite mounds and designed a double façade that significantly cuts down on over-lit areas in a building. Scientists have imitated the porous nature of mound walls by designing a façade with double panels that was able to reduce heat gained by radiation and increase heat loss by convection in the cavity between the two panels. The overall cooling load on the building's energy consumption was reduced by 15%. A similar inspiration was drawn from the porous walls of termite mounds to design a naturally ventilated façade with a small ventilation gap.
This design of façade is able to induce air flow due to the Venturi effect and continuously circulates rising air in the ventilation slot (a rough numerical sketch follows below). Significant transfer of heat between the building's external wall surface and the air flowing over it was observed. The design is coupled with greening of the façade. The green wall facilitates additional natural cooling via evaporation, respiration and transpiration in plants, and the damp plant substrate further supports the cooling effect.

Scientists at Shanghai University were able to replicate the complex microstructure of the clay-made conduit network in the mound to mimic the excellent humidity control of mounds. They proposed a porous humidity-control material (HCM) using sepiolite and calcium chloride with a water vapor adsorption–desorption content of 550 g/m². Calcium chloride is a desiccant and improves the water vapor adsorption–desorption property of the bio-HCM. The proposed bio-HCM has a regime of interfiber mesopores which acts as a mini reservoir. The flexural strength of the proposed material was estimated to be 10.3 MPa using computational simulations.

In structural engineering, the Swiss Federal Institute of Technology (EPFL) has incorporated biomimetic characteristics in an adaptive deployable "tensegrity" bridge. The bridge can carry out self-diagnosis and self-repair. The arrangement of leaves on a plant has been adapted for better solar power collection. Analysis of the elastic deformation that occurs when a pollinator lands on the sheath-like perch part of the flower Strelitzia reginae (known as the bird-of-paradise flower) has inspired architects and scientists from the University of Freiburg and the University of Stuttgart to create hingeless shading systems that can react to their environment. These bio-inspired products are sold under the name Flectofin. Other hingeless bioinspired systems include Flectofold, which was inspired by the trapping system developed by the carnivorous plant Aldrovanda vesiculosa.

Structural materials

There is a great need for new structural materials that are lightweight but offer exceptional combinations of stiffness, strength, and toughness. Such materials would need to be manufactured into bulk materials with complex shapes at high volume and low cost, and would serve a variety of fields such as construction, transportation, and energy storage and conversion. In a classic design problem, strength and toughness tend to be mutually exclusive, i.e., strong materials are brittle and tough materials are weak. However, natural materials with complex and hierarchical material gradients that span from the nano- to the macroscale are both strong and tough. Generally, most natural materials utilize limited chemical components but complex material architectures that give rise to exceptional mechanical properties. Understanding the highly diverse and multifunctional biological materials and discovering approaches to replicate such structures will lead to advanced and more efficient technologies. Bone, nacre (abalone shell), teeth, the dactyl clubs of stomatopod shrimps and bamboo are great examples of damage-tolerant materials. The exceptional resistance to fracture of bone is due to complex deformation and toughening mechanisms that operate at different size scales, from the nanoscale structure of protein molecules to the macroscopic physiological scale. Nacre exhibits similar mechanical properties, however with a rather simpler structure.
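Returning to the Venturi-driven façade described above, a crude continuity-and-Bernoulli estimate of the air speed and suction in a ventilation slot can be sketched as follows. All numbers are illustrative assumptions, not measurements from the study.

```python
RHO_AIR = 1.2  # kg/m^3, air density at roughly 20 degrees C

def venturi_slot_speed(approach_speed, inlet_area, slot_area):
    """Continuity: air entering over area A1 at speed v1 accelerates to
    v2 = v1 * A1 / A2 in the narrower ventilation slot."""
    return approach_speed * inlet_area / slot_area

def pressure_drop(v1, v2):
    """Bernoulli: static pressure falls as the flow accelerates, which is
    the suction that keeps air circulating through the slot."""
    return 0.5 * RHO_AIR * (v2 ** 2 - v1 ** 2)

v2 = venturi_slot_speed(approach_speed=0.5, inlet_area=2.0, slot_area=0.25)
print(v2, pressure_drop(0.5, v2))  # ~4 m/s in the slot, ~9.5 Pa of suction
```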
Nacre shows a brick-and-mortar-like structure with thick mineral layers (0.2–0.9 μm) of closely packed aragonite structures and a thin organic matrix (~20 nm); a back-of-the-envelope calculation of the resulting mineral content appears below. While thin films and micrometer-sized samples that mimic these structures have already been produced, successful production of bulk biomimetic structural materials is yet to be realized. However, numerous processing techniques have been proposed for producing nacre-like materials.

Pavement cells, epidermal cells on the surface of plant leaves and petals, often form wavy interlocking patterns resembling jigsaw puzzle pieces and have been shown to enhance the fracture toughness of leaves, which is key to plant survival. Their pattern, replicated in laser-engraved poly(methyl methacrylate) samples, was also demonstrated to lead to increased fracture toughness. It is suggested that the arrangement and patterning of cells play a role in managing crack propagation in tissues.

Biomorphic mineralization is a technique that produces materials with morphologies and structures resembling those of natural living organisms by using bio-structures as templates for mineralization. Compared to other methods of material production, biomorphic mineralization is facile, environmentally benign and economical.

Freeze casting (ice templating), an inexpensive method of mimicking natural layered structures, was employed by researchers at Lawrence Berkeley National Laboratory to create alumina-Al-Si and IT HAP-epoxy layered composites that match the mechanical properties of bone with an equivalent mineral/organic content. Various further studies have also employed similar methods to produce high-strength and high-toughness composites involving a variety of constituent phases. Recent studies have demonstrated the production of cohesive and self-supporting macroscopic tissue constructs that mimic living tissues by printing tens of thousands of heterologous picoliter droplets in software-defined, 3D millimeter-scale geometries. Efforts have also been made to mimic the design of nacre in artificial composite materials using fused deposition modelling, and to mimic the helicoidal structures of stomatopod clubs in the fabrication of high-performance carbon fiber–epoxy composites. Various established and novel additive manufacturing technologies, such as PolyJet printing, direct ink writing, 3D magnetic printing, multi-material magnetically assisted 3D printing and magnetically assisted slip casting, have also been utilized to mimic the complex micro-scale architectures of natural materials and provide huge scope for future research.

Spider silk is tougher than the Kevlar used in bulletproof vests. Engineers could in principle use such a material, if it could be re-engineered to have a long enough life, for parachute lines, suspension bridge cables, artificial ligaments for medicine, and other purposes. The self-sharpening teeth of many animals have been copied to make better cutting tools. New ceramics that exhibit giant electret hysteresis have also been realized.

Neuronal computers

Neuromorphic computers and sensors are electrical devices that copy the structure and function of biological neurons in order to compute. One example of this is the event camera, in which only the pixels that receive a new signal update to a new state; all other pixels do not update until a signal is received.

Self-healing materials

In some biological systems, self-healing occurs via chemical releases at the site of fracture, which initiate a systemic response to transport repairing agents to the fracture site. This promotes autonomic healing.
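Using the brick-and-mortar layer thicknesses for nacre given above, a back-of-the-envelope calculation of the mineral volume fraction in an idealized layered stack looks like this (illustrative only):

```python
def mineral_volume_fraction(t_mineral_nm, t_organic_nm):
    """Volume fraction of aragonite in an idealized brick-and-mortar
    stack of alternating mineral and organic layers."""
    return t_mineral_nm / (t_mineral_nm + t_organic_nm)

# Layer thicknesses from the text: 0.2-0.9 um mineral, ~20 nm organic.
for t in (200, 500, 900):
    print(t, round(mineral_volume_fraction(t, 20), 3))
# ~0.91-0.98: the stack is overwhelmingly mineral by volume, yet far
# tougher than monolithic aragonite thanks to the thin organic "mortar".
```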
To demonstrate the use of micro-vascular networks for autonomic healing, researchers developed a microvascular coating–substrate architecture that mimics human skin. Bio-inspired self-healing structural color hydrogels that maintain the stability of an inverse opal structure and its resultant structural colors have been developed. A self-repairing membrane inspired by rapid self-sealing processes in plants was developed for inflatable lightweight structures such as rubber boats or Tensairity constructions. The researchers applied a thin soft cellular polyurethane foam coating on the inside of a fabric substrate, which closes the crack if the membrane is punctured with a spike. Self-healing materials, polymers and composite materials capable of mending cracks have been produced based on biological materials. The self-healing properties may also be achieved by the breaking and reforming of hydrogen bonds upon cyclical stress of the material.

Surfaces

Surfaces that recreate the properties of shark skin are intended to enable more efficient movement through water. Efforts have been made to produce fabric that emulates shark skin. Surface tension biomimetics are being researched for technologies such as hydrophobic or hydrophilic coatings and microactuators.

Adhesion

Wet adhesion

Some amphibians, such as tree and torrent frogs and arboreal salamanders, are able to attach to and move over wet or even flooded environments without falling. These organisms have toe pads which are permanently wetted by mucus secreted from glands that open into the channels between epidermal cells. They attach to mating surfaces by wet adhesion and are capable of climbing on wet rocks even when water is flowing over the surface. Tire treads have also been inspired by the toe pads of tree frogs. 3D-printed hierarchical surface models, inspired by the toe pad design of tree and torrent frogs, have been observed to produce better wet traction than conventional tire designs.

Marine mussels can stick easily and efficiently to surfaces underwater under the harsh conditions of the ocean. Mussels use strong filaments to adhere to rocks in the intertidal zones of wave-swept beaches, preventing them from being swept away in strong sea currents. Mussel foot proteins attach the filaments to rocks, boats and practically any surface in nature, including other mussels. These proteins contain a mix of amino acid residues which has been adapted specifically for adhesive purposes. Researchers from the University of California, Santa Barbara borrowed and simplified the chemistries that the mussel foot uses to overcome this engineering challenge of wet adhesion, creating copolyampholytes and one-component adhesive systems with potential for employment in nanofabrication protocols. Other research has proposed adhesive glue derived from mussels.

Dry adhesion

Leg attachment pads of several animals, including many insects (e.g., beetles and flies), spiders and lizards (e.g., geckos), are capable of attaching to a variety of surfaces and are used for locomotion, even on vertical walls or across ceilings. Attachment systems in these organisms have similar structures at their terminal elements of contact, known as setae. Such biological examples have offered inspiration for climbing robots, boots and tape. Synthetic setae have also been developed for the production of dry adhesives.
Liquid repellency

Superliquiphobicity refers to a remarkable surface property whereby a solid surface exhibits an extreme aversion to liquids, causing droplets to bead up and roll off almost instantaneously upon contact. This behavior arises from intricate surface textures and interactions at the nanoscale, effectively preventing liquids from wetting or adhering to the surface. The term "superliquiphobic" is derived from "superhydrophobic", which describes surfaces highly resistant to water. Superliquiphobic surfaces go beyond water repellency and display repellent characteristics towards a wide range of liquids, including those with very low surface tension or containing surfactants.

Superliquiphobicity emerges when a solid surface possesses minute roughness, forming interfaces with droplets through wetting while altering contact angles. This behavior hinges on the roughness factor (Rf), defined as the ratio of the solid–liquid contact area to its projection, which influences contact angles. On rough surfaces, non-wetting liquids give rise to composite solid–liquid–air interfaces, their contact angles determined by the distribution of wetted and air-pocket areas. The achievement of superliquiphobicity involves increasing the fractional flat geometrical area (fLA) and Rf, leading to surfaces that actively repel liquids (the classical wetting relations behind these quantities are sketched below).

The inspiration for crafting such surfaces draws from nature's ingenuity, illustrated by the "lotus effect". Leaves of water-repellent plants, like the lotus, exhibit inherent hierarchical structures featuring nanoscale wax-coated formations. Other natural surfaces with these capabilities include beetle carapaces and cactus spines, which may exhibit rough features at multiple size scales. These structures lead to superhydrophobicity, where water droplets perch on trapped air bubbles, resulting in high contact angles and minimal contact angle hysteresis. This natural example guides the development of superliquiphobic surfaces, capitalizing on re-entrant geometries that can repel low-surface-tension liquids and achieve near-zero contact angles.

Creating superliquiphobic surfaces involves pairing re-entrant geometries with low-surface-energy materials, such as fluorinated substances or liquid-like silicones. These geometries include overhangs that widen beneath the surface, enabling repellency even for minimal contact angles. Such surfaces find utility in self-cleaning, anti-icing, anti-fogging, antifouling, enhanced condensation, and more, presenting innovative solutions to challenges in biomedicine, desalination, atmospheric water harvesting, and energy conversion. In essence, superliquiphobicity, inspired by natural models like the lotus leaf, capitalizes on re-entrant geometries and surface properties to create interfaces that actively repel liquids. These surfaces hold immense promise across a range of applications, promising enhanced functionality and performance in various technological and industrial contexts.

Optics

Biomimetic materials are gaining increasing attention in the field of optics and photonics. There are still few known bioinspired or biomimetic products involving the photonic properties of plants or animals. However, understanding how nature designed such optical materials from biological resources is a current field of research.

Inspiration from fruits and plants

One source of biomimetic inspiration is plants. Plants have proven to be concept generators for the following functions: re(action)-coupling, self-(adaptability), self-repair, and energy-autonomy.
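The roughness factor Rf and the fractional solid–liquid contact area introduced above enter the classical Wenzel and Cassie–Baxter wetting relations. A hedged sketch, where the intrinsic contact angle and texture parameters are assumed values:

```python
import math

def wenzel(theta0_deg, rf):
    """Wenzel regime: the liquid fully wets the rough texture.
    cos(theta*) = Rf * cos(theta0); roughness amplifies the
    intrinsic (non-)wettability."""
    c = max(-1.0, min(1.0, rf * math.cos(math.radians(theta0_deg))))
    return math.degrees(math.acos(c))

def cassie_baxter(theta0_deg, f_sl):
    """Cassie-Baxter regime: the droplet sits partly on trapped air.
    cos(theta*) = f_sl * (cos(theta0) + 1) - 1, where f_sl is the
    fractional solid-liquid contact area."""
    c = f_sl * (math.cos(math.radians(theta0_deg)) + 1) - 1
    return math.degrees(math.acos(c))

# A smooth wax-like surface (theta0 ~ 110 deg, assumed) on a rough texture:
print(wenzel(110, rf=1.8))           # ~128 deg
print(cassie_baxter(110, f_sl=0.1))  # ~159 deg, lotus-like
```

The two regimes bound real behavior: the same texture can amplify wetting (Wenzel) or suppress it (Cassie–Baxter) depending on whether air stays trapped beneath the droplet.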
As plants do not have a centralized decision-making unit (i.e. a brain), most plants have a decentralized autonomous system in the various organs and tissues of the plant. Therefore, they react to multiple stimuli such as light, heat, and humidity. One example is the carnivorous plant species Dionaea muscipula (Venus flytrap). For the last 25 years, research has focused on the motion principles of the plant in order to develop AVFT (artificial Venus flytrap robots). Through its movement during prey capture, the plant has inspired soft robotic motion systems. The fast snap-buckling (within 100–300 ms) of the trap closure movement is initiated when prey triggers the hairs of the plant within a certain time (twice within 20 s). AVFT systems exist in which the trap closure movements are actuated via magnetism, electricity, pressurized air, and temperature changes.

Another example of mimicking plants is the Pollia condensata, also known as the marble berry. The chiral self-assembly of cellulose inspired by the Pollia condensata berry has been exploited to make optically active films. Such films are made from cellulose, which is a biodegradable and biobased resource obtained from wood or cotton. The structural colours can potentially be everlasting and have more vibrant colour than those obtained from chemical absorption of light. Pollia condensata is not the only fruit showing a structurally coloured skin; iridescence is also found in berries of other species such as Margaritaria nobilis. These fruits show iridescent colours in the blue-green region of the visible spectrum, which gives the fruit a strong metallic and shiny visual appearance. The structural colours come from the organisation of cellulose chains in the fruit's epicarp, a part of the fruit skin. Each cell of the epicarp is made of a multilayered envelope that behaves like a Bragg reflector (a simple estimate of the reflected wavelength is sketched below). However, the light which is reflected from the skin of these fruits is not polarised, unlike that arising from man-made replicas obtained from the self-assembly of cellulose nanocrystals into helicoids, which reflect only left-handed circularly polarised light.

The fruit of Elaeocarpus angustifolius also shows structural colour, which arises from the presence of specialised cells called iridosomes that have layered structures. Similar iridosomes have also been found in Delarbrea michieana fruits. In plants, multilayer structures can be found either at the surface of the leaves (on top of the epidermis), such as in Selaginella willdenowii, or within specialized intra-cellular organelles, the so-called iridoplasts, which are located inside the cells of the upper epidermis. For instance, the rainforest plant Begonia pavonina has iridoplasts located inside its epidermal cells. Structural colours have also been found in several algae, such as the red alga Chondrus crispus (Irish moss).

Inspiration from animals

Structural coloration produces the rainbow colours of soap bubbles, butterfly wings and many beetle scales. Phase separation has been used to fabricate ultra-white scattering membranes from poly(methyl methacrylate), mimicking the beetle Cyphochilus. LED lights can be designed to mimic the patterns of scales on fireflies' abdomens, improving their efficiency. Morpho butterfly wings are structurally coloured to produce a vibrant blue that does not vary with angle. This effect can be mimicked by a variety of technologies. Lotus Cars claim to have developed a paint that mimics the Morpho butterfly's structural blue colour.
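A simple estimate of the peak wavelength reflected by a periodic two-layer (Bragg-reflector-like) envelope, such as the epicarp multilayers described above, follows. The refractive indices and thicknesses are assumed values chosen only to land in the blue, not measured properties of any particular fruit.

```python
def bragg_peak_nm(n1, d1_nm, n2, d2_nm, order=1):
    """First-order reflectance peak of a periodic two-layer stack at
    normal incidence: m * lambda = 2 * (n1*d1 + n2*d2)."""
    return 2 * (n1 * d1_nm + n2 * d2_nm) / order

# Illustrative cellulose-like bilayer (assumed indices and thicknesses):
print(bragg_peak_nm(n1=1.53, d1_nm=80, n2=1.35, d2_nm=75))  # ~447 nm, blue
```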
In 2007, Qualcomm commercialised an interferometric modulator display technology, "Mirasol", using Morpho-like optical interference. In 2010, the dressmaker Donna Sgro made a dress from Teijin Fibers' Morphotex, an undyed fabric woven from structurally coloured fibres, mimicking the microstructure of Morpho butterfly wing scales. Canon Inc.'s SubWavelength structure Coating uses wedge-shaped structures the size of the wavelength of visible light. The wedge-shaped structures cause a continuously changing refractive index as light travels through the coating, significantly reducing lens flare. This imitates the structure of a moth's eye. Notable figures such as the Wright Brothers and Leonardo da Vinci attempted to replicate the flight observed in birds. In an effort to reduce aircraft noise, researchers have looked to the leading edge of owl feathers, which have an array of small finlets, or rachises, adapted to disperse aerodynamic pressure and provide nearly silent flight to the bird.

Agricultural systems

Holistic planned grazing, using fencing and/or herders, seeks to restore grasslands by carefully planning movements of large herds of livestock to mimic the vast herds found in nature. The natural system being mimicked and used as a template is that of grazing animals concentrated by pack predators, which must move on after eating, trampling, and manuring an area, and return only after it has fully recovered. Its founder Allan Savory and some others have claimed potential in building soil, increasing biodiversity, and reversing desertification. However, many researchers have disputed Savory's claim, and studies have often found that the method increases desertification instead of reducing it.

Other uses

Some air conditioning systems use biomimicry in their fans to increase airflow while reducing power consumption. Technologists like Jas Johl have speculated that the functionality of vacuole cells could be used to design highly adaptable security systems: "The functionality of a vacuole, a biological structure that guards and promotes growth, illuminates the value of adaptability as a guiding principle for security." The functions and significance of vacuoles are fractal in nature; the organelle has no basic shape or size, and its structure varies according to the requirements of the cell. Vacuoles not only isolate threats, contain what is necessary, export waste, and maintain pressure; they also help the cell scale and grow. Johl argues these functions are necessary for any security system design. The 500 Series Shinkansen used biomimicry to reduce energy consumption and noise levels while increasing passenger comfort. With reference to space travel, NASA and other firms have sought to develop swarm-type space drones inspired by bee behavioural patterns, and octopod terrestrial drones designed with reference to desert spiders.

Other technologies

Protein folding has been used to control material formation for self-assembled functional nanostructures. Polar bear fur has inspired the design of thermal collectors and clothing. The light-refractive properties of the moth's eye have been studied to reduce the reflectivity of solar panels. The Bombardier beetle's powerful repellent spray inspired a Swedish company to develop a "micro mist" spray technology, which is claimed to have a low carbon impact (compared to aerosol sprays). The beetle mixes chemicals and releases its spray via a steerable nozzle at the end of its abdomen, stinging and confusing the victim.
Most viruses have an outer capsule 20 to 300 nm in diameter. Virus capsules are remarkably robust and capable of withstanding temperatures as high as 60 °C; they are stable across the pH range 2–10. Viral capsules can be used to create nano-device components such as nanowires, nanotubes, and quantum dots. Tubular virus particles such as the tobacco mosaic virus (TMV) can be used as templates to create nanofibers and nanotubes, since both the inner and outer layers of the virus are charged surfaces which can induce nucleation of crystal growth. This was demonstrated through the production of platinum and gold nanotubes using TMV as a template. Mineralized virus particles have been shown to withstand various pH values when mineralized with different materials such as silicon, PbS, and CdS, and could therefore serve as useful carriers of material. A spherical plant virus called cowpea chlorotic mottle virus (CCMV) has interesting expanding properties when exposed to environments with a pH higher than 6.5. Above this pH, 60 independent pores with diameters of about 2 nm begin to exchange substances with the environment. The structural transition of the viral capsid can be utilized in biomorphic mineralization for selective uptake and deposition of minerals by controlling the solution pH. Possible applications include using the viral cage to produce uniformly shaped and sized quantum-dot semiconductor nanoparticles through a series of pH washes. This is an alternative to the apoferritin cage technique currently used to synthesize uniform CdSe nanoparticles. Such materials could also be used for targeted drug delivery, since particles release their contents upon exposure to specific pH levels.

See also
Artificial photosynthesis
Artificial enzyme
Bio-inspired computing
Bioinspiration & Biomimetics
Biomimetic synthesis
Carbon sequestration
Reverse engineering
Synthetic biology

External links
Biomimetics MIT
Sex, Velcro and Biomimicry with Janine Benyus
Janine Benyus: Biomimicry in Action from TED 2009
Design by Nature – National Geographic
Michael Pawlyn: Using nature's genius in architecture from TED 2010
Robert Full shows how human engineers can learn from animals' tricks from TED 2002
The Fast Draw: Biomimicry from CBS News
Biomimetics
[ "Physics", "Chemistry", "Engineering", "Biology" ]
7,658
[ "Evolutionary biology", "Biological engineering", "Applied and interdisciplinary physics", "Bionics", "Industrial engineering", "Biotechnology", "Physical systems", "Transport", "Sustainable transport", "Bioinformatics", "Biophysics", "nan", "Environmental engineering", "Industrial ecology...
45,829
https://en.wikipedia.org/wiki/Structural%20engineering
Structural engineering is a sub-discipline of civil engineering in which structural engineers are trained to design the 'bones and joints' that create the form and shape of human-made structures. Structural engineers also must understand and calculate the stability, strength, rigidity and earthquake-susceptibility of built structures, both buildings and nonbuilding structures. They integrate their structural designs with those of other designers such as architects and building services engineers, and often supervise the construction of projects by contractors on site. They can also be involved in the design of machinery, medical equipment, and vehicles where structural integrity affects functioning and safety. See glossary of structural engineering.

Structural engineering theory is based upon applied physical laws and empirical knowledge of the structural performance of different materials and geometries. Structural engineering design uses a number of relatively simple structural concepts to build complex structural systems. Structural engineers are responsible for making creative and efficient use of funds, structural elements and materials to achieve these goals.

History

Structural engineering dates back to 2700 B.C., when the step pyramid for Pharaoh Djoser was built by Imhotep, the first engineer in history known by name. Pyramids were the most common major structures built by ancient civilizations because the structural form of a pyramid is inherently stable and can be almost infinitely scaled (as opposed to most other structural forms, which cannot be linearly increased in size in proportion to increased loads). The structural stability of the pyramid, whilst primarily gained from its shape, relies also on the strength of the stone from which it is constructed, and its ability to support the weight of the stone above it. The limestone blocks were often taken from a quarry near the building site and have a compressive strength from 30 to 250 MPa (1 MPa = 10⁶ Pa). Therefore, the structural strength of the pyramid stems from the material properties of the stones from which it was built rather than the pyramid's geometry.

Throughout ancient and medieval history most architectural design and construction were carried out by artisans, such as stonemasons and carpenters, rising to the role of master builder. No theory of structures existed, and understanding of how structures stood up was extremely limited, based almost entirely on empirical evidence of 'what had worked before' and intuition. Knowledge was retained by guilds and seldom supplanted by advances. Structures were repetitive, and increases in scale were incremental. No record exists of the first calculations of the strength of structural members or the behavior of structural material; the profession of structural engineer only really took shape with the Industrial Revolution and the re-invention of concrete (see History of Concrete). The physical sciences underlying structural engineering began to be understood in the Renaissance and have since developed into computer-based applications pioneered in the 1970s.

Timeline

1452–1519: Leonardo da Vinci made many contributions.
1638: Galileo Galilei published the book Two New Sciences, in which he examined the failure of simple structures.
1660: Hooke's law, by Robert Hooke.
1687: Isaac Newton published Philosophiæ Naturalis Principia Mathematica, which contains his laws of motion.
1750: Euler–Bernoulli beam equation.
1700–1782: Daniel Bernoulli introduced the principle of virtual work.
1707–1783: Leonhard Euler developed the theory of buckling of columns.
1826: Claude-Louis Navier published a treatise on the elastic behaviors of structures.
1873: Carlo Alberto Castigliano presented his dissertation "Intorno ai sistemi elastici", which contains his theorem for computing displacement as the partial derivative of the strain energy. This theorem includes the method of "least work" as a special case.
1874: Otto Mohr formalized the idea of a statically indeterminate structure.
1922: Timoshenko corrects the Euler–Bernoulli beam equation.
1936: Hardy Cross publishes the moment distribution method, an important innovation in the design of continuous frames.
1941: Alexander Hrennikoff solved the discretization of plane elasticity problems using a lattice framework.
1942: Richard Courant divided a domain into finite subregions.
1956: J. Turner, R. W. Clough, H. C. Martin, and L. J. Topp's paper on the "Stiffness and Deflection of Complex Structures" introduces the name "finite-element method" and is widely recognized as the first comprehensive treatment of the method as it is known today.

Structural failure

The history of structural engineering contains many collapses and failures. Sometimes this is due to obvious negligence, as in the case of the Pétion-Ville school collapse, in which Rev. Fortin Augustin "constructed the building all by himself, saying he didn't need an engineer as he had good knowledge of construction" following a partial collapse of the three-story schoolhouse that sent neighbors fleeing. The final collapse killed 94 people, mostly children. In other cases structural failures require careful study, and the results of these inquiries have resulted in improved practices and a greater understanding of the science of structural engineering. Some such studies are the result of forensic engineering investigations, where the original engineer seems to have done everything in accordance with the state of the profession and acceptable practice yet a failure still eventuated. A famous case of structural knowledge and practice being advanced in this manner can be found in a series of failures involving box girders which collapsed in Australia during the 1970s.

Theory

Structural engineering depends upon a detailed knowledge of applied mechanics, materials science, and applied mathematics to understand and predict how structures support and resist self-weight and imposed loads. To apply this knowledge successfully a structural engineer generally requires detailed knowledge of relevant empirical and theoretical design codes and the techniques of structural analysis, as well as some knowledge of the corrosion resistance of materials and structures, especially when those structures are exposed to the external environment. Since the 1990s, specialist software has become available to aid in the design of structures, with the functionality to assist in the drawing, analyzing and designing of structures with maximum precision; examples include AutoCAD, StaadPro, ETABS, Prokon, Revit Structure, Inducta RCB, etc. Such software may also take into consideration environmental loads, such as earthquakes and winds.

Profession

Structural engineers are responsible for engineering design and structural analysis. Entry-level structural engineers may design the individual structural elements of a structure, such as the beams and columns of a building (a worked sizing example follows below).
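As a worked example of the kind of element-level design mentioned above, here is a minimal sketch of sizing a simply supported beam for bending. The load, span, and allowable stress are assumed values; real design would follow a code and also check shear, deflection, and stability.

```python
def required_section_modulus(w_kn_per_m, span_m, sigma_allow_mpa):
    """Size a simply supported beam under a uniform load:
    max bending moment M = w*L^2/8, then S_req = M / sigma_allow."""
    m_max_knm = w_kn_per_m * span_m ** 2 / 8      # kN*m
    return m_max_knm * 1e6 / sigma_allow_mpa      # mm^3 (1 kN*m = 1e6 N*mm)

# 10 kN/m over a 6 m span, 165 MPa allowable bending stress (assumed):
print(required_section_modulus(10, 6, 165))  # ~2.7e5 mm^3
```

The engineer would then pick the lightest standard section whose tabulated section modulus exceeds this required value.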
More experienced engineers may be responsible for the structural design and integrity of an entire system, such as a building. Structural engineers often specialize in particular types of structures, such as buildings, bridges, pipelines, industrial structures, tunnels, vehicles, ships, aircraft, and spacecraft. Structural engineers who specialize in buildings may specialize in particular construction materials such as concrete, steel, wood, masonry, alloys and composites.

Structural engineering has existed since humans first started to construct their own structures. It became a more defined and formalized profession with the emergence of architecture as a profession distinct from engineering during the industrial revolution in the late 19th century. Until then, the architect and the structural engineer were usually one and the same: the master builder. Only with the development of specialized knowledge of structural theories that emerged during the 19th and early 20th centuries did the professional structural engineer come into existence.

The role of a structural engineer today involves a significant understanding of both static and dynamic loading and the structures that are available to resist them. The complexity of modern structures often requires a great deal of creativity from the engineer in order to ensure the structures support and resist the loads they are subjected to. A structural engineer will typically have a four- or five-year undergraduate degree, followed by a minimum of three years of professional practice before being considered fully qualified. Structural engineers are licensed or accredited by different learned societies and regulatory bodies around the world (for example, the Institution of Structural Engineers in the UK). Depending on the degree course they have studied and/or the jurisdiction they are seeking licensure in, they may be accredited (or licensed) as just structural engineers, or as civil engineers, or as both civil and structural engineers. Another international organisation is IABSE (International Association for Bridge and Structural Engineering). The aim of that association is to exchange knowledge and to advance the practice of structural engineering worldwide in the service of the profession and society.

Specializations

Building structures

Structural building engineering is primarily driven by the creative manipulation of materials and forms and the underlying mathematical and scientific ideas to achieve an end that fulfills its functional requirements and is structurally safe when subjected to all the loads it could reasonably be expected to experience. This is subtly different from architectural design, which is driven by the creative manipulation of materials and forms, mass, space, volume, texture, and light to achieve an end which is aesthetic, functional, and often artistic.

The structural design for a building must ensure that the building can stand up safely and can function without excessive deflections or movements which may cause fatigue of structural elements, cracking or failure of fixtures, fittings or partitions, or discomfort for occupants. It must account for movements and forces due to temperature, creep, cracking, and imposed loads. It must also ensure that the design is practically buildable within acceptable manufacturing tolerances of the materials. It must allow the architecture to work, and the building services to fit within the building and function (air conditioning, ventilation, smoke extract, electrics, lighting, etc.).
The structural design of a modern building can be extremely complex and often requires a large team to complete. Structural engineering specialties for buildings include:

Earthquake engineering
Façade engineering
Fire engineering
Roof engineering
Tower engineering
Wind engineering

Earthquake engineering structures

Earthquake engineering structures are those engineered to withstand earthquakes. The main objectives of earthquake engineering are to understand the interaction of structures with the shaking ground, foresee the consequences of possible earthquakes, and design and construct structures to perform well during an earthquake. Earthquake-proof structures are not necessarily extremely strong, like the El Castillo pyramid at Chichen Itza. One important tool of earthquake engineering is base isolation, which allows the base of a structure to move freely with the ground.

Civil engineering structures

Civil structural engineering includes all structural engineering related to the built environment, including structures such as bridges, dams, earthworks, foundations, offshore structures, pipelines, power stations, railways, retaining structures, reservoirs, roads, tunnels and waterways. The structural engineer is the lead designer on these structures, and often the sole designer. In the design of structures such as these, structural safety is of paramount importance (in the UK, designs for dams, nuclear power stations and bridges must be signed off by a chartered engineer). Civil engineering structures are often subjected to very extreme forces, such as large variations in temperature, dynamic loads such as waves or traffic, or high pressures from water or compressed gases. They are also often constructed in corrosive environments, such as at sea, in industrial facilities, or below ground.

Mechanical structures

The design of moving structures must account for fatigue, variation in the manner in which load is resisted, and significant deflections of structures. The forces which parts of a machine are subjected to can vary significantly, and can do so at a great rate. The forces which a boat or aircraft are subjected to vary enormously and will do so thousands of times over the structure's lifetime. The structural design must ensure that such structures can endure such loading for their entire design life without failing. These works can require mechanical structural engineering:

Boilers and pressure vessels
Coachworks and carriages
Cranes
Elevators
Escalators
Marine vessels and hulls

Aerospace structures

Aerospace structure types include launch vehicles (Atlas, Delta, Titan), missiles (ALCM, Harpoon), hypersonic vehicles (Space Shuttle), military aircraft (F-16, F-18) and commercial aircraft (Boeing 777, MD-11). Aerospace structures typically consist of thin plates with stiffeners for the external surfaces, bulkheads and frames to support the shape, and fasteners such as welds, rivets, screws, and bolts to hold the components together.

Nanoscale structures

A nanostructure is an object of intermediate size between molecular and microscopic (micrometer-sized) structures. In describing nanostructures it is necessary to differentiate between the number of dimensions on the nanoscale. Nanotextured surfaces have one dimension on the nanoscale, i.e., only the thickness of the surface of an object is between 0.1 and 100 nm. Nanotubes have two dimensions on the nanoscale, i.e., the diameter of the tube is between 0.1 and 100 nm; its length could be much greater. Finally, spherical nanoparticles have three dimensions on the nanoscale, i.e., the particle is between 0.1 and 100 nm in each spatial dimension. The terms nanoparticles and ultrafine particles (UFP) are often used synonymously, although UFP can reach into the micrometer range.
The term 'nanostructure' is often used when referring to magnetic technology.

Structural engineering for medical science

Medical equipment (also known as armamentarium) is designed to aid in the diagnosis, monitoring or treatment of medical conditions. There are several basic types: diagnostic equipment, which includes medical imaging machines used to aid in diagnosis; treatment equipment, which includes infusion pumps, medical lasers, and LASIK surgical machines; and medical monitors, which allow medical staff to measure a patient's medical state. Monitors may measure patient vital signs and other parameters including ECG, EEG, blood pressure, and dissolved gases in the blood. Diagnostic medical equipment may also be used in the home for certain purposes, e.g. for the control of diabetes mellitus. A biomedical equipment technician (BMET) is a vital component of the healthcare delivery system. Employed primarily by hospitals, BMETs are the people responsible for maintaining a facility's medical equipment.

Structural elements

Any structure is essentially made up of only a small number of different types of elements:

Columns
Beams
Plates
Arches
Shells
Catenaries

Many of these elements can be classified according to form (straight, plane/curve) and dimensionality (one-dimensional/two-dimensional).

Columns

Columns are elements that carry only axial force (compression) or both axial force and bending (which is technically called a beam-column but practically, just a column). The design of a column must check the axial capacity of the element and the buckling capacity. The buckling capacity is the capacity of the element to withstand the propensity to buckle. It depends upon the column's geometry, material, and effective length, which depends upon the restraint conditions at the top and bottom of the column. The effective length is $l_{\text{eff}} = K l$, where $l$ is the real length of the column and $K$ is a factor dependent on the restraint conditions (a numerical buckling example follows below). The capacity of a column to carry axial load depends on the degree of bending it is subjected to, and vice versa. This is represented on an interaction chart and is a complex non-linear relationship.

Beams

A beam may be defined as an element in which one dimension is much greater than the other two and the applied loads are usually normal to the main axis of the element. Beams and columns are called line elements and are often represented by simple lines in structural modeling. A beam may be:

cantilevered (supported at one end only with a fixed connection)
simply supported (fixed against vertical translation at each end and horizontal translation at one end only, and able to rotate at the supports)
fixed (supported in all directions for translation and rotation at each end)
continuous (supported by three or more supports)
a combination of the above (e.g. supported at one end and in the middle)

Beams are elements that carry pure bending only. Bending causes one part of the section of a beam (divided along its length) to go into compression and the other part into tension. The compression part must be designed to resist buckling and crushing, while the tension part must be able to adequately resist the tension.

Trusses

A truss is a structure comprising members and connection points or nodes. When members are connected at nodes and forces are applied at nodes, members can act in tension or compression. Members acting in compression are referred to as compression members or struts, while members acting in tension are referred to as tension members or ties.
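To illustrate the effective-length and buckling relations for columns given above, here is a minimal sketch of the Euler critical load with common textbook restraint factors K. The section properties are assumed values; real design applies code-based reductions for imperfections and inelasticity.

```python
import math

def euler_buckling_load_kn(e_gpa, i_mm4, length_mm, k):
    """Elastic critical load of a column: P_cr = pi^2 * E * I / (K*L)^2,
    with effective length K*L depending on the end restraints."""
    e_mpa = e_gpa * 1e3
    return math.pi ** 2 * e_mpa * i_mm4 / (k * length_mm) ** 2 / 1e3  # kN

# Steel column (E = 200 GPa), I = 8e6 mm^4, L = 4 m (assumed values):
for k in (0.5, 0.7, 1.0, 2.0):  # fixed-fixed, fixed-pinned, pinned, fixed-free
    print(k, round(euler_buckling_load_kn(200, 8e6, 4000, k)))
```

Halving the effective length quadruples the elastic buckling load, which is why end restraint matters as much as the section itself.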
Most trusses use gusset plates to connect intersecting elements. Gusset plates are relatively flexible and unable to transfer bending moments. The connection is usually arranged so that the lines of force in the members are coincident at the joint, thus allowing the truss members to act in pure tension or compression. Trusses are usually used in large-span structures, where it would be uneconomical to use solid beams.

Plates

Plates carry bending in two directions. A concrete flat slab is an example of a plate. Plates are understood by using continuum mechanics, but due to the complexity involved they are most often designed using a codified empirical approach or computer analysis. They can also be designed with yield line theory, where an assumed collapse mechanism is analyzed to give an upper bound on the collapse load. This technique is used in practice, but because the method provides an upper bound, i.e. an unsafe prediction of the collapse load, for poorly conceived collapse mechanisms, great care is needed to ensure that the assumed collapse mechanism is realistic.

Shells

Shells derive their strength from their form and carry forces in compression in two directions. A dome is an example of a shell. They can be designed by making a hanging-chain model, which will act as a catenary in pure tension, and inverting the form to achieve pure compression.

Arches

Arches carry forces in compression in one direction only, which is why it is appropriate to build arches out of masonry. They are designed by ensuring that the line of thrust of the force remains within the depth of the arch.

Catenaries

Catenaries derive their strength from their form and carry transverse forces in pure tension by deflecting (just as a tightrope will sag when someone walks on it). They are almost always cable or fabric structures. A fabric structure acts as a catenary in two directions.

Materials

Structural engineering depends on the knowledge of materials and their properties, in order to understand how different materials support and resist loads. It also involves a knowledge of corrosion engineering, to avoid, for example, galvanic coupling of dissimilar materials. Common structural materials are:

Iron: wrought iron, cast iron
Concrete: reinforced concrete, prestressed concrete
Alloy: steel, stainless steel
Masonry
Timber: hardwood, softwood
Aluminium
Composite materials: plywood
Other structural materials: adobe, bamboo, carbon fibre, fiber reinforced plastic, mudbrick, roofing materials

See also

Glossary of structural engineering
Aircraft structures
Architects
Architectural engineering
Building officials
Building services engineering
Civil engineering
Construction engineering
Corrosion engineering
Earthquake engineering
Forensic engineering
Index of structural engineering articles
List of bridge disasters
List of structural engineers
List of structural engineering software
Mechanical engineering
Nanostructure
Prestressed structure
Structurae
Structural engineer
Structural engineering software
Structural fracture mechanics
Structural failure
Structural robustness
Structural steel
Structural testing
External links Structural Engineering Association – International National Council of Structural Engineers Associations Structural Engineering Institute, an institute of the American Society of Civil Engineers Structurae database of structures The EN Eurocodes are a series of 10 European Standards, EN 1990 – EN 1999, providing a common approach for the design of buildings and other civil engineering works and construction products Civil engineering Engineering disciplines
Structural engineering
[ "Engineering" ]
4,439
[ "Structural engineering", "Civil engineering", "Construction", "nan" ]
46,095
https://en.wikipedia.org/wiki/Russell%27s%20paradox
In mathematical logic, Russell's paradox (also known as Russell's antinomy) is a set-theoretic paradox published by the British philosopher and mathematician Bertrand Russell in 1901. Russell's paradox shows that every set theory that contains an unrestricted comprehension principle leads to contradictions. According to the unrestricted comprehension principle, for any sufficiently well-defined property, there is the set of all and only the objects that have that property. Let R be the set of all sets that are not members of themselves. (This set is sometimes called "the Russell set".) If R is not a member of itself, then its definition entails that it is a member of itself; yet, if it is a member of itself, then it is not a member of itself, since it is the set of all sets that are not members of themselves. The resulting contradiction is Russell's paradox. In symbols: let \(R = \{ x \mid x \notin x \}\); then \(R \in R \iff R \notin R\). Russell also showed that a version of the paradox could be derived in the axiomatic system constructed by the German philosopher and mathematician Gottlob Frege, hence undermining Frege's attempt to reduce mathematics to logic and calling into question the logicist programme. Two influential ways of avoiding the paradox were both proposed in 1908: Russell's own type theory and the Zermelo set theory. In particular, Zermelo's axioms restricted the unlimited comprehension principle. With the additional contributions of Abraham Fraenkel, Zermelo set theory developed into the now-standard Zermelo–Fraenkel set theory (commonly known as ZFC when including the axiom of choice). The main difference between Russell's and Zermelo's solution to the paradox is that Zermelo modified the axioms of set theory while maintaining a standard logical language, while Russell modified the logical language itself. The language of ZFC, with the help of Thoralf Skolem, turned out to be that of first-order logic. The paradox had already been discovered independently in 1899 by the German mathematician Ernst Zermelo. However, Zermelo did not publish the idea, which remained known only to David Hilbert, Edmund Husserl, and other academics at the University of Göttingen. At the end of the 1890s, Georg Cantor – considered the founder of modern set theory – had already realized that his theory would lead to a contradiction, as he told Hilbert and Richard Dedekind by letter. Informal presentation Most sets commonly encountered are not members of themselves. Let us call a set "normal" if it is not a member of itself, and "abnormal" if it is a member of itself. Clearly every set must be either normal or abnormal. For example, consider the set of all squares in a plane. This set is not itself a square in the plane, thus it is not a member of itself and is therefore normal. In contrast, the complementary set that contains everything which is not a square in the plane is itself not a square in the plane, and so it is one of its own members and is therefore abnormal. Now we consider the set of all normal sets, R, and try to determine whether R is normal or abnormal. If R were normal, it would be contained in the set of all normal sets (itself), and therefore be abnormal; on the other hand if R were abnormal, it would not be contained in the set of all normal sets (itself), and therefore be normal. This leads to the conclusion that R is neither normal nor abnormal: Russell's paradox. Formal presentation The term "naive set theory" is used in various ways.
In one usage, naive set theory is a formal theory formulated in a first-order language with a binary non-logical predicate \(\in\), and that includes the axiom of extensionality: \(\forall x \, \forall y \, (\forall z \, (z \in x \leftrightarrow z \in y) \rightarrow x = y)\) and the axiom schema of unrestricted comprehension: \(\exists y \, \forall x \, (x \in y \leftrightarrow \varphi(x))\) for any formula \(\varphi\) with \(x\) as a free variable inside \(\varphi\). Substitute \(x \notin x\) for \(\varphi(x)\) to get \(\exists y \, \forall x \, (x \in y \leftrightarrow x \notin x)\). Then by existential instantiation (reusing the symbol \(y\)) and universal instantiation we have \(y \in y \leftrightarrow y \notin y\), a contradiction. Therefore, this naive set theory is inconsistent. Philosophical implications Prior to Russell's paradox (and to other similar paradoxes discovered around the time, such as the Burali-Forti paradox), a common conception of the idea of set was the "extensional concept of set", as recounted by von Neumann and Morgenstern: In particular, there was no distinction between sets and proper classes as collections of objects. Additionally, the existence of each of the elements of a collection was seen as sufficient for the existence of the set of said elements. However, paradoxes such as Russell's and Burali-Forti's showed the impossibility of this conception of set, by examples of collections of objects that do not form sets, despite all said objects being existent. Set-theoretic responses From the principle of explosion of classical logic, any proposition can be proved from a contradiction. Therefore, the presence of contradictions like Russell's paradox in an axiomatic set theory is disastrous, since if any formula can be proved true, it destroys the conventional meaning of truth and falsity. Further, since set theory was seen as the basis for an axiomatic development of all other branches of mathematics, Russell's paradox threatened the foundations of mathematics as a whole. This motivated a great deal of research around the turn of the 20th century to develop a consistent (contradiction-free) set theory. In 1908, Ernst Zermelo proposed an axiomatization of set theory that avoided the paradoxes of naive set theory by replacing arbitrary set comprehension with weaker existence axioms, such as his axiom of separation (Aussonderung). (Avoiding the paradoxes was not Zermelo's original intention; rather, he wanted to document which assumptions he used in proving the well-ordering theorem.) Modifications to this axiomatic theory proposed in the 1920s by Abraham Fraenkel, Thoralf Skolem, and by Zermelo himself resulted in the axiomatic set theory called ZFC. This theory became widely accepted once Zermelo's axiom of choice ceased to be controversial, and ZFC has remained the canonical axiomatic set theory down to the present day. ZFC does not assume that, for every property, there is a set of all things satisfying that property. Rather, it asserts that given any set X, any subset of X definable using first-order logic exists. The object R defined by Russell's paradox above cannot be constructed as a subset of any set X, and is therefore not a set in ZFC. In some extensions of ZFC, notably in von Neumann–Bernays–Gödel set theory, objects like R are called proper classes. ZFC is silent about types, although the cumulative hierarchy has a notion of layers that resemble types. Zermelo himself never accepted Skolem's formulation of ZFC using the language of first-order logic.
As José Ferreirós notes, Zermelo insisted instead that "propositional functions (conditions or predicates) used for separating off subsets, as well as the replacement functions, can be 'entirely arbitrary' [ganz beliebig]"; the modern interpretation given to this statement is that Zermelo wanted to include higher-order quantification in order to avoid Skolem's paradox. Around 1930, Zermelo also introduced (apparently independently of von Neumann) the axiom of foundation, thus—as Ferreirós observes—"by forbidding 'circular' and 'ungrounded' sets, it [ZFC] incorporated one of the crucial motivations of TT [type theory]—the principle of the types of arguments". This second-order ZFC preferred by Zermelo, including the axiom of foundation, allowed a rich cumulative hierarchy. Ferreirós writes that "Zermelo's 'layers' are essentially the same as the types in the contemporary versions of simple TT [type theory] offered by Gödel and Tarski. One can describe the cumulative hierarchy into which Zermelo developed his models as the universe of a cumulative TT in which transfinite types are allowed. (Once we have adopted an impredicative standpoint, abandoning the idea that classes are constructed, it is not unnatural to accept transfinite types.) Thus, simple TT and ZFC could now be regarded as systems that 'talk' essentially about the same intended objects. The main difference is that TT relies on a strong higher-order logic, while Zermelo employed second-order logic, and ZFC can also be given a first-order formulation. The first-order 'description' of the cumulative hierarchy is much weaker, as is shown by the existence of countable models (Skolem's paradox), but it enjoys some important advantages." In ZFC, given a set A, it is possible to define a set B that consists of exactly the sets in A that are not members of themselves. B cannot be in A, by the same reasoning as in Russell's paradox. This variation of Russell's paradox shows that no set contains everything. Through the work of Zermelo and others, especially John von Neumann, the structure of what some see as the "natural" objects described by ZFC eventually became clear: they are the elements of the von Neumann universe, V, built up from the empty set by transfinitely iterating the power set operation. It is thus now possible again to reason about sets in a non-axiomatic fashion without running afoul of Russell's paradox, namely by reasoning about the elements of V. Whether it is appropriate to think of sets in this way is a point of contention among the rival points of view on the philosophy of mathematics. Other solutions to Russell's paradox, with an underlying strategy closer to that of type theory, include Quine's New Foundations and Scott–Potter set theory. Yet another approach is to define a multiple membership relation with an appropriately modified comprehension scheme, as in the Double extension set theory. History Russell discovered the paradox in May or June 1901. By his own account in his 1919 Introduction to Mathematical Philosophy, he "attempted to discover some flaw in Cantor's proof that there is no greatest cardinal".
In a 1902 letter, he announced to Gottlob Frege his discovery of the paradox in Frege's 1879 Begriffsschrift, and framed the problem in terms of both logic and set theory, and in particular in terms of Frege's definition of function: Russell would go on to cover it at length in his 1903 The Principles of Mathematics, where he repeated his first encounter with the paradox: Russell wrote to Frege about the paradox just as Frege was preparing the second volume of his Grundgesetze der Arithmetik. Frege responded to Russell very quickly; his letter dated 22 June 1902 appeared, with van Heijenoort's commentary, in Heijenoort 1967:126–127. Frege then wrote an appendix admitting to the paradox, and proposed a solution that Russell would endorse in his Principles of Mathematics, but that was later considered by some to be unsatisfactory. For his part, Russell had his work at the printers, and he added an appendix on the doctrine of types. Ernst Zermelo in his (1908) A new proof of the possibility of a well-ordering (published at the same time he published "the first axiomatic set theory") laid claim to prior discovery of the antinomy in Cantor's naive set theory. He states: "And yet, even the elementary form that Russell9 gave to the set-theoretic antinomies could have persuaded them [J. König, Jourdain, F. Bernstein] that the solution of these difficulties is not to be sought in the surrender of well-ordering but only in a suitable restriction of the notion of set". Footnote 9 is where he stakes his claim: Frege sent a copy of his Grundgesetze der Arithmetik to Hilbert; as noted above, Frege's last volume mentioned the paradox that Russell had communicated to Frege. After receiving Frege's last volume, on 7 November 1903, Hilbert wrote a letter to Frege in which he said, referring to Russell's paradox, "I believe Dr. Zermelo discovered it three or four years ago". A written account of Zermelo's actual argument was discovered in the Nachlass of Edmund Husserl. In 1923, Ludwig Wittgenstein proposed to "dispose" of Russell's paradox as follows: The reason why a function cannot be its own argument is that the sign for a function already contains the prototype of its argument, and it cannot contain itself. For let us suppose that the function F(fx) could be its own argument: in that case there would be a proposition F(F(fx)), in which the outer function F and the inner function F must have different meanings, since the inner one has the form φ(fx) and the outer one has the form ψ(φ(fx)). Only the letter 'F' is common to the two functions, but the letter by itself signifies nothing. This immediately becomes clear if instead of F(Fu) we write (∃φ):F(φu).φu = Fu. That disposes of Russell's paradox. (Tractatus Logico-Philosophicus, 3.333) Russell and Alfred North Whitehead wrote their three-volume Principia Mathematica hoping to achieve what Frege had been unable to do. They sought to banish the paradoxes of naive set theory by employing a theory of types they devised for this purpose. While they succeeded in grounding arithmetic in a fashion, it is not at all evident that they did so by purely logical means. While Principia Mathematica avoided the known paradoxes and allowed the derivation of a great deal of mathematics, its system gave rise to new problems. In any event, Kurt Gödel in 1930–31 proved that while the logic of much of Principia Mathematica, now known as first-order logic, is complete, Peano arithmetic is necessarily incomplete if it is consistent.
This is very widely—though not universally—regarded as having shown the logicist program of Frege to be impossible to complete. In 2001, A Centenary International Conference celebrating the first hundred years of Russell's paradox was held in Munich, and its proceedings have been published. Applied versions There are some versions of this paradox that are closer to real-life situations and may be easier to understand for non-logicians. For example, the barber paradox supposes a barber who shaves all men who do not shave themselves and only men who do not shave themselves. When one thinks about whether the barber should shave himself or not, a similar paradox begins to emerge. An easy refutation of the "layman's versions" such as the barber paradox seems to be that no such barber exists, or that the barber is not a man, and so can exist without paradox. The whole point of Russell's paradox is that the answer "such a set does not exist" means the definition of the notion of set within a given theory is unsatisfactory. Note the difference between the statements "such a set does not exist" and "it is an empty set". It is like the difference between saying "There is no bucket" and saying "The bucket is empty". A notable exception to the above may be the Grelling–Nelson paradox, in which words and meaning are the elements of the scenario rather than people and hair-cutting. Though it is easy to refute the barber's paradox by saying that such a barber does not (and cannot) exist, it is impossible to say something similar about a meaningfully defined word. One way that the paradox has been dramatised is as follows: Suppose that every public library has to compile a catalogue of all its books. Since the catalogue is itself one of the library's books, some librarians include it in the catalogue for completeness, while others leave it out, as its being one of the library's books is self-evident. Now imagine that all these catalogues are sent to the national library. Some of them include themselves in their listings, others do not. The national librarian compiles two master catalogues—one of all the catalogues that list themselves, and one of all those that do not. The question is: should these master catalogues list themselves? The 'catalogue of all catalogues that list themselves' is no problem. If the librarian does not include it in its own listing, it remains a true catalogue of those catalogues that do include themselves. If he does include it, it remains a true catalogue of those that list themselves. However, just as the librarian cannot go wrong with the first master catalogue, he is doomed to fail with the second. When it comes to the 'catalogue of all catalogues that do not list themselves', the librarian cannot include it in its own listing, because then it would include itself, and so belong in the other catalogue, that of catalogues that do include themselves. However, if the librarian leaves it out, the catalogue is incomplete. Either way, it can never be a true master catalogue of catalogues that do not list themselves. Applications and related topics Russell-like paradoxes As illustrated above for the barber paradox, Russell's paradox is not hard to extend. Take: A transitive verb ⟨V⟩ that can be applied to its substantive form. Form the sentence: The ⟨V⟩er that ⟨V⟩s all (and only those) who do not ⟨V⟩ themselves. Sometimes the "all" is replaced by "all ⟨V⟩ers". An example would be "paint": The painter that paints all (and only those) that do not paint themselves.
or "elect" The elector (representative), that elects all that do not elect themselves. In the Season 8 episode of The Big Bang Theory, "The Skywalker Intrusion", Sheldon Cooper analyzes the song "Play That Funky Music", concluding that the lyrics present a musical example of Russell's Paradox. Paradoxes that fall in this scheme include: The barber with "shave". The original Russell's paradox with "contain": The container (Set) that contains all (containers) that do not contain themselves. The Grelling–Nelson paradox with "describer": The describer (word) that describes all words, that do not describe themselves. Richard's paradox with "denote": The denoter (number) that denotes all denoters (numbers) that do not denote themselves. (In this paradox, all descriptions of numbers get an assigned number. The term "that denotes all denoters (numbers) that do not denote themselves" is here called Richardian.) "I am lying.", namely the liar paradox and Epimenides paradox, whose origins are ancient Russell–Myhill paradox Related paradoxes The Burali-Forti paradox, about the order type of all well-orderings The Kleene–Rosser paradox, showing that the original lambda calculus is inconsistent, by means of a self-negating statement Curry's paradox (named after Haskell Curry), which does not require negation The smallest uninteresting integer paradox Girard's paradox in type theory See also Basic Law V "On Denoting" Quine's paradox Self-reference List of self–referential paradoxes Notes References Sources External links Bertrand Russell Eponymous paradoxes Paradoxes of naive set theory 1901 in science Self-referential paradoxes
Russell's paradox
[ "Mathematics" ]
4,036
[ "Basic concepts in infinite set theory", "Basic concepts in set theory", "Paradoxes of naive set theory" ]
46,096
https://en.wikipedia.org/wiki/Simpson%27s%20paradox
Simpson's paradox is a phenomenon in probability and statistics in which a trend appears in several groups of data but disappears or reverses when the groups are combined. This result is often encountered in social-science and medical-science statistics, and is particularly problematic when frequency data are unduly given causal interpretations. The paradox can be resolved when confounding variables and causal relations are appropriately addressed in the statistical modeling (e.g., through cluster analysis). Simpson's paradox has been used to illustrate the kind of misleading results that the misuse of statistics can generate. Edward H. Simpson first described this phenomenon in a technical paper in 1951, but the statisticians Karl Pearson (in 1899) and Udny Yule (in 1903) had mentioned similar effects earlier. The name Simpson's paradox was introduced by Colin R. Blyth in 1972. It is also referred to as Simpson's reversal, the Yule–Simpson effect, the amalgamation paradox, or the reversal paradox. Mathematician Jordan Ellenberg argues that Simpson's paradox is misnamed, as "there's no contradiction involved, just two different ways to think about the same data", and suggests that its lesson "isn't really to tell us which viewpoint to take but to insist that we keep both the parts and the whole in mind at once." Examples UC Berkeley gender bias One of the best-known examples of Simpson's paradox comes from a study of gender bias among graduate school admissions to University of California, Berkeley. The admission figures for the fall of 1973 showed that men applying were more likely than women to be admitted, and the difference was so large that it was unlikely to be due to chance. However, when the data were grouped by the department applied to, the differing rejection percentages revealed the differing difficulty of getting into each department, and showed that women tended to apply to more competitive departments with lower rates of admission, even among qualified applicants (such as the English department), whereas men tended to apply to less competitive departments with higher rates of admission (such as the engineering department). The pooled and corrected data showed a "small but statistically significant bias in favor of women". The data from the six largest departments are listed below: The entire data set showed 4 out of 85 departments to be significantly biased against women, while 6 were significantly biased against men (not all present in the 'six largest departments' table above). Notably, the numbers of biased departments were not the basis for the conclusion; rather, it was the gender admissions pooled across all departments, weighted by each department's rejection rate across all of its applicants. Kidney stone treatment Another example comes from a real-life medical study comparing the success rates of two treatments for kidney stones. The table below shows the success rates (the term success rate here actually means the success proportion) and numbers of treatments for treatments involving both small and large kidney stones, where Treatment A includes open surgical procedures and Treatment B includes closed surgical procedures. The numbers in parentheses indicate the number of success cases over the total size of the group.
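A short Lua sketch, using the figures commonly quoted for this study (an assumption on our part, since the table itself is not reproduced here), makes the reversal easy to verify:

-- {successes, total} for each treatment and stone size (assumed figures).
local data = {
  A = { small = {81, 87}, large = {192, 263} },
  B = { small = {234, 270}, large = {55, 80} },
}
local function pct(s) return 100 * s[1] / s[2] end
for _, t in ipairs({"A", "B"}) do
  local small, large = data[t].small, data[t].large
  local combined = { small[1] + large[1], small[2] + large[2] }
  print(string.format("Treatment %s: small %.0f%%, large %.0f%%, combined %.0f%%",
    t, pct(small), pct(large), pct(combined)))
end
-- Treatment A: small 93%, large 73%, combined 78%
-- Treatment B: small 87%, large 69%, combined 83%
-- A is better within each stratum, yet B looks better on the combined data.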
The paradoxical conclusion is that treatment A is more effective when used on small stones, and also when used on large stones, yet treatment B appears to be more effective when considering both sizes at the same time. In this example, the "lurking" variable (or confounding variable) causing the paradox is the size of the stones, which was not previously known to researchers to be important until its effects were included. Which treatment is considered better is determined by which success ratio (successes/total) is larger. The reversal of the inequality between the two ratios when considering the combined data, which creates Simpson's paradox, happens because two effects occur together: The sizes of the groups, which are combined when the lurking variable is ignored, are very different. Doctors tend to give cases with large stones the better treatment A, and the cases with small stones the inferior treatment B. Therefore, the totals are dominated by groups 3 and 2, and not by the two much smaller groups 1 and 4. The lurking variable, stone size, has a large effect on the ratios; i.e., the success rate is more strongly influenced by the severity of the case than by the choice of treatment. Therefore, the group of patients with large stones using treatment A (group 3) does worse than the group with small stones, even if the latter used the inferior treatment B (group 2). Based on these effects, the paradoxical result is seen to arise because the effect of the size of the stones overwhelms the benefits of the better treatment (A). In short, the less effective treatment B appeared to be more effective because it was applied more frequently to the small stones cases, which were easier to treat. Jaynes argues that the correct conclusion is that though treatment A remains noticeably better than treatment B, the kidney stone size is more important. Batting averages A common example of Simpson's paradox involves the batting averages of players in professional baseball. It is possible for one player to have a higher batting average than another player each year for a number of years, but to have a lower batting average across all of those years. This phenomenon can occur when there are large differences in the number of at bats between the years. Mathematician Ken Ross demonstrated this using the batting average of two baseball players, Derek Jeter and David Justice, during the years 1995 and 1996: In both 1995 and 1996, Justice had a higher batting average (in bold type) than Jeter did. However, when the two baseball seasons are combined, Jeter shows a higher batting average than Justice. According to Ross, this phenomenon would be observed about once per year among the possible pairs of players. Vector interpretation Simpson's paradox can also be illustrated using a 2-dimensional vector space. A success rate of \(p/q\) (i.e., successes/attempts) can be represented by a vector \(\vec{A} = (q, p)\), with a slope of \(p/q\). A steeper vector then represents a greater success rate. If two rates \(p_1/q_1\) and \(p_2/q_2\) are combined, as in the examples given above, the result can be represented by the sum of the vectors \((q_1, p_1)\) and \((q_2, p_2)\), which according to the parallelogram rule is the vector \((q_1 + q_2, p_1 + p_2)\), with slope \(\frac{p_1 + p_2}{q_1 + q_2}\). Simpson's paradox says that even if a vector \(\vec{L}_1\) (in orange in the figure) has a smaller slope than another vector \(\vec{B}_1\) (in blue), and \(\vec{L}_2\) has a smaller slope than \(\vec{B}_2\), the sum of the two vectors \(\vec{L}_1 + \vec{L}_2\) can potentially still have a larger slope than the sum of the two vectors \(\vec{B}_1 + \vec{B}_2\), as shown in the example.
For this to occur, one of the orange vectors must have a greater slope than one of the blue vectors (here \(\vec{L}_2\) and \(\vec{B}_1\)), and these will generally be longer than the alternatively subscripted vectors – thereby dominating the overall comparison. Correlation between variables Simpson's reversal can also arise in correlations, in which two variables appear to have (say) a positive correlation towards one another, when in fact they have a negative correlation, the reversal having been brought about by a "lurking" confounder. Berman et al. give an example from economics, where a dataset suggests overall demand is positively correlated with price (that is, higher prices lead to more demand), in contradiction of expectation. Analysis reveals time to be the confounding variable: plotting both price and demand against time reveals the expected negative correlation over various periods, which then reverses to become positive if the influence of time is ignored by simply plotting demand against price. Psychology Psychological interest in Simpson's paradox seeks to explain why people deem sign reversal to be impossible at first. The question is where people get this strong intuition from, and how it is encoded in the mind. Simpson's paradox demonstrates that this intuition cannot be derived from either classical logic or probability calculus alone, and thus led philosophers to speculate that it is supported by an innate causal logic that guides people in reasoning about actions and their consequences. Savage's sure-thing principle is an example of what such logic may entail. A qualified version of Savage's sure-thing principle can indeed be derived from Pearl's do-calculus and reads: "An action A that increases the probability of an event B in each subpopulation Ci of C must also increase the probability of B in the population as a whole, provided that the action does not change the distribution of the subpopulations." This suggests that knowledge about actions and consequences is stored in a form resembling Causal Bayesian Networks. Probability A paper by Pavlides and Perlman presents a proof, due to Hadjicostas, that in a random 2 × 2 × 2 table with uniform distribution, Simpson's paradox will occur with a probability of exactly 1/60. A study by Kock suggests that the probability that Simpson's paradox would occur at random in path models (i.e., models generated by path analysis) with two predictors and one criterion variable is approximately 12.8 percent; slightly higher than 1 occurrence per 8 path models. Simpson's second paradox A second, less well-known paradox was also discussed in Simpson's 1951 paper. It can occur when the "sensible interpretation" is not necessarily found in the separated data, as in the kidney stone example, but can instead reside in the combined data. Whether the partitioned or combined form of the data should be used hinges on the process giving rise to the data, meaning the correct interpretation of the data cannot always be determined by simply observing the tables.
Judea Pearl has shown that, in order for the partitioned data to represent the correct causal relationships between any two variables, \(X\) and \(Y\), the partitioning variables must satisfy a graphical condition called the "back-door criterion": They must block all spurious paths between \(X\) and \(Y\), and no partitioning variable can be affected by \(X\). This criterion provides an algorithmic solution to Simpson's second paradox, and explains why the correct interpretation cannot be determined by data alone; two different graphs, both compatible with the data, may dictate two different back-door criteria. When the back-door criterion is satisfied by a set Z of covariates, the adjustment formula (see Confounding) gives the correct causal effect of X on Y. If no such set exists, Pearl's do-calculus can be invoked to discover other ways of estimating the causal effect. The completeness of do-calculus can be viewed as offering a complete resolution of Simpson's paradox. Criticism One criticism is that the paradox is not really a paradox at all, but rather a failure to properly account for confounding variables or to consider causal relationships between variables. Another criticism of the apparent Simpson's paradox is that it may be a result of the specific way that data are stratified or grouped. The phenomenon may disappear or even reverse if the data is stratified differently or if different confounding variables are considered. Simpson's example actually highlighted a phenomenon called noncollapsibility, which occurs when subgroups with high proportions do not make simple averages when combined. This suggests that the paradox may not be a universal phenomenon, but rather a specific instance of a more general statistical issue. Critics of the apparent Simpson's paradox also argue that the focus on the paradox may distract from more important statistical issues, such as the need for careful consideration of confounding variables and causal relationships when interpreting data. Despite these criticisms, the apparent Simpson's paradox remains a popular and intriguing topic in statistics and data analysis. It continues to be studied and debated by researchers and practitioners in a wide range of fields, and it serves as a valuable reminder of the importance of careful statistical analysis and the potential pitfalls of simplistic interpretations of data. See also Spurious correlation Omitted-variable bias External links Simpson's Paradox at the Stanford Encyclopedia of Philosophy, by Jan Sprenger and Naftali Weinberger. How statistics can be misleading – Mark Liddell – TED-Ed video and lesson. Pearl, Judea, "Understanding Simpson's Paradox" (PDF) Simpson's Paradox, a short article by Alexander Bogomolny on the vector interpretation of Simpson's paradox The Wall Street Journal column "The Numbers Guy" for December 2, 2009 dealt with recent instances of Simpson's paradox in the news. Notably a Simpson's paradox in the comparison of unemployment rates of the 2009 recession with the 1983 recession. At the Plate, a Statistical Puzzler: Understanding Simpson's Paradox by Arthur Smith, August 20, 2010 Simpson's Paradox, a video by Henry Reich of MinutePhysics Probability theory paradoxes Statistical paradoxes Causal inference 1951 introductions
Simpson's paradox
[ "Mathematics" ]
2,624
[ "Probability theory paradoxes", "Mathematical problems", "Statistical paradoxes", "Mathematical paradoxes" ]
46,143
https://en.wikipedia.org/wiki/Planner%20%28programming%20language%29
Planner (often seen in publications as "PLANNER" although it is not an acronym) is a programming language designed by Carl Hewitt at MIT, and first published in 1969. First, subsets such as Micro-Planner and Pico-Planner were implemented, and then essentially the whole language was implemented as Popler by Julian Davies at the University of Edinburgh in the POP-2 programming language. Derivations such as QA4, Conniver, QLISP and Ether (see scientific community metaphor) were important tools in artificial intelligence research in the 1970s, which influenced commercial developments such as Knowledge Engineering Environment (KEE) and Automated Reasoning Tool (ART). Procedural approach versus logical approach The two major paradigms for constructing semantic software systems were procedural and logical. The procedural paradigm was epitomized by Lisp, which featured recursive procedures that operated on list structures. The logical paradigm was epitomized by uniform proof procedure resolution-based derivation (proof) finders. According to the logical paradigm it was "cheating" to incorporate procedural knowledge. Procedural embedding of knowledge Planner was invented for the purposes of the procedural embedding of knowledge and was a rejection of the resolution uniform proof procedure paradigm, which first converted everything to clausal form and then used resolution to attempt to obtain a proof by contradiction by adding the clausal form of the negation of the theorem to be proved. Converting all information to clausal form is problematic because it hides the underlying structure of the information; using only resolution as the rule of inference is problematic because it hides the underlying structure of proofs; and using proof by contradiction is problematic because the axiomatizations of all practical domains of knowledge are inconsistent in practice. Planner was a kind of hybrid between the procedural and logical paradigms because it combined programmability with logical reasoning. Planner featured a procedural interpretation of logical sentences, where an implication of the form \((P \implies Q)\) can be procedurally interpreted in the following ways using pattern-directed invocation: Forward chaining (antecedently): when \(P\) is asserted, assert \(Q\). Backward chaining (consequently): when \(Q\) is a goal, establish \(P\) as a subgoal (a toy sketch of this goal-directed reading appears below). In this respect, the development of Planner was influenced by natural deductive logical systems (especially the one by Frederic Fitch [1952]). Micro-planner implementation A subset called Micro-Planner was implemented by Gerry Sussman, Eugene Charniak and Terry Winograd and was used in Winograd's natural-language understanding program SHRDLU, Eugene Charniak's story understanding work, Thorne McCarty's work on legal reasoning, and some other projects. This generated a great deal of excitement in the field of AI. It also generated controversy because it proposed an alternative to the logic approach that had been one of the mainstay paradigms for AI. At SRI International, Jeff Rulifson, Jan Derksen, and Richard Waldinger developed QA4, which built on the constructs in Planner and introduced a context mechanism to provide modularity for expressions in the database. Earl Sacerdoti and Rene Reboh developed QLISP, an extension of QA4 embedded in INTERLISP, providing Planner-like reasoning embedded in a procedural language and developed in its rich programming environment.
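To make the consequent (backward-chaining) reading concrete, here is a minimal sketch in Lua; it is a hypothetical illustration, not Planner's actual syntax or semantics. A goal succeeds if some rule whose head matches it has all of its subgoals succeed, and a goal is rejected only by exhaustive failure, in the spirit of negation as failure:

-- Hypothetical rule base: each entry reads "head holds if every goal in body holds".
-- A fact is simply a rule with an empty body.
local rules = {
  { head = "mortal(socrates)", body = { "human(socrates)" } },
  { head = "human(socrates)", body = {} },
}

-- Backward chaining: to achieve a goal, find a rule whose head matches it
-- and recursively achieve every subgoal in its body.
local function prove(goal)
  for _, rule in ipairs(rules) do
    if rule.head == goal then
      local allProved = true
      for _, subgoal in ipairs(rule.body) do
        if not prove(subgoal) then
          allProved = false
          break
        end
      end
      if allProved then return true end
    end
  end
  return false -- exhaustive failure, cf. "negation as failure"
end

print(prove("mortal(socrates)")) -- true
print(prove("mortal(zeus)")) -- false: no assertion or implication applies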
QLISP was used by Richard Waldinger and Karl Levitt for program verification, by Earl Sacerdoti for planning and execution monitoring, by Jean-Claude Latombe for computer-aided design, by Nachum Dershowitz for program synthesis, by Richard Fikes for deductive retrieval, and by Steven Coles for an early expert system that guided use of an econometric model. Computers were expensive. They had only a single slow processor and their memories were very small by comparison with today. So Planner adopted some efficiency expedients, including the following: Backtracking was adopted to economize on the use of time and storage by working on and storing only one possibility at a time in exploring alternatives. A unique name assumption was adopted to save space and time by assuming that different names referred to different objects. For example, names like Peking (an older transliteration of the name of the PRC capital) and Beijing (the current transliteration) were assumed to refer to different objects. A closed-world assumption could be implemented by conditionally testing whether an attempt to prove a goal exhaustively failed. Later this capability was given the misleading name "negation as failure", because for a goal G it was possible to say: "if attempting to achieve G exhaustively fails, then assert (not G)." The genesis of Prolog Gerry Sussman, Eugene Charniak, Seymour Papert and Terry Winograd visited the University of Edinburgh in 1971, spreading the news about Micro-Planner and SHRDLU and casting doubt on the resolution uniform proof procedure approach that had been the mainstay of the Edinburgh Logicists. At the University of Edinburgh, Bruce Anderson implemented a subset of Micro-Planner called PICO-PLANNER, and Julian Davies (1973) implemented essentially all of Planner. According to Donald MacKenzie, Pat Hayes recalled the impact of a visit from Papert to Edinburgh, which had become the "heart of artificial intelligence's Logicland," according to Papert's MIT colleague, Carl Hewitt. Papert eloquently voiced his critique of the resolution approach dominant at Edinburgh "…and at least one person upped sticks and left because of Papert." The above developments generated tension among the Logicists at Edinburgh. These tensions were exacerbated when the UK Science Research Council commissioned Sir James Lighthill to write a report on the AI research situation in the UK. The resulting report [Lighthill 1973; McCarthy 1973] was highly critical, although SHRDLU was favorably mentioned. Pat Hayes visited Stanford, where he learned about Planner. When he returned to Edinburgh, he tried to influence his friend Bob Kowalski to take Planner into account in their joint work on automated theorem proving. "Resolution theorem-proving was demoted from a hot topic to a relic of the misguided past. Bob Kowalski doggedly stuck to his faith in the potential of resolution theorem proving. He carefully studied Planner." Kowalski [1988] states "I can recall trying to convince Hewitt that Planner was similar to SL-resolution." But Planner was invented for the purposes of the procedural embedding of knowledge and was a rejection of the resolution uniform proof procedure paradigm. Colmerauer and Roussel recalled their reaction to learning about Planner in the following way: "While attending an IJCAI convention in September '71 with Jean Trudel, we met Robert Kowalski again and heard a lecture by Terry Winograd on natural language processing. The fact that he did not use a unified formalism left us puzzled.
It was at this time that we learned of the existence of Carl Hewitt's programming language, Planner. The lack of formalization of this language, our ignorance of Lisp and, above all, the fact that we were absolutely devoted to logic meant that this work had little influence on our later research." In the fall of 1972, Philippe Roussel implemented a language called Prolog (an abbreviation for PROgrammation en LOGique – French for "programming in logic"). Prolog programs are generically of the form H :- B1, ..., Bn, read as "to establish the goal H, establish the subgoals B1 through Bn", which is a special case of the backward chaining in Planner. Prolog duplicated the following aspects of Micro-Planner: Pattern-directed invocation of procedures from goals (i.e. backward chaining). An indexed database of pattern-directed procedures and ground sentences. Giving up on the completeness paradigm that had characterized previous work on theorem proving and replacing it with the programming language procedural embedding of knowledge paradigm. Prolog also duplicated the following capabilities of Micro-Planner, which were pragmatically useful for the computers of the era because they saved space and time: Backtracking control structure. Unique Name Assumption, by which different names are assumed to refer to distinct entities, e.g., Peking and Beijing are assumed to be different. Reification of Failure. The way that Planner established that something was provable was to successfully attempt it as a goal, and the way that it established that something was unprovable was to attempt it as a goal and explicitly fail. Of course, the other possibility is that the attempt to prove the goal runs forever and never returns any value. Planner also had a construct (not G) which succeeded if the goal G failed, which gave rise to the "Negation as Failure" terminology in Planner. Use of the Unique Name Assumption and Negation as Failure became more questionable when attention turned to Open Systems. The following capabilities of Micro-Planner were omitted from Prolog: Pattern-directed invocation of procedural plans from assertions (i.e., forward chaining). Logical negation, e.g., (not P). Prolog did not include negation in part because it raises implementation issues. Consider, for example, what happens if negation were included in the following Prolog program: The above program would be unable to prove the corresponding goal even though it follows by the rules of mathematical logic. This is an illustration of the fact that Prolog (like Planner) is intended to be a programming language, and so does not (by itself) prove many of the logical consequences that follow from a declarative reading of its programs. The work on Prolog was valuable in that it was much simpler than Planner. However, as the need arose for greater expressive power in the language, Prolog began to include many of the capabilities of Planner that were left out of the original version of Prolog. References Bibliography Bruce Anderson. Documentation for LIB PICO-PLANNER. School of Artificial Intelligence, Edinburgh University, 1972. Bruce Baumgart. Micro-Planner Alternate Reference Manual. Stanford AI Lab Operating Note No. 67, April 1972. Carl Hewitt. "The Challenge of Open Systems". Byte Magazine, April 1985. Carl Hewitt and Jeff Inman. "DAI Betwixt and Between: From 'Intelligent Agents' to Open Systems Science". IEEE Transactions on Systems, Man, and Cybernetics, Nov/Dec 1991. Carl Hewitt and Gul Agha. "Guarded Horn clause languages: are they deductive and logical?". International Conference on Fifth Generation Computer Systems, Ohmsha 1988, Tokyo.
Also in Artificial Intelligence at MIT, Vol. 2. MIT Press, 1991. William Kornfeld and Carl Hewitt. The Scientific Community Metaphor. MIT AI Memo 641, January 1981. Bill Kornfeld and Carl Hewitt. "The Scientific Community Metaphor". IEEE Transactions on Systems, Man, and Cybernetics, January 1981. Bill Kornfeld. "The Use of Parallelism to Implement a Heuristic Search". IJCAI 1981. Bill Kornfeld. "Parallelism in Problem Solving". MIT EECS Doctoral Dissertation, August 1981. Bill Kornfeld. "Combinatorially Implosive Algorithms". CACM, 1982. Robert Kowalski. "The Limitations of Logic". Proceedings of the 1986 ACM fourteenth annual conference on Computer science. Robert Kowalski. "The Early Years of Logic Programming". CACM, January 1988. Gerry Sussman and Terry Winograd. Micro-planner Reference Manual. AI Memo No. 203, MIT Project MAC, July 1970. Terry Winograd. Procedures as a Representation for Data in a Computer Program for Understanding Natural Language. MIT AI TR-235, January 1971. Gerry Sussman, Terry Winograd and Eugene Charniak. Micro-Planner Reference Manual (Update). AI Memo 203A, MIT AI Lab, December 1971. Carl Hewitt. Description and Theoretical Analysis (Using Schemata) of Planner, A Language for Proving Theorems and Manipulating Models in a Robot. AI Memo No. 251, MIT Project MAC, April 1972. Eugene Charniak. Toward a Model of Children's Story Comprehension. MIT AI TR-266, December 1972. Julian Davies. Popler 1.6 Reference Manual. University of Edinburgh, TPU Report No. 1, May 1973. Jeff Rulifson, Jan Derksen, and Richard Waldinger. "QA4, A Procedural Calculus for Intuitive Reasoning". SRI AI Center Technical Note 73, November 1973. Scott Fahlman. "A Planning System for Robot Construction Tasks". MIT AI TR-283, June 1973. James Lighthill. "Artificial Intelligence: A General Survey". Artificial Intelligence: a paper symposium. UK Science Research Council, 1973. John McCarthy. "Review of 'Artificial Intelligence: A General Survey'". Artificial Intelligence: a paper symposium. UK Science Research Council, 1973. Robert Kowalski. "Predicate Logic as Programming Language". Memo 70, Department of Artificial Intelligence, Edinburgh University, 1973. Pat Hayes. Computation and Deduction. Mathematical Foundations of Computer Science: Proceedings of Symposium and Summer School, Štrbské Pleso, High Tatras, Czechoslovakia, September 3–8, 1973. Carl Hewitt, Peter Bishop and Richard Steiger. "A Universal Modular Actor Formalism for Artificial Intelligence". IJCAI 1973. L. Thorne McCarty. "Reflections on TAXMAN: An Experiment on Artificial Intelligence and Legal Reasoning". Harvard Law Review, Vol. 90, No. 5, March 1977. Drew McDermott and Gerry Sussman. The Conniver Reference Manual. MIT AI Memo 259A, January 1974. Earl Sacerdoti, et al. "QLISP: A Language for the Interactive Development of Complex Systems". AFIPS, 1976. History of artificial intelligence Automated planning and scheduling Logic programming languages Robot programming languages Theorem proving software systems Programming languages created in 1969
Planner (programming language)
[ "Mathematics" ]
2,736
[ "Theorem proving software systems", "Automated theorem proving", "Mathematical software" ]
46,150
https://en.wikipedia.org/wiki/Lua%20%28programming%20language%29
Lua is a lightweight, high-level, multi-paradigm programming language designed mainly for embedded use in applications. Lua is cross-platform software, since the interpreter of compiled bytecode is written in ANSI C, and Lua has a relatively simple C application programming interface (API) to embed it into applications. Lua originated in 1993 as a language for extending software applications to meet the increasing demand for customization at the time. It provided the basic facilities of most procedural programming languages, but more complicated or domain-specific features were not included; rather, it included mechanisms for extending the language, allowing programmers to implement such features. As Lua was intended to be a general embeddable extension language, the designers of Lua focused on improving its speed, portability, extensibility and ease-of-use in development. History Lua was created in 1993 by Roberto Ierusalimschy, Luiz Henrique de Figueiredo and Waldemar Celes, members of the Computer Graphics Technology Group (Tecgraf) at the Pontifical Catholic University of Rio de Janeiro, in Brazil. From 1977 until 1992, Brazil had a policy of strong trade barriers (called a market reserve) for computer hardware and software, believing that Brazil could and should produce its own hardware and software. In that climate, Tecgraf's clients could not afford, either politically or financially, to buy customized software from abroad; under the market reserve, clients would have to go through a complex bureaucratic process to prove their needs couldn't be met by Brazilian companies. Those reasons led Tecgraf to implement the basic tools it needed from scratch. Lua's predecessors were the data-description/configuration languages Simple Object Language (SOL) and data-entry language (DEL). They had been independently developed at Tecgraf in 1992–1993 to add some flexibility into two different projects (both were interactive graphical programs for engineering applications at Petrobras company). There was a lack of any flow-control structures in SOL and DEL, and Petrobras felt a growing need to add full programming power to them. In The Evolution of Lua, the language's authors wrote: Lua 1.0 was designed in such a way that its object constructors, being then slightly different from the current light and flexible style, incorporated the data-description syntax of SOL (hence the name Lua: Sol meaning "Sun" in Portuguese, and Lua meaning "Moon"). Lua syntax for control structures was mostly borrowed from Modula (if, while, repeat/until), but also had taken influence from CLU (multiple assignments and multiple returns from function calls, as a simpler alternative to reference parameters or explicit pointers), C++ ("neat idea of allowing a local variable to be declared only where we need it"), SNOBOL and AWK (associative arrays). In an article published in Dr. Dobb's Journal, Lua's creators also state that LISP and Scheme with their single, ubiquitous data-structure mechanism (the list) were a major influence on their decision to develop the table as the primary data structure of Lua. Lua semantics have been increasingly influenced by Scheme over time, especially with the introduction of anonymous functions and full lexical scoping. Several features were added in new Lua versions. Versions of Lua prior to version 5.0 were released under a license similar to the BSD license. From version 5.0 onwards, Lua has been licensed under the MIT License. 
Both are permissive free software licences and are almost identical. Features Lua is commonly described as a "multi-paradigm" language, providing a small set of general features that can be extended to fit different problem types. Lua does not contain explicit support for inheritance, but allows it to be implemented with metatables. Similarly, Lua allows programmers to implement namespaces, classes and other related features using its single table implementation; first-class functions allow the employment of many techniques from functional programming and full lexical scoping allows fine-grained information hiding to enforce the principle of least privilege. In general, Lua strives to provide simple, flexible meta-features that can be extended as needed, rather than supply a feature-set specific to one programming paradigm. As a result, the base language is light; the full reference interpreter is only about 247 kB compiled and easily adaptable to a broad range of applications. As a dynamically typed language intended for use as an extension language or scripting language, Lua is compact enough to fit on a variety of host platforms. It supports only a small number of atomic data structures such as Boolean values, numbers (double-precision floating point and 64-bit integers by default) and strings. Typical data structures such as arrays, sets, lists and records can be represented using Lua's single native data structure, the table, which is essentially a heterogeneous associative array. Lua implements a small set of advanced features such as first-class functions, garbage collection, closures, proper tail calls, coercion (automatic conversion between string and number values at run time), coroutines (cooperative multitasking) and dynamic module loading. Syntax The classic "Hello, World!" program can be written as follows, with or without parentheses:

print("Hello, World!")
print "Hello, World!"

The declaration of a variable, without a value:

local variable

The declaration of a variable with a value of 10:

local students = 10

A comment in Lua starts with a double-hyphen and runs to the end of the line, similar to Ada, Eiffel, Haskell, SQL and VHDL. Multi-line strings and comments are marked with double square brackets.

-- Single line comment
--[[ Multi-line
comment --]]

The factorial function is implemented in this example:

function factorial(n)
  local x = 1
  for i = 2, n do
    x = x * i
  end
  return x
end

Control flow Lua has one type of conditional test: if then end with optional else and elseif then execution control constructs. The generic if then end statement requires all three keywords:

if condition then
  --statement body
end

An example of an if statement:

if x ~= 10 then
  print(x)
end

The else keyword may be added with an accompanying statement block to control execution when the if condition evaluates to false:

if condition then
  --statement body
else
  --statement body
end

An example of an if else statement:

if x == 10 then
  print(10)
else
  print(x)
end

Execution may also be controlled according to multiple conditions using the elseif then keywords:

if condition then
  --statement body
elseif condition then
  --statement body
else -- optional
  --optional default statement body
end

An example of an if elseif else statement:

if x == y then
  print("x = y")
elseif x == z then
  print("x = z")
else -- optional
  print("x does not equal any other variable")
end

Lua has four types of conditional loops: the while loop, the repeat loop (similar to a do while loop), the numeric for loop and the generic for loop.
--condition = true
while condition do
  --statements
end

repeat
  --statements
until condition

for i = first, last, delta do  --delta may be negative, allowing the for loop to count down or up
  --statements
  --example: print(i)
end

This generic for loop would iterate over the table _G using the standard iterator function pairs, until it returns nil:

for key, value in pairs(_G) do
  print(key, value)
end

Loops can also be nested (put inside of another loop).

local grid = {
  { 11, 12, 13 },
  { 21, 22, 23 },
  { 31, 32, 33 }
}

for y, row in pairs(grid) do
  for x, value in pairs(row) do
    print(x, y, value)
  end
end

Functions Lua's treatment of functions as first-class values is shown in the following example, where the print function's behavior is modified:

do
  local oldprint = print  -- Store current print function as oldprint
  function print(s)
    --[[ Redefine print function. The usual print function can still be used
    through oldprint. The new one has only one argument. ]]
    oldprint(s == "foo" and "bar" or s)
  end
end

Any future calls to print will now be routed through the new function, and because of Lua's lexical scoping, the old print function will only be accessible by the new, modified print. Lua also supports closures, as demonstrated below:

function addto(x)
  -- Return a new function that adds x to the argument
  return function(y)
    --[[ When we refer to the variable x, which is outside the current
    scope and whose lifetime would be shorter than that of this
    anonymous function, Lua creates a closure. ]]
    return x + y
  end
end
fourplus = addto(4)
print(fourplus(3))  -- Prints 7

--This can also be achieved by calling the function in the following way:
print(addto(4)(3))
--[[ This is because we are calling the returned function from 'addto(4)'
with the argument '3' directly. This also helps to reduce data cost and
up performance if being called iteratively. ]]

A new closure for the variable x is created every time addto is called, so that each new anonymous function returned will always access its own x parameter. The closure is managed by Lua's garbage collector, just like any other object. Tables Tables are the most important data structures (and, by design, the only built-in composite data type) in Lua and are the foundation of all user-created types. They are associative arrays with the addition of automatic numeric keys and special syntax. A table is a set of key and data pairs, where the data is referenced by key; in other words, it is a hashed heterogeneous associative array. Tables are created using the {} constructor syntax.

a_table = {} -- Creates a new, empty table

Tables are always passed by reference (see Call by sharing). A key (index) can be any value except nil and NaN, including functions.

a_table = {x = 10} -- Creates a new table, with one entry mapping "x" to the number 10.
print(a_table["x"]) -- Prints the value associated with the string key, in this case 10.
b_table = a_table
b_table["x"] = 20 -- The value in the table has been changed to 20.
print(b_table["x"]) -- Prints 20.
print(a_table["x"]) -- Also prints 20, because a_table and b_table both refer to the same table.

A table is often used as structure (or record) by using strings as keys. Because such use is very common, Lua features a special syntax for accessing such fields.

point = { x = 10, y = 20 } -- Create new table
print(point["x"]) -- Prints 10
print(point.x) -- Has exactly the same meaning as line above.

The easier-to-read dot notation is just syntactic sugar. By using a table to store related functions, it can act as a namespace.
Point = {} Point.new = function(x, y) return {x = x, y = y} -- return {["x"] = x, ["y"] = y} end Point.set_x = function(point, x) point.x = x -- point["x"] = x; end Tables are automatically assigned a numerical key, enabling them to be used as an array data type. The first automatic index is 1 rather than 0 as it is for many other programming languages (though an explicit index of 0 is allowed). A numeric key 1 is distinct from a string key "1". array = { "a", "b", "c", "d" } -- Indices are assigned automatically. print(array[2]) -- Prints "b". Automatic indexing in Lua starts at 1. print(#array) -- Prints 4. # is the length operator for tables and strings. array[0] = "z" -- Zero is a legal index. print(#array) -- Still prints 4, as Lua arrays are 1-based. The length of a table t is defined to be any integer index n such that t[n] is not nil and t[n+1] is nil; moreover, if t[1] is nil, n can be zero. For a regular array, with non-nil values from 1 to a given n, its length is exactly that n, the index of its last value. If the array has "holes" (that is, nil values between other non-nil values), then #t can be any of the indices that directly precedes a nil value (that is, it may consider any such nil value as the end of the array). ExampleTable = { {1, 2, 3, 4}, {5, 6, 7, 8} } print(ExampleTable[1][3]) -- Prints "3" print(ExampleTable[2][4]) -- Prints "8" A table can be an array of objects. function Point(x, y) -- "Point" object constructor return { x = x, y = y } -- Creates and returns a new object (table) end array = { Point(10, 20), Point(30, 40), Point(50, 60) } -- Creates array of points -- array = { { x = 10, y = 20 }, { x = 30, y = 40 }, { x = 50, y = 60 } }; print(array[2].y) -- Prints 40 Using a hash map to emulate an array is normally slower than using an actual array; however, Lua tables are optimized for use as arrays to help avoid this issue. Metatables Extensible semantics is a key feature of Lua, and the metatable concept allows powerful customization of tables. The following example demonstrates an "infinite" table. For any n, fibs[n] will give the n-th Fibonacci number using dynamic programming and memoization. fibs = { 1, 1 } -- Initial values for fibs[1] and fibs[2]. setmetatable(fibs, { __index = function(values, n) --[[__index is a function predefined by Lua, it is called if key "n" does not exist.]] values[n] = values[n - 1] + values[n - 2] -- Calculate and memoize fibs[n]. return values[n] end }) Object-oriented programming Although Lua does not have a built-in concept of classes, object-oriented programming can be emulated using functions and tables. An object is formed by putting methods and fields in a table. Inheritance (both single and multiple) can be implemented with metatables, delegating nonexistent methods and fields to a parent object. There is no such concept as "class" with these techniques; rather, prototypes are used, similar to Self or JavaScript. New objects are created either with a factory method (that constructs new objects from scratch) or by cloning an existing object. 
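A minimal sketch of the cloning approach mentioned above (the clone helper below is illustrative, not a standard library function):
function clone(obj)
  -- Copy each field of the original into a fresh table (a shallow copy)
  local copy = {}
  for key, value in pairs(obj) do
    copy[key] = value
  end
  -- Share the original's metatable so any methods continue to work
  return setmetatable(copy, getmetatable(obj))
end
Calling clone on an existing object then yields an independent object with the same fields and behavior.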
Creating a basic vector object: local Vector = {} local VectorMeta = { __index = Vector } function Vector.new(x, y, z) -- The constructor return setmetatable({x = x, y = y, z = z}, VectorMeta) end function Vector.magnitude(self) -- Another method return math.sqrt(self.x^2 + self.y^2 + self.z^2) end local vec = Vector.new(0, 1, 0) -- Create a vector print(vec.magnitude(vec)) -- Call a method (output: 1) print(vec.x) -- Access a member variable (output: 0) Here, the __index = Vector entry in the metatable tells Lua to look for an element in the Vector table if it is not present in the vec table. vec.magnitude, which is equivalent to vec["magnitude"], first looks in the vec table for the magnitude element. The vec table does not have a magnitude element, but its metatable delegates to the Vector table for the magnitude element when it is not found in the vec table. Lua provides some syntactic sugar to facilitate object orientation. To declare member functions inside a prototype table, one can use function table:func(args), which is equivalent to function table.func(self, args). Calling class methods also makes use of the colon: object:func(args) is equivalent to object.func(object, args). With that in mind, here is a corresponding class with syntactic sugar: local Vector = {} Vector.__index = Vector function Vector:new(x, y, z) -- The constructor -- Since the function definition uses a colon, -- its first argument is "self", which refers -- to "Vector" return setmetatable({x = x, y = y, z = z}, self) end function Vector:magnitude() -- Another method -- Reference the implicit object using self return math.sqrt(self.x^2 + self.y^2 + self.z^2) end local vec = Vector:new(0, 1, 0) -- Create a vector print(vec:magnitude()) -- Call a method (output: 1) print(vec.x) -- Access a member variable (output: 0) Inheritance Lua supports using metatables to implement class inheritance. In this example, we allow vectors to have their values multiplied by a constant in a derived class. local Vector = {} Vector.__index = Vector function Vector:new(x, y, z) -- The constructor -- Here, self refers to whatever class's "new" -- method we call. In a derived class, self will -- be the derived class; in the Vector class, self -- will be Vector return setmetatable({x = x, y = y, z = z}, self) end function Vector:magnitude() -- Another method -- Reference the implicit object using self return math.sqrt(self.x^2 + self.y^2 + self.z^2) end -- Example of class inheritance local VectorMult = {} VectorMult.__index = VectorMult setmetatable(VectorMult, Vector) -- Make VectorMult a child of Vector function VectorMult:multiply(value) self.x = self.x * value self.y = self.y * value self.z = self.z * value return self end local vec = VectorMult:new(0, 1, 0) -- Create a vector print(vec:magnitude()) -- Call a method (output: 1) print(vec.y) -- Access a member variable (output: 1) vec:multiply(2) -- Multiply all components of vector by 2 print(vec.y) -- Access member again (output: 2) Lua also supports multiple inheritance; __index can be either a function or a table. Operator overloading can also be done; Lua metatables can have elements such as __add, __sub, and so on. Implementation Lua programs are not interpreted directly from the textual Lua file, but are compiled into bytecode, which is then run on the Lua virtual machine (VM). The compiling process is typically invisible to the user and is performed during run-time, especially when a just-in-time (JIT) compiler is used, but it can be done offline to increase loading performance or reduce the memory footprint of the host environment by leaving out the compiler. Lua bytecode can also be produced and executed from within Lua, using the dump function from the string library and the load/loadstring/loadfile functions.
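As a brief sketch of that round trip (the function names here are illustrative):
local function square(n)
  return n * n
end
local bytecode = string.dump(square)  -- serialize the compiled function to a binary string
local restored = load(bytecode)       -- load accepts bytecode as well as source text (use loadstring in Lua 5.1)
print(restored(6))                    -- prints 36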
Lua version 5.3.4 is implemented in approximately 24,000 lines of C code. Like most CPUs, and unlike most virtual machines (which are stack-based), the Lua VM is register-based, and therefore more closely resembles most hardware design. The register architecture both avoids excessive copying of values, and reduces the total number of instructions per function. The virtual machine of Lua 5 is one of the first register-based pure VMs to have a wide use. Parrot and Android's Dalvik are two other well-known register-based VMs. PCScheme's VM was also register-based. This example is the bytecode listing of the factorial function defined above (as shown by the luac 5.1 compiler): function <factorial.lua:1,7> (9 instructions, 36 bytes at 0x8063c60) 1 param, 6 slots, 0 upvalues, 6 locals, 2 constants, 0 functions 1 [2] LOADK 1 -1 ; 1 2 [3] LOADK 2 -2 ; 2 3 [3] MOVE 3 0 4 [3] LOADK 4 -1 ; 1 5 [3] FORPREP 2 1 ; to 7 6 [4] MUL 1 1 5 7 [3] FORLOOP 2 -2 ; to 6 8 [6] RETURN 1 2 9 [7] RETURN 0 1 C API Lua is intended to be embedded into other applications, and provides a C API for this purpose. The API is divided into two parts: the Lua core and the Lua auxiliary library. The Lua API's design eliminates the need for manual reference counting (management) in C code, unlike Python's API. The API, like the language, is minimalist. Advanced functions are provided by the auxiliary library, which consists largely of preprocessor macros which assist with complex table operations. The Lua C API is stack based. Lua provides functions to push and pop most simple C data types (integers, floats, etc.) to and from the stack, and functions to manipulate tables through the stack. The Lua stack is somewhat different from a traditional stack; the stack can be indexed directly, for example. Negative indices indicate offsets from the top of the stack. For example, −1 is the top (most recently pushed value), while positive indices indicate offsets from the bottom (oldest value). Marshalling data between C and Lua functions is also done using the stack. To call a Lua function, arguments are pushed onto the stack, and then the lua_call is used to call the actual function. When writing a C function to be directly called from Lua, the arguments are read from the stack. Here is an example of calling a Lua function from C: #include <stdio.h> #include <lua.h> // Lua main library (lua_*) #include <lauxlib.h> // Lua auxiliary library (luaL_*) int main(void) { // create a Lua state lua_State *L = luaL_newstate(); // load and execute a string if (luaL_dostring(L, "function foo (x,y) return x+y end")) { lua_close(L); return -1; } // push value of global "foo" (the function defined above) // to the stack, followed by integers 5 and 3 lua_getglobal(L, "foo"); lua_pushinteger(L, 5); lua_pushinteger(L, 3); lua_call(L, 2, 1); // call a function with two arguments and one return value printf("Result: %d\n", lua_tointeger(L, -1)); // print integer value of item at stack top lua_pop(L, 1); // return stack to original state lua_close(L); // close Lua state return 0; } Running this example gives: $ cc -o example example.c -llua $ ./example Result: 8 The C API also provides some special tables, located at various "pseudo-indices" in the Lua stack. At LUA_GLOBALSINDEX prior to Lua 5.2 is the globals table, _G from within Lua, which is the main namespace. There is also a registry located at LUA_REGISTRYINDEX where C programs can store Lua values for later retrieval. 
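From the Lua side, the same registry can be inspected through the debug library; a small sketch (for exploration only, since the registry is normally reserved for C code):
local registry = debug.getregistry()  -- the table that C code addresses at LUA_REGISTRYINDEX
print(type(registry))                 -- prints "table"
-- In Lua 5.2 and later, registry index 2 conventionally holds the globals table
print(registry[2] == _G)              -- prints true under that convention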
Modules Besides the standard library (core) modules, it is possible to write extensions using the Lua API. Extension modules are shared objects which can be used to extend the functionality of the interpreter by providing native facilities to Lua scripts. Lua scripts may load extension modules using require, just like modules written in Lua itself, or with package.loadlib. When a C library foo is loaded via require("foo"), Lua will look for the function luaopen_foo and call it; this function acts like any C function callable from Lua and generally returns a table filled with methods. A growing set of modules termed rocks is available through a package management system named LuaRocks, in the spirit of CPAN, RubyGems and Python eggs. Prewritten Lua bindings exist for most popular programming languages, including other scripting languages. For C++, there are a number of template-based approaches and some automatic binding generators. Applications In video game development, Lua is widely used as a scripting language, mainly due to its perceived ease of embedding, fast execution, and short learning curve. Notable games which use Lua include Roblox, Garry's Mod, World of Warcraft, Payday 2, Phantasy Star Online 2, Dota 2, Crysis, and many others. Some games that do not natively support Lua programming or scripting have this functionality added by mods, as ComputerCraft does for Minecraft. Lua is also used in non-video-game software, such as Adobe Lightroom, Moho, iClone, Aerospike, and some system software in FreeBSD and NetBSD, and is used as a template scripting language on MediaWiki using the Scribunto extension. In 2003, a poll conducted by GameDev.net showed Lua was the most popular scripting language for game programming. On 12 January 2012, Lua was announced as a winner of the Front Line Award 2011 from the magazine Game Developer in the category Programming Tools. Many non-game applications also use Lua for extensibility, such as LuaTeX, an implementation of the TeX type-setting language; Redis, a key-value database; ScyllaDB, a wide-column store; Neovim, a text editor; Nginx, a web server; Wireshark, a network packet analyzer; and Pure Data, a visual audio programming language (through the pdlua extension). Through the Scribunto extension, Lua is available as a server-side scripting language in the MediaWiki software that runs Wikipedia and other wikis. Among its uses are allowing the integration of data from Wikidata into articles. Derived languages Languages that compile to Lua MoonScript is a dynamic, whitespace-sensitive scripting language inspired by CoffeeScript, which is compiled into Lua. This means that instead of using do and end (or { and }) to delimit sections of code, it uses line breaks and an indentation-based style. A notable use of MoonScript is the video game distribution website Itch.io. Haxe supports compiling to some Lua targets, including Lua 5.1–5.3 and LuaJIT 2.0 and 2.1. Fennel, a Lisp dialect that targets Lua. Urn, a Lisp dialect built on Lua. Amulet, an ML-like functional programming language whose compiler emits Lua files. Dialects LuaJIT, a just-in-time compiler for Lua 5.1. Luau, developed by Roblox Corporation, a derivative of Lua 5.1 with gradual typing, additional features and a focus on performance. Ravi, a JIT-enabled Lua 5.3 language with optional static typing; the JIT is guided by type information. Shine, a fork of LuaJIT with many extensions, including a module system and a macro system. Glua, a modified version embedded into the game Garry's Mod as its scripting language.
Teal, a statically typed Lua dialect written in Lua. In addition, the Lua users community provides some power patches on top of the reference C implementation. See also Comparison of programming languages
Lua (programming language)
[ "Technology", "Engineering" ]
6,361
[ "Embedded systems", "Computer science", "Computer engineering", "Computer systems" ]
46,178
https://en.wikipedia.org/wiki/SQUID
A SQUID (superconducting quantum interference device) is a very sensitive magnetometer used to measure extremely weak magnetic fields, based on superconducting loops containing Josephson junctions. SQUIDs are sensitive enough to measure fields as low as 5×10−18 T with a few days of averaged measurements. Their noise levels are as low as 3 fT·Hz−1/2. For comparison, a typical refrigerator magnet produces 0.01 tesla (10−2 T), and some processes in animals produce very small magnetic fields between 10−9 T and 10−6 T. SERF atomic magnetometers, invented in the early 2000s, are potentially more sensitive and do not require cryogenic refrigeration, but are orders of magnitude larger in size (~1 cm3) and must be operated in a near-zero magnetic field. History and design There are two main types of SQUID: direct current (DC) and radio frequency (RF). RF SQUIDs can work with only one Josephson junction (superconducting tunnel junction), which might make them cheaper to produce, but are less sensitive. DC SQUID The DC SQUID was invented in 1964 by Robert Jaklevic, John J. Lambe, James Mercereau, and Arnold Silver of Ford Research Labs after Brian Josephson postulated the Josephson effect in 1962, and the first Josephson junction was made by John Rowell and Philip Anderson at Bell Labs in 1963. It has two Josephson junctions in parallel in a superconducting loop. It is based on the DC Josephson effect. In the absence of any external magnetic field, the input current I splits into the two branches equally. If a small external magnetic field is applied to the superconducting loop, a screening current, Is, begins to circulate around the loop; it generates a magnetic field canceling the applied external flux, and creates an additional Josephson phase which is proportional to this external magnetic flux. The induced current is in the same direction as I/2 in one of the branches of the superconducting loop, and is opposite to I/2 in the other branch; the total current becomes I/2 + Is in one branch and I/2 − Is in the other. As soon as the current in either branch exceeds the critical current, Ic, of the Josephson junction, a voltage appears across the junction. Now suppose the external flux is further increased until it exceeds Φ0/2, half the magnetic flux quantum. Since the flux enclosed by the superconducting loop must be an integer number of flux quanta, instead of screening the flux the SQUID now energetically prefers to increase it to Φ0. The current now flows in the opposite direction, opposing the difference between the admitted flux Φ0 and the external field of just over Φ0/2. The current decreases as the external field is increased, is zero when the flux is exactly Φ0, and again reverses direction as the external field is further increased. Thus, the screening current changes direction periodically, every time the flux increases by an additional half-integer multiple of Φ0, with a change at maximum amperage every half-plus-integer multiple of Φ0 and at zero amps every integer multiple. If the input current is more than 2Ic, then the SQUID always operates in the resistive mode. The voltage in this case is thus a function of the applied magnetic field, with a period equal to Φ0. Since the current–voltage characteristic of the DC SQUID is hysteretic, a shunt resistance R is connected across the junction to eliminate the hysteresis (in the case of copper oxide based high-temperature superconductors, the junction's own intrinsic resistance is usually sufficient). The screening current is the applied flux divided by the self-inductance of the ring.
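In symbols, the flux-quantization argument above can be summarized briefly (a sketch in the notation of this section, writing Φext for the applied external flux, L for the loop self-inductance and n for an integer):
\Phi_{\text{total}} = \Phi_{\text{ext}} + L I_s = n\,\Phi_0
\quad\Longrightarrow\quad
I_s = -\frac{\Phi_{\text{ext}} - n\,\Phi_0}{L}
With n chosen to minimize |Φext − nΦ0|, the screening current Is is a sawtooth-like periodic function of Φext with period Φ0, maximal in magnitude at half-integer multiples of Φ0 and zero at integer multiples, as described above.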
Thus the output voltage change ΔV can be estimated as a function of the flux change ΔΦ (flux-to-voltage converter) as follows: the change in the circulating current is of order ΔI ≈ ΔΦ/L, and the resulting voltage change across the shunted junctions is ΔV = R·ΔI ≈ (R/L)·ΔΦ, where L is the self-inductance of the superconducting ring. The discussion in this section assumed perfect flux quantization in the loop. However, this is only true for big loops with a large self-inductance. According to the relations given above, this implies also small current and voltage variations. In practice the self-inductance L of the loop is not so large. The general case can be evaluated by introducing a parameter λ = Ic·L/Φ0, where Ic is the critical current of the SQUID. Usually λ is of order one. RF SQUID The RF SQUID was invented in 1967 by Robert Jaklevic, John J. Lambe, Arnold Silver, and James Edward Zimmerman at Ford. It is based on the AC Josephson effect and uses only one Josephson junction. It is less sensitive compared to the DC SQUID but is cheaper and easier to manufacture in smaller quantities. Most fundamental measurements in biomagnetism, even of extremely small signals, have been made using RF SQUIDs. The RF SQUID is inductively coupled to a resonant tank circuit. Depending on the external magnetic field, as the SQUID operates in the resistive mode, the effective inductance of the tank circuit changes, thus changing the resonant frequency of the tank circuit. These frequency measurements can be easily taken, and thus the losses which appear as the voltage across the load resistor in the circuit are a periodic function of the applied magnetic flux with a period of Φ0. For a precise mathematical description refer to the original paper by Erné et al. Materials used The traditional superconducting materials for SQUIDs are pure niobium or a lead alloy with 10% gold or indium, as pure lead is unstable when its temperature is repeatedly changed. To maintain superconductivity, the entire device needs to operate within a few degrees of absolute zero, cooled with liquid helium. High-temperature SQUID sensors were developed in the late 1980s. They are made of high-temperature superconductors, particularly YBCO, and are cooled by liquid nitrogen, which is cheaper and more easily handled than liquid helium. They are less sensitive than conventional low-temperature SQUIDs but good enough for many applications. In 2006, a proof of concept was shown for CNT-SQUID sensors built with an aluminium loop and a single-walled carbon nanotube Josephson junction. The sensors are a few hundred nanometres in size and operate at 1 K or below. Such sensors make it possible to count spins. In 2022, a SQUID was constructed on magic-angle twisted bilayer graphene (MATBG). Uses The extreme sensitivity of SQUIDs makes them ideal for studies in biology. Magnetoencephalography (MEG), for example, uses measurements from an array of SQUIDs to make inferences about neural activity inside brains. Because SQUIDs can operate at acquisition rates much higher than the highest temporal frequency of interest in the signals emitted by the brain (kHz), MEG achieves good temporal resolution. Another area where SQUIDs are used is magnetogastrography, which is concerned with recording the weak magnetic fields of the stomach. A novel application of SQUIDs is the magnetic marker monitoring method, which is used to trace the path of orally applied drugs. In the clinical environment SQUIDs are used in cardiology for magnetic field imaging (MFI), which detects the magnetic field of the heart for diagnosis and risk stratification. Probably the most common commercial use of SQUIDs is in magnetic property measurement systems (MPMS).
These are turn-key systems, made by several manufacturers, that measure the magnetic properties of a material sample, typically at temperatures between 300 mK and 400 K. With the decreasing size of SQUID sensors over the last decade, such a sensor can equip the tip of an AFM probe. Such a device allows simultaneous measurement of the roughness of a sample's surface and the local magnetic flux. SQUIDs are also being used as detectors to perform magnetic resonance imaging (MRI). While high-field MRI uses precession fields of one to several teslas, SQUID-detected MRI uses measurement fields that lie in the microtesla range. In a conventional MRI system, the signal scales as the square of the measurement frequency (and hence precession field): one power of frequency comes from the thermal polarization of the spins at ambient temperature, while the second power of field comes from the fact that the induced voltage in the pickup coil is proportional to the frequency of the precessing magnetization. In the case of untuned SQUID detection of prepolarized spins, however, the NMR signal strength is independent of precession field, allowing MRI signal detection in extremely weak fields, on the order of Earth's magnetic field. SQUID-detected MRI has advantages over high-field MRI systems, such as the low cost required to build such a system, and its compactness. The principle has been demonstrated by imaging human extremities, and its future application may include tumor screening. Another application is the scanning SQUID microscope, which uses a SQUID immersed in liquid helium as the probe. The use of SQUIDs in oil prospecting, mineral exploration, earthquake prediction and geothermal energy surveying is becoming more widespread as superconductor technology develops; they are also used as precision movement sensors in a variety of scientific applications, such as the detection of gravitational waves. A SQUID is the sensor in each of the four gyroscopes employed on Gravity Probe B in order to test the limits of the theory of general relativity. A modified RF SQUID was used to observe the dynamical Casimir effect for the first time. SQUIDs constructed from super-cooled niobium wire loops are used as the basis for D-Wave Systems' 2000Q quantum computer. Transition-edge sensors One of the largest uses of SQUIDs is to read out superconducting transition-edge sensors. Hundreds of thousands of multiplexed SQUIDs coupled to transition-edge sensors are presently being deployed to study the cosmic microwave background, for X-ray astronomy, to search for dark matter made up of weakly interacting massive particles, and for spectroscopy at synchrotron light sources. Cold dark matter Advanced SQUIDs called near-quantum-limited SQUID amplifiers form the basis of the Axion Dark Matter Experiment (ADMX) at the University of Washington. Axions are a prime candidate for cold dark matter. Proposed uses A potential military application exists for use in anti-submarine warfare as a magnetic anomaly detector (MAD) fitted to maritime patrol aircraft. SQUIDs are used in superparamagnetic relaxometry (SPMR), a technology that utilizes the high magnetic field sensitivity of SQUID sensors and the superparamagnetic properties of magnetite nanoparticles. These nanoparticles are paramagnetic; they have no magnetic moment until exposed to an external field, where they become ferromagnetic.
After removal of the magnetizing field, the nanoparticles decay from a ferromagnetic state to a paramagnetic state, with a time constant that depends upon the particle size and whether they are bound to an external surface. Measurement of the decaying magnetic field by SQUID sensors is used to detect and localize the nanoparticles. Applications for SPMR may include cancer detection. See also Aharonov–Bohm effect Electromagnetism Geophysics Macroscopic quantum phenomena
SQUID
[ "Physics", "Materials_science", "Technology", "Engineering" ]
2,268
[ "Josephson effect", "Physical quantities", "Superconductivity", "Materials science", "Measuring instruments", "Condensed matter physics", "Magnetometers", "Electrical resistance and conductance" ]
46,182
https://en.wikipedia.org/wiki/White%20noise
In signal processing, white noise is a random signal having equal intensity at different frequencies, giving it a constant power spectral density. The term is used with this or similar meanings in many scientific and technical disciplines, including physics, acoustical engineering, telecommunications, and statistical forecasting. White noise refers to a statistical model for signals and signal sources, not to any specific signal. White noise draws its name from white light, although light that appears white generally does not have a flat power spectral density over the visible band. In discrete time, white noise is a discrete signal whose samples are regarded as a sequence of serially uncorrelated random variables with zero mean and finite variance; a single realization of white noise is a random shock. In some contexts, it is also required that the samples be independent and have identical probability distribution (in other words, independent and identically distributed random variables are the simplest representation of white noise). In particular, if each sample has a normal distribution with zero mean, the signal is said to be additive white Gaussian noise. The samples of a white noise signal may be sequential in time, or arranged along one or more spatial dimensions. In digital image processing, the pixels of a white noise image are typically arranged in a rectangular grid, and are assumed to be independent random variables with uniform probability distribution over some interval. The concept can be defined also for signals spread over more complicated domains, such as a sphere or a torus. An infinite-bandwidth white noise signal is a purely theoretical construction. The bandwidth of white noise is limited in practice by the mechanism of noise generation, by the transmission medium and by finite observation capabilities. Thus, random signals are considered white noise if they are observed to have a flat spectrum over the range of frequencies that are relevant to the context. For an audio signal, the relevant range is the band of audible sound frequencies (between 20 and 20,000 Hz). Such a signal is heard by the human ear as a hissing sound, resembling the /h/ sound in a sustained aspiration. On the other hand, the sh sound in ash is a colored noise because it has a formant structure. In music and acoustics, the term white noise may be used for any signal that has a similar hissing sound. In the context of phylogenetically based statistical methods, the term white noise can refer to a lack of phylogenetic pattern in comparative data. In nontechnical contexts, it is sometimes used to mean "random talk without meaningful contents". Statistical properties Any distribution of values is possible (although it must have zero DC component). Even a binary signal which can only take on the values 1 or −1 will be white if the sequence is statistically uncorrelated. Noise having a continuous distribution, such as a normal distribution, can of course be white. It is often incorrectly assumed that Gaussian noise (i.e., noise with a Gaussian amplitude distribution; see normal distribution) necessarily refers to white noise, yet neither property implies the other. Gaussianity refers to the probability distribution with respect to the value, in this context the probability of the signal falling within any particular range of amplitudes, while the term 'white' refers to the way the signal power is distributed (i.e., independently) over time or among frequencies.
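As a minimal illustration of the point that whiteness concerns correlation rather than the amplitude distribution, the following sketch draws a binary ±1 sequence whose samples are independent and therefore white, yet decidedly non-Gaussian:
-- Independent equiprobable +1/-1 samples: zero mean, uncorrelated, flat spectrum
math.randomseed(os.time())
local samples = {}
for i = 1, 16 do
  samples[i] = (math.random() < 0.5) and -1 or 1
end
print(table.concat(samples, " "))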
One form of white noise is the generalized mean-square derivative of the Wiener process or Brownian motion. A generalization to random elements on infinite dimensional spaces, such as random fields, is the white noise measure. Practical applications Music White noise is commonly used in the production of electronic music, usually either directly or as an input for a filter to create other types of noise signal. It is used extensively in audio synthesis, typically to recreate percussive instruments such as cymbals or snare drums which have high noise content in their frequency domain. A simple example of white noise is a nonexistent radio station (static). Electronics engineering White noise is also used to obtain the impulse response of an electrical circuit, in particular of amplifiers and other audio equipment. It is not used for testing loudspeakers as its spectrum contains too great an amount of high-frequency content. Pink noise, which differs from white noise in that it has equal energy in each octave, is used for testing transducers such as loudspeakers and microphones. Computing White noise is used as the basis of some random number generators. For example, Random.org uses a system of atmospheric antennas to generate random digit patterns from sources that can be well-modeled by white noise. Tinnitus treatment White noise is a common synthetic noise source used for sound masking by a tinnitus masker. White noise machines and other white noise sources are sold as privacy enhancers and sleep aids (see music and sleep) and to mask tinnitus. The Marpac Sleep-Mate was the first domestic use white noise machine built in 1962 by traveling salesman Jim Buckwalter. Alternatively, the use of an AM radio tuned to unused frequencies ("static") is a simpler and more cost-effective source of white noise. However, white noise generated from a common commercial radio receiver tuned to an unused frequency is extremely vulnerable to being contaminated with spurious signals, such as adjacent radio stations, harmonics from non-adjacent radio stations, electrical equipment in the vicinity of the receiving antenna causing interference, or even atmospheric events such as solar flares and especially lightning. Work environment The effects of white noise upon cognitive function are mixed. Recently, a small study found that white noise background stimulation improves cognitive functioning among secondary students with attention deficit hyperactivity disorder (ADHD), while decreasing performance of non-ADHD students. Other work indicates it is effective in improving the mood and performance of workers by masking background office noise, but decreases cognitive performance in complex card sorting tasks. Similarly, an experiment was carried out on sixty-six healthy participants to observe the benefits of using white noise in a learning environment. The experiment involved the participants identifying different images whilst having different sounds in the background. Overall the experiment showed that white noise does in fact have benefits in relation to learning. The experiments showed that white noise improved the participants' learning abilities and their recognition memory slightly. 
Mathematical definitions White noise vector A random vector w (that is, a random variable with values in Rn) is said to be a white noise vector or white random vector if its components each have a probability distribution with zero mean and finite variance, and are statistically independent: that is, their joint probability distribution must be the product of the distributions of the individual components. A necessary (but, in general, not sufficient) condition for statistical independence of two variables is that they be statistically uncorrelated; that is, their covariance is zero. Therefore, the covariance matrix R of the components of a white noise vector w with n elements must be an n by n diagonal matrix, where each diagonal element Rii is the variance of component wi; and the correlation matrix must be the n by n identity matrix. If, in addition to being independent, every variable in w also has a normal distribution with zero mean and the same variance σ2, w is said to be a Gaussian white noise vector. In that case, the joint distribution of w is a multivariate normal distribution; the independence between the variables then implies that the distribution has spherical symmetry in n-dimensional space. Therefore, any orthogonal transformation of the vector will result in a Gaussian white random vector. In particular, under most types of discrete Fourier transform, such as FFT and Hartley, the transform W of w will be a Gaussian white noise vector, too; that is, the n Fourier coefficients of w will be independent Gaussian variables with zero mean and the same variance σ2. The power spectrum P of a random vector w can be defined as the expected value of the squared modulus of each coefficient of its Fourier transform W, that is, Pi = E(|Wi|2). Under that definition, a Gaussian white noise vector will have a perfectly flat power spectrum, with Pi = σ2 for all i. If w is a white random vector, but not a Gaussian one, its Fourier coefficients Wi will not be completely independent of each other; although for large n and common probability distributions the dependencies are very subtle, and their pairwise correlations can be assumed to be zero. Often the weaker condition statistically uncorrelated is used in the definition of white noise, instead of statistically independent. However, some of the commonly expected properties of white noise (such as flat power spectrum) may not hold for this weaker version. Under this assumption, the stricter version can be referred to explicitly as independent white noise vector. Other authors use strongly white and weakly white instead. An example of a random vector that is Gaussian white noise in the weak but not in the strong sense is w = (w1, w2), where w1 is a normal random variable with zero mean, and w2 is equal to w1 or to −w1, with equal probability. These two variables are uncorrelated and individually normally distributed, but they are not jointly normally distributed and are not independent. If w is rotated by 45 degrees, its two components will still be uncorrelated, but their distribution will no longer be normal. In some situations, one may relax the definition by allowing each component of a white random vector w to have non-zero expected value μ. In image processing especially, where samples are typically restricted to positive values, one often takes μ to be one half of the maximum sample value.
In that case, the Fourier coefficient corresponding to the zero-frequency component (essentially, the average of the wi) will also have a non-zero expected value; and the power spectrum P will be flat only over the non-zero frequencies. Discrete-time white noise A discrete-time stochastic process W(n) is a generalization of a random vector with a finite number of components to infinitely many components. A discrete-time stochastic process W(n) is called white noise if its mean is equal to zero for all n, i.e. E[W(n)] = 0, and if the autocorrelation function R_W(n1, n2) = E[W(n1)·W(n2)] has a nonzero value only for n1 = n2, i.e. R_W(n1, n2) = σ2·δ(n1 − n2). Continuous-time white noise In order to define the notion of white noise in the theory of continuous-time signals, one must replace the concept of a random vector by a continuous-time random signal; that is, a random process that generates a function w of a real-valued parameter t. Such a process is said to be white noise in the strongest sense if the value w(t) for any time t is a random variable that is statistically independent of its entire history before t. A weaker definition requires independence only between the values w(t1) and w(t2) at every pair of distinct times t1 and t2. An even weaker definition requires only that such pairs w(t1) and w(t2) be uncorrelated. As in the discrete case, some authors adopt the weaker definition for white noise, and use the qualifier independent to refer to either of the stronger definitions. Others use weakly white and strongly white to distinguish between them. However, a precise definition of these concepts is not trivial, because some quantities that are finite sums in the finite discrete case must be replaced by integrals that may not converge. Indeed, the set of all possible instances of a signal w is no longer a finite-dimensional space Rn, but an infinite-dimensional function space. Moreover, by any definition a white noise signal w would have to be essentially discontinuous at every point; therefore even the simplest operations on w, like integration over a finite interval, require advanced mathematical machinery. Some authors require each value w(t) to be a real-valued random variable with expectation μ and some finite variance σ2. Then the covariance E[w(t1)·w(t2)] between the values at two times t1 and t2 is well-defined: it is zero if the times are distinct, and σ2 if they are equal. However, by this definition, the integral of w over any interval with positive width r would be simply the width times the expectation: r·μ. This property renders the concept inadequate as a model of white noise signals either in a physical or mathematical sense. Therefore, most authors define the signal w indirectly by specifying random values for the integrals of w(t) and |w(t)|2 over each interval [a, a + r]. In this approach, however, the value of w(t) at an isolated time cannot be defined as a real-valued random variable. Also the covariance E[w(t1)·w(t2)] becomes infinite when t1 = t2; and the autocorrelation function must be defined as N·δ(t1 − t2), where N is some real constant and δ is the Dirac delta function. In this approach, one usually specifies that the integral W_I of w(t) over an interval I = [a, b] is a real random variable with normal distribution, zero mean, and variance (b − a)·σ2; and also that the covariance E[W_I·W_J] of the integrals W_I, W_J is σ2 multiplied by the width of the intersection of the two intervals I, J. This model is called a Gaussian white noise signal (or process). In the mathematical field known as white noise analysis, a Gaussian white noise w is defined as a stochastic tempered distribution, i.e. a random variable with values in the space of tempered distributions.
Analogous to the case for finite-dimensional random vectors, a probability law on the infinite-dimensional space of tempered distributions can be defined via its characteristic function (existence and uniqueness are guaranteed by an extension of the Bochner–Minlos theorem, which goes under the name Bochner–Minlos–Sazanov theorem); analogously to the case of the multivariate normal distribution X ~ N(0, σ2·I), which has characteristic function φ(ξ) = exp(−(1/2)·σ2·‖ξ‖2), the white noise w must satisfy E[exp(i·⟨w, f⟩)] = exp(−(1/2)·σ2·‖f‖2_L2), where ⟨w, f⟩ is the natural pairing of the tempered distribution w with the Schwartz function f, taken scenariowise for each outcome, and ‖f‖2_L2 is the squared L2 norm of f. Mathematical applications Time series analysis and regression In statistics and econometrics one often assumes that an observed series of data values is the sum of the values generated by a deterministic linear process, depending on certain independent (explanatory) variables, and on a series of random noise values. Then regression analysis is used to infer the parameters of the model process from the observed data, e.g. by ordinary least squares, and to test the null hypothesis that each of the parameters is zero against the alternative hypothesis that it is non-zero. Hypothesis testing typically assumes that the noise values are mutually uncorrelated with zero mean and have the same Gaussian probability distribution; in other words, that the noise is Gaussian white (not just white). If there is non-zero correlation between the noise values underlying different observations then the estimated model parameters are still unbiased, but estimates of their uncertainties (such as confidence intervals) will be biased (not accurate on average). This is also true if the noise is heteroskedastic; that is, if it has different variances for different data points. Alternatively, in the subset of regression analysis known as time series analysis there are often no explanatory variables other than the past values of the variable being modeled (the dependent variable). In this case the noise process is often modeled as a moving average process, in which the current value of the dependent variable depends on current and past values of a sequential white noise process. Random vector transformations Two theoretical applications using a white random vector are the simulation and whitening of another arbitrary random vector. These two ideas are crucial in applications such as channel estimation and channel equalization in communications and audio. These concepts are also used in data compression. In particular, by a suitable linear transformation (a coloring transformation), a white random vector can be used to produce a non-white random vector (that is, a list of random variables) whose elements have a prescribed covariance matrix. Conversely, a random vector with known covariance matrix can be transformed into a white random vector by a suitable whitening transformation. Generation White noise may be generated digitally with a digital signal processor, microprocessor, or microcontroller. Generating white noise typically entails feeding an appropriate stream of random numbers to a digital-to-analog converter. The quality of the white noise will depend on the quality of the algorithm used. Informal use The term is sometimes used as a colloquialism to describe a backdrop of ambient sound, creating an indistinct or seamless commotion. Following are some examples: Chatter from multiple conversations within the acoustics of a confined space. The pleonastic jargon used by politicians to mask a point that they don't want noticed. Music that is disagreeable, harsh, dissonant or discordant with no melody.
The term can also be used metaphorically, as in the novel White Noise (1985) by Don DeLillo, which explores the symptoms of modern culture that came together so as to make it difficult for an individual to actualize their ideas and personality.
White noise
[ "Physics", "Engineering" ]
3,363
[ "Statistical signal processing", "Classical mechanics", "Acoustics", "Engineering statistics" ]
46,183
https://en.wikipedia.org/wiki/Butter
Butter is a dairy product made from the fat and protein components of churned cream. It is a semi-solid emulsion at room temperature, consisting of approximately 80% butterfat. It is used at room temperature as a spread, melted as a condiment, and used as a fat in baking, sauce-making, pan frying, and other cooking procedures. Most frequently made from cow's milk, butter can also be manufactured from the milk of other mammals, including sheep, goats, buffalo, and yaks. It is made by churning milk or cream to separate the fat globules from the buttermilk. Salt has been added to butter since antiquity to help preserve it, particularly when being transported; salt may still play a preservation role but is less important today as the entire supply chain is usually refrigerated. In modern times, salt may be added for taste. Food coloring is sometimes added to butter. Rendering butter, removing the water and milk solids, produces clarified butter, or ghee, which is almost entirely butterfat. Butter is a water-in-oil emulsion resulting from an inversion of the cream, where the milk proteins are the emulsifiers. Butter remains a firm solid when refrigerated but softens to a spreadable consistency at room temperature and melts to a thin liquid consistency at about 32 to 35 °C (90 to 95 °F). The density of butter is about 911 grams per litre. It generally has a pale yellow color but varies from deep yellow to nearly white. Its natural, unmodified color is dependent on the source animal's feed and genetics, but the commercial manufacturing process sometimes alters this with food colorings like annatto or carotene. Etymology The word butter derives (via Germanic languages) from the Latin butyrum, which is the latinisation of the Greek βούτυρον (bouturon) and βούτυρος. This may be a compound of βοῦς (bous), "ox, cow" + τυρός (turos), "cheese", that is "cow-cheese". The word turos ("cheese") is attested in Mycenaean Greek. The Latinized form is found in the name butyric acid, a compound found in rancid butter and other dairy products. Production Unhomogenized milk and cream contain butterfat in microscopic globules. These globules are surrounded by membranes made of phospholipids (fatty acid emulsifiers) and proteins, which prevent the fat in milk from pooling together into a single mass. Butter is produced by agitating cream, which damages these membranes and allows the milk fats to conjoin, separating from the other parts of the cream. Variations in the production method will create butters with different consistencies, mostly due to the butterfat composition in the finished product. Butter contains fat in three separate forms: free butterfat, butterfat crystals, and undamaged fat globules. In the finished product, different proportions of these forms result in different consistencies within the butter; butters with many crystals are harder than butters dominated by free fats. Churning produces small butter grains floating in the water-based portion of the cream. This watery liquid is called buttermilk, although the buttermilk most commonly sold today is instead directly fermented skimmed milk. The buttermilk is drained off; sometimes more buttermilk is removed by rinsing the grains with water. Then the grains are "worked": pressed and kneaded together. When prepared manually, this is done using wooden boards called scotch hands. This consolidates the butter into a solid mass and breaks up embedded pockets of buttermilk or water into tiny droplets.
Commercial butter is about 80% butterfat and 15% water; traditionally-made butter may have as little as 65% fat and 30% water. Butterfat is a mixture of triglyceride, a triester derived from glycerol, and three of any of several fatty acid groups. Annatto is sometimes added by U.S. butter manufacturers without declaring it on the label because the U.S. allows butter to have an undisclosed flavorless and natural coloring agent (whereas all other foods in the U.S. must label coloring agents). The preservative lactic acid is sometimes added instead of salt (and as a flavor enhancer), and sometimes additional diacetyl is added to boost the buttery flavor (in the U.S., both ingredients can be listed simply as "natural flavors"). When used together in the NIZO manufacturing method, these two flavorings produce the flavor of cultured butter without actually fully fermenting. Types Before modern factory butter making, cream was usually collected from several milkings and was therefore several days old and somewhat fermented by the time it was made into butter. Butter made in this traditional way (from a fermented cream) is known as cultured butter. During fermentation, the cream naturally sours as bacteria convert milk sugars into lactic acid. The fermentation process produces additional aroma compounds, including diacetyl, which makes for a fuller-flavored and more "buttery" tasting product. Butter made from fresh cream is called sweet cream butter. Production of sweet cream butter first became common in the 19th century, when the development of refrigeration and the mechanical milk separator made sweet cream butter faster and cheaper to produce at scale (sweet cream butter can be made in 6 hours, whereas cultured butter can take up to 72 hours to make). Cultured butter is preferred throughout continental Europe, while sweet cream butter dominates in the United States and the United Kingdom. Chef Jansen Chan, the director of pastry operations at the International Culinary Center in Manhattan, says, "It's no secret that dairy in France and most of Europe is higher quality than most of the U.S." The combination of butter culturing, the 82% butterfat minimum (as opposed to the 80% minimum in the U.S.), and the fact that French butter is grass-fed, accounts for why French pastry (and French food in general) has a reputation for being richer-tasting and flakier. Cultured butter is sometimes labeled "European-style" butter in the United States, although cultured butter is made and sold by some, especially Amish, dairies. Milk that is to be made into butter is usually pasteurized during production to kill pathogenic bacteria and other microbes. Butter made from raw milk is very rare and can be dangerous because it is made from unpasteurized milk. Commercial raw milk products are not legal to sell through interstate commerce in the United States and are very rare in Europe. Raw cream butter is generally only found made at home by dairy farmers or by consumers who have purchased raw whole milk directly from them, skimmed the cream themselves, and made butter with it. Clarified butter Clarified butter has almost all of its water and milk solids removed, leaving almost-pure butterfat. Clarified butter is made by heating butter to its melting point and then allowing it to cool; after settling, the remaining components separate by density. At the top, whey proteins form a skin, which is removed. The resulting butterfat is then poured off from the mixture of water and casein proteins that settle to the bottom. 
Ghee is clarified butter that has been heated to around 120 °C (250 °F) after the water evaporated, turning the milk solids brown. This process flavors the ghee, and also produces antioxidants that help protect it from rancidity. Because of this, ghee can be kept for six to eight months under normal conditions. Whey butter Cream may be separated (usually by a centrifuge or a sedimentation) from whey instead of milk, as a byproduct of cheese-making. Whey butter may be made from whey cream. Whey cream and butter have a lower fat content and taste more salty, tangy and "cheesy". They are also cheaper to make than "sweet" cream and butter. The fat content of whey is low, so 1,000 pounds of whey will typically give only three pounds of butter. European butters There are several butters produced in Europe with protected geographical indications; these include: Beurre d'Ardenne, from Belgium Beurre d'Isigny, from France Beurre Charentes-Poitou (Which also includes: Beurre des Charentes and Beurre des Deux-Sèvres under the same classification), from France Beurre Rose, from Luxembourg Mantequilla de Soria, from Spain Mantega de l'Alt Urgell i la Cerdanya, from Spain Rucava white butter (Rucavas baltais sviests), from Latvia History Elaine Khosrova traces the invention of butter back to Neolithic-era Africa 8,000 BC in her book. A later Sumerian tablet, dating to approximately 2,500 B.C., describes the butter making process, from the milking of cattle, while contemporary Sumerian tablets identify butter as a ritual offering. In the Mediterranean climate, unclarified butter spoils quickly, unlike cheese, so it is not a practical method of preserving the nutrients of milk. The ancient Greeks and Romans seemed to use the butter only as unguent and medicine and considered it as a food of the barbarians. A play by the Greek comic poet Anaxandrides refers to Thracians as boutyrophagoi, "butter-eaters". In his Natural History, Pliny the Elder calls butter "the most delicate of food among barbarous nations" and goes on to describe its medicinal properties. Later, the physician Galen also described butter as a medicinal agent only. Middle Ages In the cooler climates of northern Europe, people could store butter longer before it spoiled. Scandinavia has the oldest tradition in Europe of butter export trade, dating at least to the 12th century. After the fall of Rome and through much of the Middle Ages, butter was a common food across most of Europe—but had a low reputation, and so was consumed principally by peasants. Butter slowly became more accepted by the upper class, notably when the Roman Catholic Church allowed its consumption during Lent from the early 16th century. Bread and butter became common fare among the middle class and the English, in particular, gained a reputation for their liberal use of melted butter as a sauce with meat and vegetables. In antiquity, butter was used for fuel in lamps, as a substitute for oil. The Butter Tower of Rouen Cathedral was erected in the early 16th century when Archbishop Georges d'Amboise authorized the burning of butter during Lent, instead of oil, which was scarce at the time. Across northern Europe, butter was sometimes packed into barrels (firkins) and buried in peat bogs, perhaps for years. Such "bog butter" would develop a strong flavor as it aged, but remain edible, in large part because of the cool, airless, antiseptic and acidic environment of a peat bog. 
Firkins of such buried butter are a common archaeological find in Ireland; the National Museum of Ireland – Archaeology has some containing "a grayish cheese-like substance, partially hardened, not much like butter, and quite free from putrefaction." The practice was most common in Ireland in the 11th–14th centuries; it ended entirely before the 19th century. Industrialization Until the 19th century, the vast majority of butter was made by hand, on farms. Butter also provided extra income to farm families. They used wood presses with carved decoration to press butter into pucks or small bricks to sell at nearby markets or general stores. The decoration identified the farm that produced the butter. This practice continued until production was mechanized and butter was produced in less decorative stick form. Like Ireland, France became well known for its butter, particularly in Normandy and Brittany. Butter consumption in London in the mid-1840s was estimated at 15,357 tons annually. The first butter factories appeared in the United States in the early 1860s, after the successful introduction of cheese factories a decade earlier. In the late 1870s, the centrifugal cream separator was introduced, marketed most successfully by Swedish engineer Carl Gustaf Patrik de Laval. In 1920, Otto Hunziker authored The Butter Industry, Prepared for Factory, School and Laboratory, a well-known text in the industry that enjoyed at least three editions (1920, 1927, 1940). As part of the efforts of the American Dairy Science Association, Hunziker and others published articles regarding: causes of tallowiness (an odor defect, distinct from rancidity, a taste defect); mottles (an aesthetic issue related to uneven color); introduced salts; the impact of creamery metals and liquids; and acidity measurement. These and other ADSA publications helped standardize practices internationally. Butter consumption declined in most western nations during the 20th century, mainly because of the rising popularity of margarine, which is less expensive and, until recent years, was perceived as being healthier. In the United States, margarine consumption overtook butter during the 1950s, and it is still the case today that more margarine than butter is eaten in the U.S. and the EU. Worldwide production In 1997, India produced of butter, most of which was consumed domestically. Second in production was the United States (), followed by France (), Germany (), and New Zealand (). France ranks first in per capita butter consumption with 8 kg per capita per year. In terms of absolute consumption, Germany was second after India, using of butter in 1997, followed by France (), Russia (), and the United States (). New Zealand, Australia, Denmark and Ukraine are among the few nations that export a significant percentage of the butter they produce. Different varieties are found around the world. Smen is a spiced Moroccan clarified butter, buried in the ground and aged for months or years. A similar product is maltash of the Hunza Valley, where cow and yak butter can be buried for decades, and is used at events such as weddings. Yak butter is a specialty in Tibet; tsampa, barley flour mixed with yak butter, is a staple food. Butter tea is consumed in the Himalayan regions of Tibet, Bhutan, Nepal and India. It consists of tea served with intensely flavored—or "rancid"—yak butter and salt. In African and Asian nations, butter is sometimes traditionally made from sour milk rather than cream. 
It can take several hours of churning to produce workable butter grains from fermented milk. Storage Normal butter softens to a spreadable consistency around 15 °C (60 °F), well above refrigerator temperatures. The "butter compartment" found in many refrigerators may be one of the warmer sections inside, but it still leaves butter quite hard. Until recently, many refrigerators sold in New Zealand featured a "butter conditioner", a compartment kept warmer than the rest of the refrigerator, but still cooler than room temperature, by means of a small heater. Keeping butter tightly wrapped delays rancidity, which is hastened by exposure to light or air, and also helps prevent it from picking up other odors. Wrapped butter has a shelf life of several months at refrigerator temperatures. Butter can also be frozen to extend its storage life. Packaging United States In the United States, butter has traditionally been made into small, rectangular blocks by means of a pair of wooden butter paddles. It is usually produced in sticks that are individually wrapped in waxed or foiled paper, and sold as a package of four sticks. This practice is believed to have originated in 1907, when Swift and Company began packaging butter in this manner for mass distribution. Due to historical differences in butter printers (machines that cut and package butter), 4-ounce sticks are commonly produced in two different shapes: The dominant shape east of the Rocky Mountains is the Elgin, or Eastern-pack shape, named for a dairy in Elgin, Illinois. The sticks measure and are typically sold stacked two by two in elongated cube-shaped boxes. West of the Rocky Mountains, butter printers standardized on a different shape that is now referred to as the Western-pack shape. These butter sticks measure and are usually sold with four sticks packed side-by-side in a flat, rectangular box. Most butter dishes are designed for Elgin-style butter sticks. Elsewhere Outside the United States, butter is measured for sale by mass (rather than by volume or unit/stick), and is often sold in and packages. Bulk packaging Since the 1940s, but more commonly the 1960s, butter pats have been individually wrapped and packed in cardboard boxes. Prior to the use of cardboard, butter was bulk-packed in wood, the earliest such containers being firkins. From about 1882 wooden boxes were used, as the introduction of refrigeration on ships brought about longer transit times. Butter boxes were generally made with woods whose resin would not taint the butter, such as sycamore, kahikatea, hoop pine, maple, or spruce. They commonly weighed a firkin at . In cooking and gastronomy Butter has been considered indispensable in French cuisine since the 17th century. Chefs and cooks have extolled its importance: Fernand Point said "Donnez-moi du beurre, encore du beurre, toujours du beurre!" ('Give me butter, more butter, still more butter!'). Julia Child said, "With enough butter, anything is good." Melted butter plays an important role in the preparation of sauces, notably in French cuisine. Beurre noisette (hazelnut butter) and Beurre noir (black butter) are sauces of melted butter cooked until the milk solids and sugars have turned golden or dark brown; they are often finished with an addition of vinegar or lemon juice. Hollandaise and béarnaise sauces are emulsions of egg yolk and melted butter.
Hollandaise and béarnaise sauces are stabilized with the powerful emulsifiers in the egg yolks, but butter itself contains enough emulsifiers (mostly remnants of the fat globule membranes) to form a stable emulsion on its own. Beurre blanc (white butter) is made by whisking butter into reduced vinegar or wine, forming an emulsion with the texture of thick cream. Beurre monté (prepared butter) is melted but still emulsified butter; it lends its name to the practice of "mounting" a sauce with butter: whisking cold butter into any water-based sauce at the end of cooking, giving the sauce a thicker body and a glossy shine, as well as a buttery taste. Butter is used for sautéing and frying, although its milk solids brown and burn above 150 °C (about 300 °F), a rather low temperature for most applications. The smoke point of butterfat is around 200 °C (400 °F), so clarified butter or ghee is better suited to frying. Butter fills several roles in baking, including making possible a range of textures, making chemical leavenings work better, tenderizing proteins, and enhancing the tastes of other ingredients. It is used in a similar manner to other solid fats like lard, suet, or shortening, but has a flavor that may better complement sweet baked goods. Compound butters are mixtures of butter and other ingredients used to flavor various dishes. Nutritional information Butter (salted during manufacturing) is 16% water, 81% fat, and 1% protein, with negligible carbohydrates (values per 100 g). Saturated fat makes up 51% of the total fat in butter. In a reference amount of , butter supplies 717 calories and 76% of the Daily Value (DV) for vitamin A, 15% DV for vitamin E, and 28% DV for sodium, with no other micronutrients in significant content. In 100 grams, salted butter contains 215 mg of cholesterol. As butter is essentially just the milk fat, it contains only traces of lactose, so moderate consumption of butter is not a problem for lactose-intolerant people. People with milk allergies may still need to avoid butter, which contains enough of the allergy-causing proteins to cause reactions. Health concerns A 2015 study concluded that "hypercholesterolemic people should keep their consumption of butter to a minimum, whereas moderate butter intake may be considered part of the diet in the normocholesterolemic population." A meta-analysis and systematic review published in 2016 found relatively small or insignificant overall associations of a dose of 14 g/day of butter with mortality and cardiovascular disease, and butter consumption was insignificantly inversely associated with the incidence of diabetes. The study states that "findings do not support a need for major emphasis in dietary guidelines on either increasing or decreasing butter consumption." See also List of butter dishes List of dairy products List of butter sauces List of spreads
Dairy products Cooking fats Colloids Spreads (food) Condiments
Butter
[ "Physics", "Chemistry", "Materials_science" ]
4,540
[ "Chemical mixtures", "Condensed matter physics", "Colloids" ]
46,238
https://en.wikipedia.org/wiki/Refrigeration
Refrigeration is any of various types of cooling of a space, substance, or system to lower and/or maintain its temperature below the ambient one (while the removed heat is ejected to a place of higher temperature). Refrigeration is an artificial, or human-made, cooling method. Refrigeration refers to the process by which energy, in the form of heat, is removed from a low-temperature medium and transferred to a high-temperature medium. This work of energy transfer is traditionally driven by mechanical means (whether ice or electromechanical machines), but it can also be driven by heat, magnetism, electricity, laser, or other means. Refrigeration has many applications, including household refrigerators, industrial freezers, cryogenics, and air conditioning. Heat pumps may use the heat output of the refrigeration process, and also may be designed to be reversible, but are otherwise similar to air conditioning units. Refrigeration has had a large impact on industry, lifestyle, agriculture, and settlement patterns. The idea of preserving food dates back to human prehistory, but for thousands of years humans were limited regarding the means of doing so. They used curing via salting and drying, and they made use of natural coolness in caves, root cellars, and winter weather, but other means of cooling were unavailable. In the 19th century, they began to make use of the ice trade to develop cold chains. In the late 19th through mid-20th centuries, mechanical refrigeration was developed, improved, and greatly expanded in its reach. Refrigeration has thus rapidly evolved in the past century, from ice harvesting to temperature-controlled rail cars, refrigerator trucks, and ubiquitous refrigerators and freezers in both stores and homes in many countries. The introduction of refrigerated rail cars contributed to the settlement of areas that were not on earlier main transport channels such as rivers, harbors, or valley trails. These new settlement patterns sparked the building of large cities which are able to thrive in areas that were otherwise thought to be inhospitable, such as Houston, Texas, and Las Vegas, Nevada. In most developed countries, cities are heavily dependent upon refrigeration in supermarkets in order to obtain their food for daily consumption. The increase in food sources has led to a larger concentration of agricultural sales coming from a smaller percentage of farms. Farms today have a much larger output per person in comparison to the late 1800s. This has resulted in new food sources available to entire populations, which has had a large impact on the nutrition of society. History Earliest forms of cooling The seasonal harvesting of snow and ice is an ancient practice estimated to have begun earlier than 1000 BC. A Chinese collection of lyrics from this time period, known as the Shijing, describes religious ceremonies for filling and emptying ice cellars. However, little is known about the construction of these ice cellars or the purpose of the ice. The next ancient society to record the harvesting of ice may have been the Jews in the book of Proverbs, which reads, "As the cold of snow in the time of harvest, so is a faithful messenger to them who sent him." Historians have interpreted this to mean that the Jews used ice to cool beverages rather than to preserve food. Other ancient cultures such as the Greeks and the Romans dug large snow pits insulated with grass, chaff, or branches of trees as cold storage.
Like the Jews, the Greeks and Romans did not use ice and snow to preserve food, but primarily as a means to cool beverages. Egyptians cooled water by evaporation in shallow earthen jars on the roofs of their houses at night. The ancient people of India used this same concept to produce ice. The Persians stored ice in a pit called a Yakhchal and may have been the first group of people to use cold storage to preserve food. In the Australian outback, before a reliable electricity supply was available, many farmers used a Coolgardie safe, consisting of a box frame with hessian (burlap) sides soaked in water. The water would evaporate and thereby cool the interior air, allowing many perishables such as fruit, butter, and cured meats to be kept. Ice harvesting Before 1830, few Americans used ice to refrigerate foods due to a lack of ice-storehouses and iceboxes. As these two things became more widely available, individuals used axes and saws to harvest ice for their storehouses. This method proved to be difficult and dangerous, and certainly did not resemble anything that could be duplicated on a commercial scale. Despite the difficulties of harvesting ice, Frederic Tudor thought that he could capitalize on this new commodity by harvesting ice in New England and shipping it to the Caribbean islands as well as the southern states. In the beginning, Tudor lost thousands of dollars, but he eventually turned a profit as he constructed icehouses in Charleston, South Carolina, and in the Cuban port town of Havana. These icehouses, as well as better insulated ships, helped reduce ice wastage from 66% to 8%. This efficiency gain led Tudor to expand his ice market to other towns with icehouses, such as New Orleans and Savannah. This ice market further expanded as harvesting ice became faster and cheaper after one of Tudor's suppliers, Nathaniel Wyeth, invented a horse-drawn ice cutter in 1825. This invention, as well as Tudor's success, inspired others to get involved in the ice trade, and the ice industry grew. Ice became a mass-market commodity by the early 1830s, with the price of ice dropping from six cents per pound to half a cent per pound. In New York City, ice consumption increased from 12,000 tons in 1843 to 100,000 tons in 1856. Boston's consumption leapt from 6,000 tons to 85,000 tons during that same period. Ice harvesting created a "cooling culture", as a majority of people used ice and iceboxes to store their dairy products, fish, meat, and even fruits and vegetables. These early cold storage practices paved the way for many Americans to accept the refrigeration technology that would soon take over the country. Refrigeration research The history of artificial refrigeration began when Scottish professor William Cullen designed a small refrigerating machine in 1755. Cullen used a pump to create a partial vacuum over a container of diethyl ether, which then boiled, absorbing heat from the surrounding air. The experiment even created a small amount of ice, but it had no practical application at that time. In 1758, Benjamin Franklin and John Hadley, a professor of chemistry, collaborated at Cambridge University, England, on a project investigating the principle of evaporation as a means to rapidly cool an object. They confirmed that the evaporation of highly volatile liquids, such as alcohol and ether, could be used to drive down the temperature of an object past the freezing point of water.
They conducted their experiment with the bulb of a mercury thermometer as their object and with a bellows used to quicken the evaporation; they lowered the temperature of the thermometer bulb down to , while the ambient temperature was . They noted that soon after they passed the freezing point of water , a thin film of ice formed on the surface of the thermometer's bulb and that the ice mass was about a thick when they stopped the experiment upon reaching . Franklin wrote, "From this experiment, one may see the possibility of freezing a man to death on a warm summer's day". In 1805, American inventor Oliver Evans described a closed vapor-compression refrigeration cycle for the production of ice by ether under vacuum. In 1820, the English scientist Michael Faraday liquefied ammonia and other gases by using high pressures and low temperatures, and in 1834, an American expatriate in Great Britain, Jacob Perkins, built the first working vapor-compression refrigeration system in the world. It was a closed-cycle device that could operate continuously, as he described in his patent: I am enabled to use volatile fluids for the purpose of producing the cooling or freezing of fluids, and yet at the same time constantly condensing such volatile fluids, and bringing them again into operation without waste. His prototype system worked, although it did not succeed commercially. In 1842, a similar attempt was made by the American physician John Gorrie, who built a working prototype, but it was a commercial failure. Like many of the medical experts of the time, Gorrie thought too much exposure to tropical heat led to mental and physical degeneration, as well as the spread of diseases such as malaria. He conceived the idea of using his refrigeration system to cool the air for comfort in homes and hospitals to prevent disease. American engineer Alexander Twining took out a British patent in 1850 for a vapor-compression system that used ether. The first practical vapor-compression refrigeration system was built by James Harrison, a British journalist who had emigrated to Australia. His 1856 patent was for a vapor-compression system using ether, alcohol, or ammonia. He built a mechanical ice-making machine in 1851 on the banks of the Barwon River at Rocky Point in Geelong, Victoria, and his first commercial ice-making machine followed in 1854. Harrison also introduced commercial vapor-compression refrigeration to breweries and meat-packing houses, and by 1861, a dozen of his systems were in operation. He later entered the debate over how to compete against the American advantage of unrefrigerated beef sales to the United Kingdom. In 1873 he prepared the sailing ship Norfolk for an experimental beef shipment to the United Kingdom, which used a cold room system instead of a refrigeration system. The venture was a failure, as the ice was consumed faster than expected. The first gas absorption refrigeration system, using gaseous ammonia dissolved in water (referred to as "aqua ammonia"), was developed by Ferdinand Carré of France in 1859 and patented in 1860. Carl von Linde, an engineer specializing in steam locomotives and a professor of engineering at the Technological University of Munich in Germany, began researching refrigeration in the 1860s and 1870s in response to demand from brewers for a technology that would allow year-round, large-scale production of lager; he patented an improved method of liquefying gases in 1876.
His new process made it possible to use gases such as ammonia, sulfur dioxide (SO2) and methyl chloride (CH3Cl) as refrigerants, and they were widely used for that purpose until the late 1920s. Thaddeus Lowe, an American balloonist, held several patents on ice-making machines. His "Compression Ice Machine" would revolutionize the cold-storage industry. In 1869, he and other investors purchased an old steamship onto which they loaded one of Lowe's refrigeration units and began shipping fresh fruit from New York to the Gulf Coast area, and fresh meat from Galveston, Texas back to New York, but because of Lowe's lack of knowledge about shipping, the business was a costly failure. Commercial use In 1842, John Gorrie created a system capable of refrigerating water to produce ice. Although it was a commercial failure, it inspired scientists and inventors around the world. France's Ferdinand Carré was one of those inspired, and he created an ice-producing system that was simpler and smaller than that of Gorrie. During the Civil War, cities such as New Orleans could no longer get ice from New England via the coastal ice trade. Carré's refrigeration system became the solution to New Orleans' ice problems and, by 1865, the city had three of Carré's machines. In 1867, in San Antonio, Texas, a French immigrant named Andrew Muhl built an ice-making machine to help service the expanding beef industry before moving it to Waco in 1871. In 1873, the patent for this machine was contracted by the Columbus Iron Works, a company acquired by the W.C. Bradley Co., which went on to produce the first commercial ice-makers in the US. By the 1870s, breweries had become the largest users of harvested ice. Though the ice-harvesting industry had grown immensely by the turn of the 20th century, pollution and sewage had begun to creep into natural ice, making it a problem in the metropolitan suburbs. Eventually, breweries began to complain of tainted ice. Public concern for the purity of water, from which ice was formed, began to increase in the early 1900s with the rise of germ theory. Numerous media outlets published articles connecting diseases such as typhoid fever with natural ice consumption. This caused ice harvesting to become illegal in certain areas of the country. All of these scenarios increased the demand for modern refrigeration and manufactured ice. Ice-producing machines like those of Carré and Muhl were looked to as a means of producing ice to meet the needs of grocers, farmers, and food shippers. Refrigerated railroad cars were introduced in the US in the 1840s for short-run transport of dairy products, but these used harvested ice to maintain a cool temperature. The new refrigerating technology first met with widespread industrial use as a means to freeze meat supplies for transport by sea in reefer ships from the British Dominions and other countries to the British Isles. Although not actually the first to achieve successful transportation of frozen goods overseas (the Strathleven had arrived at the London docks on 2 February 1880 with a cargo of frozen beef, mutton and butter from Sydney and Melbourne), the breakthrough is often attributed to William Soltau Davidson, an entrepreneur who had emigrated to New Zealand. Davidson thought that Britain's rising population and meat demand could mitigate the slump in world wool markets that was heavily affecting New Zealand. After extensive research, he commissioned the Dunedin to be refitted with a compression refrigeration unit for meat shipment in 1881.
On February 15, 1882, the Dunedin sailed for London with what was to be the first commercially successful refrigerated shipping voyage, laying the foundation of the refrigerated meat industry. The Times commented "Today we have to record such a triumph over physical difficulties, as would have been incredible, even unimaginable, a very few days ago...". The Marlborough, sister ship to the Dunedin, was immediately converted and joined the trade the following year, along with the rival New Zealand Shipping Company vessel Mataura, while the German steamer Marsala began carrying frozen New Zealand lamb in December 1882. Within five years, 172 shipments of frozen meat were sent from New Zealand to the United Kingdom, of which only nine had significant amounts of meat condemned. Refrigerated shipping also led to a broader meat and dairy boom in Australasia and South America. J & E Hall of Dartford, England outfitted the SS Selembria with a vapor compression system to bring 30,000 carcasses of mutton from the Falkland Islands in 1886. In the years that followed, the industry rapidly expanded to Australia, Argentina and the United States. By the 1890s, refrigeration played a vital role in the distribution of food. The meat-packing industry relied heavily on natural ice in the 1880s and continued to rely on manufactured ice as those technologies became available. By 1900, the meat-packing houses of Chicago had adopted ammonia-cycle commercial refrigeration. By 1914, almost every location used artificial refrigeration. The major meat packers, Armour, Swift, and Wilson, had purchased the most expensive units, which they installed on train cars and in branch houses and storage facilities in the more remote distribution areas. By the middle of the 20th century, refrigeration units were designed for installation on trucks or lorries. Refrigerated vehicles are used to transport perishable goods, such as frozen foods, fruit and vegetables, and temperature-sensitive chemicals. Most modern refrigerated vehicles keep the temperature between –40 and –20 °C and have a maximum payload of around 24,000 kg gross weight (in Europe). Although commercial refrigeration quickly progressed, it had limitations that prevented it from moving into the household. First, most refrigerators were far too large. Some of the commercial units being used in 1910 weighed between five and two hundred tons. Second, commercial refrigerators were expensive to produce, purchase, and maintain. Lastly, these refrigerators were unsafe. It was not uncommon for commercial refrigerators to catch fire, explode, or leak toxic gases. Refrigeration did not become a household technology until these three challenges were overcome. Home and consumer use During the early 1800s, consumers preserved their food by storing food and ice purchased from ice harvesters in iceboxes. In 1803, Thomas Moore patented a metal-lined butter-storage tub which became the prototype for most iceboxes. These iceboxes were used until nearly 1910, and the technology did not progress. In fact, consumers who used the icebox in 1910 faced the same challenge of a moldy and stinky icebox that consumers had in the early 1800s. General Electric (GE) was one of the first companies to overcome these challenges. In 1911, GE released a household refrigeration unit that was powered by gas. The use of gas eliminated the need for an electric compressor motor and decreased the size of the refrigerator. However, electric companies that were customers of GE did not benefit from a gas-powered unit.
Thus, GE invested in developing an electric model. In 1927, GE released the Monitor Top, its first electric refrigerator and the first to be widely adopted in American homes. In 1930, Frigidaire, one of GE's main competitors, synthesized Freon. With the invention of synthetic refrigerants based mostly on chlorofluorocarbon (CFC) chemistry, safer refrigerators became possible for home and consumer use. Freon led to the development of smaller, lighter, and cheaper refrigerators. The average price of a refrigerator dropped from $275 to $154 with the synthesis of Freon. This lower price allowed ownership of refrigerators in American households to exceed 50% by 1940. Freon is a trademark of the DuPont Corporation and refers to these CFC, and later hydrochlorofluorocarbon (HCFC) and hydrofluorocarbon (HFC), refrigerants developed in the late 1920s. These refrigerants were considered at the time to be less harmful than the refrigerants then in common use, including methyl formate, ammonia, methyl chloride, and sulfur dioxide. The intent was to provide refrigeration equipment for home use without danger. These CFC refrigerants answered that need. In the 1970s, though, the compounds were found to be reacting with and depleting atmospheric ozone, an important protection against solar ultraviolet radiation, and their worldwide use as refrigerants was curtailed by the Montreal Protocol of 1987. Impact on settlement patterns in the United States of America In the last century, refrigeration allowed new settlement patterns to emerge. This new technology allowed areas to be settled that are not on a natural channel of transport, such as a river, valley trail or harbor, and that might otherwise never have been settled. Refrigeration gave early settlers opportunities to expand westward and into unpopulated rural areas. These new settlers, farming rich and untapped soil, saw an opportunity to profit by sending raw goods to the eastern cities and states. In the 20th century, refrigeration made "galactic cities" such as Dallas, Phoenix, and Los Angeles possible. Refrigerated rail cars The refrigerated rail car (refrigerated van or refrigerator car), along with the dense railroad network, became an exceedingly important link between the marketplace and the farm, allowing for a national market rather than just a regional one. Before the invention of the refrigerated rail car, it was impossible to ship perishable food products long distances. The beef packing industry made the first demand push for refrigerated cars. The railroad companies were slow to adopt this new invention because of their heavy investments in cattle cars, stockyards, and feedlots. Refrigerated cars were also complex and costly compared to other rail cars, which further slowed their adoption. After the slow adoption of the refrigerated car, the beef packing industry dominated the refrigerated rail car business with its ability to control ice plants and the setting of icing fees. The United States Department of Agriculture estimated that, in 1916, over sixty-nine percent of the cattle killed in the country were slaughtered in plants involved in interstate trade. The same companies involved in the meat trade later extended refrigerated transport to include vegetables and fruit. The meat packing companies had much of the expensive machinery, such as refrigerated cars and cold storage facilities, that allowed them to effectively distribute all types of perishable goods.
During World War I, a national refrigerator car pool was established by the United States Railroad Administration to deal with the problem of idle cars, and it was continued after the war. The idle car problem was that of refrigerated cars sitting unused between seasonal harvests. This meant that very expensive cars sat in rail yards for a good portion of the year while generating no revenue for their owners. The car pool was a system in which cars were distributed to areas as crops matured, ensuring maximum use of the cars. Refrigerated rail cars moved eastward from vineyards, orchards, fields, and gardens in western states to satisfy America's consuming market in the east. The refrigerated car made it possible to transport perishable crops hundreds or even thousands of kilometres or miles. The most noticeable effect of the car was the regional specialization of vegetables and fruits. The refrigerated rail car was widely used for the transportation of perishable goods up until the 1950s. By the 1960s, the nation's interstate highway system was sufficiently complete to allow trucks to carry the majority of the perishable food loads, pushing out the old system of refrigerated rail cars. Expansion west and into rural areas The widespread use of refrigeration allowed a vast number of new agricultural opportunities to open up in the United States. New markets emerged throughout the United States in areas that were previously uninhabited and far removed from heavily populated areas. New agricultural opportunity presented itself in areas that were considered rural, such as states in the south and in the west. Shipments on a large scale from the south and from California began around the same time, although natural ice from the Sierras was used in California rather than the manufactured ice used in the south. Refrigeration allowed many areas to specialize in the growing of specific fruits. California specialized in several fruits: grapes, peaches, pears, plums, and apples, while Georgia became famous specifically for its peaches. In California, the acceptance of the refrigerated rail car led to an increase from 4,500 carloads in 1895 to between 8,000 and 10,000 carloads in 1905. The Gulf States, Arkansas, Missouri and Tennessee entered into strawberry production on a large scale, while Mississippi became the center of the tomato industry. New Mexico, Colorado, Arizona, and Nevada grew cantaloupes. Without refrigeration, this would not have been possible. By 1917, well-established fruit and vegetable areas that were close to eastern markets felt the pressure of competition from these distant specialized centers. Refrigeration was not limited to meat, fruit and vegetables; it also encompassed dairy products and dairy farming. In the early twentieth century, large cities got their dairy supply from farms as far as . Dairy products were not as easily transported over great distances as fruits and vegetables, due to their greater perishability. Refrigeration made production possible in the west, far from eastern markets, so much so that dairy farmers could pay transportation costs and still undersell their eastern competitors. Refrigeration and the refrigerated rail car gave opportunities to areas with rich soil far from natural channels of transport such as rivers, valley trails or harbors. Rise of the galactic city "Edge city" was a term coined by Joel Garreau, whereas the term "galactic city" was coined by Lewis Mumford.
These terms refer to a concentration of business, shopping, and entertainment outside a traditional downtown or central business district, in what had previously been a residential or rural area. Several factors contributed to the growth of cities such as Los Angeles, Las Vegas, Houston, and Phoenix, including reliable automobiles, highway systems, refrigeration, and increases in agricultural production. Large cities such as these have not been uncommon in history, but what separates them from the rest is that they do not lie along some natural channel of transport, or at some crossroads of two or more channels, such as a trail, harbor, mountain, river, or valley. These large cities have been developed in areas that only a few hundred years ago would have been uninhabitable. Without a cost-efficient way of cooling air and transporting water and food from great distances, these large cities would never have developed. The rapid growth of these cities was influenced by refrigeration and an increase in agricultural productivity, allowing more distant farms to effectively feed the population. Impact on agriculture and food production Agriculture's role in developed countries has drastically changed in the last century due to many factors, including refrigeration. Statistics from the 2007 census give information on the large concentration of agricultural sales coming from a small portion of the existing farms in the United States today. This is in part a result of the market created for the frozen meat trade by the first successful shipment of frozen sheep carcasses coming from New Zealand in the 1880s. As the market continued to grow, regulations on food processing and quality began to be enforced. Eventually, electricity was introduced into rural homes in the United States, which allowed refrigeration technology to continue to expand on the farm, increasing output per person. Today, refrigeration's use on the farm reduces humidity levels, avoids spoiling due to bacterial growth, and assists in preservation. Demographics The introduction of refrigeration and the evolution of additional technologies drastically changed agriculture in the United States. During the beginning of the 20th century, farming was a common occupation and lifestyle for United States citizens, as most farmers actually lived on their farms. In 1935, there were 6.8 million farms in the United States and a population of 127 million. Yet, while the United States population has continued to climb, the number of citizens pursuing agriculture continues to decline. Based on the 2007 US Census, less than one percent of a population of 310 million people claim farming as an occupation today. However, the increasing population has led to an increasing demand for agricultural products, which is met through a greater variety of crops, fertilizers, pesticides, and improved technology. Improved technology has decreased the risk and time involved in agricultural management and allows larger farms to increase their output per person to meet society's demand. Meat packing and trade Prior to 1882, the South Island of New Zealand had been experimenting with sowing grass and crossbreeding sheep, which immediately gave its farmers economic potential in the export of meat. In 1882, the first successful shipment of sheep carcasses was sent from Port Chalmers in Dunedin, New Zealand, to London.
By the 1890s, the frozen meat trade had become increasingly profitable in New Zealand, especially in Canterbury, where 50% of exported sheep carcasses came from in 1900. It was not long before Canterbury meat was known for the highest quality, creating a demand for New Zealand meat around the world. In order to meet this new demand, the farmers improved their feed so sheep could be ready for slaughter in only seven months. This new method of shipping led to an economic boom in New Zealand by the mid-1890s. In the United States, the Meat Inspection Act of 1891 was put in place because local butchers felt the refrigerated railcar system was unwholesome. When meat packing began to take off, consumers became nervous about the quality of the meat. Upton Sinclair's 1906 novel The Jungle brought negative attention to the meat packing industry by bringing to light unsanitary working conditions and the processing of diseased animals. The book caught the attention of President Theodore Roosevelt, and the 1906 Meat Inspection Act was put into place as an amendment to the Meat Inspection Act of 1891. This new act focused on the quality of the meat and the environment in which it was processed. Electricity in rural areas In the early 1930s, 90 percent of the urban population of the United States had electric power, in comparison to only 10 percent of rural homes. At the time, power companies did not feel that extending power to rural areas (rural electrification) would produce enough profit to make it worth their while. However, in the midst of the Great Depression, President Franklin D. Roosevelt realized that rural areas would continue to lag behind urban areas in both poverty and production if they were not electrically wired. On May 11, 1935, the president signed an executive order creating the Rural Electrification Administration, also known as the REA. The agency provided loans to fund electric infrastructure in rural areas. In just a few years, 300,000 people in rural areas of the United States had received power in their homes. While electricity dramatically improved working conditions on farms, it also had a large impact on the safety of food production. Refrigeration systems were introduced to the farming and food distribution processes, which helped in food preservation and kept food supplies safe. Refrigeration also allowed for the shipment of perishable commodities throughout the United States. As a result, United States farmers quickly became the most productive in the world, and entirely new food systems arose. Farm use In order to reduce humidity levels and spoiling due to bacterial growth, refrigeration is used for meat, produce, and dairy processing in farming today. Refrigeration systems are used most heavily in the warmer months for farming produce, which must be cooled as soon as possible in order to meet quality standards and increase its shelf life. Meanwhile, dairy farms refrigerate milk year-round to avoid spoiling. Effects on lifestyle and diet In the late 19th century and into the very early 20th century, except for staple foods (sugar, rice, and beans) that needed no refrigeration, the available foods were affected heavily by the seasons and what could be grown locally. Refrigeration has removed these limitations. Refrigeration played a large part in the feasibility, and then the popularity, of the modern supermarket. Fruits and vegetables out of season, or grown in distant locations, are now available at relatively low prices.
Refrigerators have led to a huge increase in meat and dairy products as a portion of overall supermarket sales. As well as changing the goods purchased at the market, the ability to store these foods for extended periods of time has led to an increase in leisure time. Prior to the advent of the household refrigerator, people would have to shop on a daily basis for the supplies needed for their meals. Impact on nutrition The introduction of refrigeration allowed for the hygienic handling and storage of perishables, and as such promoted output growth, consumption, and the availability of nutrition. The change in food preservation methods also moved diets away from heavy salting toward more manageable sodium levels. The ability to move and store perishables such as meat and dairy led dairy consumption to increase by 1.7% and overall protein intake by 1.25% annually in the US after the 1890s. People consumed these perishables not only because it became easier to store them, but because innovations in refrigerated transportation and storage led to less spoilage and waste, thereby driving the prices of these products down. Refrigeration accounts for at least 5.1% of the increase in adult stature (in the US) through improved nutrition, and when the indirect effects associated with improvements in the quality of nutrients and the reduction in illness are additionally factored in, the overall impact becomes considerably larger. Recent studies have also shown a negative relationship between the number of refrigerators in a household and the rate of gastric cancer mortality. Current applications of refrigeration Probably the most widely used current applications of refrigeration are the air conditioning of private homes and public buildings, and the refrigeration of foodstuffs in homes, restaurants and large storage warehouses. The use of refrigerators and walk-in coolers and freezers in kitchens, factories and warehouses for storing and processing fruits and vegetables has allowed fresh salads to be added to the modern diet year-round, and fish and meats to be stored safely for long periods. The optimum temperature range for perishable food storage is . In commerce and manufacturing, there are many uses for refrigeration. Refrigeration is used to liquefy gases – oxygen, nitrogen, propane, and methane, for example. In compressed air purification, it is used to condense water vapor from compressed air to reduce its moisture content. In oil refineries, chemical plants, and petrochemical plants, refrigeration is used to maintain certain processes at their needed low temperatures (for example, in the alkylation of butenes and butane to produce a high-octane gasoline component). Metal workers use refrigeration to temper steel and cutlery. When transporting temperature-sensitive foodstuffs and other materials by trucks, trains, airplanes and seagoing vessels, refrigeration is a necessity. Dairy products are constantly in need of refrigeration, and it was only discovered in the past few decades that eggs needed to be refrigerated during shipment rather than waiting to be refrigerated after arrival at the grocery store. Meats, poultry and fish all must be kept in climate-controlled environments before being sold. Refrigeration also helps keep fruits and vegetables edible longer. One of the most influential uses of refrigeration was in the development of the sushi/sashimi industry in Japan. Before the advent of refrigeration, many sushi connoisseurs were at risk of contracting diseases.
The dangers of unrefrigerated sashimi were not brought to light for decades, due to the lack of research and healthcare distribution across rural Japan. Around mid-century, the Zojirushi corporation, based in Kyoto, made breakthroughs in refrigerator designs, making refrigerators cheaper and more accessible for restaurant proprietors and the general public. Methods of refrigeration Methods of refrigeration can be classified as non-cyclic, cyclic, thermoelectric and magnetic. Non-cyclic refrigeration This refrigeration method cools a contained area by melting ice, or by sublimating dry ice. Perhaps the simplest example is a portable cooler, into which items are placed and ice is then poured over the top. Regular ice can maintain temperatures near, but not below, the freezing point, unless salt is used to cool the ice down further (as in a traditional ice-cream maker). Dry ice can reliably bring the temperature well below the freezing point of water. Cyclic refrigeration This consists of a refrigeration cycle, where heat is removed from a low-temperature space or source and rejected to a high-temperature sink with the help of external work, and its inverse, the thermodynamic power cycle. In the power cycle, heat is supplied from a high-temperature source to the engine, part of the heat being used to produce work and the rest being rejected to a low-temperature sink. This satisfies the second law of thermodynamics. A refrigeration cycle describes the changes that take place in the refrigerant as it alternately absorbs and rejects heat as it circulates through a refrigerator. It is also applied to heating, ventilation, air conditioning and refrigeration (HVACR) work, when describing the "process" of refrigerant flow through an HVACR unit, whether it is a packaged or split system. Heat naturally flows from hot to cold. Work is applied to cool a living space or storage volume by pumping heat from a lower-temperature heat source into a higher-temperature heat sink. Insulation is used to reduce the work and energy needed to achieve and maintain a lower temperature in the cooled space. The operating principle of the refrigeration cycle was described mathematically by Sadi Carnot in 1824 as a heat engine. The most common types of refrigeration systems use the reverse-Rankine vapor-compression refrigeration cycle, although absorption heat pumps are used in a minority of applications. Cyclic refrigeration can be classified as: Vapor cycle, and Gas cycle Vapor cycle refrigeration can further be classified as: Vapor-compression refrigeration Sorption refrigeration Vapor-absorption refrigeration Adsorption refrigeration Vapor-compression cycle The vapor-compression cycle is used in most household refrigerators as well as in many large commercial and industrial refrigeration systems. Figure 1 provides a schematic diagram of the components of a typical vapor-compression refrigeration system. The thermodynamics of the cycle can be analyzed on a diagram as shown in Figure 2. In this cycle, a circulating refrigerant such as a low-boiling hydrocarbon or hydrofluorocarbon enters the compressor as a vapor. From point 1 to point 2, the vapor is compressed at constant entropy and exits the compressor as a superheated vapor, at a higher temperature and pressure but still below the saturation pressure corresponding to that temperature. From point 2 to point 3 and on to point 4, the vapor travels through the condenser, which first cools the vapor until it starts condensing and then condenses it into a liquid by removing additional heat at constant pressure and temperature.
Between points 4 and 5, the liquid refrigerant goes through the expansion valve (also called a throttle valve) where its pressure abruptly decreases, causing flash evaporation and auto-refrigeration of, typically, less than half of the liquid. That results in a mixture of liquid and vapor at a lower temperature and pressure, as shown at point 5. The cold liquid-vapor mixture then travels through the evaporator coil or tubes and is completely vaporized by cooling the warm air (from the space being refrigerated) being blown by a fan across the evaporator coil or tubes. The resulting refrigerant vapor returns to the compressor inlet at point 1 to complete the thermodynamic cycle. The above discussion is based on the ideal vapor-compression refrigeration cycle, and does not take into account real-world effects like frictional pressure drop in the system, slight thermodynamic irreversibility during the compression of the refrigerant vapor, or non-ideal gas behavior, if any. Vapor-compression refrigerators can be arranged in two stages in cascade refrigeration systems, with the second stage cooling the condenser of the first stage. This can be used for achieving very low temperatures. More information about the design and performance of vapor-compression refrigeration systems is available in the classic Perry's Chemical Engineers' Handbook. Sorption cycle Absorption cycle In the early years of the twentieth century, the vapor absorption cycle using water-ammonia systems or LiBr-water was popular and widely used. After the development of the vapor compression cycle, the vapor absorption cycle lost much of its importance because of its low coefficient of performance (about one fifth of that of the vapor compression cycle). Today, the vapor absorption cycle is used mainly where fuel for heating is available but electricity is not, such as in recreational vehicles that carry LP gas. It is also used in industrial environments where plentiful waste heat overcomes its inefficiency. The absorption cycle is similar to the compression cycle, except for the method of raising the pressure of the refrigerant vapor. In the absorption system, the compressor is replaced by an absorber which dissolves the refrigerant in a suitable liquid, a liquid pump which raises the pressure and a generator which, on heat addition, drives off the refrigerant vapor from the high-pressure liquid. Some work is needed by the liquid pump but, for a given quantity of refrigerant, it is much smaller than that needed by the compressor in the vapor compression cycle. In an absorption refrigerator, a suitable combination of refrigerant and absorbent is used. The most common combinations are ammonia (refrigerant) with water (absorbent), and water (refrigerant) with lithium bromide (absorbent). Adsorption cycle The main difference from the absorption cycle is that in the adsorption cycle the adsorbent is a solid, such as silica gel, activated carbon, or zeolite, rather than a liquid; the refrigerant (adsorbate) may be ammonia, water, methanol, etc. Adsorption refrigeration technology has been extensively researched over the past 30 years because the operation of an adsorption refrigeration system is often noiseless, non-corrosive and environmentally friendly. Gas cycle When the working fluid is a gas that is compressed and expanded but does not change phase, the refrigeration cycle is called a gas cycle. Air is most often the working fluid.
As there is no condensation and evaporation intended in a gas cycle, the components corresponding to the condenser and evaporator in a vapor-compression cycle are the hot and cold gas-to-gas heat exchangers. The gas cycle is less efficient than the vapor-compression cycle because the gas cycle works on the reverse Brayton cycle instead of the reverse Rankine cycle. As such, the working fluid does not receive and reject heat at constant temperature. In the gas cycle, the refrigeration effect is equal to the product of the specific heat of the gas and the rise in temperature of the gas on the low-temperature side. Therefore, for the same cooling load, a gas refrigeration cycle needs a large mass flow rate and is bulky. Because of their lower efficiency and larger bulk, air cycle coolers are not often used nowadays in terrestrial cooling devices. However, the air cycle machine is very common on gas turbine-powered jet aircraft as cooling and ventilation units, because compressed air is readily available from the engines' compressor sections. Such units also serve the purpose of pressurizing the aircraft. Thermoelectric refrigeration Thermoelectric cooling uses the Peltier effect to create a heat flux at the junction of two types of material. This effect is commonly used in camping and portable coolers and for cooling electronic components and small instruments. Peltier coolers are often used where a traditional vapor-compression refrigerator would be impractical or take up too much space, and in cooled image sensors as an easy, compact and lightweight, if inefficient, way to achieve very low temperatures. In the latter application, two or more Peltier elements are stacked on top of each other in a cascade refrigeration configuration, with each stage being larger than the one before it, in order to extract both the heat and the waste heat generated by the previous stages. Peltier cooling has a low COP (efficiency) when compared with that of the vapor-compression cycle, so it emits more waste heat (heat generated by the Peltier element or cooling mechanism) and consumes more power for a given cooling capacity. Magnetic refrigeration Magnetic refrigeration, or adiabatic demagnetization, is a cooling technology based on the magnetocaloric effect, an intrinsic property of magnetic solids. The refrigerant is often a paramagnetic salt, such as cerium magnesium nitrate. The active magnetic dipoles in this case are those of the electron shells of the paramagnetic atoms. A strong magnetic field is applied to the refrigerant, forcing its various magnetic dipoles to align and putting these degrees of freedom of the refrigerant into a state of lowered entropy. A heat sink then absorbs the heat released by the refrigerant due to its loss of entropy. Thermal contact with the heat sink is then broken so that the system is insulated, and the magnetic field is switched off. This increases the heat capacity of the refrigerant, thus decreasing its temperature below the temperature of the heat sink. Because few materials exhibit the needed properties at room temperature, applications have so far been limited to cryogenics and research.
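Whatever the mechanism, every cycle described in this section is bounded by the performance of an ideal reversed Carnot cycle operating between the same two temperatures, following the heat-engine analysis Sadi Carnot published in 1824. As a minimal illustrative sketch of that bound (the freezer and room temperatures below are made-up example values, not figures from the text):

```python
def carnot_cop_cooling(t_cold_k: float, t_hot_k: float) -> float:
    """Upper bound on the COP of any refrigeration cycle that moves heat
    from a cold space at t_cold_k to a heat sink at t_hot_k (kelvin)."""
    if t_hot_k <= t_cold_k:
        raise ValueError("the heat sink must be warmer than the cooled space")
    return t_cold_k / (t_hot_k - t_cold_k)

# Hypothetical example: a freezer interior at -18 °C rejecting heat to a 30 °C room.
t_cold = 273.15 - 18.0
t_hot = 273.15 + 30.0
print(f"Ideal (Carnot) COP: {carnot_cop_cooling(t_cold, t_hot):.2f}")  # about 5.3
```

Real systems reach only a fraction of this figure; the gap between a cycle's actual COP and the Carnot bound is one way to compare the methods surveyed above.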
Other methods Other methods of refrigeration include the air cycle machine used in aircraft; the vortex tube, used for spot cooling when compressed air is available; thermoacoustic refrigeration, which uses sound waves in a pressurized gas to drive heat transfer and heat exchange; steam jet cooling, popular in the early 1930s for air conditioning large buildings; and thermoelastic cooling, which uses a smart metal alloy stretching and relaxing. Many Stirling cycle heat engines can be run backwards to act as a refrigerator, and therefore these engines have a niche use in cryogenics. In addition, there are other types of cryocoolers such as Gifford-McMahon coolers, Joule-Thomson coolers, pulse-tube refrigerators and, for temperatures between 2 mK and 500 mK, dilution refrigerators. Elastocaloric refrigeration Another potential solid-state refrigeration technique, and a relatively new area of study, comes from a special property of superelastic materials. These materials undergo a temperature change when experiencing an applied mechanical stress (called the elastocaloric effect). Since superelastic materials deform reversibly at high strains, the material exhibits a flattened elastic region in its stress-strain curve, caused by a phase transformation from an austenitic to a martensitic crystal phase. When a superelastic material experiences a stress in the austenitic phase, it undergoes an exothermic phase transformation to the martensitic phase, which causes the material to heat up. Removing the stress reverses the process, restores the material to its austenitic phase, and absorbs heat from the surroundings, cooling the material down. The most appealing part of this research is how potentially energy-efficient and environmentally friendly this cooling technology is. The materials used, most commonly shape-memory alloys such as nitinol and Cu-Zn-Al, provide a non-toxic source of emission-free refrigeration. Nitinol is one of the more promising alloys, with an output heat of about 66 J/cm3 and a temperature change of about 16–20 K. Due to the difficulty in manufacturing some of the shape-memory alloys, alternative materials like natural rubber have been studied. Even though rubber may not give off as much heat per volume (12 J/cm3) as the shape-memory alloys, it still generates a comparable temperature change of about 12 K and operates at a suitable temperature range, low stresses, and low cost. The main challenge, however, comes from the potential energy losses, in the form of hysteresis, often associated with this process. Since most of these losses come from incompatibilities between the two phases, proper alloy tuning is necessary to reduce losses and increase reversibility and efficiency. Balancing the transformation strain of the material against the energy losses enables a large elastocaloric effect to occur and, potentially, provides a new alternative for refrigeration. Fridge Gate The Fridge Gate method is a theoretical application of using a single logic gate to drive a refrigerator in the most energy-efficient way possible without violating the laws of thermodynamics. It operates on the fact that there are two energy states in which a particle can exist: the ground state and the excited state. The excited state carries a little more energy than the ground state, small enough so that the transition occurs with high probability. There are three components or particle types associated with the fridge gate.
The first is on the interior of the refrigerator, the second is on the outside, and the third is connected to a power supply which heats it up every so often so that it can reach the e state and replenish the source. In the cooling step, on the inside of the refrigerator, the g state particle absorbs energy from ambient particles, cooling them, and itself jumps to the e state. In the second step, on the outside of the refrigerator, where the particles are also at an e state, the particle falls to the g state, releasing energy and heating the outside particles. In the third and final step, the power supply moves a particle at the e state, and when it falls to the g state it induces an energy-neutral swap where the interior e particle is replaced by a new g particle, restarting the cycle. Passive systems When combining a passive daytime radiative cooling system with thermal insulation and evaporative cooling, one study found a 300% increase in ambient cooling power when compared to a stand-alone radiative cooling surface, which could extend the shelf life of food by 40% in humid climates and 200% in desert climates without refrigeration. The system's evaporative cooling layer would require water "re-charges" every 10 days to a month in humid areas and every 4 days in hot and dry areas. Capacity ratings The refrigeration capacity of a refrigeration system is the product of the evaporators' enthalpy rise and the evaporators' mass flow rate. The measured capacity of refrigeration is often expressed in kW or BTU/h. Domestic and commercial refrigerators may be rated in kJ/s, or Btu/h of cooling. For commercial and industrial refrigeration systems, the kilowatt (kW) is the basic unit of refrigeration, except in North America, where both the ton of refrigeration and BTU/h are used. A refrigeration system's coefficient of performance (CoP) is very important in determining a system's overall efficiency. It is defined as the refrigeration capacity in kW divided by the energy input in kW. While CoP is a very simple measure of performance, it is typically not used for industrial refrigeration in North America. Owners and manufacturers of these systems typically use the performance factor (PF). A system's PF is defined as a system's energy input in horsepower divided by its refrigeration capacity in TR. Both CoP and PF can be applied to either the entire system or to system components. For example, an individual compressor can be rated by comparing the energy needed to run the compressor versus the expected refrigeration capacity based on inlet volume flow rate. It is important to note that both CoP and PF for a refrigeration system are only defined at specific operating conditions, including temperatures and thermal loads. Moving away from the specified operating conditions can dramatically change a system's performance. Air conditioning systems used in residential applications typically use the SEER (Seasonal Energy Efficiency Ratio) for their energy performance rating. Air conditioning systems for commercial applications often use the EER (Energy Efficiency Ratio) and IEER (Integrated Energy Efficiency Ratio) for their energy efficiency performance rating.
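To make the rating arithmetic concrete, here is a minimal sketch using the standard conversion factors among kW, BTU/h, tons of refrigeration (TR) and horsepower; the 100 TR capacity and 75 kW input below are hypothetical figures chosen for illustration, not values from the text:

```python
KW_PER_TON = 3.51685      # 1 ton of refrigeration (TR) = 12,000 BTU/h
BTU_H_PER_KW = 3412.14    # 1 kW = 3412.14 BTU/h
KW_PER_HP = 0.7457        # 1 mechanical horsepower = 0.7457 kW

def cop(capacity_kw: float, input_kw: float) -> float:
    """Coefficient of performance: refrigeration capacity / energy input."""
    return capacity_kw / input_kw

def performance_factor(input_kw: float, capacity_kw: float) -> float:
    """North American industrial convention: input horsepower per TR."""
    return (input_kw / KW_PER_HP) / (capacity_kw / KW_PER_TON)

# Hypothetical system: 100 TR of cooling capacity for 75 kW of input power.
capacity_kw = 100 * KW_PER_TON  # about 351.7 kW
print(f"Capacity: {capacity_kw:.1f} kW = {capacity_kw * BTU_H_PER_KW:,.0f} BTU/h")
print(f"CoP: {cop(capacity_kw, 75.0):.2f}")                      # about 4.69
print(f"PF:  {performance_factor(75.0, capacity_kw):.2f} hp/TR") # about 1.01
```

As the text stresses, such figures are only meaningful at the stated operating conditions; the same hardware at other temperatures and loads would yield different numbers.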
See also Air conditioning Auto-defrost Beef ring Carnot heat engine Cold chain Coolgardie safe Cryocooler Darcy friction factor formulae Einstein refrigerator Freezer Heat pump Heat pump and refrigeration cycle Heating, ventilation, and air conditioning (HVAC, HVACR) Icebox Icyball Joule–Thomson effect Laser cooling Pot-in-pot refrigerator Pumpable ice technology Quantum refrigerators Redundant refrigeration system Reefer ship Refrigerant Refrigerated container Refrigerator Refrigerator car Refrigerator truck Seasonal energy efficiency ratio (SEER) Steam jet cooling Thermoacoustics Vapor-compression refrigeration Working fluid World Refrigeration Day References Further reading Refrigeration volume, ASHRAE Handbook, ASHRAE, Inc., Atlanta, GA Stoecker and Jones, Refrigeration and Air Conditioning, Tata-McGraw Hill Publishers Mathur, M.L., Mehta, F.S., Thermal Engineering Vol II MSN Encarta Encyclopedia External links Green Cooling Initiative on alternative natural refrigerants cooling technologies "The Refrigeration Cycle", from HowStuffWorks "The Refrigeration", from frigokey American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) International Institute of Refrigeration (IIR) British Institute of Refrigeration Scroll down to "Continuous-Cycle Absorption System" US Department of Energy: Technology Basics of Absorption Cycles Institute of Refrigeration Chemical processes Cooling technology Food preservation Heating, ventilation, and air conditioning Thermodynamics
Refrigeration
[ "Physics", "Chemistry", "Mathematics" ]
10,896
[ "Chemical processes", "Thermodynamics", "nan", "Chemical process engineering", "Dynamical systems" ]
46,545
https://en.wikipedia.org/wiki/Telecommunications%20network
A telecommunications network is a group of nodes interconnected by telecommunications links that are used to exchange messages between the nodes. The links may use a variety of technologies based on the methodologies of circuit switching, message switching, or packet switching, to pass messages and signals. Multiple nodes may cooperate to pass the message from an originating node to the destination node, via multiple network hops. For this routing function, each node in the network is assigned a network address for identifying and locating it on the network. The collection of addresses in the network is called the address space of the network. Examples of telecommunications networks include computer networks, the Internet, the public switched telephone network (PSTN), the global Telex network, the aeronautical ACARS network, and the wireless radio networks of cell phone telecommunication providers. Network structure In general, every telecommunications network conceptually consists of three parts, or planes (so-called because they can be thought of as being, and often are, separate overlay networks): The data plane (also user plane, bearer plane, or forwarding plane) carries the network's users' traffic, the actual payload. The control plane carries control information (also known as signaling). The management plane carries the operations, administration and management traffic required for network management. The management plane is sometimes considered a part of the control plane. Data networks Data networks are used extensively throughout the world for communication between individuals and organizations. Data networks can be connected to allow users seamless access to resources that are hosted outside of the particular provider they are connected to. The Internet is the best example of the internetworking of many data networks from different organizations. Terminals attached to IP networks like the Internet are addressed using IP addresses. Protocols of the Internet protocol suite (TCP/IP) provide the control and routing of messages across the IP data network. There are many different network structures that IP can be used across to efficiently route messages, for example: Wide area networks (WAN) Metropolitan area networks (MAN) Local area networks (LAN) There are three features that differentiate MANs from LANs or WANs: The network's size is intermediate between that of a LAN and a WAN: a MAN typically covers a physical area between 5 and 50 km in diameter. MANs do not generally belong to a single organization. The equipment that interconnects the network, the links, and the MAN itself are often owned by an association or a network provider that provides or leases the service to others. A MAN is a means for sharing resources at high speeds within the network. It often provides connections to WAN networks for access to resources outside the scope of the MAN. Data center networks also rely highly on TCP/IP for communication across machines. They connect thousands of servers, are designed to be highly robust, and provide low latency and high bandwidth. Data center network topology plays a significant role in determining the level of failure resiliency, ease of incremental expansion, communication bandwidth and latency. 
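To make the notions of network address and address space concrete, here is a minimal sketch using Python's standard ipaddress module; the prefixes and the node address are arbitrary examples, not addresses of any network described here.

```python
import ipaddress

# Two assumed address spaces (CIDR prefixes) belonging to different networks.
lan = ipaddress.ip_network("192.168.0.0/24")
wan_link = ipaddress.ip_network("10.0.0.0/30")

node = ipaddress.ip_address("192.168.0.42")

# Membership test: is this node's address inside a given address space?
for net in (lan, wan_link):
    print(f"{node} in {net}: {node in net}")

# The size of an address space follows from its prefix length.
print(f"{lan} holds {lan.num_addresses} addresses")  # 256 for a /24
```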
Capacity and speed In analogy to digital computers, whose speed and capacity have improved through advances in semiconductor technology, expressed empirically by Moore's law as a doubling of transistor density roughly every two years, the capacity and speed of telecommunications networks have followed similar advances, for similar reasons. In telecommunication, this is expressed in Edholm's law, proposed by and named after Phil Edholm in 2004. This empirical law holds that the bandwidth of telecommunication networks doubles every 18 months, which has proven true since the 1970s. The trend is evident in the Internet, cellular (mobile), wireless and wired local area networks (LANs), and personal area networks. This development is a consequence of rapid advances in metal-oxide-semiconductor technology. See also Transcoder free operation References Telecommunications engineering Network architecture Telecommunications infrastructure
Telecommunications network
[ "Engineering" ]
782
[ "Network architecture", "Electrical engineering", "Telecommunications engineering", "Computer networks engineering" ]
46,553
https://en.wikipedia.org/wiki/Escherichia%20coli%20O157%3AH7
Escherichia coli O157:H7 is a serotype of the bacterial species Escherichia coli and is one of the Shiga-like toxin–producing types of E. coli. It is a cause of disease, typically foodborne illness, through consumption of contaminated or raw food, including raw milk and undercooked ground beef. Infection with this type of pathogenic bacteria may lead to hemorrhagic diarrhea, and to kidney failure; these have been reported to cause the deaths of children younger than five years of age, of elderly patients, and of patients whose immune systems are otherwise compromised. Transmission is via the fecal–oral route, and most illness has been through distribution of contaminated raw leafy green vegetables, undercooked meat and raw milk. Signs and symptoms E. coli O157:H7 infection often causes severe, acute hemorrhagic diarrhea (although nonhemorrhagic diarrhea is also possible) and abdominal cramps. Usually little or no fever is present, and the illness resolves in 5 to 10 days. It can also sometimes be asymptomatic. In some people, particularly children under five years of age, persons whose immune systems are otherwise compromised, and the elderly, the infection can cause hemolytic–uremic syndrome (HUS), in which the red blood cells are destroyed and the kidneys fail. About 2–7% of infections lead to this complication. In the United States, HUS is the principal cause of acute kidney failure in children, and most cases of HUS are caused by E. coli O157:H7. Bacteriology Like other strains of E. coli, O157:H7 is gram-negative and oxidase-negative. Unlike many other strains, it does not ferment sorbitol, which provides a basis for clinical laboratory differentiation of the strain. Strains of E. coli that express Shiga and Shiga-like toxins gained that ability via infection with a prophage containing the structural gene coding for the toxin, and nonproducing strains may become infected and produce Shiga-like toxins after incubation with Shiga toxin–positive strains. The prophage responsible seems to have infected the strain's ancestors fairly recently, as viral particles have been observed to replicate in the host if it is stressed in some way (e.g. antibiotics). All clinical isolates of E. coli O157:H7 possess the plasmid pO157. The periplasmic catalase is encoded on pO157 and may enhance the virulence of the bacterium by providing additional oxidative protection when infecting the host. E. coli O157:H7 non-hemorrhagic strains are converted to hemorrhagic strains by lysogenic conversion after bacteriophage infection of non-hemorrhagic cells. Natural habitat While it is relatively uncommon, the E. coli serotype O157:H7 can naturally be found in the intestinal contents of some cattle, goats, and even sheep. The digestive tract of cattle lacks the Shiga toxin receptor globotriaosylceramide, and thus cattle can be asymptomatic carriers of the bacterium. The prevalence of E. coli O157:H7 in North American feedlot cattle herds ranges from 0 to 60%. Some cattle may also be so-called "super-shedders" of the bacterium. Super-shedders may be defined as cattle exhibiting rectoanal junction colonization and excreting >10³ to 10⁴ CFU g⁻¹ of feces. Super-shedders have been found to constitute a small proportion of the cattle in a feedlot (<10%) but they may account for >90% of all E. coli O157:H7 excreted. Transmission Infection with E. coli O157:H7 can come from ingestion of contaminated food or water, or oral contact with contaminated surfaces. 
Examples include undercooked ground beef, but also leafy vegetables and raw milk. Fields often become contaminated with the bacterium through irrigation processes or contaminated water naturally entering the soil. It is highly virulent, with a low infectious dose: an inoculation of fewer than 10 to 100 colony-forming units (CFU) of E. coli O157:H7 is sufficient to cause infection, compared to over a million CFU for other pathogenic E. coli strains. Diagnosis A stool culture can detect the bacterium. The sample is cultured on sorbitol-MacConkey (SMAC) agar, or the variant cefixime potassium tellurite sorbitol-MacConkey agar (CT-SMAC). On SMAC agar, O157:H7 colonies appear clear due to their inability to ferment sorbitol, while the colonies of the usual sorbitol-fermenting serotypes of E. coli appear red. Sorbitol non-fermenting colonies are tested for the somatic O157 antigen before being confirmed as E. coli O157:H7. Like all cultures, diagnosis is time-consuming with this method; swifter diagnosis is possible using a quick E. coli DNA extraction method plus polymerase chain reaction techniques. Newer technologies using fluorescent and antibody detection are also under development. Prevention Avoiding the consumption of, or contact with, unpasteurized dairy products, undercooked beef, unwashed vegetables, and non-disinfected water reduces the risk of an E. coli infection. Proper hand washing with water that has been treated with adequate levels of chlorine or other effective disinfectants after using the lavatory or changing a diaper, especially among children or those with diarrhea, reduces the risk of transmission. E. coli O157:H7 infection is a nationally reportable disease in the US, Great Britain, and Germany. It is also reportable in most states of Australia, including Queensland. Treatment While fluid replacement and blood pressure support may be necessary to prevent death from dehydration, most patients recover without treatment in 5–10 days. There is no evidence that antibiotics improve the course of disease, and treatment with antibiotics may precipitate hemolytic–uremic syndrome (HUS). The antibiotics are thought to trigger prophage induction, and the prophages released by the dying bacteria infect other susceptible bacteria, converting them into toxin-producing forms. Antidiarrheal agents, such as loperamide (Imodium), should also be avoided as they may prolong the duration of the infection. Certain novel treatment strategies, such as the use of anti-induction strategies to prevent toxin production and the use of anti-Shiga toxin antibodies, have also been proposed. History Molecular analyses estimate that the common ancestor of Escherichia coli O157:H7 originated in the Netherlands around 1890. It is thought that international spread occurred through animal movements, such as those of Holstein Friesian cattle. E. coli O157:H7 is thought to have moved from Europe to Australia around 1937, to the United States in 1941, to Canada in 1960, and from Australia to New Zealand in 1966. The first recorded observation of human E. coli O157:H7 infection was in 1975, in association with a sporadic case of hemorrhagic colitis, but it was not identified as pathogenic then. It was first recognized as a human pathogen following a 1982 hemorrhagic colitis outbreak in Oregon and Michigan, in which at least 47 people were sickened by eating beef hamburger patties from a fast food chain that were found to be contaminated with it. 
The United States Department of Agriculture banned the sale of ground beef contaminated with the O157:H7 strain in 1994. Culture and society The pathogen results in an estimated 2,100 hospitalizations annually in the United States. The illness is often misdiagnosed; therefore, expensive and invasive diagnostic procedures may be performed. Patients who develop HUS often require prolonged hospitalization, dialysis, and long-term follow-up. See also 1993 Jack in the Box E. coli outbreak 1996 Odwalla E. coli outbreak 2011 Germany E. coli O104:H4 outbreak 2024 McDonald's E. coli outbreak Escherichia coli O104:H4 Escherichia coli O121 Food-induced purpura List of foodborne illness outbreaks Walkerton E. coli outbreak References External links Haemolytic Uraemic Syndrome Help (HUSH) – a UK-based charity E. coli: Protecting yourself and your family from a sometimes deadly bacterium Escherichia coli O157:H7 genomes and related information at PATRIC, a Bioinformatics Resource Center funded by NIAID For more information about reducing your risk of foodborne illness, visit the US Department of Agriculture's Food Safety and Inspection Service website or The Partnership for Food Safety Education | Fight BAC! briandeer.com, report from The Sunday Times on a UK outbreak, May 17, 1998 CBS5 report on September 2006 outbreak Escherichia coli Bovine diseases Zoonoses Foodborne illnesses Infraspecific bacteria taxa Pathogenic bacteria
Escherichia coli O157:H7
[ "Biology" ]
1,959
[ "Model organisms", "Escherichia coli" ]
46,594
https://en.wikipedia.org/wiki/Straw
Straw is an agricultural byproduct consisting of the dry stalks of cereal plants after the grain and chaff have been removed. It makes up about half of the yield by weight of cereal crops such as barley, oats, rice, rye and wheat. It has a number of different uses, including fuel, livestock bedding and fodder, thatching and basket making. Straw is usually gathered and stored in a straw bale, which is a bale, or bundle, of straw tightly bound with twine, wire, or string. Straw bales may be square, rectangular, star-shaped or round, and can be very large, depending on the type of baler used. Uses Current and historic uses of straw include: Animal feed Straw may be fed as part of the roughage component of the diet to cattle or horses that are on a near maintenance level of energy requirement. It has a low digestible energy and nutrient content (as opposed to hay, which is much more nutritious). The heat generated when microorganisms in a herbivore's gut digest straw can be useful in maintaining body temperature in cold climates. Due to the risk of impaction and its poor nutrient profile, it should always be restricted to part of the diet. It may be fed as it is, or chopped into short lengths, known as chaff. Basketry Bee skeps and linen baskets are made from continuous lengths of straw coiled and bound together. The technique is known as lip work. Bedding Straw is commonly used as bedding for ruminants and horses. It may be used as bedding and food for small animals, but this often leads to injuries to the mouth, nose and eyes, as straw is quite sharp. The straw-filled mattress, also known as a palliasse, is still used by people in many parts of the world. Bioplastic Rice straw, an agricultural waste product which is not usually recovered, can be turned into bioplastic with mechanical properties akin to polystyrene in its dry state. Chemicals Straw is being investigated as a source of fine chemicals including alkaloids, flavonoids, lignins, phenols, and steroids. Construction material In many parts of the world, straw is used to bind clay and concrete. A mixture of clay and straw, known as cob, can be used as a building material. There are many recipes for making cob. When baled, straw has moderate insulation characteristics (about R-1.5/inch according to Oak Ridge National Lab and Forest Product Lab testing). It can be used, alone or in a post-and-beam construction, to build straw bale houses. When bales are used to build or insulate buildings, the straw bales are commonly finished with earthen plaster. The plastered walls provide some thermal mass, compressive and ductile structural strength, and acceptable fire resistance as well as thermal resistance (insulation), somewhat in excess of North American building code. Straw is an abundant agricultural waste product, and requires little energy to bale and transport for construction. For these reasons, straw bale construction is gaining popularity as part of passive solar and other renewable energy projects. Wheat straw can be used as a fibrous filler combined with polymers to produce composite lumber. Enviroboard can be made from straw. Strawblocks are strawbales that have been recompressed to the density of woodblocks, for compact cargo container shipment, or for straw-bale construction of load-bearing walls that support roof-loads, such as "living" or green roofs. 
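As a rough illustration of the R-1.5-per-inch figure quoted above, the sketch below estimates the whole-wall R-value of a straw-bale wall; the bale thickness and the stud-wall comparison value are illustrative assumptions only.

```python
# Rough R-value estimate for a straw-bale wall (US units, ft^2*degF*h/BTU).
R_PER_INCH = 1.5          # approximate figure for baled straw, per the text
bale_thickness_in = 18.0  # assumed thickness of a typical bale laid flat

r_bale = R_PER_INCH * bale_thickness_in
print(f"Straw bale wall: ~R-{r_bale:.0f}")   # ~R-27

# For comparison: an assumed conventional 2x6 stud wall around R-19.
print(f"Ratio vs assumed R-19 stud wall: {r_bale / 19:.1f}x")
```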
Crafts Craft usages of straw include: Corn dollies Straw marquetry Straw mobile (straw art) Straw painting Straw plaiting Scarecrows Japanese Traditional Cat's House Japanese wara art Construction site sediment control Straw bales are sometimes used for sediment control at construction sites. However, bales are often ineffective in protecting water quality and are maintenance-intensive. For these reasons the U.S. Environmental Protection Agency (EPA) and various state agencies recommend use of alternative sediment control practices where possible, such as silt fences, fiber rolls and geotextiles. They can also be used in burned area emergency response, as ground cover or as in-stream check dams. Fuel source The use of straw as a carbon-neutral energy source is increasing rapidly, especially for biobutanol. Straw or hay briquettes are a biofuel substitute for coal. Straw, processed first as briquettes, has been fed into a biogas plant at Aarhus University, Denmark, in a test to see if higher gas yields could be attained. The use of straw in large-scale biomass power plants is becoming mainstream in the EU, with several facilities already online. The straw is either used directly in the form of bales, or densified into pellets, which allows the feedstock to be transported over longer distances. Finally, torrefaction of straw with pelletisation is gaining attention, because it increases the energy density of the resource, making it possible to transport it still further. This processing step also makes storage much easier, because torrefied straw pellets are hydrophobic. Torrefied straw in the form of pellets can be directly co-fired with coal or natural gas at very high rates and make use of the processing infrastructures at existing coal and gas plants. Because the torrefied straw pellets have superior structural, chemical and combustion properties to coal, they can replace all coal and turn a coal plant into an entirely biomass-fed power station. First-generation pellets are limited to a co-firing rate of 15% in modern IGCC plants. Gardening Straw bale gardening is also popular among gardeners who do not have enough space for soil gardening. When properly conditioned, straw bales can be used as an effective soil substitute. Hats There are several styles of straw hats that are made of woven straw. Many thousands of women and children in England (primarily in the Luton district of Bedfordshire), and large numbers in the United States (mostly Massachusetts), were employed in plaiting straw for making hats. By the late 19th century, vast quantities of plaits were being imported to England from Canton in China, and in the United States most of the straw plait was imported. A fiber analogous to straw is obtained from the plant Carludovica palmata, and is used to make Panama hats. Traditional Japanese rain protection consisted of a straw hat and a mino cape. Horticulture Straw is used in cucumber houses and for mushroom growing. In Japan, certain trees are wrapped with straw to protect them from the effects of a hard winter as well as to use them as a trap for parasitic insects (see Komomaki). It is also used in ponds to reduce algae by changing the nutrient ratios in the water. The soil under strawberries is covered with straw to protect the ripe berries from dirt, and straw is also used to cover the plants during winter to prevent the cold from killing them. Straw also makes an excellent mulch. Packaging Straw is resistant to being crushed and therefore makes a good packing material. 
A company in France makes a straw mat sealed in thin plastic sheets. Straw envelopes for wine bottles have become rarer, but are still to be found at some wine merchants. Wheat straw is also used in compostable food packaging such as compostable plates. Packaging made from wheat straw can be certified compostable and will biodegrade in a commercial composting environment. Paper Straw can be pulped to make paper. Rope Rope made from straw was used by thatchers, in the packaging industry and even in iron foundries. Saekki is a traditional Korean rope made of woven straw. Shoes The Chinese wore cailu or caixie, shoes and sandals made of straw, well into modernity. Koreans wear jipsin, sandals made of straw. Several types of traditional Japanese shoes, such as waraji and zōri, are made of straw. In some parts of Germany, such as the Black Forest and the Hunsrück, people wear straw shoes at home or at carnival. Targets Heavy-gauge straw rope is coiled and sewn tightly together to make archery targets. This is no longer done entirely by hand, but is partially mechanised. Sometimes a paper or plastic target is set up in front of straw bales, which serve to support the target and provide a safe backdrop. Thatching Thatching uses straw, reed or similar materials to make a waterproof, lightweight roof with good insulation properties. Straw for this purpose (often wheat straw) is grown specially and harvested using a reaper-binder. Health and safety Dried straw is a fire hazard, as it can ignite easily if exposed to sparks or an open flame. It can also trigger allergic rhinitis in people who are hypersensitive to airborne allergens such as straw dust. See also Corn stover (corn straw) Crop residue Drinking straw Hay Straw (colour) Sheaf (agriculture), a bundle of straw Stook, a stack of straw Straw dog Wood wool Yule Goat References External links Biodegradable materials Biomass Packaging materials Building insulation materials Soil erosion Natural materials By-products
Straw
[ "Physics", "Chemistry" ]
1,887
[ "Natural materials", "Biodegradable materials", "Biodegradation", "Materials", "Matter" ]
46,676
https://en.wikipedia.org/wiki/Banach%20fixed-point%20theorem
In mathematics, the Banach fixed-point theorem (also known as the contraction mapping theorem or contractive mapping theorem or Banach–Caccioppoli theorem) is an important tool in the theory of metric spaces; it guarantees the existence and uniqueness of fixed points of certain self-maps of metric spaces and provides a constructive method to find those fixed points. It can be understood as an abstract formulation of Picard's method of successive approximations. The theorem is named after Stefan Banach (1892–1945) who first stated it in 1922. Statement Definition. Let (X, d) be a metric space. Then a map T : X → X is called a contraction mapping on X if there exists q ∈ [0, 1) such that d(T(x), T(y)) ≤ q d(x, y) for all x, y ∈ X. Banach fixed-point theorem. Let (X, d) be a non-empty complete metric space with a contraction mapping T : X → X. Then T admits a unique fixed point x* in X (i.e. T(x*) = x*). Furthermore, x* can be found as follows: start with an arbitrary element x₀ ∈ X and define a sequence (xₙ) by xₙ = T(xₙ₋₁) for n ≥ 1. Then xₙ → x*. Remark 1. The following inequalities are equivalent and describe the speed of convergence: d(x*, xₙ) ≤ (qⁿ/(1 − q)) d(x₁, x₀), d(x*, xₙ₊₁) ≤ (q/(1 − q)) d(xₙ₊₁, xₙ), and d(x*, xₙ₊₁) ≤ q d(x*, xₙ). Any such value of q is called a Lipschitz constant for T, and the smallest one is sometimes called "the best Lipschitz constant" of T. Remark 2. The condition d(T(x), T(y)) < d(x, y) for all x ≠ y is in general not enough to ensure the existence of a fixed point, as is shown by the map T : [1, ∞) → [1, ∞) with T(x) = x + 1/x, which lacks a fixed point. However, if X is compact, then this weaker assumption does imply the existence and uniqueness of a fixed point, which can easily be found as a minimizer of x ↦ d(x, T(x)): indeed, a minimizer exists by compactness, and has to be a fixed point of T. It then easily follows that the fixed point is the limit of any sequence of iterations of T. Remark 3. When using the theorem in practice, the most difficult part is typically to define X properly so that T(X) ⊆ X. Proof Let x₀ ∈ X be arbitrary and define a sequence (xₙ) by setting xₙ = T(xₙ₋₁). We first note that for all n ∈ ℕ we have the inequality d(xₙ₊₁, xₙ) ≤ qⁿ d(x₁, x₀). This follows by induction on n, using the fact that T is a contraction mapping. Then we can show that (xₙ) is a Cauchy sequence. In particular, let m, n ∈ ℕ such that m > n: d(xₘ, xₙ) ≤ d(xₘ, xₘ₋₁) + d(xₘ₋₁, xₘ₋₂) + ⋯ + d(xₙ₊₁, xₙ) ≤ (q^(m−1) + q^(m−2) + ⋯ + qⁿ) d(x₁, x₀) ≤ qⁿ (1 + q + q² + ⋯) d(x₁, x₀) = qⁿ (1/(1 − q)) d(x₁, x₀). Let ε > 0 be arbitrary; we may assume d(x₁, x₀) > 0, since otherwise x₀ is already a fixed point. Since q ∈ [0, 1), we can find a large N ∈ ℕ so that q^N < ε(1 − q)/d(x₁, x₀). Therefore, by choosing m and n greater than N we may write: d(xₘ, xₙ) ≤ qⁿ (1/(1 − q)) d(x₁, x₀) < (ε(1 − q)/d(x₁, x₀)) (1/(1 − q)) d(x₁, x₀) = ε. This proves that the sequence (xₙ) is Cauchy. By completeness of (X, d), the sequence has a limit x* ∈ X. Furthermore, x* must be a fixed point of T: x* = limₙ→∞ xₙ = limₙ→∞ T(xₙ₋₁) = T(limₙ→∞ xₙ₋₁) = T(x*). As a contraction mapping, T is continuous, so bringing the limit inside T was justified. Lastly, T cannot have more than one fixed point in (X, d), since any pair of distinct fixed points p₁ and p₂ would contradict the contraction of T: d(p₁, p₂) = d(T(p₁), T(p₂)) ≤ q d(p₁, p₂) < d(p₁, p₂). Applications A standard application is the proof of the Picard–Lindelöf theorem about the existence and uniqueness of solutions to certain ordinary differential equations. The sought solution of the differential equation is expressed as a fixed point of a suitable integral operator on the space of continuous functions under the uniform norm. The Banach fixed-point theorem is then used to show that this integral operator has a unique fixed point. One consequence of the Banach fixed-point theorem is that small Lipschitz perturbations of the identity are bi-Lipschitz homeomorphisms. Let Ω be an open set of a Banach space E; let I : Ω → E denote the identity (inclusion) map and let g : Ω → E be a Lipschitz map of constant k < 1. Then Ω′ := (I + g)(Ω) is an open subset of E: precisely, for any x in Ω such that B(x, r) ⊂ Ω one has B((I + g)(x), r(1 − k)) ⊂ Ω′; and I + g : Ω → Ω′ is a bi-Lipschitz homeomorphism; precisely, (I + g)⁻¹ is still of the form I + h : Ω′ → Ω with h a Lipschitz map of constant k/(1 − k). A direct consequence of this result yields the proof of the inverse function theorem. 
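The iteration in the theorem's statement is also easy to run numerically. The sketch below, a minimal illustration rather than anything from the sources, iterates the contraction T(x) = cos(x), which maps [0, 1] into itself with Lipschitz constant q = sin(1) < 1, and checks the a priori error bound qⁿ/(1 − q) · d(x₁, x₀) from Remark 1.

```python
import math

def banach_iterate(T, x0, q, tol=1e-12, max_iter=1000):
    """Fixed-point iteration x_{n+1} = T(x_n), with the a priori bound from Remark 1."""
    x_prev, x = x0, T(x0)
    d10 = abs(x - x_prev)           # d(x1, x0)
    n = 1
    while abs(x - x_prev) > tol and n < max_iter:
        x_prev, x = x, T(x)
        n += 1
    # A priori estimate of the remaining error after n steps: q^n/(1-q) * d(x1, x0)
    bound = q**n / (1 - q) * d10
    return x, n, bound

# T(x) = cos(x) is a contraction on [0, 1]: |T'(x)| = |sin(x)| <= sin(1) < 1 there.
q = math.sin(1.0)
x_star, steps, bound = banach_iterate(math.cos, 0.5, q)
print(f"fixed point ~ {x_star:.12f} after {steps} iterations")
print(f"a priori error bound: {bound:.2e}")
print(f"residual |T(x)-x|   : {abs(math.cos(x_star) - x_star):.2e}")
```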
The theorem can also be used to give sufficient conditions under which Newton's method of successive approximations is guaranteed to work, and similarly for Chebyshev's third-order method. It can be used to prove existence and uniqueness of solutions to integral equations. It can be used to give a proof of the Nash embedding theorem. It can be used to prove existence and uniqueness of solutions to value iteration, policy iteration, and policy evaluation of reinforcement learning. It can be used to prove existence and uniqueness of an equilibrium in Cournot competition, and other dynamic economic models. Converses Several converses of the Banach contraction principle exist. The following is due to Czesław Bessaga, from 1959: Let f : X → X be a map of an abstract set such that each iterate fⁿ has a unique fixed point. Let q ∈ (0, 1); then there exists a complete metric on X such that f is contractive, and q is the contraction constant. Indeed, very weak assumptions suffice to obtain such a kind of converse. For example, if f : X → X is a map on a T1 topological space with a unique fixed point a, such that for each x ∈ X we have fⁿ(x) → a, then there already exists a metric on X with respect to which f satisfies the conditions of the Banach contraction principle with contraction constant 1/2. In this case the metric is in fact an ultrametric. Generalizations There are a number of generalizations (some of which are immediate corollaries). Let T : X → X be a map on a complete non-empty metric space. Then, for example, some generalizations of the Banach fixed-point theorem are: Assume that some iterate Tⁿ of T is a contraction. Then T has a unique fixed point. Assume that for each n, there exist cₙ such that d(Tⁿ(x), Tⁿ(y)) ≤ cₙ d(x, y) for all x and y, and that Σₙ cₙ < ∞. Then T has a unique fixed point. In applications, the existence and uniqueness of a fixed point often can be shown directly with the standard Banach fixed-point theorem, by a suitable choice of the metric that makes the map T a contraction. Indeed, the above result by Bessaga strongly suggests looking for such a metric. See also the article on fixed point theorems in infinite-dimensional spaces for generalizations. A different class of generalizations arises from suitable generalizations of the notion of metric space, e.g. by weakening the defining axioms for the notion of metric. Some of these have applications, e.g., in the theory of programming semantics in theoretical computer science. Example An application of the Banach fixed-point theorem and fixed-point iteration can be used to quickly obtain an approximation of π with high accuracy. Consider the function f(x) = sin(x) + x. It can be verified that π is a fixed point of f, and that f maps the interval [π − 1, π + 1] to itself. Moreover, f′(x) = 1 + cos(x), and it can be verified that 0 ≤ f′(x) ≤ 1 − cos(1) < 1 on this interval. Therefore, by an application of the mean value theorem, f has a Lipschitz constant less than 1 (namely 1 − cos(1) ≈ 0.46). Applying the Banach fixed-point theorem shows that π is the unique fixed point of f on the interval, allowing for fixed-point iteration to be used. For example, the value 3 may be chosen to start the fixed-point iteration, as 3 ∈ [π − 1, π + 1]. The Banach fixed-point theorem may be used to conclude that limₙ→∞ fⁿ(3) = π. Applying f to 3 only three times already yields an expansion of π accurate to 33 digits: f(f(f(3))) ≈ 3.141592653589793238462643383279502. See also Brouwer fixed-point theorem Caristi fixed-point theorem Contraction mapping Fichera's existence principle Fixed-point iteration Fixed-point theorems Infinite compositions of analytic functions Kantorovich theorem Notes References See chapter 7. 
Articles containing proofs Eponymous theorems of mathematics Fixed-point theorems Metric geometry Topology
Banach fixed-point theorem
[ "Physics", "Mathematics" ]
1,556
[ "Theorems in mathematical analysis", "Articles containing proofs", "Fixed-point theorems", "Theorems in topology", "Topology", "Space", "Geometry", "Spacetime" ]
47,011
https://en.wikipedia.org/wiki/Arrhenius%20equation
In physical chemistry, the Arrhenius equation is a formula for the temperature dependence of reaction rates. The equation was proposed by Svante Arrhenius in 1889, based on the work of Dutch chemist Jacobus Henricus van 't Hoff who had noted in 1884 that the van 't Hoff equation for the temperature dependence of equilibrium constants suggests such a formula for the rates of both forward and reverse reactions. This equation has a vast and important application in determining the rate of chemical reactions and for calculation of energy of activation. Arrhenius provided a physical justification and interpretation for the formula. Currently, it is best seen as an empirical relationship. It can be used to model the temperature variation of diffusion coefficients, population of crystal vacancies, creep rates, and many other thermally induced processes and reactions. The Eyring equation, developed in 1935, also expresses the relationship between rate and energy. Formulation The Arrhenius equation describes the exponential dependence of the rate constant of a chemical reaction on the absolute temperature as k = A e^(−Ea/(RT)), where k is the rate constant (frequency of collisions resulting in a reaction), T is the absolute temperature, A is the pre-exponential factor or Arrhenius factor or frequency factor (Arrhenius originally considered A to be a temperature-independent constant for each chemical reaction; however, more recent treatments include some temperature dependence – see below), Ea is the molar activation energy for the reaction, and R is the universal gas constant. Alternatively, the equation may be expressed as k = A e^(−Ea/(kB T)), where Ea is the activation energy for the reaction (in the same unit as kBT), and kB is the Boltzmann constant. The only difference is the unit of Ea: the former form uses energy per mole, which is common in chemistry, while the latter form uses energy per molecule directly, which is common in physics. The different units are accounted for in using either the gas constant, R, or the Boltzmann constant, kB, as the multiplier of temperature T. The units of the pre-exponential factor A are identical to those of the rate constant and will vary depending on the order of the reaction. If the reaction is first order it has the unit s⁻¹, and for that reason it is often called the frequency factor or attempt frequency of the reaction. Most simply, k is the number of collisions that result in a reaction per second, A is the number of collisions (leading to a reaction or not) per second occurring with the proper orientation to react, and e^(−Ea/(RT)) is the probability that any given collision will result in a reaction. It can be seen that either increasing the temperature or decreasing the activation energy (for example through the use of catalysts) will result in an increase in rate of reaction. Given the small temperature range of kinetic studies, it is reasonable to approximate the activation energy as being independent of the temperature. Similarly, under a wide range of practical conditions, the weak temperature dependence of the pre-exponential factor is negligible compared to the temperature dependence of the factor e^(−Ea/(RT)); except in the case of "barrierless" diffusion-limited reactions, in which case the pre-exponential factor is dominant and is directly observable. With this equation it can be roughly estimated that the rate of reaction increases by a factor of about 2 to 3 for every 10 °C rise in temperature, for common values of activation energy and temperature range. The factor e^(−Ea/(RT)) denotes the fraction of molecules with energy greater than or equal to Ea. 
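As a numerical illustration of the formulation above, the sketch below evaluates k = A e^(−Ea/(RT)) for a 10 °C rise near room temperature and then recovers Ea from the linear form used in Arrhenius plots; the values of A and Ea are arbitrary assumptions chosen only to show the two-to-threefold rate increase mentioned in the text.

```python
import math

R = 8.314          # gas constant, J/(mol*K)
A = 1.0e13         # assumed pre-exponential factor, s^-1 (first-order reaction)
Ea = 80_000.0      # assumed activation energy, J/mol

def k_arrhenius(T):
    """Rate constant from the Arrhenius equation, k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

# Rate increase for a 10 degC rise near room temperature.
k298, k308 = k_arrhenius(298.15), k_arrhenius(308.15)
print(f"k(308 K)/k(298 K) = {k308 / k298:.2f}")   # ~2.9, i.e. roughly a tripling

# Recovering Ea from two points, as an Arrhenius plot would:
# ln k = ln A - (Ea/R) * (1/T), so the slope of ln k vs 1/T is -Ea/R.
slope = (math.log(k308) - math.log(k298)) / (1/308.15 - 1/298.15)
print(f"Ea recovered from slope: {-slope * R / 1000:.1f} kJ/mol")  # 80.0
```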
Derivation Van 't Hoff argued that the temperature T of a reaction and the standard equilibrium constant k_eq exhibit the relation: (1) d(ln k_eq)/dT = ΔU⁰/(RT²), where ΔU⁰ denotes the apposite standard internal energy change value. Let k_f and k_b respectively denote the forward and backward reaction rates of the reaction of interest; then k_eq = k_f/k_b, an equation from which d(ln k_f)/dT − d(ln k_b)/dT = ΔU⁰/(RT²) naturally follows upon substituting the expression for k_eq in eq. (1). The preceding equation can be broken down into the following two equations: (2) d(ln k_f)/dT = constant + E_f/(RT²) and (3) d(ln k_b)/dT = constant + E_b/(RT²), where E_f and E_b are the activation energies associated with the forward and backward reactions respectively, with ΔU⁰ = E_f − E_b. Experimental findings suggest that the constants in eq. (2) and eq. (3) can be treated as being equal to zero, so that d(ln k_f)/dT = E_f/(RT²) and d(ln k_b)/dT = E_b/(RT²). Integrating these equations and taking the exponential yields the results k_f = A_f e^(−E_f/(RT)) and k_b = A_b e^(−E_b/(RT)), where each pre-exponential factor A_f or A_b is mathematically the exponential of the constant of integration for the respective indefinite integral in question. Arrhenius plot Taking the natural logarithm of the Arrhenius equation yields: ln k = ln A − Ea/(RT). Rearranging yields: ln k = (−Ea/R)(1/T) + ln A. This has the same form as an equation for a straight line, y = mx + c, where x is the reciprocal of T. So, when a reaction has a rate constant obeying the Arrhenius equation, a plot of ln k versus T⁻¹ gives a straight line, whose slope and intercept can be used to determine Ea and A respectively. This procedure is common in experimental chemical kinetics. The activation energy is simply obtained by multiplying by (−R) the slope of the straight line drawn from a plot of ln k versus (1/T): Ea = −R · [d(ln k)/d(1/T)]. Modified Arrhenius equation The modified Arrhenius equation makes explicit the temperature dependence of the pre-exponential factor. The modified equation is usually of the form k = A (T/T₀)ⁿ e^(−Ea/(RT)), where T₀ is a reference temperature. The original Arrhenius expression above corresponds to n = 0. Fitted rate constants typically lie in the range −1 < n < 1. Theoretical analyses yield various predictions for n. It has been pointed out that "it is not feasible to establish, on the basis of temperature studies of the rate constant, whether the predicted T^(1/2) dependence of the pre-exponential factor is observed experimentally". However, if additional evidence is available, from theory and/or from experiment (such as density dependence), there is no obstacle to incisive tests of the Arrhenius law. Another common modification is the stretched exponential form k = A exp[−(Ea/(RT))^β], where β is a dimensionless number of order 1. This is typically regarded as a purely empirical correction or fudge factor to make the model fit the data, but can have theoretical meaning, for example showing the presence of a range of activation energies or in special cases like the Mott variable range hopping. Theoretical interpretation Arrhenius's concept of activation energy Arrhenius argued that for reactants to transform into products, they must first acquire a minimum amount of energy, called the activation energy Ea. At an absolute temperature T, the fraction of molecules that have a kinetic energy greater than Ea can be calculated from statistical mechanics. The concept of activation energy explains the exponential nature of the relationship, and in one way or another, it is present in all kinetic theories. The calculations for reaction rate constants involve an energy averaging over a Maxwell–Boltzmann distribution with Ea as lower bound and so are often of the type of incomplete gamma functions, which turn out to be proportional to e^(−Ea/(RT)). Collision theory One approach is the collision theory of chemical reactions, developed by Max Trautz and William Lewis in the years 1916–18. 
In this theory, molecules are supposed to react if they collide with a relative kinetic energy along their line of centers that exceeds Ea. The number of binary collisions between two unlike molecules per second per unit volume is found to be z_AB = N_A² d_AB² √(8π k_B T/μ_AB) [A][B], where NA is the Avogadro constant, dAB is the average diameter of A and B, T is the temperature which is multiplied by the Boltzmann constant kB to convert to energy, and μAB is the reduced mass. The rate constant is then calculated as k = z_AB e^(−Ea/(RT)), so that the collision theory predicts that the pre-exponential factor is equal to the collision number zAB. However for many reactions this agrees poorly with experiment, so the rate constant is written instead as k = ρ z_AB e^(−Ea/(RT)). Here ρ is an empirical steric factor, often much less than 1.00, which is interpreted as the fraction of sufficiently energetic collisions in which the two molecules have the correct mutual orientation to react. Transition state theory The Eyring equation, another Arrhenius-like expression, appears in the "transition state theory" of chemical reactions, formulated by Eugene Wigner, Henry Eyring, Michael Polanyi and M. G. Evans in the 1930s. The Eyring equation can be written: k = (k_B T/h) e^(−ΔG‡/(RT)) = (k_B T/h) e^(ΔS‡/R) e^(−ΔH‡/(RT)), where ΔG‡ is the Gibbs energy of activation, ΔS‡ is the entropy of activation, ΔH‡ is the enthalpy of activation, kB is the Boltzmann constant, and h is the Planck constant. At first sight this looks like an exponential multiplied by a factor that is linear in temperature. However, free energy is itself a temperature-dependent quantity. The free energy of activation is the difference of an enthalpy term and an entropy term multiplied by the absolute temperature. The pre-exponential factor depends primarily on the entropy of activation. The overall expression again takes the form of an Arrhenius exponential (of enthalpy rather than energy) multiplied by a slowly varying function of T. The precise form of the temperature dependence depends upon the reaction, and can be calculated using formulas from statistical mechanics involving the partition functions of the reactants and of the activated complex. Limitations of the idea of Arrhenius activation energy Both the Arrhenius activation energy and the rate constant k are experimentally determined, and represent macroscopic reaction-specific parameters that are not simply related to threshold energies and the success of individual collisions at the molecular level. Consider a particular collision (an elementary reaction) between molecules A and B. The collision angle, the relative translational energy, and the internal (particularly vibrational) energy will all determine the chance that the collision will produce a product molecule AB. Macroscopic measurements of E and k are the result of many individual collisions with differing collision parameters. To probe reaction rates at the molecular level, experiments are conducted under near-collisional conditions, and this subject is often called molecular reaction dynamics. Another situation where the explanation of the Arrhenius equation parameters falls short is in heterogeneous catalysis, especially for reactions that show Langmuir-Hinshelwood kinetics. Clearly, molecules on surfaces do not "collide" directly, and a simple molecular cross-section does not apply here. Instead, the pre-exponential factor reflects the travel across the surface towards the active site. There are deviations from the Arrhenius law during the glass transition in all classes of glass-forming matter. 
The Arrhenius law predicts that the motion of the structural units (atoms, molecules, ions, etc.) should slow down at a slower rate through the glass transition than is experimentally observed. In other words, the structural units slow down at a faster rate than is predicted by the Arrhenius law. This observation is made reasonable assuming that the units must overcome an energy barrier by means of a thermal activation energy. The thermal energy must be high enough to allow for translational motion of the units, which leads to viscous flow of the material. See also Accelerated aging Eyring equation Q10 (temperature coefficient) Van 't Hoff equation Clausius–Clapeyron relation Gibbs–Helmholtz equation Cherry blossom front – predicted using the Arrhenius equation References Bibliography External links Carbon Dioxide solubility in Polyethylene – Using Arrhenius equation for calculating species solubility in polymers Chemical kinetics Eponymous equations of physics Statistical mechanics
Arrhenius equation
[ "Physics", "Chemistry" ]
2,243
[ "Chemical reaction engineering", "Equations of physics", "Eponymous equations of physics", "Statistical mechanics", "Chemical kinetics" ]
47,012
https://en.wikipedia.org/wiki/Peering
In computer networking, peering is a voluntary interconnection of administratively separate Internet networks for the purpose of exchanging traffic between the "down-stream" users of each network. Peering is settlement-free, also known as "bill-and-keep" or "sender keeps all", meaning that neither party pays the other in association with the exchange of traffic; instead, each derives and retains revenue from its own customers. An agreement by two or more networks to peer is instantiated by a physical interconnection of the networks, an exchange of routing information through the Border Gateway Protocol (BGP) routing protocol, tacit agreement to norms of conduct and, in some extraordinarily rare cases (0.07%), a formalized contractual document. In 0.02% of cases the word "peering" is used to describe situations where there is some settlement involved. Because these outliers can be viewed as creating ambiguity, the phrase "settlement-free peering" is sometimes used to explicitly denote normal cost-free peering. History The first Internet exchange point was the Commercial Internet eXchange (CIX), formed by Alternet/UUNET (now Verizon Business), PSI, and CERFNET to exchange traffic without regard for whether the traffic complied with the acceptable use policy (AUP) of the NSFNet or ANS' interconnection policy. The CIX infrastructure consisted of a single router, managed by PSI, and was initially located in Santa Clara, California. Paying CIX members were allowed to attach to the router directly or via leased lines. After some time, the router was also attached to the Pacific Bell SMDS cloud. The router was later moved to the Palo Alto Internet Exchange, or PAIX, which was developed and operated by Digital Equipment Corporation (DEC). Because the CIX operated at OSI layer 3, rather than OSI layer 2, and because it was not neutral, in the sense that it was operated by one of its participants rather than by all of them collectively, and it conducted lobbying activities supported by some of its participants and not by others, it would not today be considered an Internet exchange point. Nonetheless, it was the first thing to bear that name. The first exchange point to resemble modern, neutral, Ethernet-based exchanges was the Metropolitan Area Ethernet, or MAE, in Tysons Corner, Virginia. When the United States government de-funded the NSFNET backbone, Internet exchange points were needed to replace its function, and initial governmental funding was used to aid the preexisting MAE and bootstrap three other exchanges, which they dubbed NAPs, or "Network Access Points," in accordance with the terminology of the National Information Infrastructure document. All four are now defunct or no longer functioning as Internet exchange points: MAE-East – Located in Tysons Corner, Virginia, and later relocated to Ashburn, Virginia Chicago NAP – Operated by Ameritech and located in Chicago, Illinois New York NAP – Operated by Sprint and located in Pennsauken, New Jersey San Francisco NAP – Operated by PacBell and located in the Bay Area As the Internet grew, and traffic levels increased, these NAPs became a network bottleneck. Most of the early NAPs utilized FDDI technology, which provided only 100 Mbit/s of capacity to each participant. Some of these exchanges upgraded to ATM technology, which provided OC-3 (155 Mbit/s) and OC-12 (622 Mbit/s) of capacity. 
Other prospective exchange point operators moved directly into offering Ethernet technology, such as gigabit Ethernet (1,000 Mbit/s), which quickly became the predominant choice for Internet exchange points due to the reduced cost and increased capacity offered. Today, almost all significant exchange points operate solely over Ethernet, and most of the largest exchange points offer 10, 40, and even 100 gigabit service. During the dot-com boom, many exchange point and carrier-neutral colocation providers had plans to build as many as 50 locations to promote carrier interconnection in the United States alone. Essentially all of these plans were abandoned following the dot-com bust, and today it is considered both economically and technically infeasible to support this level of interconnection among even the largest of networks. How peering works The Internet is a collection of separate and distinct networks referred to as autonomous systems, each one consisting of a set of globally unique IP addresses and a unique global BGP routing policy. The interconnection relationships between Autonomous Systems are of exactly two types: Peering - Two networks exchange traffic between their users freely, and for mutual benefit. Transit – One network pays another network for access to the Internet. Therefore, in order for a network to reach any specific other network on the Internet, it must either: Sell transit service to that network or a chain of resellers ending at that network (making them a 'customer'), Peer with that network or with a network which sells transit service to that network, or Buy transit service from any other network (which is then responsible for providing interconnection to the rest of the Internet). The Internet is based on the principle of global or end-to-end reachability, which means that any Internet user can transparently exchange traffic with any other Internet user. Therefore, a network is connected to the Internet if and only if it buys transit, or peers with every other network which also does not purchase transit (which together constitute a "default free zone" or "DFZ"). Public peering is done at Internet exchange points (IXPs), while private peering can be done with direct links between networks. Motivations for peering Peering involves two networks coming together to exchange traffic with each other freely, and for mutual benefit. This 'mutual benefit' is most often the motivation behind peering, which is often described solely by "reduced costs for transit services". Other less tangible motivations can include: Increased redundancy (by reducing dependence on one or more transit providers). Increased capacity for extremely large amounts of traffic (distributing traffic across many networks). Increased routing control over one's traffic. Improved performance (attempting to bypass potential bottlenecks with a "direct" path). Improved perception of one's network (being able to claim a "higher tier"). Ease of requesting for emergency aid (from friendly peers). Physical interconnections for peering The physical interconnections used for peering are categorized into two types: Public peering – Interconnection utilizing a multi-party shared switch fabric such as an Ethernet switch. Private peering – Interconnection utilizing a point-to-point link between two parties. Public peering Public peering is accomplished across a Layer 2 access technology, generally called a shared fabric. 
At these locations, multiple carriers interconnect with one or more other carriers across a single physical port. Historically, public peering locations were known as network access points (NAPs). Today they are most often called exchange points or Internet exchanges ("IXP"). Many of the largest exchange points in the world can have hundreds of participants, and some span multiple buildings and colocation facilities across a city. Since public peering allows networks interested in peering to interconnect with many other networks through a single port, it is often considered to offer "less capacity" than private peering, but to a larger number of networks. Many smaller networks, or networks which are just beginning to peer, find that public peering exchange points provide an excellent way to meet and interconnect with other networks which may be open to peering with them. Some larger networks utilize public peering as a way to aggregate a large number of "smaller peers", or as a location for conducting low-cost "trial peering" without the expense of provisioning private peering on a temporary basis, while other larger networks are not willing to participate at public exchanges at all. A few exchange points, particularly in the United States, are operated by commercial carrier-neutral third parties, which are critical for achieving cost-effective data center connectivity. Private peering Private peering is the direct interconnection between only two networks, across a Layer 1 or 2 medium that offers dedicated capacity that is not shared by any other parties. Early in the history of the Internet, many private peers occurred across "telco" provisioned SONET circuits between individual carrier-owned facilities. Today, most private interconnections occur at carrier hotels or carrier neutral colocation facilities, where a direct crossconnect can be provisioned between participants within the same building, usually for a much lower cost than telco circuits. Most of the traffic on the Internet, especially traffic between the largest networks, occurs via private peering. However, because of the resources required to provision each private peer, many networks are unwilling to provide private peering to "small" networks, or to "new" networks which have not yet proven that they will provide a mutual benefit. Peering agreement Throughout the history of the Internet, there have been a spectrum of kinds of agreements between peers, ranging from handshake agreements to written contracts as required by one or more parties. Such agreements set forth the details of how traffic is to be exchanged, along with a list of expected activities which may be necessary to maintain the peering relationship, a list of activities which may be considered abusive and result in termination of the relationship, and details concerning how the relationship can be terminated. Detailed contracts of this type are typically used between the largest ISPs, as well as the ones operating in the most heavily regulated economies. As of 2011, such contracts account for less than 0.5% of all peering agreements. Depeering By definition, peering is the voluntary and free exchange of traffic between two networks, for mutual benefit. If one or both networks believes that there is no longer a mutual benefit, they may decide to cease peering: this is known as depeering. 
Some of the reasons why one network may wish to depeer another include: A desire that the other network pay settlement, either in exchange for continued peering or for transit services. A belief that the other network is "profiting unduly" from the no-settlement interconnection. Concern over traffic ratios, which is related to the fair sharing of cost for the interconnection. A desire to peer with the upstream transit provider of the peered network. Abuse of the interconnection by the other party, such as pointing default or utilizing the peer for transit. Instability of the peered network, repeated routing leaks, lack of response to network abuse issues, etc. The inability or unwillingness of the peered network to provision additional capacity for peering. The belief that the peered network is unduly peering with one's customers. Various external political factors (including personal conflicts between individuals at each network). In some situations, networks which are being depeered have been known to attempt to fight to keep the peering by intentionally breaking the connectivity between the two networks when the peer is removed, either through a deliberate act or an act of omission. The goal is to force the depeering network to have so many customer complaints that they are willing to restore peering. Examples of this include forcing traffic via a path that does not have enough capacity to handle the load, or intentionally blocking alternate routes to or from the other network. Some notable examples of these situations have included: BBN Planet vs Exodus Communications PSINet vs Cable & Wireless AOL Transit Data Network (ATDN) vs Cogent Communications France Telecom vs Cogent Communications France Telecom (Wanadoo) vs Proxad (Free) Level 3 Communications vs XO Communications Level 3 Communications vs Cogent Communications Telecom/Telefónica/Impsat/Prima vs CABASE (Argentina) Cogent Communications vs TeliaSonera Sprint-Nextel vs Cogent Communications SFR vs OVH The French ISP 'Free' vs YouTube Modern peering Donut peering model The "donut peering" model describes the intensive interconnection of small and medium-sized regional networks that make up much of the Internet. Traffic between these regional networks can be modeled as a toroid, with a core "donut hole" that is poorly interconnected to the networks around it. As detailed above, some carriers attempted to form a cartel of self-described Tier 1 networks, nominally refusing to peer with any networks outside the oligopoly. Seeking to reduce transit costs, connections between regional networks bypass those "core" networks. Data takes a more direct path, reducing latency and packet loss. This also improves resiliency between consumers and content providers via multiple connections in many locations around the world, in particular during business disputes between the core transit providers. Multilateral peering The majority of BGP AS-AS adjacencies are the product of multilateral peering agreements, or MLPAs. In multilateral peering, an unlimited number of parties agree to exchange traffic on common terms, using a single agreement to which they each accede. The multilateral peering is typically technically instantiated in a route server or route reflector (which differ from looking glasses in that they serve routes back out to participants, rather than just listening to inbound routes) to redistribute routes via a BGP hub-and-spoke topology, rather than a partial-mesh topology. 
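The economics sketched in this section are typically encoded in routing policy through the local-preference mechanism discussed under "Peering and BGP" below: customer-learned routes are preferred over settlement-free peer routes, which in turn are preferred over paid transit. The toy model below is a minimal sketch of that convention; the relationship classes, numeric values, and AS numbers are illustrative assumptions, not any particular operator's configuration.

```python
# Toy model of BGP best-path selection driven by peering economics (assumed values).
LOCAL_PREF = {"customer": 200, "peer": 150, "transit": 100}

def best_route(routes):
    """Pick the route with the highest local-preference, then the shortest AS path."""
    return max(routes, key=lambda r: (LOCAL_PREF[r["relation"]], -len(r["as_path"])))

routes_to_prefix = [
    {"via": "transit-A", "relation": "transit", "as_path": [64500, 64496]},
    {"via": "ixp-peer-B", "relation": "peer", "as_path": [64497, 64496]},
    {"via": "customer-C", "relation": "customer", "as_path": [64498, 64499, 64496]},
]

# The customer route wins despite its longer AS path: revenue-generating
# routes are preferred, then settlement-free peers, then paid transit.
print(best_route(routes_to_prefix)["via"])   # customer-C
```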
The two primary criticisms of multilateral peering are that it breaks the shared fate of the forwarding and routing planes, since the layer-2 connection between two participants could hypothetically fail while their layer-2 connections with the route server remained up, and that they force all participants to treat each other with the same, undifferentiated, routing policy. The primary benefit of multilateral peering is that it minimizes configuration for each peer, while maximizing the efficiency with which new peers can begin contributing routes to the exchange. While optional multilateral peering agreements and route servers are now widely acknowledged to be a good practice, mandatory multilateral peering agreements (MMLPAs) have long been agreed to not be a good practice. Peering locations The modern Internet operates with significantly more peering locations than at any time in the past, resulting in improved performance and better routing for the majority of the traffic on the Internet. However, in the interests of reducing costs and improving efficiency, most networks have attempted to standardize on relatively few locations within these individual regions where they will be able to quickly and efficiently interconnect with their peering partners. Exchange points As of 2021, the largest exchange points in the world are Ponto de Troca de Tráfego Metro São Paulo, in São Paulo, with 2,289 peering networks; OpenIXP in Jakarta, with 1,097 peering networks; and DE-CIX in Frankfurt, with 1,050 peering networks. The United States, with a historically larger focus on private peering and commercial public peering, has much less traffic visible on public peering switch-fabrics compared to other regions that are dominated by non-profit membership exchange points. Collectively, the many exchange points operated by Equinix are generally considered to be the largest, though traffic figures are not generally published. Other important but smaller exchange points include AMS-IX in Amsterdam, LINX and LONAP in London, and NYIIX in New York. URLs to some public traffic statistics of exchange points include: AMS-IX DE-CIX LINX MSK-IX TORIX NYIIX LAIIX TOP-IX Netnod Mix Milano ix.br SP SFMIX Peering and BGP A great deal of the complexity in the BGP routing protocol exists to aid the enforcement and fine-tuning of peering and transit agreements. BGP allows operators to define a policy that determines where traffic is routed. Three things are commonly used to determine routing: local-preference, multi exit discriminators (MEDs) and AS-Path. Local-preference is used internally within a network to differentiate classes of networks. For example, a particular network will have a higher preference set on internal and customer advertisements. Settlement free peering is then configured to be preferred over paid IP transit. Networks that speak BGP to each other can engage in multi exit discriminator exchange with each other, although most do not. When networks interconnect in several locations, MEDs can be used to reference that network's interior gateway protocol cost. This results in both networks sharing the burden of transporting each other's traffic on their own network (or cold potato). Hot-potato or nearest-exit routing, which is typically the normal behavior on the Internet, is where traffic destined to another network is delivered to the closest interconnection point. Law and policy Internet interconnection is not regulated in the same way that public telephone network interconnection is regulated. 
Nevertheless, Internet interconnection has been the subject of several areas of federal policy in the United States. Perhaps the most dramatic example of this is the attempted MCI WorldCom/Sprint merger. In this case, the Department of Justice blocked the merger specifically because of its impact on the Internet backbone market (in the earlier MCI-WorldCom merger, MCI had already been required to divest its successful "internetMCI" business in order to gain approval). In 2001, the Federal Communications Commission's advisory committee, the Network Reliability and Interoperability Council, recommended that Internet backbones publish their peering policies, something that they had been hesitant to do beforehand. The FCC has also reviewed competition in the backbone market in its Section 706 proceedings, which review whether advanced telecommunications are being provided to all Americans in a reasonable and timely manner. Finally, Internet interconnection has become an issue in the international arena under something known as the International Charging Arrangements for Internet Services (ICAIS). In the ICAIS debate, countries underserved by Internet backbones have complained that it is unfair that they must pay the full cost of connecting to an Internet exchange point in a different country, frequently the United States. These advocates argue that Internet interconnection should work like international telephone interconnection, with each party paying half of the cost. Those who argue against ICAIS point out that much of the problem would be solved by building local exchange points. They argue that a significant amount of the traffic brought to the US and exchanged there then leaves the US, using US exchange points as switching offices but not terminating in the US. In some worst-case scenarios, traffic from one side of a street is brought all the way to a distant exchange point in a foreign country, exchanged, and then returned to the other side of the street. Countries with liberalized telecommunications and open markets, where competition between backbone providers occurs, tend to oppose ICAIS. See also Autonomous system Default-free zone Interconnect agreement Internet traffic engineering Net neutrality North American Network Operators' Group (NANOG) References External links PeeringDB: A free database of peering locations and participants The peering Playbook (PDF): Strategies of peering networks Example Tier 1 Peering Requirements: AT&T (AS7018) Example Tier 1 Peering Requirements: AOL Transit Data Network (AS1668) Example Tier 2 Peering Requirements: Entanet (AS8468) Cybertelecom :: Backbones – Federal Internet Law and Policy How the 'Net works: an introduction into Peering and Transit, Ars Technica Internet architecture Net neutrality
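As a rough illustration of the best-path policy described in the "Peering and BGP" section above, the following sketch ranks candidate routes to the same prefix by local-preference and then by AS-path length. The attribute values and route data are hypothetical, and real BGP applies several further tie-breakers (MED, origin, router ID, and so on); this is a simplification, not an implementation of the protocol.

```python
# Candidate routes to one prefix, tagged with the attributes discussed
# above (values are illustrative, not from any real network).
routes = [
    {"via": "customer", "local_pref": 300, "as_path_len": 4},
    {"via": "peer",     "local_pref": 200, "as_path_len": 2},
    {"via": "transit",  "local_pref": 100, "as_path_len": 2},
]

# Simplified best-path rule: highest local-preference wins first,
# then shortest AS-path.
best = max(routes, key=lambda r: (r["local_pref"], -r["as_path_len"]))
print(best["via"])  # "customer": policy prefers customer routes
```

This mirrors the common policy ordering in which customer routes beat settlement-free peering, which in turn beats paid transit, regardless of path length.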
Peering
[ "Technology", "Engineering" ]
3,955
[ "Net neutrality", "Internet architecture", "IT infrastructure", "Computer networks engineering" ]
47,107
https://en.wikipedia.org/wiki/Phosgene
Phosgene is an organic chemical compound with the formula COCl2. It is a toxic, colorless gas; in low concentrations, its musty odor resembles that of freshly cut hay or grass. It can be thought of chemically as the double acyl chloride analog of carbonic acid, or structurally as formaldehyde with the hydrogen atoms replaced by chlorine atoms. In 2013, about 75-80% of global phosgene was consumed for isocyanates, 18% for polycarbonates and about 5% for other fine chemicals. Phosgene is extremely poisonous and was used as a chemical weapon during World War I, where it was responsible for 85,000 deaths. It is a highly potent pulmonary irritant and, being denser than air, it quickly filled enemy trenches. It is classified as a Schedule 3 substance under the Chemical Weapons Convention. In addition to its industrial production, small amounts occur from the breakdown and the combustion of organochlorine compounds, such as chloroform. Structure and basic properties Phosgene is a planar molecule, as predicted by VSEPR theory. The C=O distance is 1.18 Å, the C−Cl distance is 1.74 Å and the Cl−C−Cl angle is 111.8°. Phosgene is a carbon oxohalide and it can be considered one of the simplest acyl chlorides, being formally derived from carbonic acid. Production Industrially, phosgene is produced by passing purified carbon monoxide and chlorine gas through a bed of porous activated carbon, which serves as a catalyst: CO + Cl2 → COCl2 (ΔHrxn = −107.6 kJ/mol). This reaction is exothermic and is typically performed between 50 and 150 °C. Above 200 °C, phosgene reverts to carbon monoxide and chlorine, Keq(300 K) = 0.05. World production of this compound was estimated to be 2.74 million tonnes in 1989. Phosgene is fairly simple to produce, but is listed as a Schedule 3 substance under the Chemical Weapons Convention. As such, it is usually considered too dangerous to transport in bulk quantities. Instead, phosgene is usually produced and consumed within the same plant, as part of an "on demand" process. This involves maintaining equivalent rates of production and consumption, which keeps the amount of phosgene in the system at any one time fairly low, reducing the risks in the event of an accident. Some batch production does still take place, but efforts are made to reduce the amount of phosgene stored. Inadvertent generation Atmospheric chemistry Simple organochlorides slowly convert into phosgene when exposed to ultraviolet (UV) irradiation in the presence of oxygen. Before the discovery of the ozone hole in the late 1970s, large quantities of organochlorides were routinely used by industry, which inevitably led to them entering the atmosphere. In the 1970s-80s, phosgene levels in the troposphere were around 20-30 pptv (peak 60 pptv). These levels had not decreased significantly nearly 30 years later, despite organochloride production becoming restricted under the Montreal Protocol. Phosgene in the troposphere can last up to about 70 days and is removed primarily by hydrolysis with ambient humidity or cloudwater. Less than 1% makes it to the stratosphere, where it is expected to have a lifetime of several years, since this layer is much drier and phosgene decomposes slowly through UV photolysis. It plays a minor part in ozone depletion. Combustion Carbon tetrachloride (CCl4) can turn into phosgene when exposed to heat in air. This was a problem because carbon tetrachloride is an effective fire suppressant and was formerly in widespread use in fire extinguishers.
There are reports of fatalities caused by its use to fight fires in confined spaces. Carbon tetrachloride's generation of phosgene and its own toxicity mean it is no longer used for this purpose. Biologically Phosgene is also formed as a metabolite of chloroform, likely via the action of cytochrome P-450. History Phosgene was synthesized by the Cornish chemist John Davy (1790-1868) in 1812 by exposing a mixture of carbon monoxide and chlorine to sunlight. He named it "phosgene" from Greek phos (light) and gennaō (to give birth) in reference to the use of light to promote the reaction. It gradually became important in the chemical industry as the 19th century progressed, particularly in dye manufacturing. Reactions and uses The reaction of an organic substrate with phosgene is called phosgenation. Phosgenation of diols gives carbonates (R = H, alkyl, aryl), which can be either linear or cyclic: COCl2 + 2 ROH → (RO)2CO + 2 HCl. An example is the reaction of phosgene with bisphenol A to form polycarbonates. Phosgenation of diamines gives di-isocyanates, like toluene diisocyanate (TDI), methylene diphenyl diisocyanate (MDI), hexamethylene diisocyanate (HDI), and isophorone diisocyanate (IPDI). In these conversions, phosgene is used in excess to increase yield and minimize side reactions. The phosgene excess is separated during the work-up of resulting end products and recycled into the process, with any remaining phosgene decomposed in water using activated carbon as the catalyst. Diisocyanates are precursors to polyurethanes. More than 90% of the phosgene is used in these processes, with the biggest production units located in the United States (Texas and Louisiana), Germany, Shanghai, Japan, and South Korea. The most important producers are Dow Chemical, Covestro, and BASF. Phosgene is also used to produce monoisocyanates, used as pesticide precursors (e.g., methyl isocyanate (MIC)). Aside from the widely used reactions described above, phosgene is also used to produce acyl chlorides from carboxylic acids: RCO2H + COCl2 → RCOCl + CO2 + HCl. For this application, thionyl chloride is commonly used instead of phosgene. Laboratory uses The synthesis of isocyanates from amines illustrates the electrophilic character of this reagent and its use in introducing the equivalent synthon "CO2+": RNH2 + COCl2 → RNCO + 2 HCl, where R = alkyl, aryl. Such reactions are conducted on laboratory scale in the presence of a base such as pyridine that neutralizes the hydrogen chloride side-product. Phosgene is used to produce chloroformates such as benzyl chloroformate: C6H5CH2OH + COCl2 → C6H5CH2OC(O)Cl + HCl. In these syntheses, phosgene is used in excess to prevent formation of the corresponding carbonate ester. With amino acids, phosgene (or its trimer) reacts to give amino acid N-carboxyanhydrides. More generally, phosgene acts to link two nucleophiles by a carbonyl group. For this purpose, alternatives to phosgene such as carbonyldiimidazole (CDI) are safer, albeit expensive. CDI itself is prepared by reacting phosgene with imidazole. Phosgene is stored in metal cylinders. In the US, the cylinder valve outlet is a tapered thread known as "CGA 160" that is used only for phosgene. Alternatives to phosgene In the research laboratory, due to safety concerns, phosgene now finds only limited use in organic synthesis. A variety of substitutes have been developed, notably trichloromethyl chloroformate ("diphosgene"), a liquid at room temperature, and bis(trichloromethyl) carbonate ("triphosgene"), a crystalline substance.
Other reactions Phosgene reacts with water to release hydrogen chloride and carbon dioxide: COCl2 + H2O → CO2 + 2 HCl. Analogously, upon contact with ammonia, it converts to urea: COCl2 + 4 NH3 → CO(NH2)2 + 2 NH4Cl. Halide exchange with nitrogen trifluoride and aluminium tribromide gives COF2 and COBr2, respectively. Chemical warfare It is listed on Schedule 3 of the Chemical Weapons Convention: all production sites manufacturing more than 30 tonnes per year must be declared to the OPCW. Although less toxic than many other chemical weapons such as sarin, phosgene is still regarded as a viable chemical warfare agent because its manufacturing requirements are simpler than those of more technically advanced chemical weapons such as tabun, a first-generation nerve agent. Phosgene was first deployed as a chemical weapon by the French in 1915 in World War I. It was also used in a mixture with an equal volume of chlorine, with the chlorine helping to spread the denser phosgene. Phosgene was more potent than chlorine, though some symptoms took 24 hours or more to manifest. Following the extensive use of phosgene during World War I, it was stockpiled by various countries. Phosgene was thereafter used only infrequently, by the Imperial Japanese Army against the Chinese during the Second Sino-Japanese War. Gas weapons, such as phosgene, were produced by the IJA's Unit 731. Toxicology and safety Phosgene is an insidious poison, as the odor may not be noticed and symptoms may be slow to appear. At low concentrations, phosgene may have a pleasant odor of freshly mown hay or green corn, but has also been described as sweet, like rotten banana peels. The odor detection threshold for phosgene is 0.4 ppm, four times the threshold limit value (time-weighted average). Its high toxicity arises from the action of phosgene on the −OH, −NH2 and −SH groups of the proteins in pulmonary alveoli (the site of gas exchange), respectively forming ester, amide and thioester functional groups in accord with the reactions discussed above. This results in disruption of the blood–air barrier, eventually causing pulmonary edema. The extent of damage in the alveoli does not primarily depend on the phosgene concentration in the inhaled air; the dose (amount of inhaled phosgene) is the critical factor. The dose can be approximately calculated as "concentration" × "duration of exposure" (a short illustrative calculation appears at the end of this article). Therefore, persons in workplaces where there exists risk of accidental phosgene release usually wear indicator badges close to the nose and mouth. Such badges indicate the approximate inhaled dose, which allows for immediate treatment if the monitored dose rises above safe limits. In case of low or moderate quantities of inhaled phosgene, the exposed person is to be monitored and subjected to precautionary therapy, then released after several hours. For higher doses of inhaled phosgene (above 150 ppm × min), a pulmonary edema often develops, which can be detected by X-ray imaging and by falling blood oxygen concentration. Inhalation of such high doses can eventually result in death within hours to 2-3 days of the exposure. The risk connected with phosgene inhalation lies not so much in its toxicity (which is much lower than that of modern chemical weapons like sarin or tabun) as in its typical course: the affected person may not develop any symptoms for hours until an edema appears, at which point it could be too late for medical treatment to help. Nearly all fatalities from accidental releases during the industrial handling of phosgene occurred in this fashion.
On the other hand, a pulmonary edema treated in a timely manner usually heals over the following days or weeks without major consequences. Nonetheless, the detrimental effects on pulmonary function from untreated, chronic low-level exposure to phosgene should not be ignored; although not exposed to concentrations high enough to immediately cause an edema, many synthetic chemists (e.g. Leonidas Zervas) working with the compound were reported to experience chronic respiratory health issues and eventual respiratory failure from continuous low-level exposure. If accidental release of phosgene occurs in an industrial or laboratory setting, it can be mitigated with ammonia gas; in the case of liquid spills (e.g. of diphosgene or phosgene solutions), an absorbent and sodium carbonate can be applied. Accidents The first major phosgene-related incident happened in May 1928, when eleven tons of phosgene escaped from a war surplus store in central Hamburg. Three hundred people were poisoned, of whom ten died. In the second half of the 20th century, several fatal incidents involving phosgene occurred in Europe, Asia and the US. Most of them were investigated by the authorities and the outcomes made public. For example, phosgene was initially blamed for the Bhopal disaster, but investigations proved methyl isocyanate to be responsible for the numerous poisonings and fatalities. Recent major incidents happened in January 2010 and May 2016. An accidental release of phosgene gas at a DuPont facility in West Virginia killed one employee in 2010. The US Chemical Safety Board released a video detailing the accident. Six years later, a phosgene leak occurred at a BASF plant in South Korea, where a contractor inhaled a lethal dose of phosgene. 2023 Ohio train derailment: A freight train carrying vinyl chloride derailed and burned in East Palestine, Ohio, releasing phosgene and hydrogen chloride into the air and contaminating the Ohio River. See also Carbonyl bromide Carbonyl fluoride Oxalyl chloride Thiophosgene Thionyl chloride Perfluoroisobutene Bis(trifluoromethyl) disulfide References External links Davy's account of his discovery of phosgene International Chemical Safety Card 0007 CDC - Phosgene - NIOSH Workplace Safety and Health Topic NIOSH Pocket Guide to Chemical Hazards U.S. CDC Emergency Preparedness & Response U.S. EPA Acute Exposure Guideline Levels Regime For Schedule 3 Chemicals And Facilities Related To Such Chemicals, OPCW website CBWInfo website Use of Phosgene in WWII and in modern-day warfare US Chemical Safety Board Video on accidental release at DuPont facility in West Virginia Acyl chlorides Inorganic carbon compounds Nonmetal halides Oxychlorides Carbon oxohalides Pulmonary agents Reagents for organic chemistry World War I chemical weapons
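As a small illustration of the dose estimate described in the toxicology section above (dose ≈ concentration × duration of exposure, with pulmonary edema often developing above roughly 150 ppm × min), consider the following sketch. The exposure numbers are hypothetical and chosen only to show the arithmetic.

```python
def inhaled_dose(concentration_ppm, minutes):
    """Approximate inhaled phosgene dose as concentration x time,
    in ppm.min, following the rule of thumb described above."""
    return concentration_ppm * minutes

EDEMA_THRESHOLD = 150.0  # ppm.min, per the figure quoted above

# Example: 2 ppm breathed for 90 minutes.
dose = inhaled_dose(2.0, 90)
print(dose, dose > EDEMA_THRESHOLD)  # 180.0 True: above the threshold
```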
Phosgene
[ "Chemistry" ]
3,050
[ "Inorganic compounds", "Chemical weapons", "Inorganic carbon compounds", "Reagents for organic chemistry", "World War I chemical weapons", "Pulmonary agents" ]
47,280
https://en.wikipedia.org/wiki/Infrastructure%20bias
In economics and social policy, infrastructure bias is the influence of the location and availability of pre-existing infrastructure, such as roads and telecommunications facilities, on social and economic development. In science, infrastructure bias is the influence of existing social or scientific infrastructure on scientific observations. In astronomy and particle physics, where the availability of particular kinds of telescopes or particle accelerators constrains the types of experiments that can be done, the data that can be retrieved are biased toward what the available equipment can obtain. Procedural bias, which is related to infrastructure bias, is illustrated by a case of irregular genetic sampling of Bolivian wild potatoes. A 2000 report on previous studies' sampling found that 60% of samples had been taken near towns or roads, whereas only about 22% would be expected had the samples been taken at random (or from equidistant points, or at specifically varying distances from towns, representative of the average terrain density); a toy simulation of this effect is sketched below. References Bias Sampling (statistics) Sampling techniques Research Infrastructure
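The sampling-bias effect described above can be made concrete with a toy Monte Carlo simulation. All of the geometry here is hypothetical (a single straight road crossing a square region, with an arbitrary "near the road" band); the point is only that an unbiased protocol yields a near-road fraction set by geography, which a road-biased protocol greatly exceeds.

```python
import random

# Toy model: a 100 km x 100 km region crossed by one straight road
# along y = 50; "near the road" means within 5 km of it.
def near_road(y, half_width=5.0):
    return abs(y - 50.0) <= half_width

random.seed(1)
n = 100_000
random_frac = sum(near_road(random.uniform(0, 100)) for _ in range(n)) / n
print(f"random sampling: {random_frac:.1%} near the road")   # about 10%

# A biased protocol that deliberately draws half its samples near the
# road and the other half at random:
biased_frac = 0.5 + 0.5 * random_frac
print(f"biased sampling: {biased_frac:.1%} near the road")   # about 55%
```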
Infrastructure bias
[ "Engineering" ]
200
[ "Construction", "Infrastructure" ]
11,785,522
https://en.wikipedia.org/wiki/Weyl%20integral
In mathematics, the Weyl integral (named after Hermann Weyl) is an operator defined, as an example of fractional calculus, on functions f on the unit circle having integral 0 and a Fourier series. In other words, there is a Fourier series for f of the form f(θ) = Σ_n a_n e^(inθ), summed over all integers n, with a_0 = 0. Then the Weyl integral operator of order s is defined on Fourier series by I^s f(θ) = Σ_(n≠0) (in)^s a_n e^(inθ), where this is defined. Here s can take any real value, and for integer values k of s the series expansion is the expected k-th derivative, if k > 0, or the (−k)-th indefinite integral normalized by integration from θ = 0. The condition a_0 = 0 here plays the obvious role of excluding the need to consider division by zero. The definition is due to Hermann Weyl (1917). A numerical sketch of the operator follows the references below. See also Sobolev space References Fourier series Fractional calculus
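As a numerical sketch of the operator (not part of the original article), one can realize the multiplication of the Fourier coefficients a_n by (in)^s with the discrete Fourier transform. The sanity check below uses s = 1, which by the definition above should act as d/dθ; branch-cut subtleties of the complex power for fractional s are ignored here.

```python
import numpy as np

def weyl(f_samples, s):
    """Apply the order-s Weyl operator to uniform samples of a
    2*pi-periodic function: multiply each Fourier coefficient a_n
    by (i*n)**s, forcing the n = 0 (mean) term to zero."""
    N = len(f_samples)
    coeffs = np.fft.fft(f_samples)
    n = np.fft.fftfreq(N, d=1.0 / N)       # integer frequencies n
    mult = np.zeros(N, dtype=complex)
    nz = n != 0
    mult[nz] = (1j * n[nz]) ** s           # (in)^s, n = 0 term dropped
    return np.fft.ifft(coeffs * mult)

# Sanity check: s = 1 acts as d/d(theta), so sin -> cos.
theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
deriv = weyl(np.sin(theta), 1).real
print(np.allclose(deriv, np.cos(theta), atol=1e-8))  # expect True
```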
Weyl integral
[ "Mathematics" ]
176
[ "Fractional calculus", "Calculus" ]
11,790,568
https://en.wikipedia.org/wiki/Percolation%20threshold
The percolation threshold is a mathematical concept in percolation theory that describes the formation of long-range connectivity in random systems. Below the threshold a giant connected component does not exist, while above it there exists a giant component on the order of the system size. In engineering and coffee making, percolation represents the flow of fluids through porous media, but in the mathematics and physics worlds it generally refers to simplified lattice models of random systems or networks (graphs), and the nature of the connectivity in them. The percolation threshold is the critical value of the occupation probability p, or more generally a critical surface for a group of parameters p1, p2, ..., such that infinite connectivity (percolation) first occurs. Percolation models The most common percolation model is to take a regular lattice, like a square lattice, and make it into a random network by randomly "occupying" sites (vertices) or bonds (edges) with a statistically independent probability p. At a critical threshold pc, large clusters and long-range connectivity first appear, and this is called the percolation threshold. Depending on the method for obtaining the random network, one distinguishes between the site percolation threshold and the bond percolation threshold. More general systems have several probabilities p1, p2, etc., and the transition is characterized by a critical surface or manifold. One can also consider continuum systems, such as overlapping disks and spheres placed randomly, or the negative space (Swiss-cheese models). To understand the threshold, one can consider a quantity such as the probability that there is a continuous path from one boundary to another along occupied sites or bonds, that is, within a single cluster. For example, one can consider a square system, and ask for the probability P that there is a path from the top boundary to the bottom boundary. As a function of the occupation probability p, one finds a sigmoidal plot that goes from P = 0 at p = 0 to P = 1 at p = 1. The larger the square is compared to the lattice spacing, the sharper the transition will be. When the system size goes to infinity, P(p) will be a step function at the threshold value pc. For finite large systems, P(pc) is a constant whose value depends upon the shape of the system; for the square system discussed above, P(pc) = 1/2 exactly for any lattice by a simple symmetry argument. There are other signatures of the critical threshold. For example, the size distribution (number of clusters of size s) drops off as a power law for large s at the threshold, ns(pc) ~ s^(−τ), where τ is a dimension-dependent percolation critical exponent. For an infinite system, the critical threshold corresponds to the first point (as p increases) where the size of the clusters becomes infinite. In the systems described so far, it has been assumed that the occupation of a site or bond is completely random; this is the so-called Bernoulli percolation. For a continuum system, random occupancy corresponds to the points being placed by a Poisson process. Further variations involve correlated percolation, such as percolation clusters related to Ising and Potts models of ferromagnets, in which the bonds are put down by the Fortuin–Kasteleyn method. In bootstrap or k-sat percolation, sites and/or bonds are first occupied and then successively culled from a system if a site does not have at least k neighbors.
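The crossing probability P described above is easy to estimate numerically. The following minimal sketch (illustrative, not from any reference) performs site percolation on an L × L square lattice and tests for a top-to-bottom crossing; the crossing fraction rises steeply near the known square-lattice site threshold pc ≈ 0.5927.

```python
import numpy as np
from scipy.ndimage import label

def spans(p, L, rng):
    """One realization of site percolation on an L x L square lattice:
    occupy sites with probability p and test for a top-to-bottom
    crossing cluster (nearest-neighbour connectivity)."""
    grid = rng.random((L, L)) < p
    labels, _ = label(grid)                        # cluster labelling
    top = set(labels[0, :][labels[0, :] > 0])
    bottom = set(labels[-1, :][labels[-1, :] > 0])
    return bool(top & bottom)

rng = np.random.default_rng(0)
L, trials = 64, 200
for p in (0.50, 0.59, 0.68):
    frac = sum(spans(p, L, rng) for _ in range(trials)) / trials
    print(p, frac)   # crossing probability jumps near p_c ~ 0.5927
```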
Another important model of percolation, in a different universality class altogether, is directed percolation, where connectivity along a bond depends upon the direction of the flow. Another variation of recent interest is explosive percolation, whose thresholds are listed on that page. Over the last several decades, a tremendous amount of work has gone into finding exact and approximate values of the percolation thresholds for a variety of these systems. Exact thresholds are only known for certain two-dimensional lattices that can be broken up into a self-dual array, such that under a triangle-triangle transformation, the system remains the same. Studies using numerical methods have led to numerous improvements in algorithms and several theoretical discoveries. Simple duality in two dimensions implies that all fully triangulated lattices (e.g., the triangular, union jack, cross dual, martini dual and asanoha or 3-12 dual, and the Delaunay triangulation) have site thresholds of 1/2, and self-dual lattices (square, martini-B) have bond thresholds of 1/2. The notation such as (4,8^2) comes from Grünbaum and Shephard, and indicates that around a given vertex, going in the clockwise direction, one encounters first a square and then two octagons. Besides the eleven Archimedean lattices composed of regular polygons with every site equivalent, many other more complicated lattices with sites of different classes have been studied. Error bars in the last digit or digits are shown by numbers in parentheses. Thus, 0.729724(3) signifies 0.729724 ± 0.000003, and 0.74042195(80) signifies 0.74042195 ± 0.00000080. The error bars variously represent one or two standard deviations in net error (including statistical and expected systematic error), or an empirical confidence interval, depending upon the source. Percolation on networks For a random tree-like network (i.e., a connected network with no cycle) without degree-degree correlation, it can be shown that such a network can have a giant component, and the percolation threshold (transmission probability) is given by pc = 1/g1'(1) = ⟨k⟩/(⟨k²⟩ − ⟨k⟩), where g1 is the generating function corresponding to the excess degree distribution, ⟨k⟩ is the average degree of the network and ⟨k²⟩ is the second moment of the degree distribution. So, for example, for an ER network, since the degree distribution is a Poisson distribution, the threshold is at pc = 1/⟨k⟩. In networks with low clustering coefficient C, the critical point gets scaled by (1 − C)^(−1), such that pc = (1 − C)^(−1) ⟨k⟩/(⟨k²⟩ − ⟨k⟩). This indicates that for a given degree distribution, the clustering leads to a larger percolation threshold, mainly because for a fixed number of links, the clustering structure reinforces the core of the network with the price of diluting the global connections. For networks with high clustering, strong clustering could induce the core–periphery structure, in which the core and periphery might percolate at different critical points, and the above approximate treatment is not applicable. Percolation in 2D Thresholds on Archimedean lattices Note: sometimes "hexagonal" is used in place of honeycomb, although in some contexts a triangular lattice is also called a hexagonal lattice. z = bulk coordination number. 2D lattices with extended and complex neighborhoods In this section, sq-1,2,3 corresponds to square (NN+2NN+3NN), etc., equivalent to square-2N+3N+4N, sq(1,2,3). tri = triangular, hc = honeycomb. Here NN = nearest neighbor, 2NN = second nearest neighbor (or next-nearest neighbor), 3NN = third nearest neighbor (or next-next-nearest neighbor), etc.
These are also called 2N, 3N, 4N respectively in some papers. For overlapping or touching squares, φc (site) given here is the net fraction of sites occupied, similar to the φc in continuum percolation. The case of a 2×2 square is equivalent to percolation of a square lattice NN+2NN+3NN+4NN or sq-1,2,3,4, with the corresponding threshold. The 3×3 square corresponds to sq-1,2,3,4,5,6,7,8 with z = 44. The value of z for a k × k square is (2k+1)² − 5. For larger overlapping squares, see the references. 2D distorted lattices Here, one distorts a regular lattice of unit spacing by moving vertices uniformly within a box of given width, and considers percolation when sites are within a given Euclidean distance of each other. Overlapping shapes on 2D lattices Site threshold is the number of overlapping objects per lattice site. k is the length (net area). Overlapping squares are shown in the complex neighborhood section. Here z is the coordination number to k-mers of either orientation, which grows with the stick length k. The coverage φc is calculated from pc by counting the sites at which a stick would cause an overlap with a given site. For aligned sticks: Approximate formulas for thresholds of Archimedean lattices AB percolation and colored percolation in 2D In AB percolation, a is the proportion of A sites among B sites, and bonds are drawn between sites of opposite species. It is also called antipercolation. In colored percolation, occupied sites are assigned one of several colors with equal probability, and connection is made along bonds between neighbors of different colors. Site-bond percolation in 2D Site-bond percolation. Here p_s is the site occupation probability and p_b is the bond occupation probability, and connectivity is made only if both the sites and bonds along a path are occupied. The criticality condition becomes a curve f(p_s, p_b) = 0, and some specific critical pairs are listed below. Square lattice: Honeycomb (hexagonal) lattice: Kagome lattice: * For values on different lattices, see "An investigation of site-bond percolation on many lattices". Approximate formula for site-bond percolation on a honeycomb lattice Archimedean duals (Laves lattices) Laves lattices are the duals to the Archimedean lattices. See also Uniform tilings. 2-uniform lattices Top 3 lattices: #13 #12 #36 Bottom 3 lattices: #34 #37 #11 Top 2 lattices: #35 #30 Bottom 2 lattices: #41 #42 Top 4 lattices: #22 #23 #21 #20 Bottom 3 lattices: #16 #17 #15 Top 2 lattices: #31 #32 Bottom lattice: #33 Inhomogeneous 2-uniform lattice This figure shows something similar to the 2-uniform lattice #37, except the polygons are not all regular (there is a rectangle in the place of the two squares) and the size of the polygons is changed. This lattice is in the isoradial representation in which each polygon is inscribed in a circle of unit radius. The two squares in the 2-uniform lattice must now be represented as a single rectangle in order to satisfy the isoradial condition. The lattice is shown by black edges, and the dual lattice by red dashed lines. The green circles show the isoradial constraint on both the original and dual lattices. The yellow polygons highlight the three types of polygons on the lattice, and the pink polygons highlight the two types of polygons on the dual lattice. The lattice has vertex types (3³,4²) + (3,4,6,4), while the dual lattice has vertex types (4⁶) + (4²,5²) + (5³) + (5²,4). The critical point is where the longer bonds (on both the lattice and dual lattice) have occupation probability p = 2 sin(π/18) = 0.347296...
which is the bond percolation threshold on a triangular lattice, and the shorter bonds have occupation probability 1 − 2 sin(π/18) = 0.652703..., which is the bond percolation threshold on a hexagonal lattice. These results follow from the isoradial condition but also follow from applying the star-triangle transformation to certain stars on the honeycomb lattice. Finally, it can be generalized to having three different probabilities in the three different directions, p1, p2 and p3 for the long bonds, and 1 − p1, 1 − p2, and 1 − p3 for the short bonds, where p1, p2 and p3 satisfy the critical surface for the inhomogeneous triangular lattice. Thresholds on 2D bow-tie and martini lattices To the left, center, and right are: the martini lattice, the martini-A lattice, the martini-B lattice. Below: the martini covering/medial lattice, the same as the 2×2, 1×1 subnet for kagome-type lattices. Some other examples are the generalized bow-tie lattices (a-d) and the duals of those lattices (e-h). Thresholds on 2D covering, medial, and matching lattices Thresholds on 2D chimera non-planar lattices Thresholds on subnet lattices The 2×2, 3×3, and 4×4 subnet kagome lattices. The 2×2 subnet is also known as the "triangular kagome" lattice. Thresholds of random sequentially adsorbed objects (For more results and comparison to the jamming density, see Random sequential adsorption.) The threshold gives the fraction of sites occupied by the objects when site percolation first takes place (not at full jamming). For longer k-mers, see the references. Thresholds of full dimer coverings of two-dimensional lattices Here, we are dealing with networks that are obtained by covering a lattice with dimers, and then considering bond percolation on the remaining bonds. In discrete mathematics, this problem is known as the 'perfect matching' or the 'dimer covering' problem. Thresholds of polymers (random walks) on a square lattice The system is composed of ordinary (non-avoiding) random walks of length l on the square lattice. Thresholds of self-avoiding walks of length k added by random sequential adsorption Thresholds on 2D inhomogeneous lattices Thresholds for 2D continuum models For disks, n_c equals the critical number of disks per unit area, measured in units of the diameter D, where N is the number of objects and L is the system size. For disks, η_c equals the critical total disk area; 4η_c gives the number of disk centers within the circle of influence (radius 2r), and r_c is the critical disk radius. For ellipses, a and b are the semi-major and semi-minor axes, respectively, and the aspect ratio is a/b; for rectangles, the aspect ratio is defined analogously from the side lengths; power-law distributed disks are characterized by the exponent and cutoff of their radius distribution. φc equals the critical area fraction. For disks, some references instead use the number density of disks of a given radius. The threshold is also quoted as the critical number of objects of maximum length per unit area. For void percolation, φc is the critical void fraction. For more ellipse and rectangle values, see the references. Both ellipses and rectangles belong to the superellipses, with |x/a|^m + |y/b|^m = 1. For more percolation values of superellipses, for the thresholds of concave-shaped superdisks in monodisperse particle systems, and for binary dispersions of disks, see the references. Thresholds on 2D random and quasi-lattices *Theoretical estimate Thresholds on 2D correlated systems Assuming power-law correlations Thresholds on slabs h is the thickness of the slab, h × ∞ × ∞. Boundary conditions (b.c.) refer to the top and bottom planes of the slab.
Percolation in 3D Filling factor = fraction of space filled by touching spheres at every lattice site (for systems with uniform bond length only); also called the atomic packing factor. Filling fraction (or critical filling fraction) = filling factor × pc(site). NN = nearest neighbor, 2NN = next-nearest neighbor, 3NN = next-next-nearest neighbor, etc. k×k×k cubes are cubes of occupied sites on a lattice, and are equivalent to extended-range percolation of a cube of length (2k+1), with edges and corners removed, with z = (2k+1)³ − 12(2k−1) − 9 (center site not counted in z). Question: the bond thresholds for the hcp and fcc lattice agree within the small statistical error. Are they identical, and if not, how far apart are they? Which threshold is expected to be bigger? Similarly for the ice and diamond lattices. 3D distorted lattices Here, one distorts a regular lattice of unit spacing by moving vertices uniformly within a cube of given width, and considers percolation when sites are within a given Euclidean distance of each other. Overlapping shapes on 3D lattices Site threshold is the number of overlapping objects per lattice site. The coverage φc is the net fraction of sites covered, and v is the volume (number of cubes). Overlapping cubes are given in the section on thresholds of 3D lattices. Here z is the coordination number to k-mers of either orientation; the coverage φc is calculated from pc, with separate relations for sticks and for plaquettes. Dimer percolation in 3D Thresholds for 3D continuum models All objects are overlapping, except for jammed spheres and the polymer matrix. η_c is the total (reduced) volume, for spheres η_c = (4/3)πr³N/L³, where N is the number of objects and L is the system size. φc is the critical volume fraction, valid for overlapping randomly placed objects. For disks and plates, these are effective volumes and volume fractions. For the void ("Swiss-cheese") model, φc is the critical void fraction. For more results on void percolation around ellipsoids and elliptical plates, and for more ellipsoid percolation values, see the references. For spherocylinders, H/D is the ratio of the height to the diameter of the cylinder, which is then capped by hemispheres; additional values are given in the references. For superballs, m is the deformation parameter; the percolation values, along with the thresholds of concave-shaped superballs, are given in the references. For cuboid-like particles (superellipsoids), m is the deformation parameter; more percolation values are given in the references. Void percolation in 3D Void percolation refers to percolation in the space around overlapping objects. Here φc refers to the fraction of the space occupied by the voids (not by the particles) at the critical point, and is related to η_c by φc = e^(−η_c), with η_c defined as in the continuum percolation section above. Thresholds on 3D random and quasi-lattices Thresholds for other 3D models In drilling percolation, the site threshold represents the fraction of columns in each direction that have not been removed. For 1d drilling, separate thresholds are quoted for columns and for sites. † In tube percolation, the bond threshold represents the value of the parameter controlling the probability of putting a bond between neighboring vertical tube segments, which depends on the overlap height of two adjacent tube segments. Thresholds in different dimensional spaces Continuum models in higher dimensions The values in 4d, 5d, and 6d are tabulated in the references; φc is the critical volume fraction, valid for overlapping objects.
For void models, φc is the critical void fraction, and η_c is the total volume of the overlapping objects. Thresholds on hypercubic lattices For thresholds on high-dimensional hypercubic lattices, we have asymptotic series expansions in powers of 1/(2d). For 13-dimensional bond percolation, for example, the error with respect to the measured value is less than 10^(−6), and these formulas can be useful for higher-dimensional systems. Thresholds in other higher-dimensional lattices Thresholds in one-dimensional long-range percolation In a one-dimensional chain we establish bonds between distinct sites i and j with probability decaying as a power law of their distance, with a given exponent. Percolation occurs at a critical value that depends on that exponent. The numerically determined percolation thresholds are given by: Thresholds on hyperbolic, hierarchical, and tree lattices In these lattices there may be two percolation thresholds: the lower threshold is the probability above which infinite clusters appear, and the upper is the probability above which there is a unique infinite cluster. Note: {m,n} is the Schläfli symbol, signifying a hyperbolic lattice in which n regular m-gons meet at every vertex. For bond percolation on {P,Q}, we have by duality pc({P,Q}) + pc({Q,P}) = 1. For site percolation, the lower and upper thresholds of a triangulated lattice sum to 1 because of the self-matching of triangulated lattices. Cayley tree (Bethe lattice) with coordination number z: pc = 1/(z − 1). Thresholds for directed percolation nn = nearest neighbors. For a (d + 1)-dimensional hypercubic system, the hypercube is in d dimensions and the time direction points to the 2d nearest neighbors. Directed percolation with multiple neighbors Site-bond directed percolation p_b = bond threshold, p_s = site threshold. Site-bond percolation is equivalent to having different probabilities of connections: P_0 = probability that no sites are connected, P_2 = probability that exactly one descendant is connected to the upper vertex (two connected together), P_3 = probability that both descendants are connected to the original vertex (all three connected together). Formulas: P_0 = (1 − p_s) + p_s(1 − p_b)^2, P_2 = p_s p_b (1 − p_b), P_3 = p_s p_b^2, which satisfy the normalization P_0 + 2P_2 + P_3 = 1 (checked numerically in the sketch following this article's reference list). Exact critical manifolds of inhomogeneous systems Inhomogeneous triangular lattice bond percolation Inhomogeneous honeycomb lattice bond percolation = kagome lattice site percolation Inhomogeneous (3,12^2) lattice, site percolation Inhomogeneous union-jack lattice, site percolation with probabilities Inhomogeneous martini lattice, bond percolation Inhomogeneous martini lattice, site percolation. r = site in the star Inhomogeneous martini-A (3–7) lattice, bond percolation, with separate probabilities for the left side (top of "A" to bottom), the right side, and the cross bond. Inhomogeneous martini-B (3–5) lattice, bond percolation Inhomogeneous martini lattice with outside enclosing triangle of bonds, probabilities from inside to outside, bond percolation Inhomogeneous checkerboard lattice, bond percolation Inhomogeneous bow-tie lattice, bond percolation, where four probabilities correspond to the four bonds around the square and a fifth to the diagonal bond connecting the vertices between them. See also 2D percolation cluster Bootstrap percolation Directed percolation Effective medium approximations Epidemic models on lattices Graph theory Network science Percolation Percolation critical exponents Percolation theory Continuum percolation theory Random sequential adsorption Uniform tilings References Percolation theory Critical phenomena Random graphs
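The site-bond directed percolation probabilities given above satisfy the normalization P_0 + 2P_2 + P_3 = 1, which the following short sketch verifies numerically (an illustration, not taken from any source):

```python
from itertools import product

def connection_probs(p_s, p_b):
    """Probabilities for one vertex of the directed site-bond model:
    P0 = no descendants connected, P2 = exactly one, P3 = both."""
    P0 = (1 - p_s) + p_s * (1 - p_b) ** 2
    P2 = p_s * p_b * (1 - p_b)
    P3 = p_s * p_b ** 2
    return P0, P2, P3

# Normalization check: P0 + 2*P2 + P3 = 1 for any (p_s, p_b),
# since the bracketed terms form the binomial expansion of
# ((1 - p_b) + p_b)**2.
for p_s, p_b in product((0.2, 0.5, 0.9), repeat=2):
    P0, P2, P3 = connection_probs(p_s, p_b)
    assert abs(P0 + 2 * P2 + P3 - 1) < 1e-12
print("normalization holds")
```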
Percolation threshold
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
4,766
[ "Physical phenomena", "Phase transitions", "Critical phenomena", "Percolation theory", "Graph theory", "Combinatorics", "Mathematical relations", "Condensed matter physics", "Random graphs", "Statistical mechanics", "Dynamical systems" ]
14,453,424
https://en.wikipedia.org/wiki/Pinning%20points
In a crystalline material, a dislocation is capable of traveling throughout the lattice when relatively small stresses are applied. This movement of dislocations results in the material plastically deforming. Pinning points in the material act to halt a dislocation's movement, requiring a greater amount of force to be applied to overcome the barrier. This results in an overall strengthening of the material. Types of pinning points Point defects Point defects (as well as stationary dislocations, jogs, and kinks) present in a material create stress fields that keep traveling dislocations from coming into direct contact with them. Just as two particles of like electric charge repel one another when brought together, the dislocation is pushed away from the already present stress field. Alloying elements The introduction of one atomic species into a crystal of another creates a pinning point for multiple reasons. An alloying atom is by nature a point defect, and thus it creates a stress field when placed into a foreign crystallographic position, which can block the passage of a dislocation. However, it is possible that the alloying atom is approximately the same size as the atom it replaces, so that its presence would not stress the lattice (as occurs in cobalt-alloyed nickel). The different atom would, though, have a different elastic modulus, which would create a different terrain for the moving dislocation. A higher modulus would look like an energy barrier, and a lower one like an energy trough; both would stop its movement. Second phase precipitates The precipitation of a second phase within the lattice of a material creates physical blockades through which a dislocation cannot pass. The result is that the dislocation must bend (which requires greater energy, or a greater applied stress) around the precipitates, which inevitably leaves residual dislocation loops encircling the second phase material and shortens the original dislocation; a rough numerical estimate of the required stress is sketched below. Grain boundaries Dislocations require proper lattice ordering to move through a material. At grain boundaries there is a lattice mismatch, and every atom that lies on the boundary lacks its regular coordination. This stops dislocations that encounter the boundary from moving. Crystals Crystallography Physical quantities Materials science Metallurgy
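The stress needed for a dislocation to bow between precipitates is commonly estimated with the Orowan relation, tau ≈ Gb/L. This is a standard textbook estimate rather than something stated in the article above, and the material numbers below are merely of the right order of magnitude for an aluminium alloy.

```python
def orowan_stress(G, b, L):
    """Rough Orowan estimate of the shear stress for a dislocation
    to bow between pinning precipitates: tau ~ G*b/L, where
    G = shear modulus, b = Burgers vector, L = obstacle spacing."""
    return G * b / L

G = 26e9       # shear modulus, Pa (order of magnitude for aluminium)
b = 0.286e-9   # Burgers vector, m
L = 100e-9     # precipitate spacing, m
print(f"tau ~ {orowan_stress(G, b, L) / 1e6:.0f} MPa")  # ~74 MPa
```

The inverse dependence on L shows why a fine, closely spaced dispersion of precipitates strengthens a material more than the same volume of coarse particles.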
Pinning points
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
474
[ "Physical phenomena", "Applied and interdisciplinary physics", "Physical quantities", "Metallurgy", "Quantity", "Materials science", "Crystallography", "Crystals", "Condensed matter physics", "nan", "Physical properties" ]
14,455,119
https://en.wikipedia.org/wiki/GPR179
Probable G-protein coupled receptor 179 is a protein that in humans is encoded by the GPR179 gene. Clinical relevance Mutations in this gene have been associated with cases of congenital stationary night blindness. References Further reading G protein-coupled receptors
GPR179
[ "Chemistry" ]
50
[ "G protein-coupled receptors", "Signal transduction" ]
14,457,042
https://en.wikipedia.org/wiki/Dynactin
Dynactin is a 23-subunit protein complex that acts as a co-factor for the microtubule motor cytoplasmic dynein-1. It is built around a short filament of actin-related protein-1 (Arp1). Discovery Dynactin was identified as an activity that allowed purified cytoplasmic dynein to move membrane vesicles along microtubules in vitro. It was shown to be a multiprotein complex and named "dynactin" because of its role in dynein activation. The main features of dynactin were visualized by quick-freeze, deep-etch, rotary shadow electron microscopy. It appears as a short filament, 37 nm in length, which resembles F-actin, plus a thinner, laterally oriented arm. Antibody labelling was used to map the location of the dynactin subunits. Structure Dynactin consists of three major structural domains: (1) sidearm-shoulder: DCTN1/p150Glued, DCTN2/p50/dynamitin, DCTN3/p24/p22; (2) the Arp1 filament: ACTR1A/Arp1/centractin, actin, CapZ; and (3) the pointed end complex: Actr10/Arp11, DCTN4/p62, DCTN5/p25, and DCTN6/p27. A 4 Å cryo-EM structure of dynactin revealed that its filament contains eight Arp1 molecules, one β-actin and one Arp11. In the pointed end complex, p62/DCTN4 binds to Arp11 and β-actin, and p25 and p27 bind both p62 and Arp11. At the barbed end, the capping protein (CapZαβ) binds the Arp1 filament in the same way that it binds actin, although with more charge complementarity, explaining why it binds the Arp1 filament more tightly than actin. The shoulder contains two copies of p150Glued/DCTN1, four copies of p50/DCTN2 and two copies of p24/DCTN3. These proteins form long bundles of alpha helices, which wrap over each other and contact the Arp1 filament. The N-termini of p50/DCTN2 emerge from the shoulder and coat the filament, providing a mechanism for controlling the filament length. The C-termini of the p150Glued/DCTN1 dimer are embedded in the shoulder, whereas the N-terminal 1227 amino acids form the projecting arm. The arm consists of an N-terminal CAPGly domain which can bind the C-terminal tails of microtubules and the microtubule plus end binding protein EB1. This is followed by a basic region, also involved in microtubule binding, a folded-back coiled coil (CC1), the intercoiled domain (ICD) and a second coiled coil domain (CC2). The p150Glued arm can dock against the side of the Arp1 filament and pointed end complex. DCTN2 (dynamitin) is also involved in anchoring microtubules to centrosomes and may play a role in synapse formation during brain development. Arp1 has been suggested as the domain for dynactin binding to membrane vesicles (such as Golgi or late endosomes) through its association with β-spectrin. The pointed end complex (PEC) has been shown to be involved in selective cargo binding. PEC subunits p62/DCTN4 and Arp11/Actr10 are essential for dynactin complex integrity and dynactin/dynein targeting to the nuclear envelope before mitosis. Actr10, along with Drp1 (dynamin-related protein 1), has been documented as vital to the attachment of mitochondria to the dynactin complex. Dynactin p25/DCTN5 and p27/DCTN6 are not essential for dynactin complex integrity, but are required for early and recycling endosome transport during interphase and regulation of the spindle assembly checkpoint in mitosis. Interaction with dynein Dynein and dynactin were reported to interact directly by the binding of dynein intermediate chains with p150Glued. The affinity of this interaction is around 3.5 μM.
Dynein and dynactin do not run together in a sucrose gradient, but can be induced to form a tight complex in the presence of the N-terminal 400 amino acids of Bicaudal D2 (BICD2), a cargo adaptor that links dynein and dynactin to Golgi-derived vesicles. In the presence of BICD2, dynactin binds to dynein and activates it to move for long distances along microtubules. A cryo-EM structure of dynein, dynactin and BICD2 showed that the BICD2 coiled coil runs along the dynactin filament. The tail of dynein also binds to the Arp1 filament, sitting in the equivalent site that myosin uses to bind actin. The contacts between the dynein tail and dynactin all involve BICD, explaining why it is needed to bring them together. The dynein/dynactin/BICD2 (DDB) complex has also been observed, by negative stain EM, on microtubules. This shows that the cargo-binding (Rab6) end of BICD2 extends out through the pointed end complex, at the opposite end from the dynein motor domains. Functions Dynactin is often essential for dynein activity and can be thought of as a "dynein receptor" that modulates binding of dynein to cell organelles which are to be transported along microtubules. Dynactin also enhances the processivity of cytoplasmic dynein and kinesin-2 motors. Dynactin is involved in various processes like chromosome alignment and spindle organization in cell division. Dynactin contributes to mitotic spindle pole focusing through its binding to nuclear mitotic apparatus protein (NuMA). Dynactin also targets to the kinetochore through binding between DCTN2/dynamitin and zw10 and has a role in mitotic spindle checkpoint inactivation. During prometaphase, dynactin also helps target polo-like kinase 1 (Plk1) to kinetochores through cyclin-dependent kinase 1 (Cdk1)-phosphorylated DCTN6/p27, which is involved in proper microtubule-kinetochore attachment and recruitment of the spindle assembly checkpoint protein Mad1. In addition, dynactin has been shown to play an essential role in maintaining nuclear position in Drosophila, zebrafish, and various fungi. Dynein and dynactin concentrate on the nuclear envelope during prophase and facilitate nuclear envelope breakdown via the DCTN4/p62 and Arp11 subunits. Dynactin is also required for microtubule anchoring at centrosomes and centrosome integrity. Destabilization of the centrosomal pool of dynactin also causes abnormal G1 centriole separation and delayed entry into S phase, suggesting that dynactin contributes to the recruitment of important cell cycle regulators to centrosomes. In addition to transport of various organelles in the cytoplasm, dynactin also links kinesin II to organelles. See also Motor protein Dynein DCTN1 Centractin References Further reading Protein families Motor proteins
Dynactin
[ "Chemistry", "Biology" ]
1,698
[ "Molecular machines", "Protein families", "Motor proteins", "Protein classification" ]
14,457,331
https://en.wikipedia.org/wiki/Lead%E2%80%93lead%20dating
Lead–lead dating is a method for dating geological samples, normally based on 'whole-rock' samples of material such as granite. For most dating requirements it has been superseded by uranium–lead dating (U–Pb dating), but in certain specialized situations (such as dating meteorites and the age of the Earth) it is more important than U–Pb dating. Decay equations for common Pb–Pb dating Three stable "daughter" Pb isotopes result from the radioactive decay of uranium and thorium in nature; they are 206Pb, 207Pb, and 208Pb. 204Pb is the only non-radiogenic lead isotope and is therefore not one of the daughter isotopes. These daughter isotopes are the final decay products of U and Th radioactive decay chains beginning from 238U (half-life 4.5 Gy), 235U (half-life 0.70 Gy) and 232Th (half-life 14 Gy) respectively. With the progress of time, the final decay product accumulates as the parent isotope decays at a constant rate. This shifts the ratio of radiogenic Pb versus non-radiogenic 204Pb (207Pb/204Pb or 206Pb/204Pb) in favor of radiogenic 207Pb or 206Pb. This can be expressed by the following decay equations: (207Pb/204Pb)P = (207Pb/204Pb)I + (235U/204Pb)(e^(λ235 t) − 1) and (206Pb/204Pb)P = (206Pb/204Pb)I + (238U/204Pb)(e^(λ238 t) − 1), where the subscripts P and I refer to present-day and initial Pb isotope ratios, λ235 and λ238 are decay constants for 235U and 238U, and t is the age. The concept of common Pb–Pb dating (also referred to as whole rock lead isotope dating) was deduced through mathematical manipulation of the above equations. It was established by dividing the first equation above by the second, under the assumption that the U/Pb system was undisturbed. This rearranged equation formed: [(207Pb/204Pb)P − (207Pb/204Pb)I] / [(206Pb/204Pb)P − (206Pb/204Pb)I] = (1/137.88) × (e^(λ235 t) − 1) / (e^(λ238 t) − 1), where the factor of 137.88 is the present-day 238U/235U ratio. As evident from the equation, the initial Pb isotope ratios, as well as the age of the system, are the two factors which determine the present-day Pb isotope compositions. If the sample behaved as a closed system, then graphing the difference between the present and initial ratios of 207Pb/204Pb versus 206Pb/204Pb should produce a straight line. The distance the point moves along this line is dependent on the U/Pb ratio, whereas the slope of the line depends on the time since Earth's formation. This was first established by Nier et al., 1941; a numerical inversion of this slope-age relation is sketched at the end of this article. The development of the Geochron database The development of the Geochron database was mainly attributed to Clair Cameron Patterson's application of Pb–Pb dating on meteorites in 1956. The Pb ratios of three stony and two iron meteorites were measured. The dating of meteorites would then help Patterson in determining not only the age of these meteorites but also the age of Earth's formation. By dating meteorites Patterson was directly dating the age of various planetesimals. Assuming the process of elemental differentiation is the same on Earth as it is on other planets, the core of these planetesimals would be depleted of uranium and thorium, while the crust and mantle would contain higher U/Pb ratios. As planetesimals collided, various fragments were scattered and produced meteorites. Iron meteorites were identified as pieces of the core, while stony meteorites were segments of the mantle and crustal units of these various planetesimals. Samples of iron meteorite from Canyon Diablo (Meteor Crater), Arizona were found to have the least radiogenic composition of any material in the solar system. The U/Pb ratio was so low that no radiogenic decay was detected in the isotopic composition. As illustrated in figure 1, this point defines the lower (left) end of the isochron.
Therefore, troilite found in Canyon Diablo represents the primeval lead isotope composition of the solar system, dating back to 4.55 Byr. Stony meteorites, however, exhibited very high 207Pb/204Pb versus 206Pb/204Pb ratios, indicating that these samples came from the crust or mantle of the planetesimal. Together, these samples define an isochron, whose slope gives the age of meteorites as 4.55 Byr. Patterson also analyzed terrestrial sediment collected from the ocean floor, which was believed to be representative of the Bulk Earth composition. Because the isotope composition of this sample plotted on the meteorite isochron, it suggested that Earth had the same age and origin as meteorites, therefore solving the age of the Earth and giving rise to the name 'geochron'. Lead isotope isochron diagram used by C. C. Patterson to determine the age of the Earth in 1956. Animation shows progressive growth over 4550 million years (Myr) of the lead isotope ratios for two stony meteorites (Nuevo Laredo and Forest City) from initial lead isotope ratios matching those of the Canyon Diablo iron meteorite. Precise Pb–Pb dating of meteorites Chondrules and calcium–aluminium-rich inclusions (CAIs) are spherical particles that make up chondritic meteorites and are believed to be the oldest objects in the Solar System. Hence precise dating of these objects is important to constrain the early evolution of the Solar System and the age of the Earth. The U–Pb dating method can yield the most precise ages for early Solar System objects due to the optimal half-life of 238U. However, the absence of zircon or other uranium-rich minerals in chondrites, and the presence of initial non-radiogenic Pb (common Pb), rules out direct use of the U–Pb concordia method. Therefore, the most precise dating method for these meteorites is the Pb–Pb method, which allows a correction for common Pb. When the abundance of 204Pb is relatively low, this isotope has larger measurement errors than the other Pb isotopes, leading to very strong correlation of errors between the measured ratios. This makes it difficult to determine the analytical uncertainty on the age. To avoid this problem, researchers developed an 'alternative Pb–Pb isochron diagram' (see figure) with reduced error correlation between the measured ratios. In this diagram the 204Pb/206Pb ratio (the reciprocal of the normal ratio) is plotted on the x-axis, so that a point on the y-axis (zero 204Pb/206Pb) would have infinitely radiogenic Pb. The ratio plotted on this axis is the 207Pb/206Pb ratio, corresponding to the slope of a normal Pb/Pb isochron, which yields the age. The most accurate ages are produced by samples near the y-axis, which was achieved by step-wise leaching and analysis of the samples. Previously, when applying the alternative Pb–Pb isochron diagram, the 238U/235U isotope ratios were assumed to be invariant among meteoritic material. However, it has been shown that 238U/235U ratios are variable among meteoritic material. To accommodate this, U-corrected Pb–Pb dating analysis is used to generate ages for the oldest solid material in the Solar System, using a revised 238U/235U value of 137.786 ± 0.013 to represent the mean 238U/235U isotope ratio in bulk inner Solar System materials. The result of U-corrected Pb–Pb dating has produced ages of 4567.35 ± 0.28 My for CAIs (A) and chondrules with ages between 4567.32 ± 0.42 and 4564.71 ± 0.30 My (B and C) (see figure).
This supports the idea that CAI crystallization and chondrule formation occurred around the same time during the formation of the solar system. However, chondrules continued to form for approximately 3 My after CAIs. Hence the best age for the original formation of the Solar System is 4567.7 My. This date also represents the time of initiation of planetary accretion. Successive collisions between accreted bodies led to the formation of larger and larger planetesimals, finally forming the Earth–Moon system in a giant impact event. The age difference between CAIs and chondrules measured in these studies verifies the chronology of the early Solar System derived from extinct short-lived nuclide methods such as 26Al–26Mg, thus improving our understanding of the development of the Solar System and the formation of the Earth. References External links Geochronology and Isotopes Data Portal Radiometric dating
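The combined decay equation above can be inverted numerically to recover an age from a measured isochron slope. The following sketch (an illustration, not from the article) uses standard half-lives for 235U and 238U, the present-day 238U/235U ratio of 137.88, and simple bisection, which works because the slope is monotonically increasing in t.

```python
import math

# Decay constants (per Gyr) from half-lives of 0.704 Gyr (235U)
# and 4.468 Gyr (238U).
LAM235 = math.log(2) / 0.704
LAM238 = math.log(2) / 4.468

def isochron_slope(t):
    """Slope of a Pb-Pb isochron for age t (in Gyr)."""
    return (math.expm1(LAM235 * t) / math.expm1(LAM238 * t)) / 137.88

def age_from_slope(slope, lo=0.001, hi=10.0):
    """Invert isochron_slope by bisection."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if isochron_slope(mid) < slope:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A slope near 0.62, like that of the meteorite isochron,
# corresponds to an age of roughly 4.55 Gyr.
print(round(age_from_slope(0.62), 3))
```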
Lead–lead dating
[ "Chemistry" ]
1,803
[ "Radiometric dating", "Radioactivity" ]
14,457,354
https://en.wikipedia.org/wiki/Zeeman%E2%80%93Doppler%20imaging
In astrophysics, Zeeman–Doppler imaging is a tomographic technique dedicated to the cartography of stellar magnetic fields, as well as surface brightness or spots and temperature distributions. This method makes use of the ability of magnetic fields to polarize the light emitted (or absorbed) in spectral lines formed in the stellar atmosphere (the Zeeman effect). The periodic modulation of Zeeman signatures during the stellar rotation is employed to make an iterative reconstruction of the vectorial magnetic field at the stellar surface. The method was first proposed by Marsh and Horne in 1988, as a way to interpret the emission line variations of cataclysmic variable stars. This technique is based on the principle of maximum entropy image reconstruction; it yields the simplest magnetic field geometry (as a spherical harmonics expansion) among the various solutions compatible with the data. This technique is the first to enable the reconstruction of the vectorial magnetic geometry of stars similar to the Sun. It now enables systematic studies of stellar magnetism and provides insights into the geometry of large arches formed by magnetic fields above stellar surfaces. To collect the observations related to Zeeman-Doppler Imaging, astronomers use stellar spectropolarimeters like ESPaDOnS at CFHT on Mauna Kea (Hawaii), HARPSpol at the ESO's 3.6m telescope (La Silla Observatory, Chile), as well as NARVAL at Bernard Lyot Telescope (Pic du Midi de Bigorre, France). The technique is very reliable, as the reconstruction of the magnetic field maps with different algorithms yields almost identical results, even with poorly sampled data sets. It makes use of high-resolution time-series spectropolarimetric observations (Stokes parameter spectra). It has however been shown, from both numerical simulations and observations, that the magnetic field strength and complexity are underestimated if no linear polarization spectra are available from observations. Since linear polarization signatures are weaker compared to circular polarization ones, their detection is not as reliable, particularly for cool stars. Therefore, the observations are normally limited to only the Stokes I and V parameters. With more modern spectropolarimeters such as the recently installed SPIRou at CFHT and CRIRES+ at the Very Large Telescope (Chile) the sensitivity to linear polarization will increase, allowing for more detailed studies of cool stars in the future. References External links Zeeman-Doppler Imaging Stellar tomography: when medical imaging helps astronomy Recent examples of using Zeeman-Doppler Imaging Astrophysics Spectroscopy
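For a sense of the scale such spectropolarimeters must resolve, a rough sketch of the classical Zeeman wavelength splitting, Δλ ≈ 4.67×10⁻¹³ · g · λ² · B (λ in ångströms, B in gauss), is given below; the formula is the standard textbook estimate, and the example line, Landé factor, and field strength are assumptions chosen for illustration.

```python
# Sketch: order-of-magnitude Zeeman splitting between sigma components.
# The constant 4.67e-13 is the standard classical-formula value; the line
# wavelength, Lande factor, and field strength below are illustrative
# assumptions, not values from the article.
def zeeman_splitting_angstrom(wavelength_angstrom, lande_g, field_gauss):
    """Wavelength splitting in Angstrom for a line of given Lande factor."""
    return 4.67e-13 * lande_g * wavelength_angstrom**2 * field_gauss

dl = zeeman_splitting_angstrom(6000.0, 1.2, 1000.0)  # 6000 A line, g = 1.2, 1 kG
print(f"{dl:.4f} A")  # ~0.02 A: tiny, hence the reliance on polarization signatures
```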
Zeeman–Doppler imaging
[ "Physics", "Chemistry", "Astronomy" ]
513
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Astrophysics", "Spectroscopy", "Astronomical sub-disciplines" ]
14,457,415
https://en.wikipedia.org/wiki/Manufacturer%27s%20empty%20weight
In aviation, manufacturer's empty weight (MEW) (also known as manufacturer's weight empty (MWE)) is the weight of the aircraft "as built" and includes the weight of the structure, power plant, furnishings, installations, systems, and other equipment that are considered an integral part of an aircraft before additional operator items are added for operation. Basic aircraft empty weight is essentially the same and excludes any baggage, passengers, or usable fuel. Some manufacturers define this empty weight as including optional equipment, e.g. GPS units, cargo baskets, or spotlights. Specification MEW This is the MEW quoted in the manufacturer's standard specification documents and is the aircraft standard basic dry weight upon which all other standard specifications and aircraft performance are based by the manufacturer. The Specification MEW includes the weight of: Airframe structure – primary and secondary structures (fuselage, wing, tail, control surfaces, nacelles, landing gear). Powerplant. Auxiliary power unit (APU). Systems (instruments, navigation, hydraulics, pneumatics, fuel systems (but not fuel itself), electrical system, electronics, fixed furnishings (but not operator specific), air conditioning, anti-ice system, etc.). Fixed equipment and services considered an integral part of the aircraft. Fixed ballast (if present). Closed system fluids (such as hydraulic fluids). For small aircraft, the MEW may include unusable fuel and oil. The Specification MEW excludes the weight of: All fuel (both usable and unusable). Potable water, anti-ice, and chemicals in toilets. Engine oil and APU oil. All specification items, selections, and installations which are non-basic (i.e. optional selections). Customer specific selections, installations, and options. Operator/operating items. Removable equipment and services. Payload. For small aircraft, the specification MEW is known as the standard empty weight (or standard weight empty). See also Aircraft gross weight Operating empty weight References External links Aircraft weight and balance Aircraft weight measurements
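A minimal sketch of how MEW chains into the other standard weights might look as follows; building operating empty weight (OEW) as MEW plus operator items follows the definitions above, while every number is invented purely for illustration.

```python
# Sketch of the standard weight build-up. MEW and the item categories follow
# the article; all figures are invented illustration values.
mew = 42_000.0            # manufacturer's empty weight, kg ("as built")
operator_items = 2_300.0  # crew, manuals, catering, oils, unusable fuel, ...
oew = mew + operator_items  # operating empty weight adds operator items
payload = 15_000.0
usable_fuel = 18_000.0
gross_weight = oew + payload + usable_fuel
print(f"OEW = {oew:.0f} kg, gross weight = {gross_weight:.0f} kg")
```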
Manufacturer's empty weight
[ "Physics", "Engineering" ]
425
[ "Aircraft weight measurements", "Mass", "Matter", "Aerospace engineering" ]
14,457,671
https://en.wikipedia.org/wiki/Sedoheptulose-bisphosphatase
Sedoheptulose-bisphosphatase (also sedoheptulose-1,7-bisphosphatase or SBPase, EC number 3.1.3.37; systematic name sedoheptulose-1,7-bisphosphate 1-phosphohydrolase) is an enzyme that catalyzes the removal of a phosphate group from sedoheptulose 1,7-bisphosphate to produce sedoheptulose 7-phosphate. SBPase is an example of a phosphatase, or, more generally, a hydrolase. This enzyme participates in the Calvin cycle. Structure SBPase is a homodimeric protein, meaning that it is made up of two identical subunits. The size of this protein varies between species, but is about 92,000 Da (two 46,000 Da subunits) in cucumber plant leaves. The key functional domain controlling SBPase function involves a disulfide bond between two cysteine residues. These two cysteine residues, Cys52 and Cys57, appear to be located in a flexible loop between the two subunits of the homodimer, near the active site of the enzyme. Reduction of this regulatory disulfide bond by thioredoxin induces a conformational change in the active site, activating the enzyme. Additionally, SBPase requires the presence of magnesium (Mg2+) to be functionally active. SBPase is bound to the stroma-facing side of the thylakoid membrane in the chloroplast in a plant. Some studies have suggested that SBPase may be part of a large (900 kDa) multi-enzyme complex along with a number of other photosynthetic enzymes. Regulation SBPase is involved in the regeneration of 5-carbon sugars during the Calvin cycle. Although SBPase has not historically been emphasized as an important control point in the Calvin cycle, it plays a large part in controlling the flux of carbon through the Calvin cycle. Additionally, SBPase activity has been found to have a strong correlation with the amount of photosynthetic carbon fixation. Like many Calvin cycle enzymes, SBPase is activated in the presence of light through a ferredoxin/thioredoxin system. In the light reactions of photosynthesis, light energy powers the transport of electrons to eventually reduce ferredoxin. The enzyme ferredoxin-thioredoxin reductase uses reduced ferredoxin to reduce thioredoxin from the disulfide form to the dithiol. Finally, the reduced thioredoxin is used to reduce a cysteine-cysteine disulfide bond in SBPase to a dithiol, which converts SBPase into its active form. SBPase has additional levels of regulation beyond the ferredoxin/thioredoxin system. Mg2+ concentration has a significant impact on the activity of SBPase and the rate of the reactions it catalyzes. SBPase is inhibited by acidic conditions (low pH). This is a large contributor to the overall inhibition of carbon fixation when the pH is low inside the stroma of the chloroplast. Finally, SBPase is subject to negative feedback regulation by sedoheptulose-7-phosphate and inorganic phosphate, the products of the reaction it catalyzes. Evolutionary origin SBPase and FBPase (fructose-1,6-bisphosphatase, EC 3.1.3.11) are both phosphatases that catalyze similar reactions in the Calvin cycle. The genes for SBPase and FBPase are related. Both genes are found in the nucleus in plants, and have bacterial ancestry. SBPase is found across many species. In addition to being universally present in photosynthetic organisms, SBPase is found in a number of evolutionarily related, non-photosynthetic microorganisms. SBPase likely originated in red algae. Horticultural Relevance More so than other enzymes in the Calvin cycle, SBPase levels have a significant impact on plant growth, photosynthetic ability, and response to environmental stresses.
Small decreases in SBPase activity result in decreased photosynthetic carbon fixation and reduced plant biomass. Specifically, decreased SBPase levels result in stunted plant organ growth and development compared to wild-type plants, and starch levels decrease linearly with decreases in SBPase activity, suggesting that SBPase activity is a limiting factor in carbon assimilation. This sensitivity of plants to decreased SBPase activity is significant, as SBPase itself is sensitive to oxidative damage and inactivation from environmental stresses. SBPase contains several catalytically relevant cysteine residues that are vulnerable to irreversible oxidative carbonylation by reactive oxygen species (ROS), particularly hydroxyl radicals generated from hydrogen peroxide. Carbonylation results in SBPase enzyme inactivation and subsequent growth retardation due to inhibition of carbon assimilation. Oxidative carbonylation of SBPase can be induced by environmental pressures such as chilling, which causes an imbalance in metabolic processes leading to increased production of reactive oxygen species, particularly hydrogen peroxide. Notably, chilling inhibits SBPase and a related enzyme, fructose bisphosphatase, but does not affect other reductively activated Calvin cycle enzymes. The sensitivity of plants to synthetically reduced or inhibited SBPase levels provides an opportunity for crop engineering. There are significant indications that transgenic plants which overexpress SBPase may be useful in improving food production efficiency by producing crops that are more resilient to environmental stresses and that mature earlier with higher yields. Overexpression of SBPase in transgenic tomato plants provided resistance to chilling stress, with the transgenic plants maintaining higher SBPase activity, increased carbon dioxide fixation, reduced electrolyte leakage and increased carbohydrate accumulation relative to wild-type plants under the same chilling stress. It is also likely that transgenic plants would be more resilient to osmotic stress caused by drought or salinity, as the activation of SBPase is shown to be inhibited in chloroplasts exposed to hypertonic conditions, though this has not been directly tested. Overexpression of SBPase in transgenic tobacco plants resulted in enhanced photosynthetic efficiency and growth. Specifically, transgenic plants exhibited greater biomass and improved carbon dioxide fixation, as well as an increase in RuBisCO activity. The plants grew significantly faster and larger than wild-type plants, with increased sucrose and starch levels. References Further reading Photosynthesis EC 3.1.3
Sedoheptulose-bisphosphatase
[ "Chemistry", "Biology" ]
1,433
[ "Biochemistry", "Photosynthesis" ]
14,458,653
https://en.wikipedia.org/wiki/Coalescence%20%28chemistry%29
In chemistry, coalescence is a process in which two phase domains of the same composition come together and form a larger phase domain. In other words, it is the process by which two or more separate masses of miscible substances seem to "pull" each other together should they make the slightest contact. References External links IUPAC Gold Book Physical chemistry
Coalescence (chemistry)
[ "Physics", "Chemistry" ]
71
[ "Physical chemistry", "Applied and interdisciplinary physics", "Physical chemistry stubs", "nan" ]
14,459,043
https://en.wikipedia.org/wiki/Influence%20line
In engineering, an influence line graphs the variation of a function (such as the shear or moment felt in a structural member) at a specific point on a beam or truss caused by a unit load placed at any point along the structure. Common functions studied with influence lines include reactions (forces that the structure's supports must apply for the structure to remain static), shear, moment, and deflection (deformation). Influence lines are important in designing beams and trusses used in bridges, crane rails, conveyor belts, floor girders, and other structures where loads will move along their span. The influence lines show where a load will create the maximum effect for any of the functions studied. Influence lines are both scalar and additive. This means that they can be used even when the load that will be applied is not a unit load or if there are multiple loads applied. To find the effect of any non-unit load on a structure, the ordinate results obtained by the influence line are multiplied by the magnitude of the actual load to be applied. The entire influence line can be scaled, or just the maximum and minimum effects experienced along the line. The scaled maximum and minimum are the critical magnitudes that must be designed for in the beam or truss. In cases where multiple loads may be in effect, influence lines for the individual loads may be added together to obtain the total effect the structure bears at a given point. When adding the influence lines together, it is necessary to include the appropriate offsets due to the spacing of loads across the structure. For example, suppose a truck load is applied to the structure, with rear axle B three feet behind front axle A. Then the effect of A at x feet along the structure must be added to the effect of B at (x – 3) feet along the structure, not to the effect of B at x feet along the structure. Many loads are distributed rather than concentrated. Influence lines can be used with either concentrated or distributed loadings. For a concentrated (or point) load, a unit point load is moved along the structure. For a distributed load of a given width, a unit-distributed load of the same width is moved along the structure, noting that as the load nears the ends and moves off the structure only part of the total load is carried by the structure. The effect of the distributed unit load can also be obtained by integrating the point load's influence line over the corresponding length of the structure. When the restraint associated with the studied function is released, determinate structures become mechanisms, whereas indeterminate structures become merely determinate. Demonstration from Betti's theorem Influence lines are based on Betti's theorem. From there, consider two external force systems, F and F̄, each one associated with a displacement field whose displacements, measured at the forces' points of application, are represented by d and d̄. Consider that the system F represents actual forces applied to the structure, which are in equilibrium. Consider that the system F̄ is formed by a single force. The displacement field associated with this force is defined by releasing the structural restraints acting on the point where it is applied and imposing a relative unit displacement that is kinematically admissible in the negative direction.
From Betti's theorem, the reciprocal works of the two systems must be equal, which yields that the value of the released force equals the sum of the actual loads multiplied by the displacements of the released structure at their points of application; in other words, the deflected shape of the released structure is the influence line for the released function. This is the basis of the Müller-Breslau principle described below. Concept When designing a beam or truss, it is necessary to design for the scenarios causing the maximum expected reactions, shears, and moments within the structure members to ensure that no member fails during the life of the structure. When dealing with dead loads (loads that never move, such as the weight of the structure itself), this is relatively easy because the loads are easy to predict and plan for. For live loads (any load that moves during the life of the structure, such as furniture and people), it becomes much harder to predict where the loads will be or how concentrated or distributed they will be throughout the life of the structure. Influence lines graph the response of a beam or truss as a unit load travels across it. The influence line helps designers find where to place a live load in order to calculate the maximum resulting response for each of the following functions: reaction, shear, or moment. The designer can then scale the influence line by the greatest expected load to calculate the maximum response of each function for which the beam or truss must be designed. Influence lines can also be used to find the responses of other functions (such as deflection or axial force) to the applied unit load, but these uses of influence lines are less common. Methods for constructing influence lines There are three methods used for constructing the influence line. The first is to tabulate the influence values for multiple points along the structure, then use those points to create the influence line. The second is to determine the influence-line equations that apply to the structure, thereby solving for all points along the influence line in terms of x, where x is the number of feet from the start of the structure to the point where the unit load is applied. The third method is called the Müller-Breslau principle. It creates a qualitative influence line. This influence line will still provide the designer with an accurate idea of where the unit load will produce the largest response of a function at the point being studied, but it cannot be used directly to calculate what the magnitude of that response will be, whereas the influence lines produced by the first two methods can. Tabulate values To tabulate the influence values with respect to some point A on the structure, a unit load must be placed at various points along the structure. Statics is used to calculate what the value of the function (reaction, shear, or moment) is at point A. Typically an upwards reaction is seen as positive. Shear and moments are given positive or negative values according to the same conventions used for shear and moment diagrams. R. C. Hibbeler states, in his book Structural Analysis, “All statically determinate beams will have influence lines that consist of straight line segments.” Therefore, it is possible to minimize the number of computations by recognizing the points that will cause a change in the slope of the influence line and only calculating the values at those points. The slope of the influence line can change at supports, mid-spans, and joints. An influence line for a given function, such as a reaction, axial force, shear force, or bending moment, is a graph that shows the variation of that function at any given point on a structure due to the application of a unit load at any point on the structure. An influence line for a function differs from a shear, axial, or bending moment diagram.
Influence lines can be generated by independently applying a unit load at several points on a structure and determining the value of the function due to this load, i.e. shear, axial, and moment at the desired location. The calculated values for each function are then plotted where the load was applied and then connected together to generate the influence line for the function. Once the influence values have been tabulated, the influence line for the function at point A can be drawn in terms of x. First, the tabulated values must be located. For the sections in between the tabulated points, interpolation is required. Therefore, straight lines may be drawn to connect the points. Once this is done, the influence line is complete. Influence-line equations It is possible to create equations defining the influence line across the entire span of a structure. This is done by solving for the reaction, shear, or moment at point A caused by a unit load placed at x feet along the structure instead of at a specific distance. This method is similar to the tabulated-values method, but rather than obtaining a numeric solution, the outcome is an equation in terms of x. It is important to understand where the slope of the influence line changes for this method, because the influence-line equation will change for each linear section of the influence line. Therefore, the complete equation is a piecewise linear function with a separate influence-line equation for each linear section of the influence line. Müller-Breslau's Principle According to course notes at www.public.iastate.edu, “The Müller-Breslau Principle can be utilized to draw qualitative influence lines, which are directly proportional to the actual influence line.” Instead of moving a unit load along a beam, the Müller-Breslau Principle finds the deflected shape of the beam caused by first releasing the beam at the point being studied, and then applying the function (reaction, shear, or moment) being studied to that point. The principle states that the influence line of a function will have a scaled shape that is the same as the deflected shape of the beam when the beam is acted upon by the function. To understand how the beam deflects under the function, it is necessary to remove the beam's capacity to resist the function. Below are explanations of how to find the influence lines of a simply supported, rigid beam (such as the one displayed in Figure 1). When determining the reaction caused at a support, the support is replaced with a roller, which cannot resist a vertical reaction. Then an upward (positive) reaction is applied to the point where the support was. Since the support has been removed, the beam will rotate upwards, and since the beam is rigid, it will create a triangle with the point at the second support. If the beam extends beyond the second support as a cantilever, a similar triangle will be formed below the cantilever's position. This means that the reaction's influence line will be a straight, sloping line with a value of zero at the location of the second support. When determining the shear caused at some point B along the beam, the beam must be cut and a roller-guide (which is able to resist moments but not shear) must be inserted at point B. Then, by applying a positive shear to that point, it can be seen that the left side will rotate down, but the right side will rotate up. This creates a discontinuous influence line that reaches zero at the supports and whose slope is equal on either side of the discontinuity.
If point B is at a support, then the deflection between point B and any other supports will still create a triangle, but if the beam is cantilevered, then the entire cantilevered side will move up or down, creating a rectangle. When determining the moment caused at some point B along the beam, a hinge is placed at point B, releasing the beam's resistance to moment there while still resisting shear. Then, when a positive moment is placed at point B, both sides of the beam will rotate up. This will create a continuous influence line, but the slopes will be equal and opposite on either side of the hinge at point B. Since the beam is simply supported, its end supports (pins) cannot resist moment; therefore, it can be observed that the supports will never experience moments in a static situation regardless of where the load is placed. The Müller-Breslau Principle can only produce qualitative influence lines. This means that engineers can use it to determine where to place a load to incur the maximum of a function, but the magnitude of that maximum cannot be calculated from the influence line. Instead, the engineer must use statics to solve for the function's value in that loading case. Alternate loading cases Multiple loads The simplest loading case is a single point load, but influence lines can also be used to determine responses due to multiple loads and distributed loads. Sometimes it is known that multiple loads will occur at some fixed distance apart. For example, on a bridge the wheels of cars or trucks create point loads that act at relatively standard distances. To calculate the response of a function to all these point loads using an influence line, the results found with the influence line can be scaled for each load, and then the scaled magnitudes can be summed to find the total response that the structure must withstand. The point loads can have different magnitudes themselves, but even if they apply the same force to the structure, it will be necessary to scale them separately because they act at different distances along the structure. For example, if a car's wheels are 10 feet apart, then when the first set is 13 feet onto the bridge, the second set will be only 3 feet onto the bridge. If the first set of wheels is 7 feet onto the bridge, the second set has not yet reached the bridge, and therefore only the first set is placing a load on the bridge. Also, if one of two loads is heavier, the loads must be examined in both loading orders (the larger load on the right and the larger load on the left) to ensure that the maximum response is found. If there are three or more loads, then the number of cases to be examined increases. Distributed loads Many loads do not act as point loads, but instead act over an extended length or area as distributed loads. For example, a tractor with continuous tracks will apply a load distributed over the length of each track. To find the effect of a distributed load, the designer can integrate an influence line, found using a point load, over the affected distance of the structure. For example, if a three-foot-long track acts between 5 feet and 8 feet along a beam, the influence line of that beam must be integrated between 5 and 8 feet. The integration of the influence line gives the effect that would be felt if the distributed load had a unit magnitude. Therefore, after integrating, the designer must still scale the results to get the actual effect of the distributed load.
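A minimal sketch tying these cases together for the reaction at one support of a simply supported beam is shown below. The influence line IL(x) = 1 − x/L for the reaction at the near support is the standard textbook result; the span, axle loads, and track position are invented illustration values.

```python
# Sketch: reaction at support A of a simply supported beam of span L, using
# the textbook influence line IL(x) = 1 - x/L. All numbers are illustrative.
L = 20.0  # span, ft

def il_reaction_A(x):
    """Influence ordinate for the reaction at A due to a unit load at x."""
    return 1.0 - x / L if 0.0 <= x <= L else 0.0

# Multiple point loads: scale each ordinate by its load, offsetting the rear axle.
front, rear, spacing = 10.0, 14.0, 3.0  # kips, kips, ft between axles
x_front = 13.0                          # front axle position, ft
R_A = front * il_reaction_A(x_front) + rear * il_reaction_A(x_front - spacing)
print(f"Reaction at A from both axles: {R_A:.2f} kips")

# Distributed load: integrate the influence line over the loaded length
# (5 ft to 8 ft here) and scale by the load intensity.
w = 2.0   # kips/ft
n = 1000
dx = (8.0 - 5.0) / n
area = sum(il_reaction_A(5.0 + (i + 0.5) * dx) for i in range(n)) * dx
print(f"Reaction at A from the 3-ft track: {w * area:.2f} kips")
```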
Indeterminate structures While the influence lines of statically determinate structures (as mentioned above) are made up of straight line segments, the same is not true for indeterminate structures. Indeterminate structures are not considered rigid; therefore, the influence lines drawn for them will not be straight lines but rather curves. The methods above can still be used to determine the influence lines for the structure, but the work becomes much more complex as the properties of the beam itself must be taken into consideration. See also Beam Shear and Moment Diagram Dead and Live Loads Müller-Breslau's principle References Beam theory Structural analysis Structural engineering
Influence line
[ "Engineering" ]
2,903
[ "Structural engineering", "Structural analysis", "Construction", "Civil engineering", "Mechanical engineering", "Aerospace engineering" ]
14,460,507
https://en.wikipedia.org/wiki/Clapotis
In hydrodynamics, a clapotis (from French for "lapping of water") is a non-breaking standing wave pattern, caused, for example, by the reflection of a traveling surface wave train from a near-vertical shoreline like a breakwater, seawall or steep cliff. The resulting clapotic wave does not travel horizontally, but has a fixed pattern of nodes and antinodes. These waves promote erosion at the toe of the wall, and can cause severe damage to shore structures. The term was coined in 1877 by French mathematician and physicist Joseph Valentin Boussinesq who called these waves 'le clapotis' meaning "the lapping". In the idealized case of "full clapotis", where a purely monochromatic incoming wave is completely reflected normal to a solid vertical wall, the standing wave height is twice the height of the incoming waves at a distance of one half wavelength from the wall. In this case, the circular orbits of the water particles in the deep-water wave are converted to purely linear motion, with vertical velocities at the antinodes, and horizontal velocities at the nodes. The standing waves alternately rise and fall in a mirror-image pattern, as kinetic energy is converted to potential energy, and vice versa. Cecil Peabody described this phenomenon in his 1907 text, Naval Architecture. Related phenomena True clapotis is very rare, because the depth of the water or the precipitousness of the shore are unlikely to completely satisfy the idealized requirements. In the more realistic case of partial clapotis, where some of the incoming wave energy is dissipated at the shore, the incident wave is less than 100% reflected, and only a partial standing wave is formed where the water particle motions are elliptical. This may also occur at sea between two different wave trains of near equal wavelength moving in opposite directions, but with unequal amplitudes. In partial clapotis the wave envelope contains some vertical motion at the nodes. When a wave train strikes a wall at an oblique angle, the reflected wave train departs at the supplementary angle, causing a cross-hatched wave interference pattern known as the clapotis gaufré ("waffled clapotis"). In this situation, the individual crests formed at the intersection of the incident and reflected wave train crests move parallel to the structure. This wave motion, when combined with the resultant vortices, can erode material from the seabed and transport it along the wall, undermining the structure until it fails. Clapotic waves on the sea surface also radiate infrasonic microbaroms into the atmosphere, and seismic signals called microseisms coupled through the ocean floor to the solid Earth. Clapotis has been called the bane and the pleasure of sea kayaking. See also Rogue wave Seiche References Further reading External links Wave mechanics Coastal engineering Water waves
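A minimal sketch of full clapotis as the superposition of an incident wave and its perfect reflection is shown below: the sum a·cos(kx − ωt) + a·cos(kx + ωt) = 2a·cos(kx)·cos(ωt) gives fixed antinodes of twice the incident amplitude and fixed nodes in between. The amplitude, wavelength, and period are invented illustration values.

```python
# Sketch: full clapotis as incident + perfectly reflected wave,
# eta(x, t) = 2a cos(kx) cos(wt). All parameter values are illustrative.
import math

a = 0.5                       # incident wave amplitude, m
wavelength, period = 40.0, 5.0
k = 2 * math.pi / wavelength  # wavenumber
w = 2 * math.pi / period      # angular frequency

def eta(x, t):
    """Surface elevation of the standing wave at position x (m), time t (s)."""
    return 2 * a * math.cos(k * x) * math.cos(w * t)

# Antinode at the wall (x = 0): oscillates with twice the incident amplitude.
# Node a quarter wavelength away: stays at zero for all t.
print(eta(0.0, 0.0))                          # 2a = 1.0 m
print(round(eta(wavelength / 4, 1.234), 12))  # ~0.0 m
```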
Clapotis
[ "Physics", "Chemistry", "Engineering" ]
589
[ "Physical phenomena", "Water waves", "Coastal engineering", "Classical mechanics", "Waves", "Wave mechanics", "Civil engineering", "Fluid dynamics" ]
14,461,309
https://en.wikipedia.org/wiki/Nitrogen-vacancy%20center
The nitrogen-vacancy center (N-V center or NV center) is one of numerous photoluminescent point defects in diamond. Its most explored and useful properties include its spin-dependent photoluminescence (which enables measurement of the electronic spin state using optically detected magnetic resonance) and its relatively long spin coherence at room temperature, lasting up to milliseconds. The NV center energy levels are modified by magnetic fields, electric fields, temperature, and strain, which allows it to serve as a sensor of a variety of physical phenomena. Its atomic size and spin properties can form the basis for useful quantum sensors. NV centers enable nanoscale measurements of magnetic and electric fields, temperature, and mechanical strain with improved precision. Sensitivity to external perturbations makes NV centers ideal for applications in biomedicine, such as single-molecule imaging and cellular process modeling. NV centers can also be initialized as qubits and enable the implementation of quantum algorithms and networks. The NV center has also been explored for applications in quantum computing (e.g. for entanglement generation), quantum simulation, and spintronics. Structure The nitrogen-vacancy center is a point defect in the diamond lattice. It consists of a nearest-neighbor pair of a nitrogen atom, which substitutes for a carbon atom, and a lattice vacancy. Two charge states of this defect, neutral NV0 and negative NV−, are known from spectroscopic studies using optical absorption, photoluminescence (PL), electron paramagnetic resonance (EPR) and optically detected magnetic resonance (ODMR), which can be viewed as a hybrid of PL and EPR; most details of the structure originate from EPR. The nitrogen atom has five valence electrons. Three of them are covalently bonded to the carbon atoms, while the other two remain non-bonded and are called a lone pair. The vacancy, in turn, has three unpaired electrons. Two of them form a quasi-covalent bond and one remains unpaired. The overall symmetry, however, is axial (trigonal C3v); one can visualize this by imagining the three unpaired vacancy electrons continuously exchanging their roles. The NV0 thus has one unpaired electron and is paramagnetic. However, despite extensive efforts, electron paramagnetic resonance signals from NV0 eluded detection for decades, until 2008. Optical excitation is required to bring the NV0 defect into the EPR-detectable excited state; the signals from the ground state are presumably too broad for EPR detection. The NV0 centers can be converted into NV− by changing the Fermi level position. This can be achieved by applying external voltage to a p-n junction made from doped diamond, e.g., in a Schottky diode. In the negative charge state NV−, an extra electron is located at the vacancy site, forming a spin S = 1 pair with one of the vacancy electrons. This extra electron induces spin triplet ground states of the form |3A⟩ and excited states of the form |3E⟩. There is an additional metastable state between these spin triplets, which often manifests as a singlet. These states play a crucial role in enabling ground state depletion (GSD) microscopy. As in NV0, the vacancy electrons are "exchanging roles", preserving the overall trigonal symmetry. This NV− state is what is commonly, and somewhat incorrectly, called "the nitrogen-vacancy center". The neutral state is not generally used for quantum technology. The NV centers are randomly oriented within a diamond crystal.
Ion implantation techniques can enable their artificial creation in predetermined positions. Production Nitrogen-vacancy centers are typically produced from single substitutional nitrogen centers (called C or P1 centers in the diamond literature) by irradiation followed by annealing at temperatures above 700 °C. A wide range of high-energy particles is suitable for such irradiation, including electrons, protons, neutrons, ions, and gamma photons. Irradiation produces lattice vacancies, which are a part of NV centers. Those vacancies are immobile at room temperature, and annealing is required to move them. Single substitutional nitrogen produces strain in the diamond lattice; it therefore efficiently captures moving vacancies, producing the NV centers. During chemical vapor deposition of diamond, a small fraction of single substitutional nitrogen impurity (typically <0.5%) traps vacancies generated as a result of the plasma synthesis. Such nitrogen-vacancy centers are preferentially aligned to the growth direction. Delta doping of nitrogen during CVD growth can be used to create two-dimensional ensembles of NV centers near the diamond surface for enhanced sensing or simulation. Diamond is notorious for having a relatively large lattice strain. Strain splits and shifts optical transitions from individual centers, resulting in broad lines in ensembles of centers. Special care is taken to produce extremely sharp NV lines (line width ~10 MHz) required for most experiments: high-quality, pure natural or, better, synthetic diamonds (type IIa) are selected. Many of them already have sufficient concentrations of grown-in NV centers and are suitable for applications. If not, they are irradiated by high-energy particles and annealed. Selection of a certain irradiation dose allows tuning the concentration of produced NV centers such that individual NV centers are separated by micrometre-scale distances. Then, individual NV centers can be studied with standard optical microscopes or, better, near-field scanning optical microscopes having sub-micrometre resolution. Energy level structure The NV center has a ground-state triplet (3A), an excited-state triplet (3E) and two intermediate-state singlets (1A and 1E). Both 3A and 3E contain ms = ±1 spin states, in which the two electron spins are aligned (either up, such that ms = +1, or down, such that ms = −1), and an ms = 0 spin state where the electron spins are antiparallel. Due to the magnetic interaction, the energy of the ms = ±1 states is higher than that of the ms = 0 state. 1A and 1E each contain only a singlet spin state with ms = 0. If an external magnetic field is applied along the defect axis (the axis which aligns with the nitrogen atom and the vacancy) of the NV center, it does not affect the ms = 0 states, but it splits the ms = ±1 levels (Zeeman effect). Similarly, the following other properties of the environment influence the energy level diagram: Amplitude and orientation of a static magnetic field splits the ms = ±1 levels in the ground and excited states. Amplitude and orientation of elastic (strain) or electric fields have much smaller but also more complex effects on the different levels. Continuous-wave microwave radiation (applied in resonance with the transition between ms = 0 and (one of the) ms = ±1 states) changes the population of the sublevels within the ground and excited state. A tunable laser can selectively excite certain sublevels of the ground and excited states.
Surrounding spins and the spin–orbit interaction will modulate the magnetic field experienced by the NV center. Temperature and pressure affect different parts of the spectrum, including the shift between ground and excited states. The above-described energy structure is by no means exceptional for a defect in diamond or another semiconductor. It was not this structure alone, but a combination of several favorable factors (previous knowledge, easy production, biocompatibility, simple initialisation, use at room temperature, etc.) which suggested the use of the NV center as a qubit and quantum sensor. Optical properties NV centers emit bright red light (3E→3A transitions) if excited off-resonantly by visible green light (3A→3E transitions). This can be done with convenient light sources such as argon or krypton lasers, frequency-doubled Nd:YAG lasers, dye lasers, or He-Ne lasers. Excitation can also be achieved at energies below that of zero-phonon emission. As the relaxation time from the excited state is small (~10 ns), the emission happens almost instantly after the excitation. At room temperature the NV center's optical spectrum exhibits no sharp peaks due to thermal broadening. However, cooling the NV centers with liquid nitrogen or liquid helium dramatically narrows the lines down to a width of a few MHz. At low temperature it also becomes possible to specifically address the zero-phonon line (ZPL). An important property of the luminescence from individual NV centers is its high temporal stability. Whereas many single-molecule emitters bleach (i.e. change their charge state and become dark) after emission of 10⁶–10⁸ photons, bleaching is unlikely for NV centers at room temperature. Strong laser illumination, however, may also convert some NV− into NV0 centers. Because of these properties, the ideal technique to address the NV centers is confocal microscopy, both at room temperature and at low temperature. State manipulation Optical spin manipulation Optical transitions must preserve the total spin and occur only between levels of the same total spin. Specifically, transitions between the ground and excited states (with equal spin) can be induced using a green laser with a wavelength of 546 nm. Transitions 3E→1A and 1E→3A are non-radiative, while 1A→1E has both a non-radiative and an infrared decay path. The diagram on the right shows the multi-electronic states of the NV center labeled according to their symmetry (E or A) and their spin state (3 for a triplet (S=1) and 1 for a singlet (S=0)). There are two triplet states and two intermediate singlet states. Spin-state initialisation An important property of the non-radiative transition between 3E and 1A is that it is stronger for ms = ±1 and weaker for ms = 0. This provides the basis of a very useful manipulation strategy, which is called spin-state initialisation (or optical spin polarization). To understand the process, first consider an off-resonance excitation which has a higher frequency (typically 2.32 eV (532 nm)) than the frequencies of all transitions and thus lies in the vibronic bands for all transitions. By using a pulse of this wavelength, one can excite all spin states from 3A to 3E. An NV center in the ground state with ms = 0 will be excited to the corresponding excited state with ms = 0 due to the conservation of spin. Afterwards it decays back to its original state. For a ground state with ms = ±1, the situation is different.
After the excitation, it has a relatively high probability to decay into the intermediate state 1A by a non-radiative transition, and further into the ground state with ms = 0. After many cycles, the state of the NV center (independently of whether it started in ms = 0 or ms = ±1) will end up in the ms = 0 ground state. This process can be used to initialize the quantum state of a qubit for quantum information processing or quantum sensing. Sometimes the polarizability of the NV center is explained by the claim that the transition from 1E to the ground state with ms = ±1 is small compared to the transition to ms = 0. However, it has been shown that the comparatively low decay probability for ms = 0 states with respect to ms = ±1 states into 1A is enough to explain the polarization. Effects of external fields Microwave spin manipulation The energy difference between the ms = 0 and ms = ±1 states corresponds to the microwave regime. Population can be transferred between the states by applying a resonant magnetic field perpendicular to the defect axis. Numerous dynamic effects (spin echo, Rabi oscillations, etc.) can be exploited by applying a carefully designed sequence of microwave pulses. Such protocols are rather important for the practical realization of quantum computers. By manipulating the population, it is possible to shift the NV center into a more sensitive or stable state. Its own resulting fluctuating fields may also be used to influence the surrounding nuclei or to protect the NV center itself from noise. This is typically done using a wire loop (microwave antenna) which creates an oscillating magnetic field. Optical manipulation There are inherent difficulties in achieving miniaturization and effective error reduction in microwave- and radio-frequency-driven spin manipulation techniques. This poses a particular challenge for applying spin-based quantum sensors to sensing electric and magnetic fields, or other physical phenomena, at the nanoscale. Recent developments in microwave-free and optically driven methods pave the way towards energy-efficient and coherent quantum sensing. This technique is based on coherent mapping of the spin states of the nitrogen nucleus onto those of the NV center under the application of an external magnetic field transverse to the NV symmetry axis. Optical pumping then prepares the system in a coherent superposition state, which is the key element in a quantum network. Influence of external factors If a magnetic field is oriented along the defect axis, it leads to Zeeman splitting, separating the ms = +1 from the ms = −1 states. This technique is used to lift the degeneracy and use only two of the spin states (usually the ground states with ms = −1 and ms = 0) as a qubit. Population can then be transferred between them using a microwave field. In the specific instance that the magnetic field reaches 1027 G (or 508 G), the ms = −1 and ms = 0 states in the ground (or excited) state become equal in energy (ground/excited-state level anticrossing). The ensuing strong interaction results in so-called spin polarization, which strongly affects the intensity of optical absorption and luminescence transitions involving those states. Importantly, this splitting can be modulated by applying an external electric field, in a similar fashion to the magnetic field mechanism outlined above, though the physics of the splitting is somewhat more complex. Nevertheless, an important practical outcome is that the intensity and position of the luminescence lines are modulated.
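As a rough sketch of the numbers involved, the two ground-state resonances for a field B applied along the defect axis are approximately f± = D ± γB, with the textbook zero-field splitting D ≈ 2.87 GHz and electron gyromagnetic ratio γ ≈ 2.80 MHz/G (standard reference values, not taken from this article):

```python
# Sketch: ODMR resonance frequencies of the NV ground state for an axial
# magnetic field, using a simple spin-1 model. D and gamma are standard
# reference values supplied here for illustration.
D_GHZ = 2.870            # zero-field splitting of the 3A ground state, GHz
GAMMA_MHZ_PER_G = 2.803  # electron gyromagnetic ratio, MHz per gauss

def odmr_frequencies_ghz(b_gauss):
    """Resonances ms=0 -> ms=-1 and ms=0 -> ms=+1 for an axial field (gauss)."""
    zeeman_ghz = GAMMA_MHZ_PER_G * b_gauss / 1000.0
    return D_GHZ - zeeman_ghz, D_GHZ + zeeman_ghz

print(odmr_frequencies_ghz(10.0))  # small field: two lines split around 2.87 GHz

# The lower branch reaches zero near B = D/gamma, reproducing the ground-state
# level anticrossing field (~1024 G with these constants, consistent with the
# 1027 G figure quoted above).
print(round(D_GHZ * 1000.0 / GAMMA_MHZ_PER_G), "G")
```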
Strain has a similar effect on the NV center as electric fields. There is an additional splitting of the ms = ±1 energy levels, which originates from the hyperfine interaction between surrounding nuclear spins and the NV center. These nuclear spins create magnetic and electric fields of their own, leading to further distortions of the NV spectrum (see nuclear Zeeman and quadrupole interaction). The NV center's own spin–orbit interaction and orbital degeneracy also lead to additional level splitting in the excited 3E state. Temperature and pressure directly influence the zero-field term of the NV center, leading to a shift between the ground- and excited-state levels. The influence of these different factors can be collected in a ground-state spin Hamiltonian, commonly written as H = D[Sz² − S(S+1)/3] + E(Sx² − Sy²) + gμB B·S plus hyperfine terms, where D and E are the zero-field splitting parameters and the last term describes the Zeeman interaction. Although it can be challenging, all of these effects are measurable, making the NV center a perfect candidate for a quantum sensor. Charge state manipulation It is also possible to switch the charge state of the NV center (i.e. between NV−, NV+ and NV0) by applying a gate voltage. The gate voltage electrically shifts the Fermi level at the diamond surface and changes its surface band bending. Upon varying the gate voltage, individual centers are allowed to switch from an unknown non-fluorescent state to the neutral charge state NV0. The ensemble of centers can be transitioned from NV0 to the qubit state NV−. The diamond surface termination additionally influences the charge state of near-surface NV centers. Oxygen termination is known to stabilize the NV− state by reducing surface conductivity and mitigating band bending. This improves charge-state stability and coherence. In a similar capacity, nitrogen termination also affects surface properties and can optimize NV centers for specific sensing applications. Optical excitation methods additionally play a role in charge state manipulation. Illumination with specific wavelengths can induce transitions between charge states. Near-infrared light at 1064 nm has been shown to convert NV0 to NV−, enhancing photoluminescence. Applications The spectral shape and intensity of the optical signals from the NV− centers are sensitive to external perturbations, such as temperature, strain, and electric and magnetic fields. However, the use of spectral shape for sensing those perturbations is impractical, as the diamond would have to be cooled to cryogenic temperatures to sharpen the NV− signals. A more realistic approach is to use luminescence intensity (rather than lineshape), which exhibits a sharp resonance when a microwave frequency that matches the splitting of the ground-state levels is applied to the diamond. The resulting optically detected magnetic resonance signals are sharp even at room temperature, and can be used in miniature sensors. Such sensors can detect magnetic fields of a few nanotesla or electric fields of about 10 V/cm at kilohertz frequencies after 100 seconds of averaging. This sensitivity allows detecting a magnetic or electric field produced by a single electron located tens of nanometers away from an NV− center. Using the same mechanism, the NV− centers were employed in scanning thermal microscopy to measure high-resolution spatial maps of temperature and thermal conductivity (see image).
Because the NV center is sensitive to magnetic fields, it is being actively used in scanning probe measurements to study myriad condensed-matter phenomena, both by measuring a spatially varying magnetic field and by inferring local currents in a device. Another possible use of the NV− centers is as a detector to measure the full mechanical stress tensor in the bulk of the crystal. For this application, the stress-induced splitting of the zero-phonon line, and its polarization properties, are exploited. A robust frequency-modulated radio receiver using the electron-spin-dependent photoluminescence that operated up to 350 °C demonstrates the possibility of use in extreme conditions. In addition to the quantum optical applications, luminescence from the NV− centers can be applied for imaging biological processes, such as fluid flow in living cells. This application relies on the good compatibility of diamond nanoparticles with living cells and on the favorable properties of photoluminescence from the NV− centers (strong intensity, easy excitation and detection, temporal stability, etc.). Compared with large single-crystal diamonds, nanodiamonds are cheap (about US$1 per gram) and available from various suppliers. NV− centers are produced in diamond powders with sub-micrometre particle size using the standard process of irradiation and annealing described above. Due to the relatively small size of nanodiamond, NV centers can also be produced by irradiating nanodiamonds of 100 nm or less with a medium-energy H+ beam. This method reduces the required ion dose, making it possible to mass-produce fluorescent nanodiamonds in an ordinary laboratory. Fluorescent nanodiamonds produced with this method are bright and photostable, making them excellent for long-term, three-dimensional tracking of single particles in living cells. Those nanodiamonds are introduced into a cell, and their luminescence is monitored using a standard fluorescence microscope. Stimulated emission from the NV− center has been demonstrated, though it could be achieved only from the phonon sideband (i.e. broadband light) and not from the ZPL. For this purpose, the center has to be excited at a wavelength longer than ~650 nm, as higher-energy excitation ionizes the center. The first continuous-wave room-temperature maser has been demonstrated. It used 532-nm-pumped NV− centers held within a high-Purcell-factor microwave cavity and an external magnetic field of 4300 G. Continuous maser oscillation generated a coherent signal at ~9.2 GHz. The NV center can have a very long spin coherence time, approaching the second regime. This is advantageous for applications in quantum sensing and quantum communication. Disadvantageous for these applications is the long radiative lifetime (~12 ns) of the NV center and the strong phonon sideband in its emission spectrum. Both issues can be addressed by putting the NV center in an optical cavity. Historical remarks The microscopic model and most optical properties of ensembles of the NV− centers were firmly established in the 1970s, based on optical measurements combined with uniaxial stress and on electron paramagnetic resonance. However, a minor error in EPR results (it was assumed that illumination is required to observe NV− EPR signals) resulted in incorrect multiplicity assignments in the energy level structure. In 1991 it was shown that EPR can be observed without illumination, which established the energy level scheme shown above.
The magnetic splitting in the excited state has been measured only recently. The characterization of single NV− centers has become a very competitive field nowadays, with many dozens of papers published in the most prestigious scientific journals. One of the first results was reported back in 1997. In that paper, it was demonstrated that the fluorescence of single NV− centers can be detected by room-temperature fluorescence microscopy and that the defect shows perfect photostability. That work also demonstrated one of the outstanding properties of the NV center, namely room-temperature optically detected magnetic resonance. See also Crystallographic defects in diamond Crystallographic defect Material properties of diamond Notes References Diamond Spintronics Spectroscopy Crystallographic defects Quantum computing
Nitrogen-vacancy center
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
4,496
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Spintronics", "Crystallographic defects", "Materials science", "Crystallography", "Condensed matter physics", "Materials degradation", "Spectroscopy" ]
6,993,953
https://en.wikipedia.org/wiki/Immersion%20%28mathematics%29
In mathematics, an immersion is a differentiable function between differentiable manifolds whose differential pushforward is everywhere injective. Explicitly, f : M → N is an immersion if Df_p : T_pM → T_f(p)N is an injective function at every point p of M (where T_pX denotes the tangent space of a manifold X at a point p in X and Df_p is the derivative (pushforward) of the map f at point p). Equivalently, f is an immersion if its derivative has constant rank equal to the dimension of M: rank Df_p = dim M. The function f itself need not be injective, only its derivative must be. Vs. embedding A related concept is that of an embedding. A smooth embedding is an injective immersion f : M → N that is also a topological embedding, so that M is diffeomorphic to its image in N. An immersion is precisely a local embedding – that is, for any point x in M there is a neighbourhood, U, of x such that f : U → N is an embedding, and conversely a local embedding is an immersion. For infinite-dimensional manifolds, this is sometimes taken to be the definition of an immersion. If M is compact, an injective immersion is an embedding, but if M is not compact then injective immersions need not be embeddings; compare to continuous bijections versus homeomorphisms. Regular homotopy A regular homotopy between two immersions f and g from a manifold M to a manifold N is defined to be a differentiable function H : M × [0,1] → N such that for all t in [0,1] the function H_t : M → N defined by H_t(x) = H(x,t) for all x in M is an immersion, with H_0 = f, H_1 = g. A regular homotopy is thus a homotopy through immersions. Classification Hassler Whitney initiated the systematic study of immersions and regular homotopies in the 1940s, proving that for 2m < n every map of an m-dimensional manifold to an n-dimensional manifold is homotopic to an immersion, and in fact to an embedding for 2m + 1 ≤ n; these are the Whitney immersion theorem and Whitney embedding theorem. Stephen Smale expressed the regular homotopy classes of immersions as the homotopy groups of a certain Stiefel manifold. The sphere eversion was a particularly striking consequence. Morris Hirsch generalized Smale's expression to a homotopy theory description of the regular homotopy classes of immersions of any m-dimensional manifold M in any n-dimensional manifold N. The Hirsch–Smale classification of immersions was generalized by Mikhail Gromov. Existence The primary obstruction to the existence of an immersion is the stable normal bundle of M, as detected by its characteristic classes, notably its Stiefel–Whitney classes. That is, since R^n is parallelizable, the pullback of its tangent bundle to M is trivial; since this pullback is the direct sum of the (intrinsically defined) tangent bundle on M, TM, which has dimension m, and of the normal bundle ν of the immersion, which has dimension n − m, for there to be a codimension k immersion of M, there must be a vector bundle of dimension k, ξ^k, standing in for the normal bundle ν, such that TM ⊕ ξ^k is trivial. Conversely, given such a bundle, an immersion of M with this normal bundle is equivalent to a codimension 0 immersion of the total space of this bundle, which is an open manifold. The stable normal bundle is the class of normal bundles plus trivial bundles, and thus if the stable normal bundle has cohomological dimension k, it cannot come from an (unstable) normal bundle of dimension less than k. Thus, the cohomology dimension of the stable normal bundle, as detected by its highest non-vanishing characteristic class, is an obstruction to immersions. Since characteristic classes multiply under direct sum of vector bundles, this obstruction can be stated intrinsically in terms of the space M and its tangent bundle and cohomology algebra.
This obstruction was stated (in terms of the tangent bundle, not the stable normal bundle) by Whitney. For example, the Möbius strip has a non-trivial tangent bundle, so it cannot immerse in codimension 0 (in R^2), though it embeds in codimension 1 (in R^3). Massey showed that these characteristic classes (the Stiefel–Whitney classes of the stable normal bundle) vanish above degree n − α(n), where α(n) is the number of "1" digits when n is written in binary; this bound is sharp, as realized by real projective space. This gave evidence to the immersion conjecture, namely that every n-manifold could be immersed in codimension n − α(n), i.e., in R^(2n−α(n)). This conjecture was proven by Ralph Cohen. Codimension 0 Codimension 0 immersions are equivalently relative dimension 0 submersions, and are better thought of as submersions. A codimension 0 immersion of a closed manifold is precisely a covering map, i.e., a fiber bundle with 0-dimensional (discrete) fiber. By Ehresmann's theorem and Phillips' theorem on submersions, a proper submersion of manifolds is a fiber bundle, hence codimension/relative dimension 0 immersions/submersions behave like submersions. Further, codimension 0 immersions do not behave like other immersions, which are largely determined by the stable normal bundle: in codimension 0 one has issues of fundamental class and cover spaces. For instance, there is no codimension 0 immersion S^1 → R^1 despite the circle being parallelizable, which can be proven because the line has no fundamental class, so one does not get the required map on top cohomology. Alternatively, this is by invariance of domain. Similarly, although S^3 and the 3-torus T^3 are both parallelizable, there is no immersion T^3 → S^3 – any such cover would have to be ramified at some points, since the sphere is simply connected. Another way of understanding this is that a codimension k immersion of a manifold corresponds to a codimension 0 immersion of a k-dimensional vector bundle, which is an open manifold if the codimension is greater than 0, but to a closed manifold in codimension 0 (if the original manifold is closed). Multiple points A k-tuple point (double, triple, etc.) of an immersion f : M → N is an unordered set {x_1, ..., x_k} of distinct points x_i in M with the same image f(x_i) in N. If M is an m-dimensional manifold and N is an n-dimensional manifold then for an immersion in general position the set of k-tuple points is an (n − k(n − m))-dimensional manifold. Every embedding is an immersion without multiple points (where k > 1). Note, however, that the converse is false: there are injective immersions that are not embeddings. The nature of the multiple points classifies immersions; for example, immersions of a circle in the plane are classified up to regular homotopy by the number of double points. At a key point in surgery theory it is necessary to decide if an immersion f of an m-sphere in a 2m-dimensional manifold is regular homotopic to an embedding, in which case it can be killed by surgery. Wall associated to f an invariant μ(f) in a quotient of the fundamental group ring Z[π_1(N)] which counts the double points of f in the universal cover of N. For m > 2, f is regular homotopic to an embedding if and only if μ(f) = 0, by the Whitney trick. One can study embeddings as "immersions without multiple points", since immersions are easier to classify. Thus, one can start from immersions and try to eliminate multiple points, seeing if one can do this without introducing other singularities – studying "multiple disjunctions".
This was first done by André Haefliger, and this approach is fruitful in codimension 3 or more – from the point of view of surgery theory, this is "high (co)dimension", unlike codimension 2, which is the knotting dimension, as in knot theory. It is studied categorically via the "calculus of functors" by Thomas Goodwillie, John Klein, and Michael S. Weiss. Examples and properties A mathematical rose with k petals is an immersion of the circle in the plane with a single k-tuple point; k can be any odd number, but if even it must be a multiple of 4, so the figure 8, with k = 2, is not a rose. The Klein bottle, and all other non-orientable closed surfaces, can be immersed in 3-space but not embedded. By the Whitney–Graustein theorem, the regular homotopy classes of immersions of the circle in the plane are classified by the winding number, which is also the number of double points counted algebraically (i.e. with signs). The sphere can be turned inside out: the standard embedding f_0 : S^2 → R^3 is related to f_1 = −f_0 : S^2 → R^3 by a regular homotopy of immersions f_t : S^2 → R^3. Boy's surface is an immersion of the real projective plane in 3-space; thus also a 2-to-1 immersion of the sphere. The Morin surface is an immersion of the sphere; both it and Boy's surface arise as midway models in sphere eversion. Immersed plane curves Immersed plane curves have a well-defined turning number, which can be defined as the total curvature divided by 2π. This is invariant under regular homotopy, by the Whitney–Graustein theorem – topologically, it is the degree of the Gauss map, or equivalently the winding number of the unit tangent (which does not vanish) about the origin. Further, this is a complete set of invariants – any two plane curves with the same turning number are regular homotopic. Every immersed plane curve lifts to an embedded space curve via separating the intersection points, which is not true in higher dimensions. With added data (which strand is on top), immersed plane curves yield knot diagrams, which are of central interest in knot theory. While immersed plane curves, up to regular homotopy, are determined by their turning number, knots have a very rich and complex structure. Immersed surfaces in 3-space The study of immersed surfaces in 3-space is closely connected with the study of knotted (embedded) surfaces in 4-space, by analogy with the theory of knot diagrams (immersed plane curves (2-space) as projections of knotted curves in 3-space): given a knotted surface in 4-space, one can project it to an immersed surface in 3-space, and conversely, given an immersed surface in 3-space, one may ask if it lifts to 4-space – is it the projection of a knotted surface in 4-space? This allows one to relate questions about these objects. A basic result, in contrast to the case of plane curves, is that not every immersed surface lifts to a knotted surface. In some cases the obstruction is 2-torsion, such as in Koschorke's example, which is an immersed surface (formed from 3 Möbius bands, with a triple point) that does not lift to a knotted surface, but it has a double cover that does lift. Detailed analyses and more recent surveys of this lifting question are given in the literature. Generalizations A far-reaching generalization of immersion theory is the homotopy principle: one may consider the immersion condition (the rank of the derivative is always equal to the dimension m of the domain) as a partial differential relation (PDR), as it can be stated in terms of the partial derivatives of the function.
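A minimal worked illustration of this point (my own example, not from the article): for a smooth map of the plane into 3-space, the immersion condition is the first-order relation

\[
f : \mathbb{R}^2 \to \mathbb{R}^3, \qquad \operatorname{rank} Df = 2 \;\Longleftrightarrow\; \frac{\partial f}{\partial x} \times \frac{\partial f}{\partial y} \neq 0,
\]

an open condition on the first partial derivatives alone, with no constraint on the values of f itself.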
Then Smale–Hirsch immersion theory is the result that this reduces to homotopy theory, and the homotopy principle gives general conditions and reasons for PDRs to reduce to homotopy theory. See also Immersed submanifold Isometric immersion Submersion External links Immersion at the Manifold Atlas Immersion of a manifold at the Encyclopedia of Mathematics Differential geometry Differential topology Maps of manifolds Smooth functions
Immersion (mathematics)
[ "Mathematics" ]
2,379
[ "Topology", "Differential topology" ]
6,994,353
https://en.wikipedia.org/wiki/Fractional%20factorial%20design
In statistics, fractional factorial designs are experimental designs consisting of a carefully chosen subset (fraction) of the experimental runs of a full factorial design. The subset is chosen so as to exploit the sparsity-of-effects principle to expose information about the most important features of the problem studied, while using a fraction of the effort of a full factorial design in terms of experimental runs and resources. In other words, it makes use of the fact that many experiments in full factorial design are often redundant, giving little or no new information about the system. The design of fractional factorial experiments must be deliberate, as certain effects are confounded and cannot be separated from others. History Fractional factorial design was introduced by British statistician David John Finney in 1945, extending previous work by Ronald Fisher on the full factorial experiment at Rothamsted Experimental Station. Developed originally for agricultural applications, it has since been applied to other areas of engineering, science, and business. Basic working principle Similar to a full factorial experiment, a fractional factorial experiment investigates the effects of independent variables, known as factors, on a response variable. Each factor is investigated at different values, known as levels. The response variable is measured using a combination of factors at different levels, and each unique combination is known as a run. To reduce the number of runs in comparison to a full factorial, the experiments are designed to confound different effects and interactions, so that their impacts cannot be distinguished. Higher-order interactions between main effects are typically negligible, making this a reasonable method of studying main effects. This is the sparsity of effects principle. Confounding is controlled by a systematic selection of runs from a full-factorial table. Notation Fractional designs are expressed using the notation l^(k − p), where l is the number of levels of each factor, k is the number of factors, and p describes the size of the fraction of the full factorial used. Formally, p is the number of generators: relationships that determine the intentionally confounded effects that reduce the number of runs needed. Each generator halves the number of runs required. A design with p such generators is a 1/(l^p) = l^(−p) fraction of the full factorial design. For example, a 2^(5 − 2) design is 1/4 of a two-level, five-factor factorial design. Rather than the 32 runs that would be required for the full 2^5 factorial experiment, this experiment requires only eight runs. With two generators, the number of experiments has been halved twice. In practice, one rarely encounters l > 2 levels in fractional factorial designs as the methodology to generate such designs for more than two levels is much more cumbersome. In cases requiring 3 levels for each factor, potential fractional designs to pursue are Latin squares, mutually orthogonal Latin squares, and Taguchi methods. Response surface methodology can also be a much more experimentally efficient way to determine the relationship between the experimental response and factors at multiple levels, but it requires that the levels are continuous. In determining whether more than two levels are needed, experimenters should consider whether they expect the outcome to be nonlinear with the addition of a third level. Another consideration is the number of factors, which can significantly change the experimental labor demand.
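As a quick check of the run-count arithmetic in this notation (a sketch of mine, not part of the article; the helper name is hypothetical):

```python
# Hypothetical helper illustrating the l^(k-p) run-count rule described above.
def runs(l: int, k: int, p: int) -> int:
    """Number of runs in an l^(k-p) fractional factorial design."""
    return l ** (k - p)

assert runs(2, 5, 0) == 32  # full 2^5 factorial
assert runs(2, 5, 1) == 16  # one generator: run count halved once
assert runs(2, 5, 2) == 8   # two generators: halved twice (the 2^(5-2) example)
```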
The levels of a factor are commonly coded as +1 for the higher level, and −1 for the lower level. For a three-level factor, the intermediate value is coded as 0. To save space, the points in a factorial experiment are often abbreviated with strings of plus and minus signs. The strings have as many symbols as factors, and their values dictate the level of each factor: conventionally, − for the first (or low) level, and + for the second (or high) level. The points in a two-level experiment with two factors can thus be represented as −−, +−, −+, and ++. The factorial points can also be abbreviated by (1), a, b, and ab, where the presence of a letter indicates that the specified factor is at its high (or second) level and the absence of a letter indicates that the specified factor is at its low (or first) level (for example, "a" indicates that factor A is on its high setting, while all other factors are at their low (or first) setting). (1) is used to indicate that all factors are at their lowest (or first) values. Factorial points are typically arranged in a table using Yates' standard order: (1), a, b, ab, c, ac, bc, abc, which is created when the level of the first factor alternates with each run. Generation In practice, experimenters typically rely on statistical reference books to supply the "standard" fractional factorial designs, consisting of the principal fraction. The principal fraction is the set of treatment combinations for which the generators evaluate to + under the treatment combination algebra. However, in some situations, experimenters may take it upon themselves to generate their own fractional design. A fractional factorial experiment is generated from a full factorial experiment by choosing an alias structure. The alias structure determines which effects are confounded with each other. For example, the five-factor 2^(5 − 2) design can be generated by using a full factorial experiment involving three factors (say A, B, and C) and then choosing to confound the two remaining factors D and E with interactions generated by D = A*B and E = A*C. These two expressions are called the generators of the design. So for example, when the experiment is run and the experimenter estimates the effects for factor D, what is really being estimated is a combination of the main effect of D and the two-factor interaction involving A and B. An important characteristic of a fractional design is the defining relation, which gives the set of interaction columns equal in the design matrix to a column of plus signs, denoted by I. For the above example, since D = AB and E = AC, then ABD and ACE are both columns of plus signs, and consequently so is BCDE:

I = D*D = AB*D = ABD
I = E*E = AC*E = ACE
I = ABD*ACE = (A*A)*BCDE = BCDE

In this case, the defining relation of the fractional design is I = ABD = ACE = BCDE. The defining relation allows the alias pattern of the design to be determined and includes 2^p words. Notice that in this case, the interaction effects ABD, ACE, and BCDE cannot be studied at all. As the number of generators and the degree of fractionation increases, more and more effects become confounded. The alias pattern can then be determined through multiplying by each factor column. To determine how main effect A is confounded, multiply all terms in the defining relation by A:

A*I = A*ABD = A*ACE = A*BCDE
A = BD = CE = ABCDE

Thus main effect A is confounded with the interaction effects BD, CE, and ABCDE. Other main effects can be computed following a similar method.
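The word algebra above is mechanical enough to automate. The following sketch (my own illustration; the function names are hypothetical, not from any statistics package) derives the defining relation and alias pattern of the 2^(5 − 2) design with generators D = AB and E = AC, and lists the runs of the principal fraction:

```python
from itertools import product

def multiply(w1: frozenset, w2: frozenset) -> frozenset:
    """Product of two effect words: repeated letters cancel (mod-2 exponents)."""
    return w1 ^ w2  # symmetric difference

ABD = frozenset("ABD")  # from D = AB, so I = ABD
ACE = frozenset("ACE")  # from E = AC, so I = ACE

# Defining relation: the generators and all their nontrivial products.
defining = {ABD, ACE, multiply(ABD, ACE)}  # {ABD, ACE, BCDE}
print("I =", sorted("".join(sorted(w)) for w in defining))

def aliases(effect: str) -> list:
    """Effects confounded with `effect` under the defining relation."""
    return sorted("".join(sorted(multiply(frozenset(effect), w))) for w in defining)

print("A aliased with:", aliases("A"))               # ['ABCDE', 'BD', 'CE']
print("resolution:", min(len(w) for w in defining))  # 3; see the Resolution section below

# Principal fraction: choose A, B, C freely, then set D = A*B and E = A*C.
for a, b, c in product((-1, 1), repeat=3):
    print((a, b, c, a * b, a * c))
```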
Resolution An important property of a fractional design is its resolution, or ability to separate main effects and low-order interactions from one another. Formally, if the factors are binary then the resolution of the design is the minimum word length in the defining relation, excluding I. The resolution is denoted using Roman numerals, and it increases with the number. The most important fractional designs are those of resolution III, IV, and V: resolutions below III are not useful and resolutions above V are wasteful (with binary factors) in that the expanded experimentation has no practical benefit in most cases – the bulk of the additional effort goes into the estimation of very high-order interactions which rarely occur in practice. The 2^(5 − 2) design above is resolution III since its defining relation is I = ABD = ACE = BCDE. The resolution classification system described is only used for regular designs. Regular designs have run sizes that equal a power of two, and only full aliasing is present. Non-regular designs, sometimes known as Plackett–Burman designs, are designs where the run size is a multiple of 4; these designs introduce partial aliasing, and generalized resolution is used as the design criterion instead of the resolution described previously. Resolution III designs can be used to construct saturated designs, where N − 1 factors can be investigated in only N runs. These saturated designs can be used for quick screening when many factors are involved. Example fractional factorial experiment Montgomery gives the following example of a fractional factorial experiment. An engineer performed an experiment to increase the filtration rate (output) of a process to produce a chemical, and to reduce the amount of formaldehyde used in the process. The full factorial experiment is described in the Wikipedia page Factorial experiment. Four factors were considered: temperature (A), pressure (B), formaldehyde concentration (C), and stirring rate (D). The results in that example were that the main effects A, C, and D and the AC and AD interactions were significant. The results of that example may be used to simulate a fractional factorial experiment using a half-fraction of the original 2^4 = 16 run design. The table shows the 2^(4 − 1) = 8 run half-fraction experiment design and the resulting filtration rate, extracted from the table for the full 16 run factorial experiment. In this fractional design, each main effect is aliased with a 3-factor interaction (e.g., A = BCD), and every 2-factor interaction is aliased with another 2-factor interaction (e.g., AB = CD). The aliasing relationships are shown in the table. This is a resolution IV design, meaning that main effects are aliased with 3-way interactions, and 2-way interactions are aliased with 2-way interactions. The analysis of variance estimates of the effects are shown in the table below. From inspection of the table, there appear to be large effects due to A, C, and D. The coefficient for the AB interaction is quite small. Unless the AB and CD interactions have approximately equal but opposite effects, these two interactions appear to be negligible. If A, C, and D have large effects, but B has little effect, then the AC and AD interactions are most likely significant. These conclusions are consistent with the results of the full-factorial 16-run experiment. Because B and its interactions appear to be insignificant, B may be dropped from the model. Dropping B results in a full factorial 2^3 design for the factors A, C, and D.
Performing the ANOVA using factors A, C, and D, and the interaction terms A:C and A:D, gives the results shown in the table, which are very similar to the results for the full factorial experiment, but have the advantage of requiring only a half-fraction of 8 runs rather than 16. External links Fractional Factorial Designs (National Institute of Standards and Technology) See also Robust parameter designs References Design of experiments Statistical process control
Fractional factorial design
[ "Engineering" ]
2,281
[ "Statistical process control", "Engineering statistics" ]
6,994,700
https://en.wikipedia.org/wiki/Picoplankton
Picoplankton is the fraction of plankton composed of cells between 0.2 and 2 μm, which can be either prokaryotic or eukaryotic phototrophs and heterotrophs: photosynthetic picoplankton and heterotrophic picoplankton. They are prevalent amongst microbial plankton communities of both freshwater and marine ecosystems, and they make up a significant portion of the total biomass of phytoplankton communities. Classification In general, plankton can be categorized on the basis of physiological, taxonomic, or dimensional characteristics. Subsequently, a generic classification of plankton includes: Bacterioplankton Phytoplankton Zooplankton However, there is a simpler scheme that categorizes plankton based on a logarithmic size scale: Macroplankton (200–2000 μm) Microplankton (20–200 μm) Nanoplankton (2–20 μm) This was even further expanded to include picoplankton (0.2–2 μm) and femtoplankton (0.02–0.2 μm), as well as net plankton and ultraplankton. Now that picoplankton have been characterized, they have their own further subdivisions, such as prokaryotic and eukaryotic phototrophs and heterotrophs, that are spread throughout the world in various types of lakes and trophic states. In order to differentiate between autotrophic picoplankton and heterotrophic picoplankton, the autotrophs can be recognized by their photosynthetic pigments and their ability to show autofluorescence, which allows their enumeration under epifluorescence microscopy. This is how minute eukaryotes first became known. Overall, picoplankton play an essential role in oligotrophic dimictic lakes because they are able to produce and then accordingly recycle dissolved organic matter (DOM) in a very efficient manner under circumstances when competition from other phytoplankters is disturbed by factors such as limiting nutrients and predators. Picoplankton are responsible for most of the primary productivity in oligotrophic gyres, and are distinguished from nanoplankton and microplankton. Because they are small, they have a greater surface-to-volume ratio, enabling them to obtain the scarce nutrients in these ecosystems. Furthermore, some species can also be mixotrophic. The smallest of cells (200 nm) are on the order of nanometers, not picometers. The SI prefix pico- is used quite loosely here, as nanoplankton and microplankton are only 10 and 100 times larger, respectively, although it is somewhat more accurate when considering the volume rather than the length. Role in ecosystems Picoplankton contribute greatly to biomass and primary production in both marine and freshwater lake ecosystems. In the ocean, the concentration of picoplankton is 10^5–10^7 cells per millilitre of ocean water. Algal picoplankton are responsible for up to 90 percent of the total carbon production daily and annually in oligotrophic marine ecosystems. The amount of total carbon production by picoplankton in oligotrophic freshwater systems is also high, making up 70 percent of total annual carbon production. Marine picoplankton make up a higher percentage of biomass and carbon production in zones that are oligotrophic, like the open ocean, versus regions near the shore that are more nutrient rich. Their biomass and carbon production percentage also increase as the depth into the euphotic zone increases. This is due to their use of photopigments and efficiency at using blue-green light at these depths.
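The surface-to-volume advantage described above can be made concrete with a back-of-envelope calculation (my own illustration; the function and numbers are not from the article):

```python
import math

def surface_to_volume(d_um: float) -> float:
    """Surface-to-volume ratio (1/um) of a spherical cell of diameter d_um; equals 6/d."""
    r = d_um / 2.0
    return (4.0 * math.pi * r**2) / ((4.0 / 3.0) * math.pi * r**3)

# Representative diameters for the pico, nano, and micro size classes (um).
for d in (0.2, 2.0, 20.0):
    print(f"d = {d:5.1f} um   S/V = {surface_to_volume(d):6.1f} per um")
# A 0.2 um cell has a 100-fold larger S/V ratio than a 20 um cell,
# consistent with the nutrient-uptake advantage described above.
```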
Picoplankton population densities do not fluctuate throughout the year, except in a few cases of smaller lakes where their biomass increases as the temperature of the lake water increases. Picoplankton also play an important role in the microbial loop of these systems by aiding in providing energy to higher trophic levels. They are grazed by a variety of organisms such as flagellates, ciliates, rotifers and copepods. Flagellates are their main predator due to their ability to swim towards picoplankton in order to consume them. Oceanic picoplankton Picoplankton are important in nutrient cycling in all major oceans, where they exist in their highest abundances. They have many features that allow them to survive in these oligotrophic (low-nutrient) and low-light regions, such as the use of several nitrogen sources, including nitrate, ammonium, and urea. Their small size and large surface area allow for efficient nutrient acquisition, incident light absorption, and organism growth. A small size also allows for minimal metabolic maintenance. Picoplankton, specifically phototrophic picoplankton, play a significant role in the carbon production of open oceanic environments, which largely contributes to the global carbon production. Their carbon production contributes at least 10% of global aquatic net primary productivity. High primary productivity contributions are made in both oligotrophic and deep zones in oceans. Picoplankton are dominant in biomass in open ocean regions. Picoplankton also form the base of aquatic microbial food webs and are an energy source in the microbial loop. All trophic levels in a marine food web are affected by picoplankton carbon production and the gain or loss of picoplankton in the environment, especially in oligotrophic conditions. Marine predators of picoplankton include heterotrophic flagellates and ciliates. Protozoa are a dominant predator of picoplankton. Picoplankton are often lost through processes such as grazing, parasitism, and viral lysis. Measurement Over the last 10 or 15 years, marine scientists have slowly begun to understand the importance of even the smallest subdivisions of plankton and their role in aquatic food webs and in organic and inorganic nutrient recycling. Therefore, being able to accurately measure the biomass and size distribution of picoplankton communities has now become rather essential. Two of the prevalent methods used to identify and enumerate picoplankton are fluorescence microscopy and visual counting. However, both methods are outdated because of their time-consuming and inaccurate nature. As a result, newer, faster, and more accurate methods have emerged lately, including flow cytometry and image-analyzed fluorescence microscopy. Both techniques are efficient in measuring nanoplankton and auto-fluorescing phototrophic picoplankton. However, the most minute size ranges of picoplankton have often proven difficult to measure, which is why charge-coupled devices (CCD) and video cameras are now being used to measure small picoplankton, although a slow-scan CCD-based camera is more effective at detecting and sizing tiny fluorochrome-stained particles such as bacteria. See also Picocyanobacteria Picoeukaryote Picozoa References Biological oceanography Planktology Aquatic ecology
Picoplankton
[ "Biology" ]
1,492
[ "Aquatic ecology", "Ecosystems" ]
6,997,105
https://en.wikipedia.org/wiki/MedicAlert
The MedicAlert Foundation is a non-profit company founded in 1956 and headquartered in Turlock, California. It maintains a database of members' medical information that is made available to medical authorities in the event of a medical emergency. Members supply critical medical data to the organization and receive a distinctive metal bracelet or necklace tag which is worn at all times. It can be used by first responders, such as emergency medical personnel or law-enforcement agents, to access wearers' medical history and special medical needs. The name MedicAlert may be interpreted either as the two separate words "medic alert" or as a blended form of the phrase "medical alert". Protocol and publicity The MedicAlert IDs worn by members are designed to mimic regular jewelry (such as bracelets, necklaces, ID tags, etc.) with the addition of the distinctive MedicAlert engraved tag. The personalized jewelry bears the words "Medic Alert" and the Staff of Asclepius, the universal symbol of the medical profession, on the obverse side, and important medical information and a personalized MedicAlert ID number on the back of the tag. Medical personnel can call the MedicAlert 24-hour Emergency Hotline and provide the ID number on the back of the ID to get more detailed medical information on the member. Members' conditions and allergies are reviewed by medically trained staff and prioritized in the order in which an emergency health professional would assess a patient. The prioritized conditions are then transferred onto a member's emblem and wallet card, while more detailed information is held at MedicAlert, ready to pass on in an emergency situation. While IDs may change depending on country and availability, the two main MedicAlert IDs are bracelets and necklaces, the former being the most popular. MedicAlert has teamed up with Citizen Watch Co. to provide a line of watches that include the Citizen Watch Co. Eco-Drive watch with the customized engraving and logo of MedicAlert. In the 1980s the IDs were publicized in conjunction with the insurance industry, The Epilepsy Foundation, and The American Diabetes Association, amongst other foundations. Celebrities also participated in the campaign (including comedian Carol Burnett, whose bracelet is symbolically the one-millionth). Common medical conditions The medical conditions and prescriptions covered include, but are not limited to: Adrenal insufficiency Allergies (food, latex, insects, seasonal, environmental etc.) Alzheimer's disease Asthma Autism Diabetes Epilepsy Heart disease Hemophilia Hypertension Drug-induced long QT syndrome Medications with serious interactions, e.g. MAOIs and Lamotrigine Parkinson's disease Devices/implants (artificial heart valves, pacemaker) The MedicAlert Foundation of Australia permits organ donation directions to be engraved on its IDs. Advance directives An advance directive covers specific directives as to the course of treatment that is to be taken by caregivers should the patient be unable to give informed consent due to incapacity. Currently in the United States, MedicAlert will hold on to signed advance directives which can be provided to first responders and medical personnel when they contact MedicAlert. A common advance directive is a do-not-resuscitate order, which states that the member has requested that resuscitation should not be attempted if the member suffers cardiac or respiratory arrest. International affiliates MedicAlert has international affiliates in nine countries.
United States Australia Canada Cyprus Iceland Malaysia New Zealand South Africa United Kingdom Zimbabwe References External links U.S. MedicAlert Foundation MedicAlert Foundation Australia MedicAlert Foundation Canada MedicAlert Foundation Cyprus MedicAlert Foundation Iceland MedicAlert Foundation Malaysia MedicAlert Foundation New Zealand MedicAlert Foundation South Africa MedicAlert Foundation United Kingdom MedicAlert Foundation Zimbabwe Medical equipment First aid Organizations established in 1956 Non-profit organizations based in California
MedicAlert
[ "Biology" ]
774
[ "Medical equipment", "Medical technology" ]
8,651,021
https://en.wikipedia.org/wiki/Micellar%20liquid%20chromatography
Micellar liquid chromatography (MLC) is a form of reversed phase liquid chromatography that uses aqueous micellar solutions as the mobile phase. Theory The use of micelles in high performance liquid chromatography was first introduced by Armstrong and Henry in 1980. The technique is used mainly to enhance retention and selectivity of various solutes that would otherwise be inseparable or poorly resolved. Micellar liquid chromatography (MLC) has been used in a variety of applications including separation of mixtures of charged and neutral solutes, direct injection of serum and other physiological fluids, analysis of pharmaceutical compounds, separation of enantiomers, analysis of inorganic organometallics, and a host of others. One of the main drawbacks of the technique is the reduced efficiency that is caused by the micelles. Despite the sometimes poor efficiency, MLC is a better choice than ion-exchange LC or ion-pairing LC for separation of charged molecules and mixtures of charged and neutral species. Some of the aspects which will be discussed are the theoretical aspects of MLC, the use of models in predicting retentive characteristics of MLC, the effect of micelles on efficiency and selectivity, and general applications of MLC. Reverse phase high-performance liquid chromatography (RP-HPLC) involves a non-polar stationary phase, often a hydrocarbon chain, and a polar mobile or liquid phase. The mobile phase generally consists of an aqueous portion with an organic addition, such as methanol or acetonitrile. When a solution of analytes is injected into the system, the components begin to partition out of the mobile phase and interact with the stationary phase. Each component interacts with the stationary phase in a different manner depending upon its polarity and hydrophobicity. In reverse phase HPLC, the solute with the greatest polarity will interact less with the stationary phase and spend more time in the mobile phase. As the polarity of the components decreases, the time spent in the column increases. Thus, a separation of components is achieved based on polarity. The addition of micelles to the mobile phase introduces a third phase into which the solutes may partition. Micelles Micelles are composed of surfactant, or detergent, monomers with a hydrophobic moiety, or tail, on one end, and a hydrophilic moiety, or head group, on the other. The polar head group may be anionic, cationic, zwitterionic, or non-ionic. When the concentration of a surfactant in solution reaches its critical micelle concentration (CMC), it forms micelles, which are aggregates of the monomers. The CMC is different for each surfactant, as is the number of monomers which make up the micelle, termed the aggregation number (AN). Table 1 lists some common detergents used to form micelles along with their CMC and AN where available. Many of the characteristics of micelles differ from those of bulk solvents. For example, the micelles are, by nature, spatially heterogeneous with a hydrocarbon, nearly anhydrous core and a highly solvated, polar head group. They have a high surface-to-volume ratio due to their small size and generally spherical shape. Their surrounding environment (pH, ionic strength, buffer ion, presence of a co-solvent, and temperature) has an influence on their size, shape, critical micelle concentration, aggregation number and other properties. Another important property of micelles is the Krafft point, the temperature at which the solubility of the surfactant is equal to its CMC.
For HPLC applications involving micelles, it is best to choose a surfactant with a low Krafft point and CMC. A high CMC would require a high concentration of surfactant, which would increase the viscosity of the mobile phase, an undesirable condition. Additionally, the Krafft point should be well below room temperature to avoid having to apply heat to the mobile phase. To avoid potential interference with absorption detectors, a surfactant should also have a small molar absorptivity at the chosen wavelength of analysis. Light scattering should not be a concern due to the small size, a few nanometers, of the micelle. The effect of organic additives on micellar properties is another important consideration. A small amount of organic solvent is often added to the mobile phase to help improve efficiency and to improve separations of compounds. Care needs to be taken when determining how much organic to add. Too high a concentration of the organic may cause the micelle to disperse, as it relies on hydrophobic effects for its formation. The maximum concentration of organic depends on the organic solvent itself, and on the micelle. This information is generally not known precisely, but a generally accepted practice is to keep the volume percentage of organic below 15–20%. Research Fischer and Jandera studied the effect of changing the concentration of methanol on CMC values for three commonly used surfactants. Two cationic surfactants, hexadecyltrimethylammonium bromide (CTAB) and N-(α-carbethoxypentadecyl)trimethylammonium bromide (Septonex), and one anionic surfactant, sodium dodecyl sulphate (SDS), were chosen for the experiment. Generally speaking, the CMC increased as the concentration of methanol increased. It was then concluded that the distribution of the surfactant between the bulk mobile phase and the micellar phase shifts toward the bulk as the methanol concentration increases. For CTAB, the rise in CMC is greatest from 0–10% methanol, and is nearly constant from 10–20%. Above 20% methanol, the micelles disaggregate and do not exist. For SDS, the CMC values remain unaffected below 10% methanol, but begin to increase as the methanol concentration is further increased. Disaggregation occurs above 30% methanol. Finally, for Septonex, only a slight increase in CMC is observed up to 20%, with disaggregation occurring above 25%. As has been asserted, the mobile phase in MLC consists of micelles in an aqueous solvent, usually with a small amount of organic modifier added to complete the mobile phase. A typical reverse phase alkyl-bonded stationary phase is used. The first discussion of the thermodynamics involved in the retention mechanism was published by Armstrong and Nome in 1981. In MLC, there are three partition coefficients which must be taken into account. The solute will partition between the water and the stationary phase (K_SW), the water and the micelles (K_MW), and the micelles and the stationary phase (K_SM). Armstrong and Nome derived an equation describing the partition coefficients in terms of the retention factor, formerly called the capacity factor, k′. In HPLC, the capacity factor represents the molar ratio of the solute in the stationary phase to that in the mobile phase. The capacity factor is easily measured based on the retention times of the compound and of an unretained compound. The equation, rewritten by Guermouche et al.,
is presented here:

1/k′ = [ν(K_MW − 1)/(φK_SW)]·C_M + 1/(φK_SW)

Where:
k′ is the capacity factor of the solute
K_SW is the partition coefficient of the solute between the stationary phase and the water
K_MW is the partition coefficient of the solute between the micelles and the water
φ is the phase volume ratio (stationary phase volume/mobile phase volume)
ν is the molar volume of the surfactant
C_M is the concentration of the micelle in the mobile phase (total surfactant concentration − critical micelle concentration)

A plot of 1/k′ versus C_M gives a straight line in which K_SW can be calculated from the intercept and K_MW can be obtained from the ratio of the slope to the intercept. Finally, K_SM can be obtained from the ratio of the other two partition coefficients: K_SM = K_SW/K_MW. Note that K_MW is independent of any effects from the stationary phase, assuming the same micellar mobile phase. The validity of the retention mechanism proposed by Armstrong and Nome has been repeatedly and successfully confirmed experimentally. However, some variations and alternate theories have also been proposed. Jandera and Fischer developed equations to describe the dependence of retention behavior on the change in micellar concentrations. They found that the retention of most compounds tested decreased with increasing concentrations of micelles. From this, it can be surmised that the compounds associate with the micelles, as they spend less time associated with the stationary phase. Foley proposed a retentive model similar to that of Armstrong and Nome, which was a general model for secondary chemical equilibria in liquid chromatography. While this model was developed in a previous reference, and could be used for any secondary chemical equilibria such as acid-base equilibria and ion-pairing, Foley further refined the model for MLC. When an equilibrant (X), in this case surfactant, is added to the mobile phase, a secondary equilibrium is created in which an analyte will exist as free analyte (A) and complexed with the equilibrant (AX). The two forms will be retained by the stationary phase to different extents, thus allowing the retention to be varied by adjusting the concentration of equilibrant (micelles). The resulting equation solved for the capacity factor in terms of partition coefficients is much the same as that of Armstrong and Nome:

1/k′ = (K_SM/k′_S)·[M] + 1/k′_S

Where:
k′ is the observed capacity factor of the complexed solute and the free solute
k′_S is the capacity factor of the free solute
K_SM is the partition coefficient of the solute between the stationary phase and the micelle
[M] may be either the concentration of surfactant or the concentration of micelle

Foley used the above equation to determine the solute–micelle association constants and free-solute retention factors for a variety of solutes with different surfactants and stationary phases. From these data, it is possible to predict the type and optimum concentration of surfactant needed for a given solute or solutes. Foley has not been the only researcher interested in determining the solute–micelle association constants. A review article by Marina and Garcia, with 53 references, discusses the usefulness of obtaining solute–micelle association constants. The association constants for two solutes can be used to help understand the retention mechanism. The separation factor of two solutes, α, can be expressed as K_SM1/K_SM2.
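The graphical procedure just described amounts to a straight-line fit. A sketch of how the constants might be extracted numerically (my own illustration with invented data; the values of φ and ν are assumptions, not from the text):

```python
import numpy as np

# Made-up example data: micelle concentration C_M (mol/L) and measured k'
C_M = np.array([0.01, 0.02, 0.04, 0.06, 0.08])
k_prime = np.array([8.1, 6.4, 4.5, 3.5, 2.8])

phi = 0.2   # assumed phase volume ratio
nu = 0.25   # assumed surfactant molar volume (L/mol)

# Armstrong-Nome: 1/k' = [nu*(K_MW - 1)/(phi*K_SW)]*C_M + 1/(phi*K_SW)
slope, intercept = np.polyfit(C_M, 1.0 / k_prime, 1)

K_SW = 1.0 / (phi * intercept)         # from the intercept
K_MW = 1.0 + slope / (intercept * nu)  # from the slope-to-intercept ratio
K_SM = K_SW / K_MW                     # micelle -> stationary phase
print(f"K_SW = {K_SW:.1f}, K_MW = {K_MW:.1f}, K_SM = {K_SM:.2f}")
```

Repeating the fit for a second solute yields K_SM2, from which an experimental separation factor α = K_SM1/K_SM2 follows.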
If the experimental α coincides with the ratio of the two solute–micelle partition coefficients, it can be assumed that their retention occurs through a direct transfer from the micellar phase to the stationary phase. In addition, calculation of α would allow for prediction of separation selectivity before the analysis is performed, provided the two coefficients are known. The desire to predict retention behavior and selectivity has led to the development of several mathematical models. Changes in pH, surfactant concentration, and concentration of organic modifier play a significant role in determining the chromatographic separation. Often one or more of these parameters need to be optimized to achieve the desired separation, yet the optimum parameters must take all three variables into account simultaneously. The review by Garcia-Alvarez-Coque et al. mentioned several successful models for varying scenarios, a few of which will be mentioned here. The classic models by Armstrong and Nome and Foley are used to describe the general cases. Foley's model applies to many cases and has been experimentally verified for ionic, neutral, polar, and nonpolar solutes; anionic, cationic, and non-ionic surfactants; and C8, C18, and cyano stationary phases. The model begins to deviate for highly and lowly retained solutes. Highly retained solutes may become irreversibly bound to the stationary phase, whereas lowly retained solutes may elute in the column void volume. Other models proposed by Arunyanart and Cline-Love and by Rodgers and Khaledi describe the effect of pH on the retention of weak acids and bases. These authors derived equations relating pH and micellar concentration to retention. As the pH varies, sigmoidal behavior is observed for the retention of acidic and basic species. This model has been shown to accurately predict retention behavior. Still other models predict behavior in hybrid micellar systems using equations or modeling behavior based on controlled experimentation. Additionally, models accounting for the simultaneous effect of pH, micelle, and organic concentration have been suggested. These models allow for further enhancement of the optimization of the separation of weak acids and bases. One research group, Rukhadze et al., derived a first-order linear relationship describing the influence of micelle and organic concentration and pH on the selectivity and resolution of seven barbiturates. The researchers discovered that a second-order mathematical equation would more precisely fit the data. The derivations and experimental details are beyond the scope of this discussion. The model was successful in predicting the experimental conditions necessary to achieve a separation for compounds which are traditionally difficult to resolve. Jandera, Fischer, and Effenberger approached the modeling problem in yet another way. The model used was based on lipophilicity and polarity indices of solutes. The lipophilicity index relates a given solute to a hypothetical number of carbon atoms in an alkyl chain. It is based on, and depends on, a calibration series determined experimentally. The lipophilicity index should be independent of the stationary phase and organic modifier concentration. The polarity index is a measure of the polarity of the solute–solvent interactions. It depends strongly on the organic solvent, and somewhat on the polar groups present in the stationary phase. 23 compounds were analyzed with varying mobile phases and compared to the lipophilicity and polarity indices.
The results showed that the model could be applied to MLC, but better predictive behavior was found with concentrations of surfactant below the CMC, i.e., sub-micellar. A final type of model based on molecular properties of a solute is a branch of quantitative structure–activity relationships (QSAR). QSAR studies attempt to correlate the biological activity of drugs, or a class of drugs, with their structures. The normally accepted means of uptake for a drug, or its metabolite, is through partitioning into lipid bilayers. The descriptor most often used in QSAR to determine the hydrophobicity of a compound is the octanol–water partition coefficient, log P. MLC provides an attractive and practical alternative for QSAR. When micelles are added to a mobile phase, many similarities exist between the micellar mobile phase/stationary phase system and the biological membrane/water interface. In MLC, the stationary phase becomes modified by the adsorption of surfactant monomers, which are structurally similar to the membranous hydrocarbon chains in the biological model. Additionally, the hydrophilic/hydrophobic interactions of the micelles are similar to those in the polar regions of a membrane. Thus, the development of quantitative structure–retention relationships (QRAR) has become widespread. Escuder-Gilabert et al. tested three different QRAR retention models on ionic compounds. Several classes of compounds were tested, including catecholamines, local anesthetics, diuretics, and amino acids. The best model relating log K and log P was found to be one in which the total molar charge of a compound at a given pH is included as a variable. This model proved to give fairly accurate predictions of log P, R > 0.9. Other studies have been performed which develop predictive QRAR models for tricyclic antidepressants and barbiturates. Efficiency The main limitation in the use of MLC is the reduction in efficiency (peak broadening) that is observed when purely aqueous micellar mobile phases are used. Several explanations for the poor efficiency have been theorized. Poor wetting of the stationary phase by the micellar aqueous mobile phase, slow mass transfer between the micelles and the stationary phase, and poor mass transfer within the stationary phase have all been postulated as possible causes. To enhance efficiency, the most common approaches have been the addition of small amounts of isopropyl alcohol and an increase in temperature. A review by Berthod studied the combined theories presented above and applied the Knox equation to independently determine the cause of the reduced efficiency. The Knox equation is commonly used in HPLC to describe the different contributions to overall band broadening of a solute. The Knox equation is expressed as:

h = An^(1/3) + B/n + Cn

Where:
h = the reduced plate height (plate height/stationary phase particle diameter)
n = the reduced mobile phase linear velocity (velocity times stationary phase particle diameter/solute diffusion coefficient in the mobile phase)
A, B, and C are constants related to solute flow anisotropy (eddy diffusion), molecular longitudinal diffusion, and mass transfer properties, respectively.

Berthod's use of the Knox equation to experimentally determine which of the proposed theories was most correct led him to the following conclusions. The flow anisotropy in the micellar phase seems to be much greater than in traditional hydro-organic mobile phases of similar viscosity.
This is likely due to the partial clogging of the stationary phase pores by adsorbed surfactant molecules. Raising the column temperature served to decrease both the viscosity of the mobile phase and the amount of adsorbed surfactant. Both results reduce the A term and the amount of eddy diffusion, and thereby increase efficiency. The increase in the B term, as related to longitudinal diffusion, is associated with the decrease in the solute diffusion coefficient in the mobile phase, D_M, due to the presence of the micelles, and an increase in the capacity factor, k′. Again, this is related to surfactant adsorption on the stationary phase causing a dramatic decrease in the solute diffusion coefficient in the stationary phase, D_S. Again, an increase in temperature, now coupled with an addition of alcohol to the mobile phase, drastically decreases the amount of the adsorbed surfactant. In turn, both actions reduce the C term caused by a slow mass transfer from the stationary phase to the mobile phase. Further optimization of efficiency can be gained by reducing the flow rate to one closely matched to that derived from the Knox equation. Overall, the three proposed theories seemed to have contributing effects on the poor efficiency observed, which can be partially countered by the addition of organic modifiers, particularly alcohol, and by increasing the column temperature. Applications Despite the reduced efficiency versus reversed phase HPLC, hundreds of applications have been reported using MLC. One of the most advantageous is the ability to directly inject physiological fluids. Micelles have an ability to solubilize proteins, which enables MLC to be useful in analyzing untreated biological fluids such as plasma, serum, and urine. Martinez et al. found MLC to be highly useful in analyzing a class of drugs called β-antagonists, so-called beta-blockers, in urine samples. The main advantage of the use of MLC with this type of sample is the great time savings in sample preparation. Alternative methods of analysis, including reversed phase HPLC, require lengthy extraction and sample work-up procedures before analysis can begin. With MLC, direct injection is often possible, with retention times of less than 15 minutes for the separation of up to nine β-antagonists. Another application compared reversed phase HPLC with MLC for the analysis of desferrioxamine in serum. Desferrioxamine (DFO) is a commonly used drug for the removal of excess iron in patients with chronic or acute iron overload. The analysis of DFO along with its chelated complexes, Fe(III)-DFO and Al(III)-DFO, has proven to be difficult at best in previous attempts. This study found that direct injection of the serum was possible for MLC, versus an ultrafiltration step necessary in HPLC. This analysis proved to have difficulties with the separation of the chelated DFO compounds and with the sensitivity levels for DFO itself when MLC was applied. The researchers found that, in this case, reverse phase HPLC was a better, more sensitive technique despite the time savings in direct injection. Analysis of pharmaceuticals by MLC is also gaining popularity. The selectivity and peak shape of MLC over commonly used ion-pair chromatography are much enhanced. MLC mimics, yet enhances, the selectivity offered by ion-pairing reagents for the separation of active ingredients in pharmaceutical drugs. For basic drugs, MLC improves the excessive peak tailing frequently observed in ion-pairing.
Hydrophilic drugs, which are often unretained using conventional HPLC, are retained by MLC due to solubilization into the micelles. Commonly found drugs in cold medications such as acetaminophen, L-ascorbic acid, phenylpropanolamine HCl, tipepidine hibenzate, and chlorpheniramine maleate have been successfully separated with good peak shape using MLC. Additional basic drugs, like many narcotics such as codeine and morphine, have also been successfully separated using MLC. Another novel application of MLC involves the separation and analysis of inorganic compounds, mostly simple ions. This is a relatively new area for MLC, but has seen some promising results. MLC has been observed to provide better selectivity of inorganic ions than ion-exchange or ion-pairing chromatography. While this application is still in the beginning stages of development, the possibility exists for novel, much-enhanced separations of inorganic species. Since the technique was first reported in 1980, micellar liquid chromatography has been used in hundreds of applications. This micelle-controlled technique provides unique opportunities for solving complicated separation problems. Despite the poor efficiency of MLC, it has been successfully used in many applications. The use of MLC in the future appears to be extremely advantageous in the areas of physiological fluids, pharmaceuticals, and even inorganic ions. The technique has proven to be superior to ion-pairing and ion-exchange for many applications. As new approaches are developed to combat the poor efficiency of MLC, its application is sure to spread and gain more acceptance. References Chromatography
Micellar liquid chromatography
[ "Chemistry" ]
4,759
[ "Chromatography", "Separation processes" ]
8,652,895
https://en.wikipedia.org/wiki/CCL21
Chemokine (C-C motif) ligand 21 (CCL21) is a small cytokine belonging to the CC chemokine family. This chemokine is also known as 6Ckine (because it has six conserved cysteine residues instead of the four cysteines typical of chemokines), exodus-2, and secondary lymphoid-tissue chemokine (SLC). CCL21 elicits its effects by binding to a cell surface chemokine receptor known as CCR7. The main function of CCL21 is to guide CCR7-expressing leukocytes to the secondary lymphoid organs, such as lymph nodes and Peyer's patches. Gene The gene for CCL21 is located on human chromosome 9. CCL21 is classified as a homeostatic chemokine: it is produced constitutively. However, its expression increases during inflammation. Protein structure Chemokine CCL21 contains an extended C-terminus which is not found in CCL19, another ligand of CCR7. The C-terminal tail is composed of 37 amino acids rich in positively charged residues and therefore has high affinity for negatively charged molecules of the extracellular matrix. Cleavage of the C-terminal tail by peptidases produces a soluble form of CCL21. Soluble CCL21 also occurs under physiological conditions. It does not bind to the extracellular matrix and therefore its function differs from that of the full-length CCL21. Function Migration to secondary lymphoid organs Naïve T cells circulate through secondary lymphoid organs until they encounter the antigen. CCL21 is a chemokine involved in the recruitment of T cells into secondary lymphoid organs. It is produced by lymphatic endothelial cells and lymph node stromal cells. Full-length CCL21 is bound to glycosaminoglycans and endothelial cells, and it induces the chemotactic migration of T cells and the cell adhesion caused by integrin activation. In contrast, soluble CCL21 is not involved in the induction of cell adhesion. After T cells enter the lymph nodes through high endothelial venules, they are attracted to the T cell zone, where fibroblastic reticular cells are an abundant source of CCL21. CCL21/CCR7 interaction also plays a role in the migration of dendritic cells to the secondary lymphoid organs. Dendritic cells upregulate the expression of CCR7 during their maturation. CCL21 is bound to the lymphatic vessels and attracts CCR7-expressing dendritic cells from peripheral tissues. They then migrate along the chemokine gradient to the lymph node, where they present the antigen to T cells. Interactions between dendritic cells and T cells are necessary for the initiation of the adaptive immune response. When CCL21 is not recognized by the cells (for example in CCR7-deficient mice), a delayed and reduced adaptive immune response occurs due to reduced interactions between dendritic cells and T cells in the lymph nodes. Semi-mature dendritic cells express CCR7 in the absence of a danger signal. They use the CCL21 chemokine gradient for migration to the lymph nodes even when there is no inflammation in the body, and they contribute to peripheral tolerance. Other cells using the chemokine CCL21 for migration to the lymph nodes are B cells. However, they are less dependent on it in comparison to T cells. T cell development in the thymus CCL21/CCR7 interaction plays a role in T cell development in the thymus. CCL21 is produced in the thymus medulla by medullary thymic epithelial cells, and it attracts single-positive thymocytes from the thymus cortex to the medulla, where they undergo negative selection to delete autoreactive thymocytes. References External links Further reading Cytokines
CCL21
[ "Chemistry" ]
860
[ "Cytokines", "Signal transduction" ]
13,293,546
https://en.wikipedia.org/wiki/Feynman%20checkerboard
The Feynman checkerboard, or relativistic chessboard model, was Richard Feynman's sum-over-paths formulation of the kernel for a free spin-½ particle moving in one spatial dimension. It provides a representation of solutions of the Dirac equation in (1+1)-dimensional spacetime as discrete sums. The model can be visualised by considering relativistic random walks on a two-dimensional spacetime checkerboard. At each discrete timestep ε the particle of mass m moves a distance εc to the left or right (c being the speed of light). For such a discrete motion, the Feynman path integral reduces to a sum over the possible paths. Feynman demonstrated that if each "turn" (change of moving from left to right or conversely) of the space–time path is weighted by iεmc²/ħ (with ħ denoting the reduced Planck constant), in the limit of infinitely small checkerboard squares the sum of all weighted paths yields a propagator that satisfies the one-dimensional Dirac equation. As a result, helicity (the one-dimensional equivalent of spin) is obtained from a simple cellular-automata-type rule. The checkerboard model is important because it connects aspects of spin and chirality with propagation in spacetime and is the only sum-over-path formulation in which quantum phase is discrete at the level of the paths, taking only values corresponding to the 4th roots of unity. History Richard Feynman invented the model in the 1940s while developing his spacetime approach to quantum mechanics. He did not publish the result until it appeared in a text on path integrals coauthored by Albert Hibbs in the mid 1960s. The model was not included with the original path-integral article because a suitable generalization to a four-dimensional spacetime had not been found. One of the first connections between the amplitudes prescribed by Feynman for the Dirac particle in 1+1 dimensions, and the standard interpretation of amplitudes in terms of the kernel, or propagator, was established by Jayant Narlikar in a detailed analysis. The name "Feynman chessboard model" was coined by Harold A. Gersch when he demonstrated its relationship to the one-dimensional Ising model. B. Gaveau et al. discovered a relationship between the model and a stochastic model of the telegraph equations due to Mark Kac through analytic continuation. Ted Jacobson and Lawrence Schulman examined the passage from the relativistic to the non-relativistic path integral. Subsequently, G. N. Ord showed that the chessboard model was embedded in correlations in Kac's original stochastic model and so had a purely classical context, free of formal analytic continuation. In the same year, Louis Kauffman and Pierre Noyes produced a fully discrete version related to bit-string physics, which has been developed into a general approach to discrete physics. Extensions Although Feynman did not live to publish extensions to the chessboard model, it is evident from his archived notes that he was interested in establishing a link between the 4th roots of unity (used as statistical weights in chessboard paths) and his discovery, with John Archibald Wheeler, that antiparticles are equivalent to particles moving backwards in time. His notes contain several sketches of chessboard paths with added spacetime loops. The first extension of the model to explicitly contain such loops was the "spiral model", in which chessboard paths were allowed to spiral in spacetime.
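The original path-counting rule lends itself to direct simulation. The following sketch (my own illustration, with hypothetical names; units are chosen so that the turn weight iεmc²/ħ is a single complex parameter) sums the turn-weighted checkerboard paths by dynamic programming:

```python
import numpy as np

def checkerboard_kernel(n_steps: int, turn_weight: complex) -> np.ndarray:
    """Amplitudes K[site, direction] after n_steps, starting at the origin moving right.

    Each step the particle moves one lattice unit left (direction 0) or right
    (direction 1); every reversal of direction multiplies the amplitude by
    turn_weight, which plays the role of i*eps*m*c**2/hbar."""
    n_sites = 2 * n_steps + 1                    # reachable sites; origin at index n_steps
    amp = np.zeros((n_sites, 2), dtype=complex)
    amp[n_steps, 1] = 1.0                        # start at the origin, moving right
    for _ in range(n_steps):
        new = np.zeros_like(amp)
        new[:-1, 0] = amp[1:, 0] + turn_weight * amp[1:, 1]   # arrive moving left
        new[1:, 1] = amp[:-1, 1] + turn_weight * amp[:-1, 0]  # arrive moving right
        amp = new
    return amp

eps = 0.01                              # timestep, in units with m = c = hbar = 1
K = checkerboard_kernel(200, 1j * eps)  # turn weight i*eps*m*c^2/hbar
print(np.abs(K).max())
```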
Unlike the chessboard case, causality in the spiral model had to be implemented explicitly to avoid divergences; with this restriction, however, the Dirac equation emerged as a continuum limit. Subsequently, the roles of zitterbewegung, antiparticles and the Dirac sea in the chessboard model have been elucidated, and the implications for the Schrödinger equation considered through the non-relativistic limit. Further extensions of the original 2-dimensional spacetime model include features such as improved summation rules and generalized lattices. There has been no consensus on an optimal extension of the chessboard model to a fully four-dimensional spacetime. Two distinct classes of extensions exist, those working with a fixed underlying lattice and those that embed the two-dimensional case in higher dimension. The advantage of the former is that the sum-over-paths is closer to the non-relativistic case; however, the simple picture of a single directionally independent speed of light is lost. In the latter extensions the fixed-speed property is maintained at the expense of variable directions at each step. References Quantum field theory Spinors Dirac equation Lattice models Richard Feynman
Feynman checkerboard
[ "Physics", "Materials_science" ]
968
[ "Quantum field theory", "Equations of physics", "Eponymous equations of physics", "Quantum mechanics", "Lattice models", "Computational physics", "Condensed matter physics", "Statistical mechanics", "Dirac equation" ]
13,298,565
https://en.wikipedia.org/wiki/Skid-to-turn
Skid-to-turn is a method by which an aeronautical vehicle, such as an aircraft or missile, may be turned. In skid-to-turn, the vehicle does not roll to a preferred angle. Instead, commands to the control surfaces are mixed to produce the maneuver in the desired direction. This is distinct from the coordinated turn used by aircraft pilots. For instance, a vehicle flying horizontally may be turned in the horizontal plane by the application of rudder controls to place the body at a sideslip angle relative to the airflow. This sideslip flow then produces a force in the horizontal plane that turns the vehicle's velocity vector. The benefit of the skid-to-turn maneuver is that it can be performed much more quickly than a coordinated turn, which is useful when trying to correct for small errors. The disadvantage arises if the vehicle has greater maneuverability in one body plane than another: in that case the turns are less efficient, and either consume greater thrust or cause a greater loss of aircraft specific energy than coordinated turns.
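The mechanism can be made concrete with the standard small-angle side-force relation Y = q·S·C_Yβ·β. The following sketch is a simplified illustration of that relation, not a model of any particular vehicle; all parameter values and names are assumptions made for the example, and sign conventions for the side-force derivative vary between references.

```python
import math

def skid_turn_rate(beta_deg, V, rho, S, C_y_beta, mass):
    """Approximate heading rate (rad/s) of the velocity vector produced
    by a sideslip angle, assuming small angles and level flight."""
    beta = math.radians(beta_deg)
    q = 0.5 * rho * V**2            # dynamic pressure
    Y = q * S * C_y_beta * beta     # side force generated by the sideslip
    a_lat = Y / mass                # lateral acceleration
    return a_lat / V                # rotation rate of the velocity vector

# illustrative numbers: 2 deg of sideslip at 250 m/s in sea-level air,
# with a made-up reference area, side-force derivative, and mass
print(skid_turn_rate(2.0, 250.0, 1.225, 0.5, 2.0, 150.0))
```

See also Skid steer References External links Automatic control of aircraft and missiles By John H. Blakelock Aerodynamics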
Skid-to-turn
[ "Chemistry", "Engineering" ]
235
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
13,298,795
https://en.wikipedia.org/wiki/Isomalathion
Isomalathion is an impurity found in some batches of malathion. Whereas the structure of malathion is, generically, RSP(S)(OCH3)2, the connectivity of isomalathion is RSPO(SCH3)(OCH3). It arises by heating malathion. Being significantly more toxic to humans than malathion, it has resulted in human poisonings. In 1976, numerous malaria workers in Pakistan were poisoned by isomalathion. It is an inhibitor of carboxylesterase. References Phosphorodithioates Succinate esters
Isomalathion
[ "Chemistry", "Biology" ]
130
[ "Biotechnology stubs", "Functional groups", "Phosphorodithioates", "Biochemistry stubs", "Biochemistry" ]
13,300,194
https://en.wikipedia.org/wiki/Advanced%20Traffic%20Management%20System
The advanced traffic management system (ATMS) field is a primary subfield within the intelligent transportation system (ITS) domain; the term is used primarily in the United States. The ATMS view is a top-down management perspective that integrates technology primarily to improve the flow of vehicle traffic and improve safety. Real-time traffic data from cameras, speed sensors, etc. flows into a transportation management center (TMC) where it is integrated and processed (e.g. for incident detection), and may result in actions taken (e.g. traffic routing, DMS messages) with the goal of improving traffic flow. The National ITS Architecture defines the following primary goals and metrics for ITS: Increase transportation system efficiency Enhance mobility Improve safety Reduce fuel consumption and environmental cost Increase economic productivity Create an environment for an ITS market History In 1956, the National Interstate and Defense Highways Act initiated a 35-year, $114 billion program that designed and constructed the interstate highway system. This hugely successful program was mostly complete by 1991, and the era of build-out was over. In the mid-to-late 1980s, transportation officials from federal and state governments, the private sector, and universities began a series of informal meetings discussing the future of transportation. These included meetings held by the California Department of Transportation (Caltrans) in October 1986 to discuss technology applied to future advanced highways. In June 1988 in Washington, DC, the group formalized its structure and chose the name "Mobility 2000". In 1990, Mobility 2000 became ITS America, the main ITS advocacy and policy group in the US. The initial name of ITS America was IVHS America; it was changed in 1994 to reflect a broader intermodal perspective. The 1991 Intermodal Surface Transportation Efficiency Act (ISTEA) was the first post-build-out transportation act. It initiated a new approach focused on efficiency, intelligence, and intermodalism. It had a primary goal of providing "the foundation for the nation to compete in the global economy". This new mixture of infrastructure and technology was identified as an intelligent transportation system (ITS) and was the centerpiece of the 1991 ISTEA act. ITS is loosely defined as "the application of computers, communications, and sensor technology to surface transportation". Subsequent surface transportation bills have continued ITS funding and development. In 2005, the SAFETEA-LU (Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users) surface transportation spending bill was signed into law. Functional areas Real-time traffic monitoring Dynamic message sign monitoring and control Incident monitoring Traffic camera monitoring and control Active traffic management (ATM) Chain control Ramp meter monitoring and control Arterial management Traffic signal monitoring and control Automated warning systems Road Weather Information System (RWIS) monitoring Highway advisory radio Urban traffic management and control Systems IRIS open-source ATMS Project Georgia Navigator Kimley-Horn integrated transportation system (KITS) See also Traffic optimization Speed limit: variable speed limits Variable-message sign PTV Group References Transportation engineering Intelligent transportation systems Road traffic management
Advanced Traffic Management System
[ "Technology", "Engineering" ]
607
[ "Transport systems", "Industrial engineering", "Information systems", "Transportation engineering", "Civil engineering", "Warning systems", "Intelligent transportation systems" ]
2,546
https://en.wikipedia.org/wiki/Automated%20theorem%20proving
Automated theorem proving (also known as ATP or automated deduction) is a subfield of automated reasoning and mathematical logic dealing with proving mathematical theorems by computer programs. Automated reasoning over mathematical proof was a major motivating factor for the development of computer science. Logical foundations While the roots of formalized logic go back to Aristotle, the end of the 19th and early 20th centuries saw the development of modern logic and formalized mathematics. Frege's Begriffsschrift (1879) introduced both a complete propositional calculus and what is essentially modern predicate logic. His Foundations of Arithmetic, published in 1884, expressed (parts of) mathematics in formal logic. This approach was continued by Russell and Whitehead in their influential Principia Mathematica, first published 1910–1913, and with a revised second edition in 1927. Russell and Whitehead thought they could derive all mathematical truth using axioms and inference rules of formal logic, in principle opening up the process to automation. In 1920, Thoralf Skolem simplified a previous result by Leopold Löwenheim, leading to the Löwenheim–Skolem theorem and, in 1930, to the notion of a Herbrand universe and a Herbrand interpretation that allowed (un)satisfiability of first-order formulas (and hence the validity of a theorem) to be reduced to (potentially infinitely many) propositional satisfiability problems. In 1929, Mojżesz Presburger showed that the first-order theory of the natural numbers with addition and equality (now called Presburger arithmetic in his honor) is decidable and gave an algorithm that could determine if a given sentence in the language was true or false. However, shortly after this positive result, Kurt Gödel published On Formally Undecidable Propositions of Principia Mathematica and Related Systems (1931), showing that in any sufficiently strong axiomatic system, there are true statements that cannot be proved in the system. This topic was further developed in the 1930s by Alonzo Church and Alan Turing, who on the one hand gave two independent but equivalent definitions of computability, and on the other gave concrete examples of undecidable questions. First implementations Shortly after World War II, the first general-purpose computers became available. In 1954, Martin Davis programmed Presburger's algorithm for a JOHNNIAC vacuum-tube computer at the Institute for Advanced Study in Princeton, New Jersey. According to Davis, "Its great triumph was to prove that the sum of two even numbers is even". More ambitious was the Logic Theorist in 1956, a deduction system for the propositional logic of the Principia Mathematica, developed by Allen Newell, Herbert A. Simon and J. C. Shaw. Also running on a JOHNNIAC, the Logic Theorist constructed proofs from a small set of propositional axioms and three deduction rules: modus ponens, (propositional) variable substitution, and the replacement of formulas by their definition. The system used heuristic guidance, and managed to prove 38 of the first 52 theorems of the Principia. The "heuristic" approach of the Logic Theorist tried to emulate human mathematicians, and could not guarantee that a proof could be found for every valid theorem even in principle. In contrast, other, more systematic algorithms achieved, at least theoretically, completeness for first-order logic. 
Initial approaches relied on the results of Herbrand and Skolem to convert a first-order formula into successively larger sets of propositional formulae by instantiating variables with terms from the Herbrand universe. The propositional formulas could then be checked for unsatisfiability using a number of methods. Gilmore's program used conversion to disjunctive normal form, a form in which the satisfiability of a formula is obvious. Decidability of the problem Depending on the underlying logic, the problem of deciding the validity of a formula varies from trivial to impossible. For the common case of propositional logic, the problem is decidable but co-NP-complete, and hence only exponential-time algorithms are believed to exist for general proof tasks. For a first-order predicate calculus, Gödel's completeness theorem states that the theorems (provable statements) are exactly the semantically valid well-formed formulas, so the valid formulas are computably enumerable: given unbounded resources, any valid formula can eventually be proven. However, invalid formulas (those that are not entailed by a given theory) cannot always be recognized. The above applies to first-order theories, such as Peano arithmetic. However, for a specific model that may be described by a first-order theory, some statements may be true but undecidable in the theory used to describe the model. For example, by Gödel's incompleteness theorem, we know that any consistent theory whose axioms are true for the natural numbers cannot prove all first-order statements true for the natural numbers, even if the list of axioms is allowed to be infinite but enumerable. It follows that an automated theorem prover will fail to terminate while searching for a proof precisely when the statement being investigated is undecidable in the theory being used, even if it is true in the model of interest. Despite this theoretical limit, in practice, theorem provers can solve many hard problems, even in models that are not fully described by any first-order theory (such as the integers). Related problems A simpler, but related, problem is proof verification, where an existing proof for a theorem is certified valid. For this, it is generally required that each individual proof step can be verified by a primitive recursive function or program, and hence the problem is always decidable. Since the proofs generated by automated theorem provers are typically very large, the problem of proof compression is crucial, and various techniques aiming at making the prover's output smaller, and consequently more easily understandable and checkable, have been developed. Proof assistants require a human user to give hints to the system. Depending on the degree of automation, the prover can essentially be reduced to a proof checker, with the user providing the proof in a formal way, or significant proof tasks can be performed automatically. Interactive provers are used for a variety of tasks, but even fully automatic systems have proved a number of interesting and hard theorems, including at least one that has eluded human mathematicians for a long time, namely the Robbins conjecture. However, these successes are sporadic, and work on hard problems usually requires a proficient user. Another distinction is sometimes drawn between theorem proving and other techniques, where a process is considered to be theorem proving if it consists of a traditional proof, starting with axioms and producing new inference steps using rules of inference, as in the propositional sketch below.
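As a concrete illustration of proving in this axioms-plus-inference-rules sense, the sketch below refutes the negation of a propositional formula by saturating a clause set under the resolution rule; first-order provers combine the same rule with unification, as noted later in this article. The signed-integer encoding of literals and the function names are illustrative choices, not a standard interface.

```python
from itertools import combinations

def resolvents(c1, c2):
    """All clauses obtained by resolving c1 with c2 on a complementary pair."""
    out = set()
    for lit in c1:
        if -lit in c2:
            out.add((c1 - {lit}) | (c2 - {-lit}))
    return out

def refutable(clauses):
    """Saturate under resolution; True iff the empty clause is derived,
    i.e. the clause set is unsatisfiable."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolvents(c1, c2):
                if not r:              # empty clause: contradiction found
                    return True
                new.add(r)
        if new <= clauses:             # saturated without a contradiction
            return False
        clauses |= new

# Prove ((p -> q) and p) -> q by refuting its negated CNF:
# clauses (~p or q), (p), (~q); literals as signed integers, p = 1, q = 2
assert refutable({frozenset({-1, 2}), frozenset({1}), frozenset({-2})})
```

Because the propositional clause space over finitely many variables is finite, this saturation always terminates, mirroring the refutation completeness of resolution at the propositional level.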
Other techniques would include model checking, which, in the simplest case, involves brute-force enumeration of many possible states (although the actual implementation of model checkers requires much cleverness, and does not simply reduce to brute force). There are hybrid theorem proving systems that use model checking as an inference rule. There are also programs that were written to prove a particular theorem, with a (usually informal) proof that if the program finishes with a certain result, then the theorem is true. A good example of this was the machine-aided proof of the four color theorem, which was very controversial as the first claimed mathematical proof that was essentially impossible to verify by humans due to the enormous size of the program's calculation (such proofs are called non-surveyable proofs). Another example of a program-assisted proof is the one that shows that the game of Connect Four can always be won by the first player. Applications Commercial use of automated theorem proving is mostly concentrated in integrated circuit design and verification. Since the Pentium FDIV bug, the complicated floating point units of modern microprocessors have been designed with extra scrutiny. AMD, Intel and others use automated theorem proving to verify that division and other operations are correctly implemented in their processors. Other uses of theorem provers include program synthesis, constructing programs that satisfy a formal specification. Automated theorem provers have been integrated with proof assistants, including Isabelle/HOL. Applications of theorem provers are also found in natural language processing and formal semantics, where they are used to analyze discourse representations. First-order theorem proving In the late 1960s agencies funding research in automated deduction began to emphasize the need for practical applications. One of the first fruitful areas was that of program verification whereby first-order theorem provers were applied to the problem of verifying the correctness of computer programs in languages such as Pascal, Ada, etc. Notable among early program verification systems was the Stanford Pascal Verifier developed by David Luckham at Stanford University. This was based on the Stanford Resolution Prover also developed at Stanford using John Alan Robinson's resolution principle. This was the first automated deduction system to demonstrate an ability to solve mathematical problems that were announced in the Notices of the American Mathematical Society before solutions were formally published. First-order theorem proving is one of the most mature subfields of automated theorem proving. The logic is expressive enough to allow the specification of arbitrary problems, often in a reasonably natural and intuitive way. On the other hand, it is still semi-decidable, and a number of sound and complete calculi have been developed, enabling fully automated systems. More expressive logics, such as higher-order logics, allow the convenient expression of a wider range of problems than first-order logic, but theorem proving for these logics is less well developed. Relationship with SMT There is substantial overlap between first-order automated theorem provers and SMT solvers. Generally, automated theorem provers focus on supporting full first-order logic with quantifiers, whereas SMT solvers focus more on supporting various theories (interpreted predicate symbols). 
ATPs excel at problems with lots of quantifiers, whereas SMT solvers do well on large problems without quantifiers. The line is blurry enough that some ATPs participate in SMT-COMP, while some SMT solvers participate in CASC. Benchmarks, competitions, and sources The quality of implemented systems has benefited from the existence of a large library of standard benchmark examples—the Thousands of Problems for Theorem Provers (TPTP) Problem Library—as well as from the CADE ATP System Competition (CASC), a yearly competition of first-order systems for many important classes of first-order problems. Some important systems (all have won at least one CASC competition division) are listed below. E is a high-performance prover for full first-order logic, but built on a purely equational calculus, originally developed in the automated reasoning group of Technical University of Munich under the direction of Wolfgang Bibel, and now at Baden-Württemberg Cooperative State University in Stuttgart. Otter, developed at the Argonne National Laboratory, is based on first-order resolution and paramodulation. Otter has since been replaced by Prover9, which is paired with Mace4. SETHEO is a high-performance system based on the goal-directed model elimination calculus, originally developed by a team under direction of Wolfgang Bibel. E and SETHEO have been combined (with other systems) in the composite theorem prover E-SETHEO. Vampire was originally developed and implemented at Manchester University by Andrei Voronkov and Kryštof Hoder. It is now developed by a growing international team. It has won the FOF division (among other divisions) at the CADE ATP System Competition regularly since 2001. Waldmeister is a specialized system for unit-equational first-order logic developed by Arnim Buch and Thomas Hillenbrand. It won the CASC UEQ division for fourteen consecutive years (1997–2010). SPASS is a first-order logic theorem prover with equality. This is developed by the research group Automation of Logic, Max Planck Institute for Computer Science. The Theorem Prover Museum is an initiative to conserve the sources of theorem prover systems for future analysis, since they are important cultural/scientific artefacts. It has the sources of many of the systems mentioned above. Popular techniques First-order resolution with unification Model elimination Method of analytic tableaux Superposition and term rewriting Model checking Mathematical induction Binary decision diagrams DPLL Higher-order unification Quantifier elimination Software systems Free software Alt-Ergo Automath CVC E IsaPlanner LCF Mizar NuPRL Paradox Prover9 PVS SPARK (programming language) Twelf Z3 Theorem Prover Proprietary software CARINE Wolfram Mathematica ResearchCyc See also Curry–Howard correspondence Symbolic computation Ramanujan machine Computer-aided proof Formal verification Logic programming Proof checking Model checking Proof complexity Computer algebra system Program analysis (computer science) General Problem Solver Metamath language for formalized mathematics De Bruijn factor Notes References External links A list of theorem proving tools Formal methods
Automated theorem proving
[ "Mathematics", "Engineering" ]
2,680
[ "Automated theorem proving", "Mathematical logic", "Computational mathematics", "Software engineering", "Formal methods" ]
2,551
https://en.wikipedia.org/wiki/Astronomical%20year%20numbering
Astronomical year numbering is based on AD/CE year numbering, but follows normal decimal integer numbering more strictly. Thus, it has a year 0; the years before that are designated with negative numbers and the years after that are designated with positive numbers. Astronomers use the Julian calendar for years before 1582, including the year 0, and the Gregorian calendar for years after 1582, as exemplified by Jacques Cassini (1740), Simon Newcomb (1898) and Fred Espenak (2007). The prefix AD and the suffixes CE, BC or BCE (Common Era, Before Christ or Before Common Era) are dropped. The year 1 BC/BCE is numbered 0, the year 2 BC is numbered −1, and in general the year n BC/BCE is numbered "−(n − 1)" (a negative number equal to 1 − n). The numbers of AD/CE years are not changed and are written with either no sign or a positive sign; thus in general n AD/CE is simply n or +n. For normal calculation a number zero is often needed, here most notably when calculating the number of years in a period that spans the epoch; the end years need only be subtracted from each other. The system is so named due to its use in astronomy. Few other disciplines outside history deal with the time before year 1; some exceptions are dendrochronology, archaeology and geology, the latter two of which use 'years before the present'. Although the absolute numerical values of astronomical and historical years only differ by one before year 1, this difference is critical when calculating astronomical events like eclipses or planetary conjunctions to determine when historical events which mention them occurred. Usage of the year zero In his Rudolphine Tables (1627), Johannes Kepler used a prototype of year zero which he labeled Christi (Christ's) between years labeled Ante Christum (Before Christ) and Post Christum (After Christ) on the mean motion tables for the Sun, Moon, Saturn, Jupiter, Mars, Venus and Mercury. In 1702, the French astronomer Philippe de la Hire used a year labeled 0 at the end of years labeled ante Christum (BC), and immediately before years labeled post Christum (AD), on the mean motion pages in his Tabulæ Astronomicæ, thus adding the designation 0 to Kepler's Christi. Finally, in 1740 the French astronomer Jacques Cassini, who is traditionally credited with the invention of year zero, completed the transition in his Tables astronomiques, simply labeling this year 0, which he placed at the end of Julian years labeled avant Jesus-Christ (before Jesus Christ or BC), and immediately before Julian years labeled après Jesus-Christ (after Jesus Christ or AD). Cassini gave explicit reasons for using a year 0. Fred Espenak of NASA lists 50 phases of the Moon within year 0, showing that it is a full year, not an instant in time. Jean Meeus gives a similar explanation. Signed years without the year zero Although he used the usual French terms "avant J.-C." (before Jesus Christ) and "après J.-C." (after Jesus Christ) to label years elsewhere in his book, the Byzantinist Venance Grumel (1890–1967) used negative years (identified by a minus sign, −) to label BC years and unsigned positive years to label AD years in a table. He may have done so to save space, and he put no year 0 between them. Version 1.0 of the XML Schema language, often used to describe data interchanged between computers in XML, includes built-in primitive datatypes date and dateTime.
Although these are defined in terms of ISO 8601, which uses the proleptic Gregorian calendar and therefore should include a year 0, the XML Schema specification states that there is no year zero. Version 1.1 of the defining recommendation realigned the specification with ISO 8601 by including a year zero, despite the resulting loss of backward compatibility.
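The conversion rule described above (year n BC/BCE becomes year 1 − n, while AD/CE years are unchanged) is simple enough to state as code. This small sketch is illustrative, with hypothetical function names:

```python
def bc_to_astronomical(n_bc: int) -> int:
    """Historical year n BC/BCE -> astronomical year (1 BC -> 0, 2 BC -> -1)."""
    return 1 - n_bc

def astronomical_to_label(year: int) -> str:
    """Astronomical year number -> conventional BC/AD label."""
    return f"AD {year}" if year >= 1 else f"{1 - year} BC"

assert bc_to_astronomical(46) == -45          # e.g. 46 BC -> -45
assert astronomical_to_label(-45) == "46 BC"  # and back again
```

A period spanning the epoch can then be computed by direct subtraction of the end years, which is the convenience noted above.

See also Julian day, another calendar commonly used by astronomers Astronomical chronology Holocene calendar ISO 8601 References Calendar eras Chronology Specific calendars Year numbering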
Astronomical year numbering
[ "Physics", "Astronomy" ]
860
[ "Time in astronomy", "Chronology", "Physical quantities", "Time", "Spacetime" ]
2,703
https://en.wikipedia.org/wiki/Aberration%20%28astronomy%29
In astronomy, aberration (also referred to as astronomical aberration, stellar aberration, or velocity aberration) is a phenomenon where celestial objects exhibit an apparent motion about their true positions based on the velocity of the observer: it causes objects to appear to be displaced towards the observer's direction of motion. The change in angle is of the order of v/c, where c is the speed of light and v the velocity of the observer. In the case of "stellar" or "annual" aberration, the apparent position of a star to an observer on Earth varies periodically over the course of a year as the Earth's velocity changes as it revolves around the Sun, by a maximum angle of approximately 20 arcseconds in right ascension or declination. The term aberration has historically been used to refer to a number of related phenomena concerning the propagation of light in moving bodies. Aberration is distinct from parallax, which is a change in the apparent position of a relatively nearby object, as measured by a moving observer, relative to more distant objects that define a reference frame. The amount of parallax depends on the distance of the object from the observer, whereas aberration does not. Aberration is also related to light-time correction and relativistic beaming, although it is often considered separately from these effects. Aberration is historically significant because of its role in the development of the theories of light, electromagnetism and, ultimately, the theory of special relativity. It was first observed in the late 1600s by astronomers searching for stellar parallax in order to confirm the heliocentric model of the Solar System. However, it was not understood at the time to be a different phenomenon. In 1727, James Bradley provided a classical explanation for it in terms of the finite speed of light relative to the motion of the Earth in its orbit around the Sun, which he used to make one of the earliest measurements of the speed of light. However, Bradley's theory was incompatible with 19th-century theories of light, and aberration became a major motivation for the aether drag theories of Augustin Fresnel (in 1818) and G. G. Stokes (in 1845), and for Hendrik Lorentz's aether theory of electromagnetism in 1892. The aberration of light, together with Lorentz's elaboration of Maxwell's electrodynamics, the moving magnet and conductor problem, the negative aether drift experiments, as well as the Fizeau experiment, led Albert Einstein to develop the theory of special relativity in 1905, which presents a general form of the equation for aberration in terms of such theory. Explanation Aberration may be explained as the difference in angle of a beam of light in different inertial frames of reference. A common analogy is to consider the apparent direction of falling rain. If rain is falling vertically in the frame of reference of a person standing still, then to a person moving forwards the rain will appear to arrive at an angle, requiring the moving observer to tilt their umbrella forwards. The faster the observer moves, the more tilt is needed. The net effect is that light rays striking the moving observer from the sides in a stationary frame will come angled from ahead in the moving observer's frame. This effect is sometimes called the "searchlight" or "headlight" effect. In the case of annual aberration of starlight, the direction of incoming starlight as seen in the Earth's moving frame is tilted relative to the angle observed in the Sun's frame.
Since the direction of motion of the Earth changes during its orbit, the direction of this tilting changes during the course of the year, and causes the apparent position of the star to differ from its true position as measured in the inertial frame of the Sun. While classical reasoning gives intuition for aberration, it leads to a number of physical paradoxes observable even at the classical level (see history). The theory of special relativity is required to correctly account for aberration. The relativistic explanation is very similar to the classical one, however, and in both theories aberration may be understood as a case of addition of velocities. Classical explanation In the Sun's frame, consider a beam of light with velocity equal to the speed of light c, with x and y velocity components u_x and u_y, and thus at an angle θ such that tan θ = u_y/u_x. If the Earth is moving at velocity v in the x direction relative to the Sun, then by velocity addition the x component of the beam's velocity in the Earth's frame of reference is u_x′ = u_x + v, and the y velocity is unchanged, u_y′ = u_y. Thus the angle of the light in the Earth's frame in terms of the angle in the Sun's frame is tan φ = u_y′/u_x′ = sin θ/(v/c + cos θ). In the case of θ = 90°, this result reduces to tan φ = c/v, which in the limit v ≪ c may be approximated by 90° − φ = v/c. Relativistic explanation The reasoning in the relativistic case is the same except that the relativistic velocity addition formulas must be used, which can be derived from Lorentz transformations between different frames of reference. These formulas are u_x′ = (u_x + v)/(1 + u_x v/c²) and u_y′ = u_y/(γ(1 + u_x v/c²)), where γ = 1/√(1 − v²/c²), giving the components of the light beam in the Earth's frame in terms of the components in the Sun's frame. The angle of the beam in the Earth's frame is thus tan φ = u_y′/u_x′ = sin θ/(γ(v/c + cos θ)). In the case of θ = 90°, this result reduces to tan φ = c/(γv), and in the limit v ≪ c this may be approximated by 90° − φ = v/c. This relativistic derivation keeps the speed of light constant in all frames of reference, unlike the classical derivation above.
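Plugging in the Earth's mean orbital speed makes these formulas concrete. The short sketch below (numerical values approximate, variable names illustrative) recovers the roughly 20.5-arcsecond annual aberration and shows how small the relativistic correction is at this speed:

```python
import math

c = 299_792_458.0      # speed of light, m/s
v = 29_790.0           # approximate mean orbital speed of the Earth, m/s

kappa = v / c                          # aberration, radians, for theta = 90 deg
print(math.degrees(kappa) * 3600)      # ~20.5 arcseconds

# apparent angle of a star at theta = 90 deg in the Sun's frame
theta = math.pi / 2
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
phi_classical    = math.atan2(math.sin(theta), v / c + math.cos(theta))
phi_relativistic = math.atan2(math.sin(theta), gamma * (v / c + math.cos(theta)))
print((math.pi / 2 - phi_classical) * 206264.8)  # displacement in arcseconds
# both angles fall short of 90 deg by ~20.5 arcsec; the difference between
# them (the extra factor gamma) is only about a tenth of a microarcsecond
```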
Relationship to light-time correction and relativistic beaming Aberration is related to two other phenomena, light-time correction, which is due to the motion of an observed object during the time taken by its light to reach an observer, and relativistic beaming, which is an angling of the light emitted by a moving light source. It can be considered equivalent to them but in a different inertial frame of reference. In aberration, the observer is considered to be moving relative to a (for the sake of simplicity) stationary light source, while in light-time correction and relativistic beaming the light source is considered to be moving relative to a stationary observer. Consider the case of an observer and a light source moving relative to each other at constant velocity, with a light beam moving from the source to the observer. At the moment of emission, the beam in the observer's rest frame is tilted compared to the one in the source's rest frame, as understood through relativistic beaming. During the time it takes the light beam to reach the observer the light source moves in the observer's frame, and the 'true position' of the light source is displaced relative to the apparent position the observer sees, as explained by light-time correction. Finally, the beam in the observer's frame at the moment of observation is tilted compared to the beam in the source's frame, which can be understood as an aberrational effect. Thus, a person in the light source's frame would describe the apparent tilting of the beam in terms of aberration, while a person in the observer's frame would describe it as a light-time effect. The relationship between these phenomena is only valid if the observer and source's frames are inertial frames. In practice, because the Earth is not an inertial rest frame but experiences centripetal acceleration towards the Sun, many aberrational effects such as annual aberration on Earth cannot be considered light-time corrections. However, if the time between emission and detection of the light is short compared to the orbital period of the Earth, the Earth may be approximated as an inertial frame and aberrational effects are equivalent to light-time corrections. Types The Astronomical Almanac describes several different types of aberration, arising from differing components of the Earth's and observed object's motion: Stellar aberration: "The apparent angular displacement of the observed position of a celestial body resulting from the motion of the observer. Stellar aberration is divided into diurnal, annual, and secular components." Annual aberration: "The component of stellar aberration resulting from the motion of the Earth about the Sun." Diurnal aberration: "The component of stellar aberration resulting from the observer's diurnal motion about the center of the Earth due to the Earth's rotation." Secular aberration: "The component of stellar aberration resulting from the essentially uniform and almost rectilinear motion of the entire solar system in space. Secular aberration is usually disregarded." Planetary aberration: "The apparent angular displacement of the observed position of a solar system body from its instantaneous geocentric direction as would be seen by an observer at the geocenter. This displacement is caused by the aberration of light and light-time displacement." Annual aberration Annual aberration is caused by the motion of an observer on Earth as the planet revolves around the Sun. Due to orbital eccentricity, the orbital velocity of Earth (in the Sun's rest frame) varies periodically during the year as the planet traverses its elliptic orbit and consequently the aberration also varies periodically, typically causing stars to appear to move in small ellipses. Approximating Earth's orbit as circular, the maximum displacement of a star due to annual aberration is known as the constant of aberration, conventionally represented by κ. It may be calculated using the relation κ = v/c, substituting the Earth's average speed in the Sun's frame for v and the speed of light c. Its accepted value is 20.49552 arcseconds (sec) or 0.000099365 radians (rad) (at J2000). Assuming a circular orbit, annual aberration causes stars exactly on the ecliptic (the plane of Earth's orbit) to appear to move back and forth along a straight line, varying by κ on either side of their position in the Sun's frame. A star that is precisely at one of the ecliptic poles (at 90° from the ecliptic plane) will appear to move in a circle of radius κ about its true position, and stars at intermediate ecliptic latitudes will appear to move along a small ellipse. For illustration, consider a star at the northern ecliptic pole viewed by an observer at a point on the Arctic Circle. Such an observer will see the star transit at the zenith, once every day (strictly speaking sidereal day). At the time of the March equinox, Earth's orbit carries the observer in a southwards direction, and the star's apparent declination is therefore displaced to the south by an angle of κ. On the September equinox, the star's position is displaced to the north by an equal and opposite amount.
On either solstice, the displacement in declination is 0. Conversely, the amount of displacement in right ascension is 0 on either equinox and at maximum on either solstice. In actuality, Earth's orbit is slightly elliptic rather than circular, and its speed varies somewhat over the course of its orbit, which means the description above is only approximate. Aberration is more accurately calculated using Earth's instantaneous velocity relative to the barycenter of the Solar System. Note that the displacement due to aberration is orthogonal to any displacement due to parallax. If parallax is detectable, the maximum displacement to the south would occur in December, and the maximum displacement to the north in June. It is this apparently anomalous motion that so mystified early astronomers. Solar annual aberration A special case of annual aberration is the nearly constant deflection of the Sun from its position in the Sun's rest frame by κ towards the west (as viewed from Earth), opposite to the apparent motion of the Sun along the ecliptic (which is from west to east, as seen from Earth). The deflection thus makes the Sun appear to be behind (or retarded) from its rest-frame position on the ecliptic by an angle κ. This deflection may equivalently be described as a light-time effect due to motion of the Earth during the 8.3 minutes that it takes light to travel from the Sun to Earth. The relation with the light-travel time is: [0.000099365 rad / 2π rad] × [365.25 d × 24 h/d × 60 min/h] = 8.3167 min ≈ 8 min 19 sec = 499 sec. This is possible since the transit time of sunlight is short relative to the orbital period of the Earth, so the Earth's frame may be approximated as inertial. In the Earth's frame, the Sun moves, at a mean velocity v = 29.789 km/s, by a distance d ≈ 14,864.7 km in the time it takes light to reach Earth, t ≈ 499 sec, for the orbit of mean radius R = 1 AU = 149,597,870.7 km. This gives an angular correction tan θ = d/R ≈ 0.000099364 rad = 20.49539 sec, which can be solved to give θ ≈ 0.000099365 rad = 20.49559 sec, very nearly the same as the aberrational correction (here θ is in radians and not in arcseconds). Diurnal aberration Diurnal aberration is caused by the velocity of the observer on the surface of the rotating Earth. It is therefore dependent not only on the time of the observation, but also the latitude and longitude of the observer. Its effect is much smaller than that of annual aberration, and is only 0.32 arcseconds in the case of an observer at the Equator, where the rotational velocity is greatest. Secular aberration The secular component of aberration, caused by the motion of the Solar System in space, has been further subdivided into several components: aberration resulting from the motion of the solar system barycenter around the center of our Galaxy, aberration resulting from the motion of the Galaxy relative to the Local Group, and aberration resulting from the motion of the Local Group relative to the cosmic microwave background. Secular aberration affects the apparent positions of stars and extragalactic objects. The large, constant part of secular aberration cannot be directly observed and "It has been standard practice to absorb this large, nearly constant effect into the reported" positions of stars. In about 200 million years, the Sun circles the galactic center, whose measured location is near right ascension (α = 266.4°) and declination (δ = −29.0°).
The constant, unobservable, effect of the solar system's motion around the galactic center has been computed variously as 150 or 165 arcseconds. The other, observable, part is an acceleration toward the galactic center of approximately 2.5 × 10−10 m/s2, which yields a change of aberration of about 5 μas/yr. Highly precise measurements extending over several years can observe this change in secular aberration, often called the secular aberration drift or the acceleration of the Solar System, as a small apparent proper motion. Recently, highly precise astrometry of extragalactic objects using both Very Long Baseline Interferometry and the Gaia space observatory have successfully measured this small effect. The first VLBI measurement of the apparent motion, over a period of 20 years, of 555 extragalactic objects towards the center of our galaxy at equatorial coordinates of α = 263° and δ = −20° indicated a secular aberration drift 6.4 ±1.5 μas/yr. Later determinations using a series of VLBI measurements extending over almost 40 years determined the secular aberration drift to be 5.83 ± 0.23 μas/yr in the direction α = 270.2 ± 2.3° and δ = −20.2° ± 3.6°. Optical observations using only 33 months of Gaia satellite data of 1.6 million extragalactic sources indicated an acceleration of the solar system of 2.32 ± 0.16 × 10−10 m/s2 and a corresponding secular aberration drift of 5.05 ± 0.35 μas/yr in the direction of α = 269.1° ± 5.4°, δ = −31.6° ± 4.1°. It is expected that later Gaia data releases, incorporating about 66 and 120 months of data, will reduce the random errors of these results by factors of 0.35 and 0.15. The latest edition of the International Celestial Reference Frame (ICRF3) adopted a recommended galactocentric aberration constant of 5.8 μas/yr and recommended a correction for secular aberration to obtain the highest positional accuracy for times other than the reference epoch 2015.0. Planetary aberration Planetary aberration is the combination of the aberration of light (due to Earth's velocity) and light-time correction (due to the object's motion and distance), as calculated in the rest frame of the Solar System. Both are determined at the instant when the moving object's light reaches the moving observer on Earth. It is so called because it is usually applied to planets and other objects in the Solar System whose motion and distance are accurately known. Discovery and first observations The discovery of the aberration of light was totally unexpected, and it was only by considerable perseverance and perspicacity that Bradley was able to explain it in 1727. It originated from attempts to discover whether stars possessed appreciable parallaxes. Search for stellar parallax The Copernican heliocentric theory of the Solar System had received confirmation by the observations of Galileo and Tycho Brahe and the mathematical investigations of Kepler and Newton. As early as 1573, Thomas Digges had suggested that parallactic shifting of the stars should occur according to the heliocentric model, and consequently if stellar parallax could be observed it would help confirm this theory. Many observers claimed to have determined such parallaxes, but Tycho Brahe and Giovanni Battista Riccioli concluded that they existed only in the minds of the observers, and were due to instrumental and personal errors. 
However, in 1680 Jean Picard, in his Voyage d'Uranibourg, stated, as a result of ten years' observations, that Polaris, the Pole Star, exhibited variations in its position amounting to 40″ annually. Some astronomers endeavoured to explain this by parallax, but these attempts failed because the motion differed from that which parallax would produce. John Flamsteed, from measurements made in 1689 and succeeding years with his mural quadrant, similarly concluded that the declination of Polaris was 40″ less in July than in September. Robert Hooke, in 1674, published his observations of γ Draconis, a star of magnitude 2m which passes practically overhead at the latitude of London (hence its observations are largely free from the complex corrections due to atmospheric refraction), and concluded that this star was 23″ more northerly in July than in October. James Bradley's observations Consequently, when Bradley and Samuel Molyneux entered this sphere of research in 1725, there was still considerable uncertainty as to whether stellar parallaxes had been observed or not, and it was with the intention of definitely answering this question that they erected a large telescope at Molyneux's house at Kew. They decided to reinvestigate the motion of γ Draconis with a telescope constructed by George Graham (1675–1751), a celebrated instrument-maker. This was fixed to a vertical chimney stack in such manner as to permit a small oscillation of the eyepiece, the amount of which (i.e. the deviation from the vertical) was regulated and measured by the introduction of a screw and a plumb line. The instrument was set up in November 1725, and observations on γ Draconis were made starting in December. The star was observed to move 40″ southwards between September and March, and then reversed its course from March to September. At the same time, 35 Camelopardalis, a star with a right ascension nearly exactly opposite to that of γ Draconis, was 19″ more northerly at the beginning of March than in September. The asymmetry of these results, which were expected to be mirror images of each other, was completely unexpected and inexplicable by existing theories. Early hypotheses Bradley and Molyneux discussed several hypotheses in the hope of finding the solution. Since the apparent motion was evidently caused neither by parallax nor observational errors, Bradley first hypothesized that it could be due to oscillations in the orientation of the Earth's axis relative to the celestial sphere – a phenomenon known as nutation. 35 Camelopardalis was seen to possess an apparent motion which could be consistent with nutation, but since its declination varied only one half as much as that of γ Draconis, it was obvious that nutation did not supply the answer (however, Bradley later went on to discover that the Earth does indeed nutate). He also investigated the possibility that the motion was due to an irregular distribution of the Earth's atmosphere, thus involving abnormal variations in the refractive index, but again obtained negative results. On August 19, 1727, Bradley embarked upon a further series of observations using a telescope of his own erected at the Rectory, Wanstead. This instrument had the advantage of a larger field of view and he was able to obtain precise positions of a large number of stars over the course of about twenty years.
During his first two years at Wanstead, he established the existence of the phenomenon of aberration beyond all doubt, and this also enabled him to formulate a set of rules that would allow the calculation of the effect on any given star at a specified date. Development of the theory of aberration Bradley eventually developed his explanation of aberration in about September 1728 and this theory was presented to the Royal Society in mid January the following year. One well-known story was that he saw the change of direction of a wind vane on a boat on the Thames, caused not by an alteration of the wind itself, but by a change of course of the boat relative to the wind direction. However, there is no record of this incident in Bradley's own account of the discovery, and it may therefore be apocryphal. The original discussion tabulated the magnitude of deviation from true declination for γ Draconis and the direction, on the planes of the solstitial colure and ecliptic prime meridian, of the tangent of the velocity of the Earth in its orbit for each of the four months where the extremes are found, as well as the expected deviation from true ecliptic longitude had Bradley measured its deviation from right ascension. Bradley proposed that the aberration of light not only affected declination, but right ascension as well, so that a star in the pole of the ecliptic would describe a little ellipse with a diameter of about 40", but for simplicity, he assumed it to be a circle. Since he only observed the deviation in declination, and not in right ascension, his calculations for the maximum deviation of a star in the pole of the ecliptic are for its declination only, which will coincide with the diameter of the little circle described by such a star. He carried out this calculation for eight different stars. Based on these calculations, Bradley was able to estimate the constant of aberration at 20.2", which is equal to 0.00009793 radians, and with this was able to estimate the speed of light, equivalent to light travelling from the Sun to the Earth in 8 minutes 12 seconds. By projecting the little circle for a star in the pole of the ecliptic, he could simplify the calculation of the relationship between the speed of light and the speed of the Earth's annual motion in its orbit as follows: Thus, the speed of light to the speed of the Earth's annual motion in its orbit is 10,210 to one, from whence it would follow, that light moves, or is propagated as far as from the Sun to the Earth in 8 minutes 12 seconds. The original motivation of the search for stellar parallax was to test the Copernican theory that the Earth revolves around the Sun. The change of aberration in the course of the year demonstrates the relative motion of the Earth and the stars. Retrodiction on Descartes' lightspeed argument In the prior century, René Descartes argued that if light were not instantaneous, then shadows of moving objects would lag; and if propagation times over terrestrial distances were appreciable, then during a lunar eclipse the Sun, Earth, and Moon would be out of alignment by hours' motion, contrary to observation. Huygens commented that, on Rømer's lightspeed data (yielding an Earth–Moon round-trip time of only a few seconds), the lag angle would be imperceptible. What they both overlooked is that aberration (as understood only later) would exactly counteract the lag even if large, leaving this eclipse method completely insensitive to light speed. (Otherwise, shadow-lag methods could be made to sense absolute translational motion, contrary to a basic principle of relativity.)
Historical theories of aberration The phenomenon of aberration became a driving force for many physical theories during the 200 years between its observation and the explanation by Albert Einstein. The first classical explanation was provided in 1729, by James Bradley as described above, who attributed it to the finite speed of light and the motion of Earth in its orbit around the Sun. However, this explanation proved inaccurate once the wave nature of light was better understood, and correcting it became a major goal of the 19th century theories of luminiferous aether. Augustin-Jean Fresnel proposed a correction due to the motion of a medium (the aether) through which light propagated, known as "partial aether drag". He proposed that objects partially drag the aether along with them as they move, and this became the accepted explanation for aberration for some time. George Stokes proposed a similar theory, explaining that aberration occurs due to the flow of aether induced by the motion of the Earth. Accumulated evidence against these explanations, combined with new understanding of the electromagnetic nature of light, led Hendrik Lorentz to develop an electron theory which featured an immobile aether, and he explained that objects contract in length as they move through the aether. Motivated by these previous theories, Albert Einstein then developed the theory of special relativity in 1905, which provides the modern account of aberration. Bradley's classical explanation Bradley conceived of an explanation in terms of a corpuscular theory of light in which light is made of particles. His classical explanation appeals to the motion of the earth relative to a beam of light-particles moving at a finite velocity, and is developed in the Sun's frame of reference, unlike the classical derivation given above. Consider the case where a distant star is motionless relative to the Sun, and the star is extremely far away, so that parallax may be ignored. In the rest frame of the Sun, this means light from the star travels in parallel paths to the Earth observer, and arrives at the same angle regardless of where the Earth is in its orbit. Suppose the star is observed on Earth with a telescope, idealized as a narrow tube. The light enters the tube from the star at angle θ and travels at speed c, taking a time t to reach the bottom of the tube, where it is detected. Suppose observations are made from Earth, which is moving with a speed v. During the transit of the light, the tube moves a distance vt. Consequently, for the particles of light to reach the bottom of the tube, the tube must be inclined at an angle φ different from θ, resulting in an apparent position of the star at angle φ. As the Earth proceeds in its orbit it changes direction, so φ changes with the time of year the observation is made. The apparent angle and true angle are related using trigonometry as tan φ = sin θ/(v/c + cos θ). In the case of θ = 90°, this gives tan φ = c/v. While this is different from the more accurate relativistic result described above, in the limit of small angle and low velocity they are approximately the same, within the error of the measurements of Bradley's day. These results allowed Bradley to make one of the earliest measurements of the speed of light. Luminiferous aether In the early nineteenth century the wave theory of light was being rediscovered, and in 1804 Thomas Young adapted Bradley's explanation for corpuscular light to wavelike light traveling through a medium known as the luminiferous aether.
His reasoning was the same as Bradley's, but it required that this medium be immobile in the Sun's reference frame and must pass through the earth unaffected, otherwise the medium (and therefore the light) would move along with the earth and no aberration would be observed. However, it soon became clear Young's theory could not account for aberration when materials with a non-vacuum refractive index were present. An important example is of a telescope filled with water. The speed of light in such a telescope will be slower than in vacuum, and is given by c/n rather than c, where n is the refractive index of the water. Thus, by Bradley and Young's reasoning the aberration angle is given by tan φ = sin θ/(nv/c + cos θ), which predicts a medium-dependent angle of aberration. When refraction at the telescope's objective is taken into account this result deviates even more from the vacuum result. In 1810 François Arago performed a similar experiment and found that the aberration was unaffected by the medium in the telescope, providing solid evidence against Young's theory. This experiment was subsequently verified by many others in the following decades, most accurately by Airy in 1871, with the same result. Aether drag models Fresnel's aether drag In 1818, Augustin Fresnel developed a modified explanation to account for the water telescope and for other aberration phenomena. He explained that the aether is generally at rest in the Sun's frame of reference, but objects partially drag the aether along with them as they move. That is, the aether in an object of index of refraction n moving at velocity v is partially dragged with a velocity v(1 − 1/n²), bringing the light along with it. The factor (1 − 1/n²) is known as "Fresnel's dragging coefficient". This dragging effect, along with refraction at the telescope's objective, compensates for the slower speed of light in the water telescope in Bradley's explanation. With this modification Fresnel obtained Bradley's vacuum result even for non-vacuum telescopes, and was also able to predict many other phenomena related to the propagation of light in moving bodies. Fresnel's dragging coefficient became the dominant explanation of aberration for the next decades. Stokes' aether drag However, the fact that light is polarized (discovered by Fresnel himself) led scientists such as Cauchy and Green to believe that the aether was a totally immobile elastic solid as opposed to Fresnel's fluid aether. There was thus renewed need for an explanation of aberration consistent both with Fresnel's predictions (and Arago's observations) as well as polarization. In 1845, Stokes proposed a 'putty-like' aether which acts as a liquid on large scales but as a solid on small scales, thus supporting both the transverse vibrations required for polarized light and the aether flow required to explain aberration. Making only the assumptions that the fluid is irrotational and that the boundary conditions of the flow are such that the aether has zero velocity far from the Earth, but moves at the Earth's velocity at its surface and within it, he was able to completely account for aberration. The velocity of the aether outside of the Earth would decrease as a function of distance from the Earth so light rays from stars would be progressively dragged as they approached the surface of the Earth. The Earth's motion would be unaffected by the aether due to D'Alembert's paradox. Both Fresnel and Stokes' theories were popular.
However, the question of aberration was put aside during much of the second half of the 19th century as the focus of inquiry turned to the electromagnetic properties of aether. Lorentz' length contraction In the 1880s, once electromagnetism was better understood, interest turned again to the problem of aberration. By this time flaws were known to both Fresnel's and Stokes' theories. Fresnel's theory required that the relative velocity of aether and matter be different for light of different colors, and it was shown that the boundary conditions Stokes had assumed in his theory were inconsistent with his assumption of irrotational flow. At the same time, the modern theories of electromagnetic aether could not account for aberration at all. Many scientists such as Maxwell, Heaviside and Hertz unsuccessfully attempted to solve these problems by incorporating either Fresnel or Stokes' theories into Maxwell's new electromagnetic laws. Hendrik Lorentz spent considerable effort along these lines. After working on this problem for a decade, the issues with Stokes' theory caused him to abandon it and to follow Fresnel's suggestion of a (mostly) stationary aether (1892, 1895). However, in Lorentz's model the aether was completely immobile, like the electromagnetic aethers of Cauchy, Green and Maxwell and unlike Fresnel's aether. He obtained Fresnel's dragging coefficient from modifications of Maxwell's electromagnetic theory, including a modification of the time coordinates in moving frames ("local time"). In order to explain the Michelson–Morley experiment (1887), which apparently contradicted both Fresnel's and Lorentz's immobile aether theories, and apparently confirmed Stokes' complete aether drag, Lorentz theorized (1892) that objects undergo "length contraction" by a factor of √(1 − v²/c²) in the direction of their motion through the aether. In this way, aberration (and all related optical phenomena) can be accounted for in the context of an immobile aether. Lorentz' theory became the basis for much research in the next decade, and beyond. Its predictions for aberration are identical to those of the relativistic theory. Special relativity Lorentz' theory matched experiment well, but it was complicated and made many unsubstantiated physical assumptions about the microscopic nature of electromagnetic media. In his 1905 theory of special relativity, Albert Einstein reinterpreted the results of Lorentz' theory in a much simpler and more natural conceptual framework which disposed of the idea of an aether. His derivation is given above, and is now the accepted explanation. Robert S. Shankland reported some conversations with Einstein, in which Einstein emphasized the importance of aberration. Other important motivations for Einstein's development of relativity were the moving magnet and conductor problem and (indirectly) the negative aether drift experiments, already mentioned by him in the introduction of his first relativity paper; Einstein returned to the point in a note written in 1952. While Einstein's result is the same as Bradley's original equation except for an extra factor of γ, Bradley's result does not merely give the classical limit of the relativistic case, in the sense that it gives incorrect predictions even at low relative velocities. Bradley's explanation cannot account for situations such as the water telescope, nor for many other optical effects (such as interference) that might occur within the telescope.
Bradley's explanation fails because, in the Earth's frame, it predicts that the direction of propagation of the light beam in the telescope is not normal to the wavefronts of the beam, in contradiction with Maxwell's theory of electromagnetism. It also does not preserve the speed of light $c$ between frames. However, Bradley did correctly infer that the effect was due to relative velocities. See also Apparent place Stellar parallax Astronomical nutation Proper motion Timeline of electromagnetism and classical optics Relativistic aberration
Aberration (astronomy)
[ "Physics", "Chemistry", "Astronomy" ]
7,712
[ "Transport phenomena", "Physical phenomena", "Electromagnetic radiation", "Observational astronomy", "Astrometry", "Waves", "Radiation", "Astronomical sub-disciplines" ]
2,787
https://en.wikipedia.org/wiki/Astrobiology
Astrobiology (also xenology or exobiology) is a scientific field within the life and environmental sciences that studies the origins, early evolution, distribution, and future of life in the universe by investigating its deterministic conditions and contingent events. As a discipline, astrobiology is founded on the premise that life may exist beyond Earth. Research in astrobiology comprises three main areas: the study of habitable environments in the Solar System and beyond, the search for planetary biosignatures of past or present extraterrestrial life, and the study of the origin and early evolution of life on Earth. The field of astrobiology has its origins in the 20th century with the advent of space exploration and the discovery of exoplanets. Early astrobiology research focused on the search for extraterrestrial life and the study of the potential for life to exist on other planets. In the 1960s and 1970s, NASA began its astrobiology pursuits within the Viking program, which was the first US mission to land on Mars and search for signs of life. This mission, along with other early space exploration missions, laid the foundation for the development of astrobiology as a discipline. Regarding habitable environments, astrobiology investigates potential locations beyond Earth that could support life, such as Mars, Europa, and exoplanets, through research into the extremophiles populating austere environments on Earth, like volcanic and deep sea environments. Research within this topic is conducted utilising the methodology of the geosciences, especially geobiology, for astrobiological applications. The search for biosignatures involves the identification of signs of past or present life in the form of organic compounds, isotopic ratios, or microbial fossils. Research within this topic is conducted utilising the methodology of planetary and environmental science, especially atmospheric science, for astrobiological applications, and is often conducted through remote sensing and in situ missions. Astrobiology also concerns the study of the origin and early evolution of life on Earth to try to understand the conditions that are necessary for life to form on other planets. This research seeks to understand how life emerged from non-living matter and how it evolved to become the diverse array of organisms we see today. Research within this topic is conducted utilising the methodology of the paleosciences, especially paleobiology, for astrobiological applications. Astrobiology is a rapidly developing field with a strong interdisciplinary aspect that holds many challenges and opportunities for scientists. Astrobiology programs and research centres are present in many universities and research institutions around the world, and space agencies like NASA and ESA have dedicated departments and programs for astrobiology research. Overview The term astrobiology was first proposed by the Russian astronomer Gavriil Tikhov in 1953. It is etymologically derived from the Greek ἄστρον, "star"; βίος, "life"; and -λογία, -logia, "study". A close synonym is exobiology, from the Greek ἔξω, "external"; βίος, "life"; and -λογία, -logia, "study", coined by American molecular biologist Joshua Lederberg; exobiology is considered to have a narrow scope limited to the search for life external to Earth.
Another associated term is xenobiology, from the Greek ξένος, "foreign"; βίος, "life"; and -λογία, "study", coined by American science fiction writer Robert Heinlein in his work The Star Beast; xenobiology is now used in a more specialised sense, referring to 'biology based on foreign chemistry', whether of extraterrestrial or terrestrial (typically synthetic) origin. While the potential for extraterrestrial life, especially intelligent life, has been explored throughout human history within philosophy and narrative, the question of whether such life exists is a verifiable hypothesis and thus a valid line of scientific inquiry; planetary scientist David Grinspoon calls it a field of natural philosophy, grounding speculation on the unknown in known scientific theory. The modern field of astrobiology can be traced back to the 1950s and 1960s with the advent of space exploration, when scientists began to seriously consider the possibility of life on other planets. In 1957, the Soviet Union launched Sputnik 1, the first artificial satellite, which marked the beginning of the Space Age. This event led to an increase in the study of the potential for life on other planets, as scientists began to consider the possibilities opened up by the new technology of space exploration. In 1959, NASA funded its first exobiology project, and in 1960, NASA founded the Exobiology Program, now one of four main elements of NASA's current Astrobiology Program. In 1971, NASA funded Project Cyclops, part of the search for extraterrestrial intelligence, to search radio frequencies of the electromagnetic spectrum for interstellar communications transmitted by extraterrestrial life outside the Solar System. In the 1960s-1970s, NASA established the Viking program, which was the first US mission to land on Mars and search for metabolic signs of present life; the results were inconclusive. In the 1980s and 1990s, the field began to expand and diversify as new discoveries and technologies emerged. The discovery of microbial life in extreme environments on Earth, such as deep-sea hydrothermal vents, helped to clarify the feasibility of life existing in harsh conditions. The development of new techniques for the detection of biosignatures, such as the use of stable isotopes, also played a significant role in the evolution of the field. The contemporary landscape of astrobiology emerged in the early 21st century, focused on utilising Earth and environmental science for applications within comparable space environments. Missions included the ESA's Beagle 2, which failed minutes after landing on Mars, NASA's Phoenix lander, which probed the environment for past and present planetary habitability of microbial life on Mars and researched the history of water, and NASA's Curiosity rover, currently probing the environment for past and present planetary habitability of microbial life on Mars. Theoretical foundations Planetary habitability Astrobiological research makes a number of simplifying assumptions when studying the necessary components for planetary habitability. Carbon and Organic Compounds: Carbon is the fourth most abundant element in the universe, and the energy required to make or break a bond is at just the appropriate level for building molecules which are not only stable, but also reactive. The fact that carbon atoms bond readily to other carbon atoms allows for the building of extremely long and complex molecules.
As such, astrobiological research presumes that the vast majority of life forms in the Milky Way galaxy are based on carbon chemistries, as are all life forms on Earth. However, theoretical astrobiology entertains the potential for other organic molecular bases for life, thus astrobiological research often focuses on identifying environments that have the potential to support life based on the presence of organic compounds. Liquid water: Liquid water is a common molecule that provides an excellent environment for the formation of complicated carbon-based molecules, and is generally considered necessary for life as we know it to exist. Thus, astrobiological research presumes that extraterrestrial life similarly depends upon access to liquid water, and often focuses on identifying environments that have the potential to support liquid water. Some researchers posit environments of water-ammonia mixtures as possible solvents for hypothetical types of biochemistry. Environmental stability: Where organisms adaptively evolve to the conditions of the environments in which they reside, environmental stability is considered necessary for life to exist. This presupposes the necessity of stable temperature, pressure, and radiation levels; as a result, astrobiological research focuses on planets orbiting Sun-like stars and red dwarfs. This is because very large stars have relatively short lifetimes, meaning that life might not have time to emerge on planets orbiting them; very small stars provide so little heat and warmth that only planets in very close orbits around them would not be frozen solid, and in such close orbits these planets would be tidally locked to the star; whereas the long lifetimes of red dwarfs could allow the development of habitable environments on planets with thick atmospheres. This is significant as red dwarfs are extremely common. (See also: Habitability of red dwarf systems). Energy source: It is assumed that any life elsewhere in the universe would also require an energy source. Previously, it was assumed that this would necessarily come from a Sun-like star; however, with developments in extremophile research, contemporary astrobiological research often focuses on identifying environments that have the potential to support life based on the availability of an energy source, such as the presence of volcanic activity on a planet or moon that could provide a source of heat and energy. It is important to note that these assumptions are based on our current understanding of life on Earth and the conditions under which it can exist. As our understanding of life and the potential for it to exist in different environments evolves, these assumptions may change. Methods Studying terrestrial extremophiles Astrobiological research concerning the study of habitable environments in our solar system and beyond utilises methods within the geosciences. Research within this branch primarily concerns the geobiology of organisms that can survive in extreme environments on Earth, such as in volcanic or deep sea environments, to understand the limits of life, and the conditions under which life might be able to survive on other planets. This includes, but is not limited to: Deep-sea extremophiles: Researchers are studying organisms that live in the extreme environments of deep-sea hydrothermal vents and cold seeps. These organisms survive in the absence of sunlight, and some are able to survive in high temperatures and pressures, and use chemical energy instead of sunlight to produce food.
Desert extremophiles: Researchers are studying organisms that can survive in extremely dry, high-temperature conditions, such as in deserts. Microbes in extreme environments: Researchers are investigating the diversity and activity of microorganisms in environments such as deep mines, subsurface soil, cold glaciers and polar ice, and high-altitude environments. Researching Earth's present environment Research also regards the long-term survival of life on Earth, and the possibilities and hazards of life on other planets, including: Biodiversity and ecosystem resilience: Scientists are studying how the diversity of life and the interactions between different species contribute to the resilience of ecosystems and their ability to recover from disturbances. Climate change and extinction: Researchers are investigating the impacts of climate change on different species and ecosystems, and how they may lead to extinction or adaptation. This includes the evolution of Earth's climate and geology, and their potential impact on the habitability of the planet in the future, especially for humans. Human impact on the biosphere: Scientists are studying the ways in which human activities, such as deforestation, pollution, and the introduction of invasive species, are affecting the biosphere and the long-term survival of life on Earth. Long-term preservation of life: Researchers are exploring ways to preserve samples of life on Earth for long periods of time, such as cryopreservation and genomic preservation, in the event of a catastrophic event that could wipe out most of life on Earth. Finding biosignatures on other worlds Emerging astrobiological research concerning the search for planetary biosignatures of past or present extraterrestrial life utilises methodologies within the planetary sciences. These include: The study of microbial life in the subsurface of Mars: Scientists are using data from Mars rover missions to study the composition of the subsurface of Mars, searching for biosignatures of past or present microbial life. The study of liquid bodies on icy moons: Discoveries of surface and subsurface bodies of liquid on moons such as Europa, Titan and Enceladus showed possible habitability zones, making them viable targets for the search for extraterrestrial life. Missions like Europa Clipper and Dragonfly are planned to search for biosignatures within these environments. The study of the atmospheres of planets: Scientists are studying the potential for life to exist in the atmospheres of planets, with a focus on the study of the physical and chemical conditions necessary for such life to exist, namely the detection of organic molecules and biosignature gases; for example, the study of the possibility of life in the atmospheres of exoplanets that orbit red dwarfs and the study of the potential for microbial life in the upper atmosphere of Venus. Telescopes and remote sensing of exoplanets: The discovery of thousands of exoplanets has opened up new opportunities for the search for biosignatures. Scientists are using telescopes such as the James Webb Space Telescope and the Transiting Exoplanet Survey Satellite to search for biosignatures on exoplanets. They are also developing new techniques for the detection of biosignatures, such as the use of remote sensing to search for biosignatures in the atmosphere of exoplanets.
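As a quantitative aside (a minimal sketch using the standard energy-balance formula for planetary equilibrium temperature; the stellar values are illustrative, not drawn from the article's sources), the following Python snippet shows why, around a dim star, the liquid-water orbit must be very close, which is the tidal-locking concern raised in the habitability discussion above:

```python
import math

# Toy estimate of planetary equilibrium temperature from stellar
# luminosity and orbital distance; all inputs are illustrative.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26          # solar luminosity, W
AU = 1.496e11             # astronomical unit, m

def t_equilibrium(l_star, d, albedo=0.3):
    """Equilibrium temperature (K) of a planet at distance d (m)."""
    return (l_star * (1.0 - albedo) / (16.0 * math.pi * SIGMA * d**2)) ** 0.25

print(f"Earth analogue at 1 AU: {t_equilibrium(L_SUN, AU):.0f} K")

# For a dim red dwarf (here 0.1% of the Sun's luminosity), the orbital
# distance giving the same temperature scales as sqrt(L):
l_dwarf = 1e-3 * L_SUN
d_dwarf = AU * math.sqrt(l_dwarf / L_SUN)
print(f"Same temperature around the dwarf at {d_dwarf / AU:.3f} AU "
      f"-> {t_equilibrium(l_dwarf, d_dwarf):.0f} K")
```

The Earth analogue comes out near 255 K; the real surface is warmer (about 288 K on average) because of greenhouse warming, so this kind of estimate is only a first-order screen for habitability, not a verdict.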
Talking to extraterrestrials SETI and CETI: Within the search for extraterrestrial intelligence (SETI), scientists search for signals from intelligent extraterrestrial civilizations using radio and optical telescopes; the associated discipline of communication with extraterrestrial intelligence (CETI) focuses on composing and deciphering messages that could theoretically be understood by another technological civilization. Communication attempts by humans have included broadcasting mathematical languages, pictorial systems such as the Arecibo message, and computational approaches to detecting and deciphering 'natural' language communication. While some high-profile scientists, such as Carl Sagan, have advocated the transmission of messages, theoretical physicist Stephen Hawking warned against it, suggesting that aliens may raid Earth for its resources. Investigating the early Earth Emerging astrobiological research concerning the study of the origin and early evolution of life on Earth utilises methodologies within the palaeosciences. These include: The study of the early atmosphere: Researchers are investigating the role of the early atmosphere in providing the right conditions for the emergence of life, such as the presence of gases that could have helped to stabilise the climate and the formation of organic molecules. The study of the early magnetic field: Researchers are investigating the role of the early magnetic field in protecting the Earth from harmful radiation and helping to stabilise the climate. This research has immense astrobiological implications where the subjects of current astrobiological research, like Mars, lack such a field. The study of prebiotic chemistry: Scientists are studying the chemical reactions that could have occurred on the early Earth that led to the formation of the building blocks of life (amino acids, nucleotides, and lipids), and how these molecules could have formed spontaneously under early Earth conditions. The study of impact events: Scientists are investigating the potential role of impact events, especially meteorite impacts, in the delivery of water and organic molecules to early Earth. The study of the primordial soup: Researchers are investigating the conditions and ingredients that were present on the early Earth that could have led to the formation of the first living organisms, such as the presence of water and organic molecules, and how these ingredients could have combined to produce the first cells. This includes the role of water in the formation of the first cells and in catalysing chemical reactions. The study of the role of minerals: Scientists are investigating the role of minerals like clay in catalysing the formation of organic molecules, thus playing a role in the emergence of life on Earth. The study of the role of energy and electricity: Scientists are investigating the potential sources of energy and electricity that could have been available on the early Earth, and their role in the formation of organic molecules, thus the emergence of life. The study of the early oceans: Scientists are investigating the composition and chemistry of the early oceans and how it may have played a role in the emergence of life, such as the presence of dissolved minerals that could have helped to catalyse the formation of organic molecules. The study of hydrothermal vents: Scientists are investigating the potential role of hydrothermal vents in the origin of life, as these environments may have provided the energy and chemical building blocks needed for its emergence.
The study of plate tectonics: Scientists are investigating the role of plate tectonics in creating a diverse range of environments on the early Earth. The study of the early biosphere: Researchers are investigating the diversity and activity of microorganisms on the early Earth, and how these organisms may have played a role in the emergence of life. The study of microbial fossils: Scientists are investigating the presence of microbial fossils in ancient rocks, which can provide clues about the early evolution of life on Earth and the emergence of the first organisms. Research The systematic search for possible life outside Earth is a valid multidisciplinary scientific endeavor. However, hypotheses and predictions as to its existence and origin vary widely, and at present, the development of hypotheses firmly grounded on science may be considered astrobiology's most concrete practical application. It has been proposed that viruses are likely to be encountered on other life-bearing planets, and may be present even if there are no biological cells. Research outcomes To date, no evidence of extraterrestrial life has been identified. The Allan Hills 84001 meteorite, which was recovered in Antarctica in 1984 and originated from Mars, is thought by David McKay, as well as a few other scientists, to contain microfossils of extraterrestrial origin; this interpretation is controversial. Yamato 000593, the second largest meteorite from Mars, was found on Earth in 2000. At a microscopic level, spheres are found in the meteorite that are rich in carbon compared to surrounding areas that lack such spheres. The carbon-rich spheres may have been formed by biotic activity, according to some NASA scientists. On 5 March 2011, Richard B. Hoover, a scientist with the Marshall Space Flight Center, speculated on the finding of alleged microfossils similar to cyanobacteria in CI1 carbonaceous meteorites in the fringe Journal of Cosmology, a story widely reported on by mainstream media. However, NASA formally distanced itself from Hoover's claim. According to American astrophysicist Neil deGrasse Tyson: "At the moment, life on Earth is the only known life in the universe, but there are compelling arguments to suggest we are not alone." Elements of astrobiology Astronomy Most astronomy-related astrobiology research falls into the category of extrasolar planet (exoplanet) detection, the hypothesis being that if life arose on Earth, then it could also arise on other planets with similar characteristics. To that end, a number of instruments designed to detect Earth-sized exoplanets have been considered, most notably NASA's Terrestrial Planet Finder (TPF) and ESA's Darwin programs, both of which have been cancelled. NASA launched the Kepler mission in March 2009, and the French Space Agency launched the COROT space mission in 2006. There are also several less ambitious ground-based efforts underway. The goal of these missions is not only to detect Earth-sized planets but also to directly detect light from the planet so that it may be studied spectroscopically. By examining planetary spectra, it would be possible to determine the basic composition of an extrasolar planet's atmosphere and/or surface. Given this knowledge, it may be possible to assess the likelihood of life being found on that planet. A NASA research group, the Virtual Planet Laboratory, is using computer modeling to generate a wide variety of virtual planets to see what they would look like if viewed by TPF or Darwin.
It is hoped that once these missions come online, their spectra can be cross-checked with these virtual planetary spectra for features that might indicate the presence of life. An estimate for the number of planets with intelligent communicative extraterrestrial life can be gleaned from the Drake equation, essentially an equation expressing the probability of intelligent life as the product of factors such as the fraction of planets that might be habitable and the fraction of planets on which life might arise: $N = R_* \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L$ where: N = The number of communicative civilizations R* = The rate of formation of suitable stars (stars such as the Sun) fp = The fraction of those stars with planets (current evidence indicates that planetary systems may be common for stars like the Sun) ne = The number of Earth-sized worlds per planetary system fl = The fraction of those Earth-sized planets where life actually develops fi = The fraction of life sites where intelligence develops fc = The fraction of communicative planets (those on which electromagnetic communications technology develops) L = The "lifetime" of communicating civilizations A toy numerical evaluation of this product is sketched at the end of this passage. However, whilst the rationale behind the equation is sound, it is unlikely that the equation will be constrained to reasonable limits of error any time soon. The problem with the formula is that it is not used to generate or support hypotheses because it contains factors that can never be verified. The first term, R*, number of stars, is generally constrained within a few orders of magnitude. The second and third terms, fp, stars with planets, and ne, planets with habitable conditions, are being evaluated for the star's neighborhood. Drake originally formulated the equation merely as an agenda for discussion at the Green Bank conference, but some applications of the formula had been taken literally and related to simplistic or pseudoscientific arguments. Another associated topic is the Fermi paradox, which suggests that if intelligent life is common in the universe, then there should be obvious signs of it. Another active research area in astrobiology is planetary system formation. It has been suggested that the peculiarities of the Solar System (for example, the presence of Jupiter as a protective shield) may have greatly increased the probability of intelligent life arising on Earth. Biology Biology cannot state that a process or phenomenon, by being mathematically possible, has to exist forcibly in an extraterrestrial body. Biologists specify what is speculative and what is not. The discovery of extremophiles, organisms able to survive in extreme environments, became a core research element for astrobiologists, as they are important to understand four areas in the limits of life in planetary context: the potential for panspermia, forward contamination due to human exploration ventures, planetary colonization by humans, and the exploration of extinct and extant extraterrestrial life. Until the 1970s, life was thought to be entirely dependent on energy from the Sun. Plants on Earth's surface capture energy from sunlight to photosynthesize sugars from carbon dioxide and water, releasing oxygen in the process that is then consumed by oxygen-respiring organisms, passing their energy up the food chain. Even life in the ocean depths, where sunlight cannot reach, was thought to obtain its nourishment either from consuming organic detritus rained down from the surface waters or from eating animals that did.
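As promised above, here is a toy evaluation of the Drake equation in Python. Every factor value below is an arbitrary illustrative placeholder, not a sourced estimate; the point is only to show how the product is assembled and how strongly N depends on the unverifiable later factors:

```python
# Toy evaluation of the Drake equation; all factor values below are
# arbitrary illustrative guesses, not sourced estimates.
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """N = R* x fp x ne x fl x fi x fc x L"""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

n = drake(r_star=1.0,    # suitable stars formed per year
          f_p=0.5,       # fraction of those stars with planets
          n_e=2.0,       # Earth-sized worlds per planetary system
          f_l=0.1,       # fraction of those where life develops
          f_i=0.01,      # fraction of life sites developing intelligence
          f_c=0.1,       # fraction that become communicative
          lifetime=1e4)  # communicative lifetime in years
print(f"N = {n:.0f} communicative civilization(s)")
```

Swapping any single guess for one an order of magnitude smaller changes N by the same order of magnitude, which illustrates why the text cautions that the equation cannot currently be constrained to reasonable limits of error.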
The world's ability to support life was thought to depend on its access to sunlight. However, in 1977, during an exploratory dive to the Galapagos Rift in the deep-sea exploration submersible Alvin, scientists discovered colonies of giant tube worms, clams, crustaceans, mussels, and other assorted creatures clustered around undersea volcanic features known as black smokers. These creatures thrive despite having no access to sunlight, and it was soon discovered that they form an entirely independent ecosystem. Although most of these multicellular lifeforms need dissolved oxygen (produced by oxygenic photosynthesis) for their aerobic cellular respiration, and thus are not completely independent from sunlight by themselves, the basis for their food chain is a form of bacterium that derives its energy from the oxidation of reactive chemicals, such as hydrogen or hydrogen sulfide, that bubble up from the Earth's interior. Other lifeforms entirely decoupled from the energy of sunlight are green sulfur bacteria, which capture geothermal light for anoxygenic photosynthesis, and bacteria that perform chemolithoautotrophy based on the radioactive decay of uranium. The discovery of chemosynthesis revolutionized the study of biology and astrobiology by revealing that life need not be sunlight-dependent; it only requires water and an energy gradient in order to exist. Biologists have found extremophiles that thrive in ice, boiling water, acid, alkali, the water core of nuclear reactors, salt crystals, toxic waste and in a range of other extreme habitats that were previously thought to be inhospitable for life. This opened up a new avenue in astrobiology by massively expanding the number of possible extraterrestrial habitats. Characterization of these organisms, their environments and their evolutionary pathways is considered a crucial component to understanding how life might evolve elsewhere in the universe. For example, some organisms able to withstand exposure to the vacuum and radiation of outer space include the lichen fungi Rhizocarpon geographicum and Rusavskia elegans, the bacterium Bacillus safensis, Deinococcus radiodurans, Bacillus subtilis, the yeast Saccharomyces cerevisiae, seeds from Arabidopsis thaliana ('mouse-ear cress'), and tardigrades, a group of invertebrate animals. While tardigrades are not considered true extremophiles, they are considered extremotolerant organisms that have contributed to the field of astrobiology. Their extreme radiation tolerance and presence of DNA protection proteins may provide answers as to whether life can survive away from the protection of the Earth's atmosphere. Jupiter's moon, Europa, and Saturn's moon, Enceladus, are now considered the most likely locations for extant extraterrestrial life in the Solar System due to their subsurface water oceans, where radiogenic and tidal heating enables liquid water to exist. The origin of life, known as abiogenesis and distinct from the evolution of life, is another ongoing field of research. Oparin and Haldane postulated that the conditions on the early Earth were conducive to the formation of organic compounds from inorganic elements, and thus to the formation of many of the chemicals common to all forms of life we see today. The study of this process, known as prebiotic chemistry, has made some progress, but it is still unclear whether or not life could have formed in such a manner on Earth.
The alternative hypothesis of panspermia is that the first elements of life may have formed on another planet with even more favorable conditions (or even in interstellar space, asteroids, etc.) and then been carried over to Earth. The cosmic dust permeating the universe contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. Further, a scientist suggested that these compounds may have been related to the development of life on Earth and said that, "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life." More than 20% of the carbon in the universe may be associated with polycyclic aromatic hydrocarbons (PAHs), possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets. PAHs are subjected to interstellar medium conditions and are transformed through hydrogenation, oxygenation and hydroxylation to more complex organics—"a step along the path toward amino acids and nucleotides, the raw materials of proteins and DNA, respectively". In October 2020, astronomers proposed studying the shadows of trees at certain times of the day as a way to find patterns that could also be detected in observations of exoplanets, and hence to detect life on distant planets. Philosophy David Grinspoon called astrobiology a field of natural philosophy. Astrobiology intersects with philosophy by raising questions about the nature and existence of life beyond Earth. Philosophical implications include the definition of life itself, issues in the philosophy of mind and cognitive science in case intelligent life is found, epistemological questions about the nature of proof, ethical considerations of space exploration, along with the broader impact of discovering extraterrestrial life on human thought and society. Dunér has emphasized the philosophy of astrobiology as an ongoing existential exercise in individual and collective self-understanding, whose major task is constructing and debating concepts such as the concept of life. Key issues, for Dunér, are questions of resource money and monetary planning, epistemological questions regarding astrobiological knowledge, linguistic issues about interstellar communication, cognitive issues such as the definition of intelligence, along with the possibility of interplanetary contamination. Persson also emphasized key philosophical questions in astrobiology. They include the ethical justification of resources, the question of life in general, epistemological issues and knowledge about being alone in the universe, ethics towards extraterrestrial life, the question of politics and governing uninhabited worlds, along with questions of ecology. For von Hegner, the question of astrobiology and the possibility of astrophilosophy differ. For him, the discipline needs to bifurcate into astrobiology and astrophilosophy, since discussions made possible by astrobiology, but which have been astrophilosophical in nature, have existed as long as there have been discussions about extraterrestrial life. Astrobiology is a self-corrective interaction among observation, hypothesis, experiment, and theory, pertaining to the exploration of all natural phenomena.
Astrophilosophy consists of methods of dialectic analysis and logical argumentation, pertaining to the clarification of the nature of reality. Šekrst argues that astrobiology requires the affirmation of astrophilosophy, but not as a separate cognate to astrobiology. The stance of conceptual speciesism, according to Šekrst, permeates astrobiology, since the very name astrobiology tries to talk about not just biology but about life in a general way, which includes terrestrial life as a subset. This leads us either to redefine philosophy, or to consider the need for astrophilosophy as a more general discipline, of which philosophy is just a subset that deals with questions such as the nature of the human mind and other anthropocentric questions. Most of the philosophy of astrobiology deals with two main questions: the question of life and the ethics of space exploration. Kolb specifically emphasizes the question of viruses, whose status as alive or not rests on whether the definition of life includes self-replication. Schneider tried to define exo-life, but concluded that we often start with our own prejudices and that defining extraterrestrial life using human concepts seems futile. For Dick, astrobiology relies on the metaphysical assumption that there is extraterrestrial life, which reaffirms questions in the philosophy of cosmology, such as fine-tuning or the anthropic principle. Rare Earth hypothesis The Rare Earth hypothesis postulates that multicellular life forms found on Earth may actually be more of a rarity than scientists assume. According to this hypothesis, life on Earth (and, even more so, multicellular life) is possible because of a conjunction of the right circumstances (galaxy and location within it, planetary system, star, orbit, planetary size, atmosphere, etc.); and the chance for all those circumstances to repeat elsewhere may be rare. It provides a possible answer to the Fermi paradox, which suggests, "If extraterrestrial aliens are common, why aren't they obvious?" It is apparently in opposition to the principle of mediocrity, assumed by famed astronomers Frank Drake, Carl Sagan, and others. The principle of mediocrity suggests that life on Earth is not exceptional, and it is more than likely to be found on innumerable other worlds. Missions Research into the environmental limits of life and the workings of extreme ecosystems is ongoing, enabling researchers to better predict what planetary environments might be most likely to harbor life. Missions such as the Phoenix lander, Mars Science Laboratory, ExoMars, the Mars 2020 rover to Mars, and the Cassini probe to Saturn's moons aim to further explore the possibilities of life on other planets in the Solar System. Viking program The two Viking landers each carried four types of biological experiments to the surface of Mars in the late 1970s. These were the only Mars landers to carry out experiments looking specifically for metabolism by current microbial life on Mars. The landers used a robotic arm to collect soil samples into sealed test containers on the craft. The two landers were identical, so the same tests were carried out at two places on Mars' surface; Viking 1 near the equator and Viking 2 further north. The result was inconclusive, and is still disputed by some scientists. Norman Horowitz was the chief of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976.
Horowitz considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon-based life. Beagle 2 Beagle 2 was an unsuccessful British Mars lander that formed part of the European Space Agency's 2003 Mars Express mission. Its primary purpose was to search for signs of life on Mars, past or present. Although it landed safely, it was unable to correctly deploy its solar panels and telecom antenna. EXPOSE EXPOSE is a multi-user facility dedicated to astrobiology, mounted outside the International Space Station in 2008. EXPOSE was developed by the European Space Agency (ESA) for long-term spaceflights that allow exposure of organic chemicals and biological samples to outer space in low Earth orbit. Mars Science Laboratory The Mars Science Laboratory (MSL) mission landed the Curiosity rover that is currently in operation on Mars. It was launched on 26 November 2011, and landed at Gale Crater on 6 August 2012. Mission objectives are to help assess Mars' habitability and, in doing so, determine whether Mars is or has ever been able to support life, collect data for a future human mission, study Martian geology and climate, and further assess the role that water, an essential ingredient for life as we know it, played in forming minerals on Mars. Tanpopo The Tanpopo mission is an orbital astrobiology experiment investigating the potential interplanetary transfer of life, organic compounds, and possible terrestrial particles in low Earth orbit. The purpose is to assess the panspermia hypothesis and the possibility of natural interplanetary transport of microbial life as well as prebiotic organic compounds. Early mission results show evidence that some clumps of microorganisms can survive for at least one year in space. This may support the idea that clumps of microorganisms greater than 0.5 millimeters across could be one way for life to spread from planet to planet. ExoMars rover ExoMars is a robotic mission to Mars to search for possible biosignatures of Martian life, past or present. This astrobiological mission was under development by the European Space Agency (ESA) in partnership with the Russian Federal Space Agency (Roscosmos); it was planned for a 2022 launch; however, technical and funding issues and the Russian invasion of Ukraine have forced ESA to repeatedly delay the rover's delivery to 2028. Mars 2020 Mars 2020 successfully landed its rover Perseverance in Jezero Crater on 18 February 2021. It will investigate environments on Mars relevant to astrobiology and investigate its surface geological processes and history, including the assessment of its past habitability and potential for preservation of biosignatures and biomolecules within accessible geological materials. The Science Definition Team proposed that the rover collect and package at least 31 samples of rock cores and soil for a later mission to bring back for more definitive analysis in laboratories on Earth. The rover could make measurements and technology demonstrations to help designers of a human expedition understand any hazards posed by Martian dust and demonstrate how to collect carbon dioxide (CO2), which could be a resource for making molecular oxygen (O2) and rocket fuel.
Europa Clipper Europa Clipper is a mission launched by NASA on 14 October 2024 that will conduct detailed reconnaissance of Jupiter's moon Europa beginning in 2030, and will investigate whether its internal ocean could harbor conditions suitable for life. It will also aid in the selection of future landing sites. Dragonfly Dragonfly is a NASA mission scheduled to land on Titan in 2036 to assess its microbial habitability and study its prebiotic chemistry. Dragonfly is a rotorcraft lander that will perform controlled flights between multiple locations on the surface, which allows sampling of diverse regions and geological contexts. Proposed concepts Icebreaker Life Icebreaker Life is a lander mission that was proposed for NASA's Discovery Program for the 2021 launch opportunity, but it was not selected for development. It would have been a stationary lander, a near copy of the successful 2008 Phoenix, carrying an upgraded astrobiology scientific payload, including a 1-meter-long core drill to sample ice-cemented ground in the northern plains in order to conduct a search for organic molecules and evidence of current or past life on Mars. One of the key goals of the Icebreaker Life mission is to test the hypothesis that the ice-rich ground in the polar regions has significant concentrations of organics due to protection by the ice from oxidants and radiation. Journey to Enceladus and Titan Journey to Enceladus and Titan (JET) is an astrobiology mission concept to assess the habitability potential of Saturn's moons Enceladus and Titan by means of an orbiter. Enceladus Life Finder Enceladus Life Finder (ELF) is a proposed astrobiology mission concept for a space probe intended to assess the habitability of the internal aquatic ocean of Enceladus, Saturn's sixth-largest moon. Life Investigation For Enceladus Life Investigation For Enceladus (LIFE) is a proposed astrobiology sample-return mission concept. The spacecraft would enter into Saturn orbit and enable multiple flybys through Enceladus' icy plumes to collect icy plume particles and volatiles and return them to Earth on a capsule. The spacecraft may sample Enceladus' plumes, the E ring of Saturn, and the upper atmosphere of Titan. Oceanus Oceanus is an orbiter proposed in 2017 for the New Frontiers mission No. 4. It would travel to Saturn's moon Titan to assess its habitability. The Oceanus objectives are to reveal Titan's organic chemistry, geology, gravity, and topography, collect 3D reconnaissance data, catalog the organics, and determine where they may interact with liquid water. Explorer of Enceladus and Titan Explorer of Enceladus and Titan (E2T) is an orbiter mission concept that would investigate the evolution and habitability of the Saturnian satellites Enceladus and Titan. The mission concept was proposed in 2017 by the European Space Agency. See also The Living Cosmos
Astrobiology
[ "Astronomy", "Biology" ]
8,184
[ "Origin of life", "Hypothetical life forms", "Speculative evolution", "Astrobiology", "nan", "Biological hypotheses", "Astronomical sub-disciplines" ]
2,792
https://en.wikipedia.org/wiki/Anthropic%20principle
The anthropic principle, also known as the observation selection effect, is the proposition that the range of possible observations that could be made about the universe is limited by the fact that observations are only possible in the type of universe that is capable of developing intelligent life. Proponents of the anthropic principle argue that it explains why the universe has the age and the fundamental physical constants necessary to accommodate intelligent life. If either had been significantly different, no one would have been around to make observations. Anthropic reasoning has been used to address the question as to why certain measured physical constants take the values that they do, rather than some other arbitrary values, and to explain a perception that the universe appears to be finely tuned for the existence of life. There are many different formulations of the anthropic principle. Philosopher Nick Bostrom counts thirty, but the underlying principles can be divided into "weak" and "strong" forms, depending on the types of cosmological claims they entail. Definition and basis The principle was formulated as a response to a series of observations that the laws of nature and parameters of the universe have values that are consistent with conditions for life as it is known, rather than values that would not be consistent with life on Earth. The anthropic principle states that this is an a posteriori necessity, because if life were impossible, no living entity would be there to observe it, and thus it would not be known. That is, it must be possible to observe some universe, and hence, the laws and constants of any such universe must accommodate that possibility. The term anthropic in "anthropic principle" has been argued to be a misnomer. While singling out the currently observable kind of carbon-based life, none of the finely tuned phenomena require human life or some kind of carbon chauvinism. Any form of life or any form of heavy atom, stone, star, or galaxy would do; nothing specifically human or anthropic is involved. The anthropic principle has given rise to some confusion and controversy, partly because the phrase has been applied to several distinct ideas. All versions of the principle have been accused of discouraging the search for a deeper physical understanding of the universe. Critics of the weak anthropic principle point out that its lack of falsifiability entails that it is non-scientific and therefore inherently not useful. Stronger variants of the anthropic principle, which are not tautologies, can still make claims considered controversial by some; these would be contingent upon empirical verification. Anthropic observations In 1961, Robert Dicke noted that the age of the universe, as seen by living observers, cannot be random. Instead, biological factors constrain the universe to be more or less in a "golden age", neither too young nor too old. If the universe were one tenth as old as its present age, there would not have been sufficient time to build up appreciable levels of metallicity (levels of elements besides hydrogen and helium), especially carbon, by nucleosynthesis. Small rocky planets did not yet exist. If the universe were 10 times older than it actually is, most stars would be too old to remain on the main sequence and would have turned into white dwarfs, aside from the dimmest red dwarfs, and stable planetary systems would have already come to an end.
Thus, Dicke explained the coincidence between large dimensionless numbers constructed from the constants of physics and the age of the universe, a coincidence that inspired Dirac's varying-G theory. Dicke later reasoned that the density of matter in the universe must be almost exactly the critical density needed to prevent the Big Crunch (the "Dicke coincidences" argument). The most recent measurements may suggest that the observed density of baryonic matter, and some theoretical predictions of the amount of dark matter, account for about 30% of this critical density, with the rest contributed by a cosmological constant. Steven Weinberg gave an anthropic explanation for this fact: he noted that the cosmological constant has a remarkably low value, some 120 orders of magnitude smaller than the value particle physics predicts (this has been described as the "worst prediction in physics"). However, if the cosmological constant were only several orders of magnitude larger than its observed value, the universe would suffer catastrophic inflation, which would preclude the formation of stars, and hence life. The observed values of the dimensionless physical constants (such as the fine-structure constant) governing the four fundamental interactions are balanced as if fine-tuned to permit the formation of commonly found matter and subsequently the emergence of life. A slight increase in the strong interaction (up to 50% for some authors) would bind the dineutron and the diproton and convert all hydrogen in the early universe to helium; likewise, an increase in the weak interaction also would convert all hydrogen to helium. Water, as well as sufficiently long-lived stable stars, both essential for the emergence of life as it is known, would not exist. More generally, small changes in the relative strengths of the four fundamental interactions can greatly affect the universe's age, structure, and capacity for life. Origin The phrase "anthropic principle" first appeared in Brandon Carter's contribution to a 1973 Kraków symposium honouring Copernicus's 500th birthday. Carter, a theoretical astrophysicist, articulated the Anthropic Principle in reaction to the Copernican Principle, which states that humans do not occupy a privileged position in the Universe. Carter said: "Although our situation is not necessarily central, it is inevitably privileged to some extent." Specifically, Carter disagreed with using the Copernican principle to justify the Perfect Cosmological Principle, which states that all large regions and times in the universe must be statistically identical. The latter principle underlies the steady-state theory, which had recently been falsified by the 1965 discovery of the cosmic microwave background radiation. This discovery was unequivocal evidence that the universe has changed radically over time (for example, via the Big Bang). Carter defined two forms of the anthropic principle, a "weak" one which referred only to anthropic selection of privileged spacetime locations in the universe, and a more controversial "strong" form that addressed the values of the fundamental constants of physics. Roger Penrose explained the weak form as follows: One reason this is plausible is that there are many other places and times in which humans could have evolved. But when applying the strong principle, there is only one universe, with one set of fundamental parameters, so what exactly is the point being made? 
Carter offers two possibilities: First, humans can use their own existence to make "predictions" about the parameters. But second, "as a last resort", humans can convert these predictions into explanations by assuming that there is more than one universe, in fact a large and possibly infinite collection of universes, something that is now called the multiverse ("world ensemble" was Carter's term), in which the parameters (and perhaps the laws of physics) vary across universes. The strong principle then becomes an example of a selection effect, exactly analogous to the weak principle. Postulating a multiverse is certainly a radical step, but taking it could provide at least a partial answer to a question seemingly out of the reach of normal science: "Why do the fundamental laws of physics take the particular form we observe and not another?" Since Carter's 1973 paper, the term anthropic principle has been extended to cover a number of ideas that differ in important ways from his. Particular confusion was caused by the 1986 book The Anthropic Cosmological Principle by John D. Barrow and Frank Tipler, which distinguished between a "weak" and "strong" anthropic principle in a way very different from Carter's, as discussed in the next section. Carter was not the first to invoke some form of the anthropic principle. In fact, the evolutionary biologist Alfred Russel Wallace anticipated the anthropic principle as long ago as 1904: "Such a vast and complex universe as that which we know exists around us, may have been absolutely required [...] in order to produce a world that should be precisely adapted in every detail for the orderly development of life culminating in man." In 1957, Robert Dicke wrote: "The age of the Universe 'now' is not random but conditioned by biological factors [...] [changes in the values of the fundamental constants of physics] would preclude the existence of man to consider the problem." Ludwig Boltzmann may have been one of the first in modern science to use anthropic reasoning. Prior to knowledge of the Big Bang, Boltzmann's thermodynamic concepts painted a picture of a universe that had inexplicably low entropy. Boltzmann suggested several explanations, one of which relied on fluctuations that could produce pockets of low entropy, or Boltzmann universes. While most of the universe is featureless in this model, to Boltzmann it is unremarkable that humanity happens to inhabit a Boltzmann universe, as that is the only place where intelligent life could be. Variants Weak anthropic principle (WAP) (Carter): "... our location in the universe is necessarily privileged to the extent of being compatible with our existence as observers." For Carter, "location" refers to our location in time as well as space. Strong anthropic principle (SAP) (Carter): "[T]he universe (and hence the fundamental parameters on which it depends) must be such as to admit the creation of observers within it at some stage. To paraphrase Descartes, cogito ergo mundus talis est." The Latin tag ("I think, therefore the world is such [as it is]") makes it clear that "must" indicates a deduction from the fact of our existence; the statement is thus a truism.
In their 1986 book, The Anthropic Cosmological Principle, John Barrow and Frank Tipler depart from Carter and define the WAP and SAP as follows: Weak anthropic principle (WAP) (Barrow and Tipler): "The observed values of all physical and cosmological quantities are not equally probable but they take on values restricted by the requirement that there exist sites where carbon-based life can evolve and by the requirements that the universe be old enough for it to have already done so." Unlike Carter, they restrict the principle to carbon-based life, rather than just "observers". A more important difference is that they apply the WAP to the fundamental physical constants, such as the fine-structure constant, the number of spacetime dimensions, and the cosmological constant—topics that fall under Carter's SAP. Strong anthropic principle (SAP) (Barrow and Tipler): "The Universe must have those properties which allow life to develop within it at some stage in its history." This looks very similar to Carter's SAP, but unlike the case with Carter's SAP, the "must" is an imperative, as shown by the following three possible elaborations of the SAP, each proposed by Barrow and Tipler: "There exists one possible Universe 'designed' with the goal of generating and sustaining 'observers'." This can be seen as simply the classic design argument restated in the garb of contemporary cosmology. It implies that the purpose of the universe is to give rise to intelligent life, with the laws of nature and their fundamental physical constants set to ensure that life emerges and evolves. "Observers are necessary to bring the Universe into being." Barrow and Tipler believe that this is a valid conclusion from quantum mechanics, as John Archibald Wheeler has suggested, especially via his idea that information is the fundamental reality (see It from bit) and his participatory anthropic principle (PAP), which is an interpretation of quantum mechanics associated with the ideas of John von Neumann and Eugene Wigner. "An ensemble of other different universes is necessary for the existence of our Universe." By contrast, Carter merely says that an ensemble of universes is necessary for the SAP to count as an explanation. The philosophers John Leslie and Nick Bostrom reject the Barrow and Tipler SAP as a fundamental misreading of Carter. For Bostrom, Carter's anthropic principle just warns us to make allowance for anthropic bias—that is, the bias created by anthropic selection effects (which Bostrom calls "observation" selection effects)—the necessity for observers to exist in order to get a result. He writes: Strong self-sampling assumption (SSSA) (Bostrom): "Each observer-moment should reason as if it were randomly selected from the class of all observer-moments in its reference class." Analysing an observer's experience into a sequence of "observer-moments" helps avoid certain paradoxes; but the main ambiguity is the selection of the appropriate "reference class": for Carter's WAP this might correspond to all real or potential observer-moments in our universe; for the SAP, to all in the multiverse. Bostrom's mathematical development shows that choosing either too broad or too narrow a reference class leads to counter-intuitive results, but he is not able to prescribe an ideal choice. According to Jürgen Schmidhuber, the anthropic principle essentially just says that the conditional probability of finding yourself in a universe compatible with your existence is always 1.
It does not allow for any additional nontrivial predictions such as "gravity won't change tomorrow". To gain more predictive power, additional assumptions on the prior distribution of alternative universes are necessary. Playwright and novelist Michael Frayn describes a form of the strong anthropic principle in his 2006 book The Human Touch, which explores what he characterises as "the central oddity of the Universe". Character of anthropic reasoning Carter chose to focus on a tautological aspect of his ideas, which has resulted in much confusion. In fact, anthropic reasoning interests scientists because of something that is only implicit in the above formal definitions, namely that humans should give serious consideration to there being other universes with different values of the "fundamental parameters"—that is, the dimensionless physical constants and initial conditions for the Big Bang. Carter and others have argued that life would not be possible in most such universes. In other words, the universe humans live in is fine tuned to permit life. Collins & Hawking (1973) characterized Carter's then-unpublished big idea as the postulate that "there is not one universe but a whole infinite ensemble of universes with all possible initial conditions". If this is granted, the anthropic principle provides a plausible explanation for the fine tuning of our universe: the "typical" universe is not fine-tuned, but given enough universes, a small fraction will be capable of supporting intelligent life. Ours must be one of these, and so the observed fine tuning should be no cause for wonder. Although philosophers have discussed related concepts for centuries, in the early 1970s the only genuine physical theory yielding a multiverse of sorts was the many-worlds interpretation of quantum mechanics. This would allow variation in initial conditions, but not in the truly fundamental constants. Since that time a number of mechanisms for producing a multiverse have been suggested: see the review by Max Tegmark. An important development in the 1980s was the combination of inflation theory with the hypothesis that some parameters are determined by symmetry breaking in the early universe, which allows parameters previously thought of as "fundamental constants" to vary over very large distances, thus eroding the distinction between Carter's weak and strong principles. At the beginning of the 21st century, the string landscape emerged as a mechanism for varying essentially all the constants, including the number of spatial dimensions. The anthropic idea that fundamental parameters are selected from a multitude of different possibilities (each actual in some universe or other) contrasts with the traditional hope of physicists for a theory of everything having no free parameters. As Albert Einstein said: "What really interests me is whether God had any choice in the creation of the world." In 2002, some proponents of the leading candidate for a "theory of everything", string theory, proclaimed "the end of the anthropic principle" since there would be no free parameters to select. In 2003, however, Leonard Susskind stated: "... it seems plausible that the landscape is unimaginably large and diverse. This is the behavior that gives credence to the anthropic principle." The modern form of a design argument is put forth by intelligent design.
Proponents of intelligent design often cite the fine-tuning observations that (in part) preceded the formulation of the anthropic principle by Carter as a proof of an intelligent designer. Opponents of intelligent design are not limited to those who hypothesize that other universes exist; they may also argue, anti-anthropically, that the universe is less fine-tuned than often claimed, or that accepting fine tuning as a brute fact is less astonishing than the idea of an intelligent creator. Furthermore, even accepting fine tuning, Sober (2005) and Ikeda and Jefferys argue that the anthropic principle as conventionally stated actually undermines intelligent design. Paul Davies's book The Goldilocks Enigma (2006) reviews the current state of the fine-tuning debate in detail, and concludes by enumerating the following responses to that debate: 1. The absurd universe: Our universe just happens to be the way it is. 2. The unique universe: There is a deep underlying unity in physics that necessitates the Universe being the way it is. A Theory of Everything will explain why the various features of the Universe must have exactly the values that have been recorded. 3. The multiverse: Multiple universes exist, having all possible combinations of characteristics, and humans inevitably find themselves within a universe that allows us to exist. 4. Intelligent design: A creator designed the Universe with the purpose of supporting complexity and the emergence of intelligence. 5. The life principle: There is an underlying principle that constrains the Universe to evolve towards life and mind. 6. The self-explaining universe: A closed explanatory or causal loop: "perhaps only universes with a capacity for consciousness can exist". This is Wheeler's participatory anthropic principle (PAP). 7. The fake universe: Humans live inside a virtual reality simulation. Omitted here is Lee Smolin's model of cosmological natural selection, also known as fecund universes, which proposes that universes have "offspring" that are more plentiful if they resemble our universe. Also see Gardner (2005). Clearly each of these hypotheses resolves some aspects of the puzzle, while leaving others unanswered. Followers of Carter would admit only option 3 as an anthropic explanation, whereas 3 through 6 are covered by different versions of Barrow and Tipler's SAP (which would also include 7 if it is considered a variant of 4, as in Tipler 1994). The anthropic principle, at least as Carter conceived it, can be applied on scales much smaller than the whole universe. For example, Carter (1983) inverted the usual line of reasoning and pointed out that when interpreting the evolutionary record, one must take into account cosmological and astrophysical considerations. With this in mind, Carter concluded that, given the best estimates of the age of the universe, the evolutionary chain culminating in Homo sapiens probably admits only one or two low-probability links. Observational evidence No possible observational evidence bears on Carter's WAP, as it is merely advice to the scientist and asserts nothing debatable. The obvious test of Barrow's SAP, which says that the universe is "required" to support life, is to find evidence of life in universes other than ours. Any other universe is, by most definitions, unobservable (otherwise it would be included in our portion of this universe). Thus, in principle Barrow's SAP cannot be falsified by observing a universe in which an observer cannot exist.
Philosopher John Leslie states that the Carter SAP (with multiverse) predicts the following: Physical theory will evolve so as to strengthen the hypothesis that early phase transitions occur probabilistically rather than deterministically, in which case there will be no deep physical reason for the values of fundamental constants; Various theories for generating multiple universes will prove robust; Evidence that the universe is fine tuned will continue to accumulate; No life with a non-carbon chemistry will be discovered; Mathematical studies of galaxy formation will confirm that it is sensitive to the rate of expansion of the universe. Hogan has emphasised that it would be very strange if all fundamental constants were strictly determined, since this would leave us with no ready explanation for apparent fine tuning. In fact, humans might have to resort to something akin to Barrow and Tipler's SAP: there would be no option for such a universe not to support life. Probabilistic predictions of parameter values can be made given: a particular multiverse with a "measure", i.e. a well-defined "density of universes" (so, for parameter X, one can calculate the prior probability P(X₀) dX that X lies in the range (X₀, X₀ + dX)), and an estimate of the number of observers in each universe, N(X) (e.g., this might be taken as proportional to the number of stars in the universe). The probability of observing value X is then proportional to N(X) P(X). A generic feature of an analysis of this nature is that the expected values of the fundamental physical constants should not be "over-tuned", i.e. if there is some perfectly tuned predicted value (e.g. zero), the observed value need be no closer to that predicted value than what is required to make life possible. The small but finite value of the cosmological constant can be regarded as a successful prediction in this sense. One thing that would not count as evidence for the anthropic principle is evidence that the Earth or the Solar System occupied a privileged position in the universe, in violation of the Copernican principle, unless there was some reason to think that that position was a necessary condition for our existence as observers. Applications of the principle The nucleosynthesis of carbon-12 Fred Hoyle may have invoked anthropic reasoning to predict an astrophysical phenomenon. He is said to have reasoned, from the prevalence on Earth of life forms whose chemistry was based on carbon-12 nuclei, that there must be an undiscovered resonance in the carbon-12 nucleus facilitating its synthesis in stellar interiors via the triple-alpha process. He then calculated the energy of this undiscovered resonance to be 7.6 million electronvolts. Willie Fowler's research group soon found this resonance, and its measured energy was close to Hoyle's prediction. However, in 2010 Helge Kragh argued that Hoyle did not use anthropic reasoning in making his prediction, since he made his prediction in 1953 and anthropic reasoning did not come into prominence until 1980. He called this an "anthropic myth", saying that Hoyle and others made an after-the-fact connection between carbon and life decades after the discovery of the resonance. Cosmic inflation Don Page criticized the entire theory of cosmic inflation as follows.
He emphasized that initial conditions that made possible a thermodynamic arrow of time in a universe with a Big Bang origin must include the assumption that at the initial singularity the entropy of the universe was low and therefore extremely improbable. Paul Davies rebutted this criticism by invoking an inflationary version of the anthropic principle. While Davies accepted the premise that the initial state of the visible universe (which filled a microscopic amount of space before inflating) had to possess a very low entropy value—due to random quantum fluctuations—to account for the observed thermodynamic arrow of time, he deemed this fact an advantage for the theory. That the tiny patch of space from which our observable universe grew had to be extremely orderly, to allow the post-inflation universe to have an arrow of time, makes it unnecessary to adopt any "ad hoc" hypotheses about the initial entropy state, hypotheses other Big Bang theories require. String theory String theory predicts a large number of possible universes, called the "backgrounds" or "vacua". The set of these vacua is often called the "multiverse" or "anthropic landscape" or "string landscape". Leonard Susskind has argued that the existence of a large number of vacua puts anthropic reasoning on firm ground: only universes whose properties are such as to allow observers to exist are observed, while a possibly much larger set of universes lacking such properties go unnoticed. Steven Weinberg believes the anthropic principle may be appropriated by cosmologists committed to nontheism, and refers to that principle as a "turning point" in modern science because applying it to the string landscape "may explain how the constants of nature that we observe can take values suitable for life without being fine-tuned by a benevolent creator". Others—most notably David Gross but also Luboš Motl, Peter Woit, and Lee Smolin—argue that this is not predictive. Max Tegmark, Mario Livio, and Martin Rees argue that only some aspects of a physical theory need be observable and/or testable for the theory to be accepted, and that many well-accepted theories are far from completely testable at present. Jürgen Schmidhuber (2000–2002) points out that Ray Solomonoff's theory of universal inductive inference and its extensions already provide a framework for maximizing our confidence in any theory, given a limited sequence of physical observations, and some prior distribution on the set of possible explanations of the universe. Zhi-Wei Wang and Samuel L. Braunstein proved that life's existence in the universe depends on various fundamental constants. Their work suggests that, without a complete understanding of these constants, one might incorrectly perceive the universe as being intelligently designed for life. This perspective challenges the view that our universe is unique in its ability to support life. Dimensions of spacetime There are two kinds of dimensions: spatial (bidirectional) and temporal (unidirectional). Let the number of spatial dimensions be N and the number of temporal dimensions be T. That N = 3 and T = 1, setting aside the compactified dimensions invoked by string theory and undetectable to date, can be explained by appealing to the physical consequences of letting N differ from 3 and T differ from 1. The argument is often of an anthropic character and possibly the first of its kind, albeit before the complete concept came into vogue.
The implicit notion that the dimensionality of the universe is special is first attributed to Gottfried Wilhelm Leibniz, who in the Discourse on Metaphysics suggested that the world is "the one which is at the same time the simplest in hypothesis and the richest in phenomena". Immanuel Kant argued that 3-dimensional space was a consequence of the inverse square law of universal gravitation. While Kant's argument is historically important, John D. Barrow said that it "gets the punch-line back to front: it is the three-dimensionality of space that explains why we see inverse-square force laws in Nature, not vice-versa" (Barrow 2002:204). In 1920, Paul Ehrenfest showed that if there is only a single time dimension and more than three spatial dimensions, the orbit of a planet about its Sun cannot remain stable. The same is true of a star's orbit around the center of its galaxy. Ehrenfest also showed that if there are an even number of spatial dimensions, then the different parts of a wave impulse will travel at different speeds. If there are 5 + 2k spatial dimensions, where k is a positive whole number, then wave impulses become distorted. In 1922, Hermann Weyl claimed that Maxwell's theory of electromagnetism can be expressed in terms of an action only for a four-dimensional manifold. Finally, Tangherlini showed in 1963 that when there are more than three spatial dimensions, electron orbitals around nuclei cannot be stable; electrons would either fall into the nucleus or disperse. Max Tegmark expands on the preceding argument in the following anthropic manner. If T differs from 1, the behavior of physical systems could not be predicted reliably from knowledge of the relevant partial differential equations. In such a universe, intelligent life capable of manipulating technology could not emerge. Moreover, if T > 1, Tegmark maintains that protons and electrons would be unstable and could decay into particles having greater mass than themselves. (This is not a problem if the particles have a sufficiently low temperature.) Lastly, if N < 3, gravitation of any kind becomes problematic, and the universe would probably be too simple to contain observers. For example, when N = 2, nerves cannot cross without intersecting. Hence anthropic and other arguments rule out all cases except N = 3 and T = 1, which happens to describe the world around us. On the other hand, in view of creating black holes from an ideal monatomic gas under its self-gravity, Wei-Xiang Feng showed that (3 + 1)-dimensional spacetime is the marginal dimensionality. Moreover, it is the unique dimensionality that can afford a "stable" gas sphere with a "positive" cosmological constant. However, a self-gravitating gas cannot be stably bound if the mass sphere is larger than ~10²¹ solar masses, due to the small positivity of the cosmological constant observed. In 2019, James Scargill argued that complex life may be possible with two spatial dimensions. According to Scargill, a purely scalar theory of gravity may enable a local gravitational force, and 2D networks may be sufficient for complex neural networks.
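To see how these exclusions interlock, the sketch below simply encodes the constraints quoted in this section as a decision table over N and T. It is a toy illustration, not anything from Tegmark, Ehrenfest, or the other cited authors; the function name, the grid, and the reason strings are inventions for this example.

```python
# Toy decision table for the dimensionality argument described above.
# Each branch restates a constraint from the text; this is not a
# calculation, just a formalization of the stated exclusions.

def anthropically_viable(n_spatial: int, n_temporal: int) -> tuple[bool, str]:
    """Return (viable, reason) for a universe with N spatial, T temporal dimensions."""
    if n_temporal != 1:
        return False, "T != 1: physics not reliably predictable from PDEs (Tegmark)"
    if n_spatial > 3:
        return False, "N > 3: orbits and atoms unstable (Ehrenfest, Tangherlini)"
    if n_spatial < 3:
        return False, "N < 3: gravity problematic; nerves cannot cross (cf. Scargill 2019)"
    return True, "N = 3, T = 1: the observed case"

for n in range(1, 6):
    for t in range(1, 4):
        viable, reason = anthropically_viable(n, t)
        print(f"N={n}, T={t}: {'viable' if viable else 'excluded'} ({reason})")
```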
Metaphysical interpretations Some of the metaphysical disputes and speculations include, for example, attempts to back Pierre Teilhard de Chardin's earlier interpretation of the universe as being Christ-centered (compare Omega Point), expressing a creatio evolutiva instead of the elder notion of creatio continua. From a strictly secular, humanist perspective, it allows as well to put human beings back in the center, an anthropogenic shift in cosmology, a development on which Karl W. Giberson has commented laconically. William Sims Bainbridge disagreed with de Chardin's optimism about a future Omega point at the end of history, arguing that logically, humans are trapped at the Omicron point, in the middle of the Greek alphabet rather than advancing to the end, because the universe does not need to have any characteristics that would support our further technical progress, if the anthropic principle merely requires it to be suitable for our evolution to this point. The anthropic cosmological principle A thorough extant study of the anthropic principle is the book The anthropic cosmological principle by John D. Barrow, a cosmologist, and Frank J. Tipler, a cosmologist and mathematical physicist. This book sets out in detail the many known anthropic coincidences and constraints, including many found by its authors. While the book is primarily a work of theoretical astrophysics, it also touches on quantum physics, chemistry, and earth science. An entire chapter argues that Homo sapiens is, with high probability, the only intelligent species in the Milky Way. The book begins with an extensive review of many topics in the history of ideas the authors deem relevant to the anthropic principle, because the authors believe that principle has important antecedents in the notions of teleology and intelligent design. They discuss the writings of Fichte, Hegel, Bergson, and Alfred North Whitehead, and the Omega Point cosmology of Teilhard de Chardin. Barrow and Tipler carefully distinguish teleological reasoning from eutaxiological reasoning; the former asserts that order must have a consequent purpose; the latter asserts more modestly that order must have a planned cause. They attribute this important but nearly always overlooked distinction to an obscure 1883 book by L. E. Hicks. Seeing little sense in a principle requiring intelligent life to emerge while remaining indifferent to the possibility of its eventual extinction, Barrow and Tipler propose the final anthropic principle (FAP): Intelligent information-processing must come into existence in the universe, and, once it comes into existence, it will never die out. Barrow and Tipler submit that the FAP is both a valid physical statement and "closely connected with moral values". FAP places strong constraints on the structure of the universe, constraints developed further in Tipler's The Physics of Immortality. One such constraint is that the universe must end in a Big Crunch, which seems unlikely in view of the tentative conclusions drawn since 1998 about dark energy, based on observations of very distant supernovas. In his review of Barrow and Tipler, Martin Gardner ridiculed the FAP by quoting the last two sentences of their book as defining a completely ridiculous anthropic principle (CRAP). Reception and controversies Carter has frequently expressed regret for his own choice of the word "anthropic", because it conveys the misleading impression that the principle involves humans in particular, to the exclusion of non-human intelligence more broadly. Others have criticised the word "principle" as being too grandiose to describe straightforward applications of selection effects. A common criticism of Carter's SAP is that it is an easy deus ex machina that discourages searches for physical explanations. To quote Penrose again: "It tends to be invoked by theorists whenever they do not have a good enough theory to explain the observed facts."
Carter's SAP and Barrow and Tipler's WAP have been dismissed as truisms or trivial tautologies—that is, statements true solely by virtue of their logical form and not because a substantive claim is made and supported by observation of reality. As such, they are criticized as an elaborate way of saying, "If things were different, they would be different", which is a valid statement, but does not make a claim of some factual alternative over another. Critics of the Barrow and Tipler SAP claim that it is neither testable nor falsifiable, and thus is not a scientific statement but rather a philosophical one. The same criticism has been leveled against the hypothesis of a multiverse, although some argue that it does make falsifiable predictions. A modified version of this criticism is that humanity understands so little about the emergence of life, especially intelligent life, that it is effectively impossible to calculate the number of observers in each universe. Also, the prior distribution of universes as a function of the fundamental constants is easily modified to get any desired result. Many criticisms focus on versions of the strong anthropic principle, such as Barrow and Tipler's anthropic cosmological principle, which are teleological notions that tend to describe the existence of life as a necessary prerequisite for the observable constants of physics. Similarly, Stephen Jay Gould, Michael Shermer, and others claim that the stronger versions of the anthropic principle seem to reverse known causes and effects. Gould compared the claim that the universe is fine-tuned for the benefit of our kind of life to saying that sausages were made long and narrow so that they could fit into modern hotdog buns, or saying that ships had been invented to house barnacles. These critics cite the vast physical, fossil, genetic, and other biological evidence consistent with life having been fine-tuned through natural selection to adapt to the physical and geophysical environment in which life exists. Life appears to have adapted to the universe, and not vice versa. Some applications of the anthropic principle have been criticized as an argument by lack of imagination, for tacitly assuming that carbon compounds and water are the only possible chemistry of life (sometimes called "carbon chauvinism"; see also alternative biochemistry). The range of fundamental physical constants consistent with the evolution of carbon-based life may also be wider than those who advocate a fine-tuned universe have argued. For instance, Harnik et al. propose a Weakless Universe in which the weak nuclear force is eliminated. They show that this has no significant effect on the other fundamental interactions, provided some adjustments are made in how those interactions work. However, if some of the fine-tuned details of our universe were violated, that would rule out complex structures of any kind—stars, planets, galaxies, etc. Lee Smolin has offered a theory designed to improve on the lack of imagination that has been ascribed to anthropic principles. He puts forth his fecund universes theory, which assumes universes have "offspring" through the creation of black holes whose offspring universes have values of physical constants that depend on those of the mother universe. The philosophers of cosmology John Earman, Ernan McMullin, and Jesús Mosterín contend that "in its weak version, the anthropic principle is a mere tautology, which does not allow us to explain anything or to predict anything that we did not already know. 
In its strong version, it is a gratuitous speculation". A further criticism by Mosterín concerns the flawed "anthropic" inference from the assumption of an infinity of worlds to the existence of one like ours. References Stenger, Victor J. (1999). "Anthropic design". The Skeptical Inquirer 23 (August 31, 1999): 40–43. Mosterín, Jesús (2005). "Anthropic explanations in cosmology". In P. Háyek, L. Valdés and D. Westerstahl (eds.), Logic, Methodology and Philosophy of Science: Proceedings of the 12th International Congress of the LMPS. London: King's College Publications, pp. 441–473. External links Nick Bostrom: web site devoted to the anthropic principle. Friederich, Simon. "Fine-tuning": review article of the discussion about fine-tuning, highlighting the role of the anthropic principles. Gijsbers, Victor (2000). "Theistic anthropic principle refuted" – Positive Atheism Magazine. Chown, Marcus. "Anything Goes", New Scientist, 6 June 1998. On Max Tegmark's work. Stephen Hawking, Steven Weinberg, Alexander Vilenkin, David Gross and Lawrence Krauss: Debate on anthropic reasoning, Kavli-CERCA conference video archive. Sober, Elliott R. (2009). "Absence of evidence and evidence of absence – Evidential transitivity in connection with fossils, fishing, fine-tuning, and firing squads". Philosophical Studies 143: 63–90. "Anthropic coincidence" – The anthropic controversy as a segue to Lee Smolin's theory of cosmological natural selection. Leonard Susskind and Lee Smolin debate the anthropic principle. Debate among scientists on arxiv.org. Evolutionary probability and fine tuning. Benevolent design and the anthropic principle at MathPages. Critical review of "The privileged planet". The anthropic principle – a review. Berger, Daniel (2002). "An impertinent résumé of the Anthropic cosmological principle": a critique of Barrow & Tipler. Jürgen Schmidhuber: Papers on algorithmic theories of everything and the anthropic principle's lack of predictive power. Paul Davies: Cosmic jackpot – Interview about the anthropic principle (starts at 40 min), 15 May 2007. Astronomical hypotheses Concepts in epistemology Physical cosmology Principles Religion and science
Anthropic principle
[ "Physics", "Astronomy" ]
8,402
[ "Astronomical hypotheses", "Astronomical sub-disciplines", "Philosophy of astronomy", "Theoretical physics", "Astrophysics", "Astronomical controversies", "Anthropic principle", "Physical cosmology" ]
2,819
https://en.wikipedia.org/wiki/Aerodynamics
Aerodynamics (from Ancient Greek ἀήρ, aero (air) + δυναμική (dynamics)) is the study of the motion of air, particularly when affected by a solid object, such as an airplane wing. It involves topics covered in the field of fluid dynamics and its subfield of gas dynamics, and is an important domain of study in aeronautics. The term aerodynamics is often used synonymously with gas dynamics, the difference being that "gas dynamics" applies to the study of the motion of all gases, and is not limited to air. The formal study of aerodynamics began in the modern sense in the eighteenth century, although observations of fundamental concepts such as aerodynamic drag were recorded much earlier. Most of the early efforts in aerodynamics were directed toward achieving heavier-than-air flight, which was first demonstrated by Otto Lilienthal in 1891. Since then, the use of aerodynamics through mathematical analysis, empirical approximations, wind tunnel experimentation, and computer simulations has formed a rational basis for the development of heavier-than-air flight and a number of other technologies. Recent work in aerodynamics has focused on issues related to compressible flow, turbulence, and boundary layers and has become increasingly computational in nature. History Modern aerodynamics only dates back to the seventeenth century, but aerodynamic forces have been harnessed by humans for thousands of years in sailboats and windmills, and images and stories of flight appear throughout recorded history, such as the Ancient Greek legend of Icarus and Daedalus. Fundamental concepts of continuum, drag, and pressure gradients appear in the work of Aristotle and Archimedes. In 1726, Sir Isaac Newton became the first person to develop a theory of air resistance, making him one of the first aerodynamicists. Dutch-Swiss mathematician Daniel Bernoulli followed in 1738 with Hydrodynamica in which he described a fundamental relationship between pressure, density, and flow velocity for incompressible flow known today as Bernoulli's principle, which provides one method for calculating aerodynamic lift. In 1757, Leonhard Euler published the more general Euler equations which could be applied to both compressible and incompressible flows. The Euler equations were extended to incorporate the effects of viscosity in the first half of the 1800s, resulting in the Navier–Stokes equations. The Navier–Stokes equations are the most general governing equations of fluid flow but are difficult to solve for the flow around all but the simplest of shapes. In 1799, Sir George Cayley became the first person to identify the four aerodynamic forces of flight (weight, lift, drag, and thrust), as well as the relationships between them, and in doing so outlined the path toward achieving heavier-than-air flight for the next century. In 1871, Francis Herbert Wenham constructed the first wind tunnel, allowing precise measurements of aerodynamic forces. Drag theories were developed by Jean le Rond d'Alembert, Gustav Kirchhoff, and Lord Rayleigh. In 1889, Charles Renard, a French aeronautical engineer, became the first person to reasonably predict the power needed for sustained flight. Otto Lilienthal, the first person to become highly successful with glider flights, was also the first to propose thin, curved airfoils that would produce high lift and low drag. Building on these developments as well as research carried out in their own wind tunnel, the Wright brothers flew the first powered airplane on December 17, 1903. During the time of the first flights, Frederick W.
Lanchester, Martin Kutta, and Nikolai Zhukovsky independently created theories that connected circulation of a fluid flow to lift. Kutta and Zhukovsky went on to develop a two-dimensional wing theory. Expanding upon the work of Lanchester, Ludwig Prandtl is credited with developing the mathematics behind thin-airfoil and lifting-line theories as well as work with boundary layers. As aircraft speed increased, designers began to encounter challenges associated with air compressibility at speeds near the speed of sound. The differences in airflow under such conditions led to problems in aircraft control, increased drag due to shock waves, and the threat of structural failure due to aeroelastic flutter. The ratio of the flow speed to the speed of sound was named the Mach number after Ernst Mach, who was one of the first to investigate the properties of supersonic flow. Macquorn Rankine and Pierre Henri Hugoniot independently developed the theory for flow properties before and after a shock wave, while Jakob Ackeret led the initial work of calculating the lift and drag of supersonic airfoils. Theodore von Kármán and Hugh Latimer Dryden introduced the term transonic to describe flow speeds between the critical Mach number and Mach 1, where drag increases rapidly. This rapid increase in drag led aerodynamicists and aviators to disagree on whether supersonic flight was achievable until the sound barrier was broken in 1947 using the Bell X-1 aircraft. By the time the sound barrier was broken, aerodynamicists' understanding of the subsonic and low supersonic flow had matured. The Cold War prompted the design of an ever-evolving line of high-performance aircraft. Computational fluid dynamics began as an effort to solve for flow properties around complex objects and has rapidly grown to the point where entire aircraft can be designed using computer software, with wind-tunnel tests followed by flight tests to confirm the computer predictions. Understanding of supersonic and hypersonic aerodynamics has matured since the 1960s, and the goals of aerodynamicists have shifted from the behaviour of fluid flow to the engineering of a vehicle such that it interacts predictably with the fluid flow. Designing aircraft for supersonic and hypersonic conditions, as well as the desire to improve the aerodynamic efficiency of current aircraft and propulsion systems, continues to motivate new research in aerodynamics, while work continues to be done on important problems in basic aerodynamic theory related to flow turbulence and the existence and uniqueness of analytical solutions to the Navier–Stokes equations. Fundamental concepts Understanding the motion of air around an object (often called a flow field) enables the calculation of forces and moments acting on the object. In many aerodynamics problems, the forces of interest are the fundamental forces of flight: lift, drag, thrust, and weight. Of these, lift and drag are aerodynamic forces, i.e. forces due to air flow over a solid body. Calculation of these quantities is often founded upon the assumption that the flow field behaves as a continuum. Continuum flow fields are characterized by properties such as flow velocity, pressure, density, and temperature, which may be functions of position and time. These properties may be directly or indirectly measured in aerodynamics experiments or calculated starting with the equations for conservation of mass, momentum, and energy in air flows.
Density, flow velocity, and an additional property, viscosity, are used to classify flow fields. Flow classification Flow velocity is used to classify flows according to speed regime. Subsonic flows are flow fields in which the air speed field is always below the local speed of sound. Transonic flows include both regions of subsonic flow and regions in which the local flow speed is greater than the local speed of sound. Supersonic flows are defined to be flows in which the flow speed is greater than the speed of sound everywhere. A fourth classification, hypersonic flow, refers to flows where the flow speed is much greater than the speed of sound. Aerodynamicists disagree on the precise definition of hypersonic flow. Compressible flow accounts for varying density within the flow. Subsonic flows are often idealized as incompressible, i.e. the density is assumed to be constant. Transonic and supersonic flows are compressible, and calculations that neglect the changes of density in these flow fields will yield inaccurate results. Viscosity is associated with the frictional forces in a flow. In some flow fields, viscous effects are very small, and approximate solutions may safely neglect viscous effects. These approximations are called inviscid flows. Flows for which viscosity is not neglected are called viscous flows. Finally, aerodynamic problems may also be classified by the flow environment. External aerodynamics is the study of flow around solid objects of various shapes (e.g. around an airplane wing), while internal aerodynamics is the study of flow through passages inside solid objects (e.g. through a jet engine). Continuum assumption Unlike liquids and solids, gases are composed of discrete molecules which occupy only a small fraction of the volume filled by the gas. On a molecular level, flow fields are made up of the collisions of many individual gas molecules with each other and with solid surfaces. However, in most aerodynamics applications, the discrete molecular nature of gases is ignored, and the flow field is assumed to behave as a continuum. This assumption allows fluid properties such as density and flow velocity to be defined everywhere within the flow. The validity of the continuum assumption is dependent on the density of the gas and the application in question. For the continuum assumption to be valid, the mean free path length must be much smaller than the length scale of the application in question. For example, many aerodynamics applications deal with aircraft flying in atmospheric conditions, where the mean free path length is on the order of micrometers and where the body is orders of magnitude larger. In these cases, the length scale of the aircraft ranges from a few meters to a few tens of meters, which is much larger than the mean free path length. For such applications, the continuum assumption is reasonable. The continuum assumption is less valid for extremely low-density flows, such as those encountered by vehicles at very high altitudes (e.g. 300,000 ft/90 km) or satellites in Low Earth orbit. In those cases, statistical mechanics is a more accurate method of solving the problem than is continuum aerodynamics. The Knudsen number can be used to guide the choice between statistical mechanics and the continuous formulation of aerodynamics.
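As a rough illustration of this criterion, the sketch below computes the Knudsen number Kn = λ/L for two bodies. The mean-free-path value and the 0.01 continuum cutoff are common textbook figures used here only for illustration; they are not taken from this article.

```python
# Minimal continuum-assumption check: Knudsen number Kn = lambda / L,
# where lambda is the molecular mean free path and L the body's length
# scale. Kn << 1 supports the continuum assumption; larger Kn suggests
# a statistical (free-molecular) treatment instead.

MFP_SEA_LEVEL = 68e-9  # mean free path of air at sea level, roughly 68 nm

def knudsen(mean_free_path_m: float, length_scale_m: float) -> float:
    return mean_free_path_m / length_scale_m

for name, scale in [("airliner wing, ~10 m", 10.0), ("dust grain, ~1 micron", 1e-6)]:
    kn = knudsen(MFP_SEA_LEVEL, scale)
    verdict = "continuum OK" if kn < 0.01 else "continuum questionable"
    print(f"{name}: Kn = {kn:.1e} -> {verdict}")
```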
Conservation laws The assumption of a fluid continuum allows problems in aerodynamics to be solved using fluid dynamics conservation laws. Three conservation principles are used: Conservation of mass Conservation of mass requires that mass is neither created nor destroyed within a flow; the mathematical formulation of this principle is known as the mass continuity equation. Conservation of momentum The mathematical formulation of this principle can be considered an application of Newton's Second Law. Momentum within a flow is only changed by external forces, which may include both surface forces, such as viscous (frictional) forces, and body forces, such as weight. The momentum conservation principle may be expressed as either a vector equation or separated into a set of three scalar equations (x, y, z components). Conservation of energy The energy conservation equation states that energy is neither created nor destroyed within a flow, and that any addition or subtraction of energy to a volume in the flow is caused by heat transfer, or by work into and out of the region of interest. Together, these equations are known as the Navier–Stokes equations, although some authors define the term to only include the momentum equation(s). The Navier–Stokes equations have no known analytical solution and are solved in modern aerodynamics using computational techniques. Because high-speed computers were not historically available, and because solving these complex equations remains computationally costly even now that such computers exist, simplifications of the Navier–Stokes equations have been and continue to be employed. The Euler equations are a set of similar conservation equations which neglect viscosity and may be used in cases where the effect of viscosity is expected to be small. Further simplifications lead to Laplace's equation and potential flow theory. Additionally, Bernoulli's equation is a solution in one dimension to both the momentum and energy conservation equations. The ideal gas law or another such equation of state is often used in conjunction with these equations to form a determined system that allows the solution for the unknown variables. Branches of aerodynamics Aerodynamic problems are classified by the flow environment or properties of the flow, including flow speed, compressibility, and viscosity. External aerodynamics is the study of flow around solid objects of various shapes. Evaluating the lift and drag on an airplane or the shock waves that form in front of the nose of a rocket are examples of external aerodynamics. Internal aerodynamics is the study of flow through passages in solid objects. For instance, internal aerodynamics encompasses the study of the airflow through a jet engine or through an air conditioning pipe. Aerodynamic problems can also be classified according to whether the flow speed is below, near or above the speed of sound. A problem is called subsonic if all the speeds in the problem are less than the speed of sound, transonic if speeds both below and above the speed of sound are present (normally when the characteristic speed is approximately the speed of sound), supersonic when the characteristic flow speed is greater than the speed of sound, and hypersonic when the flow speed is much greater than the speed of sound. Aerodynamicists disagree over the precise definition of hypersonic flow; a rough definition considers flows with Mach numbers above 5 to be hypersonic. The influence of viscosity on the flow dictates a third classification. Some problems may encounter only very small viscous effects, in which case viscosity can be considered to be negligible. The approximations to these problems are called inviscid flows. Flows for which viscosity cannot be neglected are called viscous flows.
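As a small worked example of the simplified conservation relations mentioned above, the following sketch evaluates the incompressible form of Bernoulli's equation, p + ½ρv² = constant along a streamline (elevation terms ignored). The density is the standard sea-level value and the airspeeds are invented for the example.

```python
# Incompressible Bernoulli along a streamline: p + 0.5*rho*v**2 = const,
# so a rise in flow speed corresponds to a drop in static pressure.

RHO_AIR = 1.225  # sea-level air density, kg/m^3

def static_pressure_change(v1: float, v2: float, rho: float = RHO_AIR) -> float:
    """Change in static pressure (Pa) as flow speed goes from v1 to v2 (m/s)."""
    return -0.5 * rho * (v2**2 - v1**2)

# Air accelerating from 50 m/s to 70 m/s, e.g. over a wing's upper surface:
dp = static_pressure_change(50.0, 70.0)
print(f"Static pressure changes by {dp:.0f} Pa")  # about -1470 Pa (a drop)
```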
Incompressible aerodynamics An incompressible flow is a flow in which density is constant in both time and space. Although all real fluids are compressible, a flow is often approximated as incompressible if the effect of density changes causes only small changes to the calculated results. This is more likely to be true when the flow speeds are significantly lower than the speed of sound. Effects of compressibility are more significant at speeds close to or above the speed of sound. The Mach number is used to evaluate whether the incompressibility can be assumed; otherwise the effects of compressibility must be included. Subsonic flow Subsonic (or low-speed) aerodynamics describes fluid motion in flows which are much lower than the speed of sound everywhere in the flow. There are several branches of subsonic flow but one special case arises when the flow is inviscid, incompressible and irrotational. This case is called potential flow and allows the differential equations that describe the flow to be a simplified version of the equations of fluid dynamics, thus making available to the aerodynamicist a range of quick and easy solutions. In solving a subsonic problem, one decision to be made by the aerodynamicist is whether to incorporate the effects of compressibility. Compressibility is a description of the amount of change of density in the flow. When the effects of compressibility on the solution are small, the assumption that density is constant may be made. The problem is then an incompressible low-speed aerodynamics problem. When the density is allowed to vary, the flow is called compressible. In air, compressibility effects are usually ignored when the Mach number in the flow does not exceed 0.3 (about 335 feet (102 m) per second or 228 miles (366 km) per hour at 60 °F (16 °C)). Above Mach 0.3, the problem flow should be described using compressible aerodynamics. Compressible aerodynamics According to the theory of aerodynamics, a flow is considered to be compressible if the density changes along a streamline. This means that – unlike incompressible flow – changes in density are considered. In general, this is the case where the Mach number in part or all of the flow exceeds 0.3. The Mach 0.3 value is rather arbitrary, but it is used because gas flows with a Mach number below that value demonstrate changes in density of less than 5%. Furthermore, that maximum 5% density change occurs at the stagnation point (the point on the object where flow speed is zero), while the density changes around the rest of the object will be significantly lower. Transonic, supersonic, and hypersonic flows are all compressible flows. Transonic flow The term transonic refers to a range of flow velocities just below and above the local speed of sound (generally taken as Mach 0.8–1.2). It is defined as the range of speeds between the critical Mach number, when some parts of the airflow over an aircraft become supersonic, and a higher speed, typically near Mach 1.2, when all of the airflow is supersonic. Between these speeds, some of the airflow is supersonic, while some of the airflow is not supersonic. Supersonic flow Supersonic aerodynamic problems are those involving flow speeds greater than the speed of sound. Calculating the lift on the Concorde during cruise can be an example of a supersonic aerodynamic problem.
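The regime boundaries discussed above can be collected into a toy classifier. The thresholds (0.3 for neglecting compressibility, roughly 0.8–1.2 for transonic, 5 for hypersonic) simply restate the rough conventions quoted in the text; the function itself is an illustration, not a standard routine.

```python
# Toy Mach-number regime classifier using the conventions quoted above.

def classify_flow(mach: float) -> str:
    if mach < 0.3:
        return "subsonic; incompressible approximation usually acceptable"
    if mach < 0.8:
        return "subsonic; compressibility effects should be included"
    if mach <= 1.2:
        return "transonic; mixed subsonic and supersonic regions"
    if mach < 5.0:
        return "supersonic"
    return "hypersonic"

for m in (0.2, 0.5, 0.95, 2.0, 7.0):
    print(f"Mach {m}: {classify_flow(m)}")
```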
Supersonic flow behaves very differently from subsonic flow. Fluids react to differences in pressure; pressure changes are how a fluid is "told" to respond to its environment. Therefore, since sound is, in fact, an infinitesimal pressure difference propagating through a fluid, the speed of sound in that fluid can be considered the fastest speed that "information" can travel in the flow. This difference most obviously manifests itself in the case of a fluid striking an object. In front of that object, the fluid builds up a stagnation pressure as impact with the object brings the moving fluid to rest. In fluid traveling at subsonic speed, this pressure disturbance can propagate upstream, changing the flow pattern ahead of the object and giving the impression that the fluid "knows" the object is there by seemingly adjusting its movement and is flowing around it. In a supersonic flow, however, the pressure disturbance cannot propagate upstream. Thus, when the fluid finally reaches the object it strikes it and the fluid is forced to change its properties – temperature, density, pressure, and Mach number – in an extremely violent and irreversible fashion called a shock wave. The presence of shock waves, along with the compressibility effects of high-velocity (see Reynolds number) fluids, is the central difference between the supersonic and subsonic aerodynamics regimes. Hypersonic flow In aerodynamics, hypersonic speeds are speeds that are highly supersonic. In the 1970s, the term generally came to refer to speeds of Mach 5 (5 times the speed of sound) and above. The hypersonic regime is a subset of the supersonic regime. Hypersonic flow is characterized by high temperature flow behind a shock wave, viscous interaction, and chemical dissociation of gas. Associated terminology The incompressible and compressible flow regimes produce many associated phenomena, such as boundary layers and turbulence. Boundary layers The concept of a boundary layer is important in many problems in aerodynamics. The viscosity and fluid friction in the air is approximated as being significant only in this thin layer. This assumption makes the description of such aerodynamics much more tractable mathematically. Turbulence In aerodynamics, turbulence is characterized by chaotic property changes in the flow. These include low momentum diffusion, high momentum convection, and rapid variation of pressure and flow velocity in space and time. Flow that is not turbulent is called laminar flow. Aerodynamics in other fields Engineering design Aerodynamics is a significant element of vehicle design, including road cars and trucks, where the main goal is to reduce the vehicle drag coefficient, and racing cars, where in addition to reducing drag the goal is also to increase the overall level of downforce. Aerodynamics is also important in the prediction of forces and moments acting on sailing vessels. It is used in the design of mechanical components such as hard drive heads. Structural engineers resort to aerodynamics, and particularly aeroelasticity, when calculating wind loads in the design of large buildings, bridges, and wind turbines. The aerodynamics of internal passages is important in heating/ventilation, gas piping, and in automotive engines, where detailed flow patterns strongly affect the performance of the engine.
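To make the drag-reduction goal concrete, here is a minimal sketch of the standard drag equation, F_d = ½ρv²C_dA. The drag coefficient and frontal area below are made-up but plausible values for a passenger car, not figures from this article.

```python
# Aerodynamic drag F_d = 0.5 * rho * v^2 * C_d * A, and the power needed
# to overcome it at constant speed (P = F_d * v).

RHO_AIR = 1.225  # kg/m^3 at sea level

def drag_force(v_mps: float, c_d: float, frontal_area_m2: float) -> float:
    return 0.5 * RHO_AIR * v_mps**2 * c_d * frontal_area_m2

v = 30.0  # m/s, about 108 km/h
f_d = drag_force(v, c_d=0.30, frontal_area_m2=2.2)
print(f"Drag at {v:.0f} m/s: {f_d:.0f} N; power to overcome it: {f_d * v / 1000:.1f} kW")
```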
Environmental design Urban aerodynamics are studied by town planners and designers seeking to improve amenity in outdoor spaces, or in creating urban microclimates to reduce the effects of urban pollution. The field of environmental aerodynamics describes ways in which atmospheric circulation and flight mechanics affect ecosystems. Aerodynamic equations are used in numerical weather prediction. Ball-control in sports Sports in which aerodynamics are of crucial importance include soccer, table tennis, cricket, baseball, and golf, in which most players can control the trajectory of the ball using the "Magnus effect". See also Aeronautics Aerostatics Aviation Insect flight – how bugs fly List of aerospace engineering topics List of engineering topics Nose cone design Fluid dynamics Computational fluid dynamics External links NASA's Guide to Aerodynamics. Aerodynamics for Students. Aerodynamics for Pilots (archived). Aerodynamics and Race Car Tuning (archived). Aerodynamic Related Projects. eFluids Bicycle Aerodynamics. Application of Aerodynamics in Formula One (F1) (archived). Aerodynamics in Car Racing. Aerodynamics of Birds. NASA Aerodynamics Index. Aerodynamics Aerospace engineering Energy in transport
Aerodynamics
[ "Physics", "Chemistry", "Engineering" ]
4,401
[ "Aerodynamics", "Physical systems", "Transport", "Aerospace engineering", "Energy in transport", "Fluid dynamics" ]
2,839
https://en.wikipedia.org/wiki/Angular%20momentum
Angular momentum (sometimes called moment of momentum or rotational momentum) is the rotational analog of linear momentum. It is an important physical quantity because it is a conserved quantity – the total angular momentum of a closed system remains constant. Angular momentum has both a direction and a magnitude, and both are conserved. Bicycles and motorcycles, flying discs, rifled bullets, and gyroscopes owe their useful properties to conservation of angular momentum. Conservation of angular momentum is also why hurricanes form spirals and neutron stars have high rotational rates. In general, conservation limits the possible motion of a system, but it does not uniquely determine it. The three-dimensional angular momentum for a point particle is classically represented as a pseudovector L = r × p, the cross product of the particle's position vector r (relative to some origin) and its momentum vector p; the latter is p = mv in Newtonian mechanics. Unlike linear momentum, angular momentum depends on where this origin is chosen, since the particle's position is measured from it. Angular momentum is an extensive quantity; that is, the total angular momentum of any composite system is the sum of the angular momenta of its constituent parts. For a continuous rigid body or a fluid, the total angular momentum is the volume integral of angular momentum density (angular momentum per unit volume in the limit as volume shrinks to zero) over the entire body. Similar to conservation of linear momentum, where it is conserved if there is no external force, angular momentum is conserved if there is no external torque. Torque can be defined as the rate of change of angular momentum, analogous to force. The net external torque on any system is always equal to the total torque on the system; the sum of all internal torques of any system is always 0 (this is the rotational analogue of Newton's third law of motion). Therefore, for a closed system (where there is no net external torque), the total torque on the system must be 0, which means that the total angular momentum of the system is constant. The change in angular momentum for a particular interaction is called angular impulse, sometimes twirl. Angular impulse is the angular analog of (linear) impulse. Examples The trivial case of the angular momentum of a body in an orbit is given by L = 2πmfr², where m is the mass of the orbiting object, f is the orbit's frequency and r is the orbit's radius. The angular momentum of a uniform rigid sphere rotating around its axis, instead, is given by L = (4/5)πmfr², where m is the sphere's mass, f is the frequency of rotation and r is the sphere's radius. Thus, for example, the orbital angular momentum of the Earth with respect to the Sun is about 2.66 × 10⁴⁰ kg⋅m²⋅s⁻¹, while its rotational angular momentum is about 7.05 × 10³³ kg⋅m²⋅s⁻¹.
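The two Earth values just quoted can be checked against the formulas of this section. The sketch below uses rounded textbook constants; note that the spin figure relies on the uniform-sphere approximation, which slightly overstates the true value because the real Earth is centrally condensed.

```python
# Check of the quoted Earth angular momenta: L_orbit = m*v*r for a point
# mass on a (nearly) circular orbit, and L_spin = (4/5)*pi*M*f*R^2 for a
# uniform sphere. Constants are rounded textbook values.

import math

M_EARTH = 5.97e24      # kg
R_EARTH = 6.371e6      # m
R_ORBIT = 1.496e11     # m (1 astronomical unit)
V_ORBIT = 2.98e4       # m/s, mean orbital speed
F_SPIN = 1 / 86164.0   # rotations per second (one sidereal day)

L_orbit = M_EARTH * V_ORBIT * R_ORBIT
L_spin = (4 / 5) * math.pi * M_EARTH * F_SPIN * R_EARTH**2

print(f"orbital: {L_orbit:.2e} kg m^2/s (text: ~2.66e40)")
print(f"spin:    {L_spin:.2e} kg m^2/s (text: ~7.05e33)")
```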
In the case of a uniform rigid sphere rotating around its axis, if, instead of its mass, its density is known, the angular momentum is given by L = (16/15)π²ρfr⁵, where ρ is the sphere's density, f is the frequency of rotation and r is the sphere's radius. In the simplest case of a spinning disk, the angular momentum is given by L = πmfr², where m is the disk's mass, f is the frequency of rotation and r is the disk's radius. If instead the disk rotates about its diameter (e.g. a coin toss), its angular momentum is given by L = (π/2)mfr². Definition in classical mechanics Just as for angular velocity, there are two special types of angular momentum of an object: the spin angular momentum is the angular momentum about the object's centre of mass, while the orbital angular momentum is the angular momentum about a chosen center of rotation. The Earth has an orbital angular momentum by nature of revolving around the Sun, and a spin angular momentum by nature of its daily rotation around the polar axis. The total angular momentum is the sum of the spin and orbital angular momenta. In the case of the Earth the primary conserved quantity is the total angular momentum of the solar system because angular momentum is exchanged to a small but important extent among the planets and the Sun. The orbital angular momentum vector of a point particle is always parallel and directly proportional to its orbital angular velocity vector ω, where the constant of proportionality depends on both the mass of the particle and its distance from origin. The spin angular momentum vector of a rigid body is proportional but not always parallel to the spin angular velocity vector Ω, making the constant of proportionality a second-rank tensor rather than a scalar. Orbital angular momentum in two dimensions Angular momentum is a vector quantity (more precisely, a pseudovector) that represents the product of a body's rotational inertia and rotational velocity (in radians/sec) about a particular axis. However, if the particle's trajectory lies in a single plane, it is sufficient to discard the vector nature of angular momentum, and treat it as a scalar (more precisely, a pseudoscalar). Angular momentum can be considered a rotational analog of linear momentum. Thus, where linear momentum p is proportional to mass m and linear speed v (p = mv), angular momentum L is proportional to moment of inertia I and angular speed ω measured in radians per second (L = Iω). Unlike mass, which depends only on amount of matter, moment of inertia depends also on the position of the axis of rotation and the distribution of the matter. Unlike linear velocity, which does not depend upon the choice of origin, orbital angular velocity is always measured with respect to a fixed origin. Therefore, strictly speaking, L should be referred to as the angular momentum relative to that center. In the case of circular motion of a single particle, we can use I = r²m and ω = v/r to expand angular momentum as L = r²m(v/r), reducing to L = rmv: the product of the radius of rotation r and the linear momentum of the particle p = mv, where v is the linear (tangential) speed. This simple analysis can also apply to non-circular motion if one uses the component of the motion perpendicular to the radius vector, L = rmv⊥, where v⊥ is the perpendicular component of the motion. Expanding, rearranging, and reducing, angular momentum can also be expressed as L = r⊥mv, where r⊥ is the length of the moment arm, a line dropped perpendicularly from the origin onto the path of the particle. It is this definition, (length of moment arm) × (linear momentum), to which the term moment of momentum refers. Scalar angular momentum from Lagrangian mechanics Another approach is to define angular momentum as the conjugate momentum (also called canonical momentum) of the angular coordinate expressed in the Lagrangian of the mechanical system. Consider a mechanical system with a mass m constrained to move in a circle of radius r in the absence of any external force field.
The kinetic energy of the system is T = ½mr²θ̇², where θ is the angular coordinate and θ̇ its time derivative, and the potential energy is U = 0. Then the Lagrangian is ℒ = T − U = ½mr²θ̇². The generalized momentum "canonically conjugate to" the coordinate θ is defined by p_θ = ∂ℒ/∂θ̇ = mr²θ̇, which is precisely the angular momentum L = Iω with I = mr² and ω = θ̇. Orbital angular momentum in three dimensions To completely define orbital angular momentum in three dimensions, it is required to know the rate at which the position vector sweeps out angle, the direction perpendicular to the instantaneous plane of angular displacement, and the mass involved, as well as how this mass is distributed in space. By retaining this vector nature of angular momentum, the general nature of the equations is also retained, and can describe any sort of three-dimensional motion about the center of rotation – circular, linear, or otherwise. In vector notation, the orbital angular momentum of a point particle in motion about the origin can be expressed as L = Iω, where I = r²m is the moment of inertia for a point mass, ω is the orbital angular velocity of the particle about the origin, r is the position vector of the particle relative to the origin (with magnitude r), v is the linear velocity of the particle relative to the origin, and m is the mass of the particle. This can be expanded, reduced, and by the rules of vector algebra rearranged to L = r × mv = r × p, which is the cross product of the position vector r and the linear momentum p = mv of the particle. By the definition of the cross product, the vector L is perpendicular to both r and p. It is directed perpendicular to the plane of angular displacement, as indicated by the right-hand rule – so that the angular velocity is seen as counter-clockwise from the head of the vector. Conversely, the vector L defines the plane in which r and p lie. By defining a unit vector û perpendicular to the plane of angular displacement, a scalar angular speed ω results, where ω û = ω and ω = v⊥/r, where v⊥ is the perpendicular component of the motion, as above. The two-dimensional scalar equations of the previous section can thus be given direction: L = Iω = r²mω û = rmv⊥ û, and L = rmv û for circular motion, where all of the motion is perpendicular to the radius r. In the spherical coordinate system the angular momentum vector expresses as L = m r × v = mr²(θ̇ φ̂ − φ̇ sin θ θ̂). Analogy to linear momentum Angular momentum can be described as the rotational analog of linear momentum. Like linear momentum it involves elements of mass and displacement. Unlike linear momentum it also involves elements of position and shape. Many problems in physics involve matter in motion about some certain point in space, be it in actual rotation about it, or simply moving past it, where it is desired to know what effect the moving matter has on the point—can it exert energy upon it or perform work about it? Energy, the ability to do work, can be stored in matter by setting it in motion—a combination of its inertia and its displacement. Inertia is measured by its mass, and displacement by its velocity. Their product, mv, is the matter's momentum. Referring this momentum to a central point introduces a complication: the momentum is not applied to the point directly. For instance, a particle of matter at the outer edge of a wheel is, in effect, at the end of a lever of the same length as the wheel's radius, its momentum turning the lever about the center point. This imaginary lever is known as the moment arm. It has the effect of multiplying the momentum's effort in proportion to its length, an effect known as a moment. Hence, the particle's momentum referred to a particular point is the angular momentum, sometimes called, as here, the moment of momentum of the particle versus that particular center point.
The equation combines a moment (a mass $m$ turning moment arm $r$) with a linear (straight-line equivalent) speed $v$. Linear speed referred to the central point is simply the product of the distance and the angular speed versus the point: $v = r\omega$, another moment. Hence, angular momentum contains a double moment: $L = rmr\omega$. Simplifying slightly, $L = r^2m\omega$, the quantity $r^2m$ is the particle's moment of inertia, sometimes called the second moment of mass. It is a measure of rotational inertia. The above analogy of the translational momentum and rotational momentum can be expressed in vector form: $\mathbf{p} = m\mathbf{v}$ for linear motion, $\mathbf{L} = I\boldsymbol\omega$ for rotation. The direction of momentum is related to the direction of the velocity for linear movement. The direction of angular momentum is related to the angular velocity of the rotation. Because moment of inertia is a crucial part of the spin angular momentum, the latter necessarily includes all of the complications of the former, which is calculated by multiplying elementary bits of the mass by the squares of their distances from the center of rotation. Therefore, the total moment of inertia, and the angular momentum, is a complex function of the configuration of the matter about the center of rotation and the orientation of the rotation for the various bits. For a rigid body, for instance a wheel or an asteroid, the orientation of rotation is simply the position of the rotation axis versus the matter of the body. It may or may not pass through the center of mass, or it may lie completely outside of the body. For the same body, angular momentum may take a different value for every possible axis about which rotation may take place. It reaches a minimum when the axis passes through the center of mass. For a collection of objects revolving about a center, for instance all of the bodies of the Solar System, the orientations may be somewhat organized, as is the Solar System, with most of the bodies' axes lying close to the system's axis. Their orientations may also be completely random. In brief, the more mass and the farther it is from the center of rotation (the longer the moment arm), the greater the moment of inertia, and therefore the greater the angular momentum for a given angular velocity. In many cases the moment of inertia, and hence the angular momentum, can be simplified by $I = k^2m$, where $k$ is the radius of gyration, the distance from the axis at which the entire mass may be considered as concentrated. Similarly, for a point mass the moment of inertia is defined as $I = r^2m$, where $r$ is the radius of the point mass from the center of rotation, and for any collection of particles as the sum, $I = \sum_i r_i^2 m_i$. Angular momentum's dependence on position and shape is reflected in its units versus linear momentum: kg⋅m²/s or N⋅m⋅s for angular momentum versus kg⋅m/s or N⋅s for linear momentum. When calculating angular momentum as the product of the moment of inertia times the angular velocity, the angular velocity must be expressed in radians per second, where the radian assumes the dimensionless value of unity. (When performing dimensional analysis, it may be productive to use orientational analysis which treats radians as a base unit, but this is not done in the International system of units). The units of angular momentum can be interpreted as torque⋅time. An object with angular momentum of $L$ N⋅m⋅s can be reduced to zero angular velocity by an angular impulse of $L$ N⋅m⋅s. 
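A short sketch of the sums and the torque-time interpretation just described, with made-up masses and distances:

```python
import numpy as np

# I = sum_i m_i r_i^2 for point masses, then L = I * omega; the masses and
# distances below are made-up illustrative values.
masses = np.array([1.0, 2.0, 0.5])   # kg
radii  = np.array([0.2, 0.5, 1.0])   # m, distances from the rotation axis

I = np.sum(masses * radii**2)        # kg.m^2, the "second moment of mass"
omega = 3.0                          # rad/s (must be radians per second)
L = I * omega                        # kg.m^2/s, equivalently N.m.s

# Radius of gyration k: the single distance at which the entire mass would
# give the same moment of inertia, I = k^2 m.
k = np.sqrt(I / masses.sum())

# Units as torque x time: a constant opposing torque tau stops the body in
# t = L / tau, i.e. an angular impulse tau * t equal to L.
tau = 0.7                            # N.m
print(I, L, k, L / tau)
```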
The plane perpendicular to the axis of angular momentum and passing through the center of mass is sometimes called the invariable plane, because the direction of the axis remains fixed if only the interactions of the bodies within the system, free from outside influences, are considered. One such plane is the invariable plane of the Solar System. Angular momentum and torque Newton's second law of motion can be expressed mathematically, $\mathbf{F} = m\mathbf{a}$, or force = mass × acceleration. The rotational equivalent for point particles may be derived as follows: $L = I\omega$, which means that the torque (i.e. the time derivative of the angular momentum) is $\tau = \frac{dI}{dt}\omega + I\frac{d\omega}{dt}$. Because the moment of inertia is $I = mr^2$, it follows that $\frac{dI}{dt} = 2mr\frac{dr}{dt}$, and $\tau = 2mr\dot{r}\omega + mr^2\frac{d\omega}{dt}$, which reduces to $\tau = I\alpha + 2mr\dot{r}\omega$. This is the rotational analog of Newton's second law. Note that the torque is not necessarily proportional or parallel to the angular acceleration (as one might expect). The reason for this is that the moment of inertia of a particle can change with time, something that cannot occur for ordinary mass. Conservation of angular momentum General considerations A rotational analog of Newton's third law of motion might be written, "In a closed system, no torque can be exerted on any matter without the exertion on some other matter of an equal and opposite torque about the same axis." Hence, angular momentum can be exchanged between objects in a closed system, but total angular momentum before and after an exchange remains constant (is conserved). Seen another way, a rotational analogue of Newton's first law of motion might be written, "A rigid body continues in a state of uniform rotation unless acted upon by an external influence." Thus with no external influence to act upon it, the original angular momentum of the system remains constant. The conservation of angular momentum is used in analyzing central force motion. If the net force on some body is directed always toward some point, the center, then there is no torque on the body with respect to the center, as all of the force is directed along the radius vector, and none is perpendicular to the radius. Mathematically, torque $\boldsymbol\tau = \mathbf{r}\times\mathbf{F} = \mathbf{0}$, because in this case $\mathbf{r}$ and $\mathbf{F}$ are parallel vectors. Therefore, the angular momentum of the body about the center is constant. This is the case with gravitational attraction in the orbits of planets and satellites, where the gravitational force is always directed toward the primary body and orbiting bodies conserve angular momentum by exchanging distance and velocity as they move about the primary. Central force motion is also used in the analysis of the Bohr model of the atom. For a planet, angular momentum is distributed between the spin of the planet and its revolution in its orbit, and these are often exchanged by various mechanisms. The conservation of angular momentum in the Earth–Moon system results in the transfer of angular momentum from Earth to Moon, due to tidal torque the Moon exerts on the Earth. This in turn results in the slowing down of the rotation rate of Earth, at about 65.7 nanoseconds per day, and in gradual increase of the radius of Moon's orbit, at about 3.82 centimeters per year. The conservation of angular momentum explains the angular acceleration of an ice skater as they bring their arms and legs close to the vertical axis of rotation. By bringing part of the mass of their body closer to the axis, they decrease their body's moment of inertia. 
Because angular momentum is the product of moment of inertia and angular velocity, if the angular momentum remains constant (is conserved), then the angular velocity (rotational speed) of the skater must increase. The same phenomenon results in extremely fast spin of compact stars (like white dwarfs, neutron stars and black holes) when they are formed out of much larger and slower rotating stars. Conservation is not always a full explanation for the dynamics of a system but is a key constraint. For example, a spinning top is subject to gravitational torque making it lean over and change the angular momentum about the nutation axis, but neglecting friction at the point of spinning contact, it has a conserved angular momentum about its spinning axis, and another about its precession axis. Also, in any planetary system, the planets, star(s), comets, and asteroids can all move in numerous complicated ways, but only so that the angular momentum of the system is conserved. Noether's theorem states that every conservation law is associated with a symmetry (invariant) of the underlying physics. The symmetry associated with conservation of angular momentum is rotational invariance. The fact that the physics of a system is unchanged if it is rotated by any angle about an axis implies that angular momentum is conserved. Relation to Newton's second law of motion While angular momentum total conservation can be understood separately from Newton's laws of motion as stemming from Noether's theorem in systems symmetric under rotations, it can also be understood simply as an efficient method of calculation of results that can also be otherwise arrived at directly from Newton's second law, together with laws governing the forces of nature (such as Newton's third law, Maxwell's equations and Lorentz force). Indeed, given initial conditions of position and velocity for every point, and the forces at such a condition, one may use Newton's second law to calculate the second derivative of position, and solving for this gives full information on the development of the physical system with time. Note, however, that this is no longer true in quantum mechanics, due to the existence of particle spin, which is angular momentum that cannot be described by the cumulative effect of point-like motions in space. As an example, consider decreasing of the moment of inertia, e.g. when a figure skater is pulling in their hands, speeding up the circular motion. In terms of angular momentum conservation, we have, for angular momentum L, moment of inertia I and angular velocity ω: $L = I\omega = \text{constant}$. Using this, we see that the change requires an energy of: $\Delta E = \Delta\left(\frac{L^2}{2I}\right) = -\frac{L^2}{2I^2}\,\Delta I$, so that a decrease in the moment of inertia requires investing energy. This can be compared to the work done as calculated using Newton's laws. Each point in the rotating body is accelerating, at each point of time, with radial acceleration of: Let us observe a point of mass m, whose position vector relative to the center of motion is perpendicular to the z-axis at a given point of time, and is at a distance z. The centripetal force on this point, keeping the circular motion, is: Thus the work required for moving this point to a distance dz farther from the center of motion is: For a non-pointlike body one must integrate over this, with m replaced by the mass density per unit z. This gives: which is exactly the energy required for keeping the angular momentum conserved. Note, that the above calculation can also be performed per mass, using kinematics only. 
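A numerical sketch of the skater calculation above, with illustrative (not measured) moments of inertia:

```python
# Conservation L = I * omega with decreasing I, as in the skater example; the
# moments of inertia and initial spin below are illustrative numbers.
I1, omega1 = 4.0, 3.0        # kg.m^2, rad/s (arms extended)
I2 = 1.6                     # kg.m^2 (arms pulled in)

L = I1 * omega1              # conserved: 12 kg.m^2/s
omega2 = L / I2              # spin increases to 7.5 rad/s

E1 = 0.5 * I1 * omega1**2    # 18 J
E2 = 0.5 * I2 * omega2**2    # 45 J
# The difference E2 - E1 = 27 J is the energy the skater must invest by
# pulling mass inward, matching the delta(L^2 / 2I) expression above.
print(omega2, E2 - E1)
```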
Thus the phenomenon of a figure skater gaining tangential velocity while pulling their hands in can be understood as follows in layman's language: The skater's palms are not moving in a straight line, so they are constantly accelerating inwards, but do not gain additional speed because the acceleration is always applied while their motion inwards is zero. However, this is different when pulling the palms closer to the body: The acceleration due to rotation now increases the speed; but because of the rotation, the increase in speed does not translate to a significant speed inwards, but to an increase of the rotation speed. Stationary-action principle In classical mechanics it can be shown that the rotational invariance of action functionals implies conservation of angular momentum. The action is defined in classical physics as a functional of positions, often represented by the use of square brackets, and the final and initial times. It assumes the following form in Cartesian coordinates: where the repeated indices indicate summation over the index. If the action is invariant under an infinitesimal transformation, it can be mathematically stated as: $\delta S = 0$. Under the transformation, the action becomes: where we can employ the expansion of the terms up to first order in the infinitesimal parameter: giving the following change in action: Since all rotations can be expressed as the matrix exponential of skew-symmetric matrices, i.e. as $R = e^{\theta A}$, where $A$ is a skew-symmetric matrix and $\theta$ is the angle of rotation, we can express the change of coordinates due to the rotation $R$, up to first order of the infinitesimal angle of rotation, as: Combining the equation of motion and rotational invariance of action, we get from the above equations that: Since this is true for any skew-symmetric matrix, it results in the conservation of the following quantity: This corresponds to the conservation of angular momentum throughout the motion. Lagrangian formalism In Lagrangian mechanics, angular momentum for rotation around a given axis, is the conjugate momentum of the generalized coordinate of the angle around the same axis. For example, $L_z$, the angular momentum around the z axis, is: $L_z = \frac{\partial\mathcal{L}}{\partial\dot\theta_z}$, where $\mathcal{L}$ is the Lagrangian and $\theta_z$ is the angle around the z axis. Note that $\dot\theta_z$, the time derivative of the angle, is the angular velocity $\omega_z$. Ordinarily, the Lagrangian depends on the angular velocity through the kinetic energy: The latter can be written by separating the velocity to its radial and tangential part, with the tangential part at the x-y plane, around the z-axis, being equal to: where the subscript i stands for the i-th body, and m, vT and ωz stand for mass, tangential velocity around the z-axis and angular velocity around that axis, respectively. For a body that is not point-like, with density ρ, we have instead: where integration runs over the area of the body, and Iz is the moment of inertia around the z-axis. Thus, assuming the potential energy does not depend on ωz (this assumption may fail for electromagnetic systems), we have the angular momentum of the ith object: We have thus far rotated each object by a separate angle; we may also define an overall angle θz by which we rotate the whole system, thus rotating also each object around the z-axis, and have the overall angular momentum: From Euler–Lagrange equations it then follows that: Since the Lagrangian is dependent upon the angles of the object only through the potential, we have: which is the torque on the ith object. 
Suppose the system is invariant to rotations, so that the potential is independent of an overall rotation by the angle θz (thus it may depend on the angles of objects only through their differences, in the form $\theta_i - \theta_j$). We therefore get for the total angular momentum: And thus the angular momentum around the z-axis is conserved. This analysis can be repeated separately for each axis, giving conservation of the angular momentum vector. However, the angles around the three axes cannot be treated simultaneously as generalized coordinates, since they are not independent; in particular, two angles per point suffice to determine its position. In the case of a rigid body, fully describing it requires, in addition to three translational degrees of freedom, the specification of three rotational degrees of freedom; however, these cannot be defined as rotations around the Cartesian axes (see Euler angles). This caveat is reflected in quantum mechanics in the non-trivial commutation relations of the different components of the angular momentum operator. Hamiltonian formalism Equivalently, in Hamiltonian mechanics the Hamiltonian can be described as a function of the angular momentum. As before, the part of the kinetic energy related to rotation around the z-axis for the ith object is: $\frac{L_{z,i}^2}{2I_{z,i}}$, which is analogous to the energy dependence upon momentum along the z-axis, $\frac{p_z^2}{2m}$. Hamilton's equations relate the angle around the z-axis to its conjugate momentum, the angular momentum around the same axis: The first equation gives $\dot\theta_z = \frac{\partial\mathcal{H}}{\partial L_z} = \frac{L_z}{I_z}$. And so we get the same results as in the Lagrangian formalism. Note, that for combining all axes together, we write the kinetic energy as: $T = \frac{p_r^2}{2m} + \frac{\mathbf{L}\cdot I^{-1}\mathbf{L}}{2}$, where pr is the momentum in the radial direction, and the moment of inertia is a 3-dimensional matrix; bold letters stand for 3-dimensional vectors. For point-like bodies we have: $T = \frac{p_r^2}{2m} + \frac{L^2}{2mr^2}$. This form of the kinetic energy part of the Hamiltonian is useful in analyzing central potential problems, and is easily transformed to a quantum mechanical work frame (e.g. in the hydrogen atom problem). Angular momentum in orbital mechanics While in classical mechanics the language of angular momentum can be replaced by Newton's laws of motion, it is particularly useful for motion in central potential such as planetary motion in the solar system. Thus, the orbit of a planet in the solar system is defined by its energy, angular momentum and angles of the orbit major axis relative to a coordinate frame. In astrodynamics and celestial mechanics, a quantity closely related to angular momentum is defined as $\mathbf{h} = \frac{\mathbf{L}}{m} = \mathbf{r}\times\mathbf{v}$, called specific angular momentum. Note that mass is often unimportant in orbital mechanics calculations, because motion of a body is determined by gravity. The primary body of the system is often so much larger than any bodies in motion about it that the gravitational effect of the smaller bodies on it can be neglected; it maintains, in effect, constant velocity. The motion of all bodies is affected by its gravity in the same way, regardless of mass, and therefore all move approximately the same way under the same conditions. Solid bodies Angular momentum is also an extremely useful concept for describing rotating rigid bodies such as a gyroscope or a rocky planet. For a continuous mass distribution with density function ρ(r), a differential volume element dV with position vector r within the mass has a mass element dm = ρ(r)dV. 
Therefore, the infinitesimal angular momentum of this element is: $d\mathbf{L} = \mathbf{r}\times dm\,\mathbf{v} = \mathbf{r}\times\rho(\mathbf{r})\,dV\,\mathbf{v}$, and integrating this differential over the volume of the entire mass gives its total angular momentum: $\mathbf{L} = \int_V \mathbf{r}\times\rho(\mathbf{r})\,\mathbf{v}\,dV$. In the derivation which follows, integrals similar to this can replace the sums for the case of continuous mass. Collection of particles For a collection of particles in motion about an arbitrary origin, it is informative to develop the equation of angular momentum by resolving their motion into components about their own center of mass and about the origin. Given, $m_i$ is the mass of particle $i$, $\mathbf{R}_i$ is the position vector of particle $i$ w.r.t. the origin, $\mathbf{V}_i$ is the velocity of particle $i$ w.r.t. the origin, $\mathbf{R}$ is the position vector of the center of mass w.r.t. the origin, $\mathbf{V}$ is the velocity of the center of mass w.r.t. the origin, $\mathbf{r}_i$ is the position vector of particle $i$ w.r.t. the center of mass, $\mathbf{v}_i$ is the velocity of particle $i$ w.r.t. the center of mass. The total mass of the particles is simply their sum, $M = \sum_i m_i$. The position vector of the center of mass is defined by, $M\mathbf{R} = \sum_i m_i\mathbf{R}_i$. By inspection, $\mathbf{R}_i = \mathbf{R} + \mathbf{r}_i$ and $\mathbf{V}_i = \mathbf{V} + \mathbf{v}_i$. The total angular momentum of the collection of particles is the sum of the angular momentum of each particle, $\mathbf{L} = \sum_i \mathbf{R}_i\times m_i\mathbf{V}_i$. Expanding $\mathbf{R}_i$, $\mathbf{L} = \sum_i (\mathbf{R} + \mathbf{r}_i)\times m_i\mathbf{V}_i$. Expanding $\mathbf{V}_i$, $\mathbf{L} = \sum_i \left[(\mathbf{R} + \mathbf{r}_i)\times m_i(\mathbf{V} + \mathbf{v}_i)\right]$. It can be shown that $\sum_i m_i\mathbf{r}_i = \mathbf{0}$ and $\sum_i m_i\mathbf{v}_i = \mathbf{0}$, and therefore the second and third terms vanish, $\mathbf{L} = \sum_i \mathbf{R}\times m_i\mathbf{V} + \sum_i \mathbf{r}_i\times m_i\mathbf{v}_i$. The first term can be rearranged, $\sum_i \mathbf{R}\times m_i\mathbf{V} = \mathbf{R}\times M\mathbf{V}$, and total angular momentum for the collection of particles is finally, $\mathbf{L} = \mathbf{R}\times M\mathbf{V} + \sum_i \mathbf{r}_i\times m_i\mathbf{v}_i$. The first term is the angular momentum of the center of mass relative to the origin. Similar to the single particle case below, it is the angular momentum of one particle of mass M at the center of mass moving with velocity V. The second term is the angular momentum of the particles moving relative to the center of mass, similar to the fixed-center-of-mass case below. The result is general—the motion of the particles is not restricted to rotation or revolution about the origin or center of mass. The particles need not be individual masses, but can be elements of a continuous distribution, such as a solid body. Rearranging the last equation by vector identities, multiplying both terms by "one", and grouping appropriately, gives the total angular momentum of the system of particles in terms of moment of inertia $I$ and angular velocity $\boldsymbol\omega$, Single particle case In the case of a single particle moving about the arbitrary origin, $\mathbf{r}_i = \mathbf{v}_i = \mathbf{0}$, and the equations for total angular momentum reduce to, $\mathbf{L} = \mathbf{R}\times m\mathbf{V}$. Case of a fixed center of mass For the case of the center of mass fixed in space with respect to the origin, $\mathbf{V} = \mathbf{0}$, and the equations for total angular momentum reduce to, $\mathbf{L} = \sum_i \mathbf{r}_i\times m_i\mathbf{v}_i$. Angular momentum in general relativity In modern (20th century) theoretical physics, angular momentum (not including any intrinsic angular momentum – see below) is described using a different formalism, instead of a classical pseudovector. In this formalism, angular momentum is the 2-form Noether charge associated with rotational invariance. As a result, angular momentum is generally not conserved locally for general curved spacetimes, unless they have rotational symmetry; whereas globally the notion of angular momentum itself only makes sense if the spacetime is asymptotically flat. If the spacetime is only axially symmetric like for the Kerr metric, the total angular momentum is not conserved, but the component $p_\phi$ about the symmetry axis is conserved, which is related to the invariance of rotating around the symmetry axis, where $p_\phi = g_{\phi\mu}\,m\,u^\mu$, $g$ is the metric, $m$ is the rest mass, $u$ is the four-velocity, and $x$ is the four-position in spherical coordinates. 
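A numerical check of the two-term split derived above, using random illustrative particle states:

```python
import numpy as np

# Verify that total L about the origin = L of the centre of mass + L of the
# motion about the centre of mass, for a small random collection of particles.
rng = np.random.default_rng(0)
m = rng.uniform(1.0, 3.0, size=5)            # masses
R_i = rng.normal(size=(5, 3))                # positions w.r.t. the origin
V_i = rng.normal(size=(5, 3))                # velocities w.r.t. the origin

M = m.sum()
R = (m[:, None] * R_i).sum(axis=0) / M       # centre-of-mass position
V = (m[:, None] * V_i).sum(axis=0) / M       # centre-of-mass velocity

L_total = np.sum(np.cross(R_i, m[:, None] * V_i), axis=0)
L_cm    = np.cross(R, M * V)                 # R x MV: one particle of mass M
L_rel   = np.sum(np.cross(R_i - R, m[:, None] * (V_i - V)), axis=0)

assert np.allclose(L_total, L_cm + L_rel)
print(L_total)
```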
In classical mechanics, the angular momentum of a particle can be reinterpreted as a plane element: $\mathbf{L} = \mathbf{x}\wedge\mathbf{p}$, in which the exterior product (∧) replaces the cross product (×) (these products have similar characteristics but are nonequivalent). This has the advantage of a clearer geometric interpretation as a plane element, defined using the vectors x and p, and the expression is true in any number of dimensions. In Cartesian coordinates: $L_{yz} = yp_z - zp_y$, $L_{zx} = zp_x - xp_z$, $L_{xy} = xp_y - yp_x$, or more compactly in index notation: $L_{ij} = x_ip_j - x_jp_i$. The angular velocity can also be defined as an anti-symmetric second order tensor, with components ωij. The relation between the two anti-symmetric tensors is given by the moment of inertia which must now be a fourth order tensor: $L_{ij} = I_{ijk\ell}\,\omega_{k\ell}$. Again, this equation in L and ω as tensors is true in any number of dimensions. This equation also appears in the geometric algebra formalism, in which L and ω are bivectors, and the moment of inertia is a mapping between them. In relativistic mechanics, the relativistic angular momentum of a particle is expressed as an anti-symmetric tensor of second order: $M_{\alpha\beta} = X_\alpha P_\beta - X_\beta P_\alpha$, in terms of four-vectors, namely the four-position X and the four-momentum P, and absorbs the above L together with the moment of mass, i.e., the product of the relativistic mass of the particle and its centre of mass, which can be thought of as describing the motion of its centre of mass, since mass–energy is conserved. In each of the above cases, for a system of particles the total angular momentum is just the sum of the individual particle angular momenta, and the centre of mass is that of the system. 
For example, electrons have "spin 1/2" (this actually means "spin ħ/2"), photons have "spin 1" (this actually means "spin ħ"), and pi-mesons have spin 0. Finally, there is total angular momentum J, which combines both the spin and orbital angular momentum of all particles and fields. (For one particle, $\mathbf{J} = \mathbf{L} + \mathbf{S}$.) Conservation of angular momentum applies to J, but not to L or S; for example, the spin–orbit interaction allows angular momentum to transfer back and forth between L and S, with the total remaining constant. Electrons and photons need not have integer-based values for total angular momentum, but can also have half-integer values. In molecules the total angular momentum F is the sum of the rovibronic (orbital) angular momentum N, the electron spin angular momentum S, and the nuclear spin angular momentum I. For electronic singlet states the rovibronic angular momentum is denoted J rather than N. As explained by Van Vleck, the components of the molecular rovibronic angular momentum referred to molecule-fixed axes have different commutation relations from those for the components about space-fixed axes. Quantization In quantum mechanics, angular momentum is quantized – that is, it cannot vary continuously, but only in "quantum leaps" between certain allowed values. For any system, the following restrictions on measurement results apply, where $\hbar$ is the reduced Planck constant: the component of angular momentum measured along any direction (such as x, y, or z) can take only the values $m\hbar$, with $m$ an integer or half-integer, and the squared magnitude can take only the values $\hbar^2 j(j+1)$, with $j$ a non-negative integer or half-integer. The reduced Planck constant is tiny by everyday standards, about $10^{-34}$ J⋅s, and therefore this quantization does not noticeably affect the angular momentum of macroscopic objects. However, it is very important in the microscopic world. For example, the structure of electron shells and subshells in chemistry is significantly affected by the quantization of angular momentum. Quantization of angular momentum was first postulated by Niels Bohr in his model of the atom and was later predicted by Erwin Schrödinger in his Schrödinger equation. Uncertainty In the definition $\mathbf{L} = \mathbf{r}\times\mathbf{p}$, six operators are involved: the position operators $r_x$, $r_y$, $r_z$, and the momentum operators $p_x$, $p_y$, $p_z$. However, the Heisenberg uncertainty principle tells us that it is not possible for all six of these quantities to be known simultaneously with arbitrary precision. Therefore, there are limits to what can be known or measured about a particle's angular momentum. It turns out that the best that one can do is to simultaneously measure both the angular momentum vector's magnitude and its component along one axis. The uncertainty is closely related to the fact that different components of an angular momentum operator do not commute, for example $[L_x, L_y] = i\hbar L_z$. (For the precise commutation relations, see angular momentum operator.) Total angular momentum as generator of rotations As mentioned above, orbital angular momentum L is defined as in classical mechanics: $\mathbf{L} = \mathbf{r}\times\mathbf{p}$, but total angular momentum J is defined in a different, more basic way: J is defined as the "generator of rotations". More specifically, J is defined so that the operator $R(\hat{n},\phi) \equiv \exp\!\left(-\frac{i}{\hbar}\,\phi\,\mathbf{J}\cdot\hat{\mathbf{n}}\right)$ is the rotation operator that takes any system and rotates it by angle $\phi$ about the axis $\hat{n}$. (The "exp" in the formula refers to operator exponential.) To put this the other way around, whatever our quantum Hilbert space is, we expect that the rotation group SO(3) will act on it. There is then an associated action of the Lie algebra so(3) of SO(3); the operators describing the action of so(3) on our Hilbert space are the (total) angular momentum operators. 
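A small sketch of the quantization rules above, printing the allowed values for orbital quantum number l = 2 (a standard textbook case, not a figure from this article):

```python
import numpy as np

hbar = 1.054571817e-34   # J.s, reduced Planck constant

# For orbital quantum number l, the magnitude is |L| = hbar * sqrt(l(l+1))
# and the measurable projection on one axis is L_z = m_l * hbar,
# with m_l in {-l, ..., +l}.
l = 2
L_mag = hbar * np.sqrt(l * (l + 1))
for m_l in range(-l, l + 1):
    print(f"L_z = {m_l:+d} * hbar = {m_l * hbar: .3e} J.s")
print(f"|L| = {L_mag:.3e} J.s")

# Note |L_z| <= l*hbar < |L| for l > 0: the measured component along an axis
# is always smaller than the magnitude, consistent with the uncertainty
# between different components described above.
```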
The relationship between the angular momentum operator and the rotation operators is the same as the relationship between Lie algebras and Lie groups in mathematics. The close relationship between angular momentum and rotations is reflected in Noether's theorem that proves that angular momentum is conserved whenever the laws of physics are rotationally invariant. Angular momentum in electrodynamics When describing the motion of a charged particle in an electromagnetic field, the canonical momentum P (derived from the Lagrangian for this system) is not gauge invariant. As a consequence, the canonical angular momentum L = r × P is not gauge invariant either. Instead, the momentum that is physical, the so-called kinetic momentum (used throughout this article), is (in SI units) $\mathbf{p} = m\mathbf{v} = \mathbf{P} - e\mathbf{A}$, where e is the electric charge of the particle and A the magnetic vector potential of the electromagnetic field. The gauge-invariant angular momentum, that is kinetic angular momentum, is given by $\mathbf{K} = \mathbf{r}\times\left(\mathbf{P} - e\mathbf{A}\right)$. The interplay with quantum mechanics is discussed further in the article on canonical commutation relations. Angular momentum in optics In classical Maxwell electrodynamics the Poynting vector is a linear momentum density of electromagnetic field. The angular momentum density vector is given by a vector product as in classical mechanics: The above identities are valid locally, i.e. in each space point at a given moment $t$. Angular momentum in nature and the cosmos Tropical cyclones and other related weather phenomena involve conservation of angular momentum in order to explain the dynamics. Winds revolve slowly around low pressure systems, mainly due to the Coriolis effect. If the low pressure intensifies and the slowly circulating air is drawn toward the center, the molecules must speed up in order to conserve angular momentum. By the time they reach the center, the speeds become destructive. Johannes Kepler determined the laws of planetary motion without knowledge of conservation of momentum. However, not long after his discovery, their derivation was determined from conservation of angular momentum. Planets move more slowly the further they are out in their elliptical orbits, which is explained intuitively by the fact that orbital angular momentum is proportional to the radius of the orbit. Since the mass does not change and the angular momentum is conserved, the velocity drops. Tidal acceleration is an effect of the tidal forces between an orbiting natural satellite (e.g. the Moon) and the primary planet that it orbits (e.g. Earth). The gravitational torque between the Moon and the tidal bulge of Earth causes the Moon to be constantly promoted to a slightly higher orbit (~3.8 cm per year) and Earth to be decelerated (by −25.858 ± 0.003″/cy²) in its rotation (the length of the day increases by ~1.7 ms per century, +2.3 ms from tidal effect and −0.6 ms from post-glacial rebound). The Earth loses angular momentum which is transferred to the Moon such that the overall angular momentum is conserved. Angular momentum in engineering and technology Examples of using conservation of angular momentum for practical advantage are abundant. In engines such as steam engines or internal combustion engines, a flywheel is needed to efficiently convert the lateral motion of the pistons to rotational motion. Inertial navigation systems explicitly use the fact that angular momentum is conserved with respect to the inertial frame of space. 
Inertial navigation is what enables submarine trips under the polar ice cap, but is also crucial to all forms of modern navigation. Rifled bullets use the stability provided by conservation of angular momentum to be more true in their trajectory. The invention of rifled firearms and cannons gave their users a significant strategic advantage in battle, and thus was a technological turning point in history. History Isaac Newton, in the Principia, hinted at angular momentum in his examples of the first law of motion: "A top, whose parts by their cohesion are perpetually drawn aside from rectilinear motions, does not cease its rotation, otherwise than as it is retarded by the air. The greater bodies of the planets and comets, meeting with less resistance in more free spaces, preserve their motions both progressive and circular for a much longer time." He did not further investigate angular momentum directly in the Principia, saying: "From such kind of reflexions also sometimes arise the circular motions of bodies about their own centres. But these are cases which I do not consider in what follows; and it would be too tedious to demonstrate every particular that relates to this subject." However, his geometric proof of the law of areas is an outstanding example of Newton's genius, and indirectly proves angular momentum conservation in the case of a central force. Law of Areas Newton's derivation As a planet orbits the Sun, the line between the Sun and the planet sweeps out equal areas in equal intervals of time. This had been known since Kepler expounded his second law of planetary motion. Newton derived a unique geometric proof, and went on to show that the attractive force of the Sun's gravity was the cause of all of Kepler's laws. During the first interval of time, an object is in motion from point A to point B. Undisturbed, it would continue to point c during the second interval. When the object arrives at B, it receives an impulse directed toward point S. The impulse gives it a small added velocity toward S, such that if this were its only velocity, it would move from B to V during the second interval. By the rules of velocity composition, these two velocities add, and point C is found by construction of parallelogram BcCV. Thus the object's path is deflected by the impulse so that it arrives at point C at the end of the second interval. Because the triangles SBc and SBC have the same base SB and the same height Bc or VC, they have the same area. By symmetry, triangle SBc also has the same area as triangle SAB, therefore the object has swept out equal areas SAB and SBC in equal times. At point C, the object receives another impulse toward S, again deflecting its path during the third interval from d to D. Thus it continues to E and beyond, the triangles SAB, SBc, SBC, SCd, SCD, SDe, SDE all having the same area. Allowing the time intervals to become ever smaller, the path ABCDE approaches indefinitely close to a continuous curve. Note that because this derivation is geometric, and no specific force is applied, it proves a more general law than Kepler's second law of planetary motion. It shows that the Law of Areas applies to any central force, attractive or repulsive, continuous or non-continuous, or zero. 
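A numerical counterpart to Newton's geometric argument: integrate motion under an illustrative central force and confirm that the area swept per unit time stays constant. The force law (inverse-square, GM = 1) and initial state are assumptions chosen for the sketch.

```python
import numpy as np

# Areal velocity dA/dt = |r x v| / 2 under a central force.
def accel(r):
    return -r / np.linalg.norm(r)**3   # attractive inverse-square, GM = 1

r = np.array([1.0, 0.0])
v = np.array([0.0, 1.2])               # slightly non-circular orbit
dt = 1e-3

samples = []
for step in range(60_000):
    # leapfrog (kick-drift-kick): the kicks point along r, so r x v -- and
    # hence the swept area per unit time -- is preserved exactly
    v = v + 0.5 * dt * accel(r)
    r = r + dt * v
    v = v + 0.5 * dt * accel(r)
    if step % 10_000 == 0:
        samples.append(0.5 * abs(r[0] * v[1] - r[1] * v[0]))

print(np.round(samples, 12))           # all equal: equal areas in equal times
```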
Conservation of angular momentum in the law of areas The proportionality of angular momentum to the area swept out by a moving object can be understood by realizing that the bases of the triangles, that is, the lines from S to the object, are equivalent to the radius $r$, and that the heights of the triangles are proportional to the perpendicular component of velocity $v_\perp$. Hence, if the area swept per unit time is constant, then by the triangular area formula $A = \tfrac{1}{2}(\text{base})(\text{height})$, the product $(\text{base})(\text{height})$ and therefore the product $rv_\perp$ are constant: if $r$ and the base length are decreased, $v_\perp$ and height must increase proportionally. Mass is constant, therefore angular momentum is conserved by this exchange of distance and velocity. In the case of triangle SBC, area is equal to $\tfrac{1}{2}$(SB)(VC). Wherever C is eventually located due to the impulse applied at B, the product (SB)(VC), and therefore $rv_\perp$, remain constant. Similarly so for each of the triangles. Another areal proof of conservation of angular momentum for any central force uses Mamikon's sweeping tangents theorem. After Newton Leonhard Euler, Daniel Bernoulli, and Patrick d'Arcy all understood angular momentum in terms of conservation of areal velocity, a result of their analysis of Kepler's second law of planetary motion. It is unlikely that they realized the implications for ordinary rotating matter. In 1736 Euler, like Newton, touched on some of the equations of angular momentum in his Mechanica without further developing them. Bernoulli wrote in a 1744 letter of a "moment of rotational motion", possibly the first conception of angular momentum as we now understand it. In 1799, Pierre-Simon Laplace first realized that a fixed plane was associated with rotation—his invariable plane. Louis Poinsot in 1803 began representing rotations as a line segment perpendicular to the rotation, and elaborated on the "conservation of moments". In 1852 Léon Foucault used a gyroscope in an experiment to display the Earth's rotation. William J. M. Rankine's 1858 Manual of Applied Mechanics defined angular momentum in the modern sense for the first time: "... a line whose length is proportional to the magnitude of the angular momentum, and whose direction is perpendicular to the plane of motion of the body and of the fixed point, and such, that when the motion of the body is viewed from the extremity of the line, the radius-vector of the body seems to have right-handed rotation." In an 1872 edition of the same book, Rankine stated that "The term angular momentum was introduced by Mr. Hayward," probably referring to R.B. Hayward's article On a Direct Method of estimating Velocities, Accelerations, and all similar Quantities with respect to Axes moveable in any manner in Space with Applications, which was introduced in 1856, and published in 1864. Rankine was mistaken, as numerous publications feature the term starting in the late 18th to early 19th centuries. However, Hayward's article apparently was the first use of the term and the concept seen by much of the English-speaking world. Before this, angular momentum was typically referred to as "momentum of rotation" in English. External links "What Do a Submarine, a Rocket and a Football Have in Common? 
Why the prolate spheroid is the shape for success" (Scientific American, November 8, 2010) Conservation of Angular Momentum – a chapter from an online textbook Angular Momentum in a Collision Process – derivation of the three-dimensional case Angular Momentum and Rolling Motion – more momentum theory
Angular momentum
[ "Physics", "Mathematics" ]
9,705
[ "Symmetry", "Physical phenomena", "Mechanical quantities", "Physical quantities", "Equations of physics", "Conservation laws", "Quantity", "Classical mechanics", "Rotation", "Motion (physics)", "Mechanics", "Angular momentum", "Moment (physics)", "Momentum", "Physics theorems" ]
2,840
https://en.wikipedia.org/wiki/Plum%20pudding%20model
The plum pudding model was the first scientific model of the atom to describe an internal structure. It was first proposed by J. J. Thomson in 1904 following his discovery of the electron in 1897, and was rendered obsolete by Ernest Rutherford's discovery of the atomic nucleus in 1911. The model tried to account for two properties of atoms then known: that there are electrons, and that atoms have no net electric charge. Logically there had to be an equal amount of positive charge to balance out the negative charge of the electrons. As Thomson had no idea as to the source of this positive charge, he tentatively proposed that it was everywhere in the atom, and that the atom was spherical. This was the mathematically simplest hypothesis to fit the available evidence, or lack thereof. In such a sphere, the negatively charged electrons would distribute themselves in a more or less even manner throughout the volume, simultaneously repelling each other while being attracted to the positive sphere's center. Despite Thomson's efforts, his model couldn't account for emission spectra and valencies. Based on experimental studies of alpha particle scattering (in the gold foil experiment), Ernest Rutherford developed an alternative model for the atom featuring a compact nucleus where the positive charge is concentrated. Thomson's model is popularly referred to as the "plum pudding model" with the notion that the electrons are distributed uniformly like raisins in a plum pudding. Neither Thomson nor his colleagues ever used this analogy. It seems to have been coined by popular science writers to make the model easier to understand for the layman. The analogy is perhaps misleading because Thomson likened the positive sphere to a liquid rather than a solid since he thought the electrons moved around in it. Significance Thomson's model marks the moment when the development of atomic theory passed from chemists to physicists. While atomic theory was widely accepted by chemists by the end of the 19th century, physicists remained skeptical because the atomic model lacked any properties which concerned their field, such as electric charge, magnetic moment, volume, or absolute mass. Thomson himself was a physicist and his atomic model was a byproduct of his investigations of cathode rays, by which he discovered electrons. Thomson hypothesized that the quantity, arrangement, and motions of electrons in the atom could explain its physical and chemical properties, such as emission spectra, valencies, reactivity, and ionization. He was on the right track, though his approach was based on classical mechanics and he did not have the insight to incorporate quantized energy into it. Background Throughout the 19th century evidence from chemistry and statistical mechanics accumulated that matter was composed of atoms. The structure of the atom was discussed, and by the end of the century the leading model was the vortex theory of the atom, proposed by William Thomson (later Lord Kelvin) in 1867. By 1890, J.J. Thomson had his own version called the "nebular atom" hypothesis, in which atoms were composed of immaterial vortices and suggested similarities between the arrangement of vortices and periodic regularity found among the chemical elements. Thomson's discovery of the electron in 1897 changed his views. Thomson called them "corpuscles" (particles), but they were more commonly called "electrons", the name G. J. Stoney had coined for the "fundamental unit quantity of electricity" in 1891. 
However, even as late as 1899, few scientists believed in subatomic particles. Another emerging scientific theme of the 19th century was the discovery and study of radioactivity. Thomson discovered the electron by studying cathode rays, and in 1900 Henri Becquerel determined that the radiation from uranium, now called beta particles, had the same charge/mass ratio as cathode rays. These beta particles were believed to be electrons travelling at high speed. The particles were used by Thomson to probe atoms to find evidence for his atomic theory. The other form of radiation critical to this era of atomic models was alpha particles. Heavier and slower than beta particles, these were the key tool used by Rutherford to find evidence against Thomson's model. In addition to the emerging atomic theory, the electron, and radiation, the last element of this history was the many studies of atomic spectra published in the late 19th century. Part of the attraction of the vortex model was its possible role in describing the spectral data as vibrational responses to electromagnetic radiation. Neither Thomson's model nor its successor, Rutherford's model, made progress towards understanding atomic spectra. That would have to wait until Niels Bohr built the first quantum-based atom model. Development Thomson's model was the first to assign a specific inner structure to an atom, though his earliest descriptions did not include mathematical formulas. From 1897 through 1913, Thomson proposed a series of increasingly detailed polyelectron models for the atom. His first versions were qualitative, culminating in his 1906 paper and follow-on summaries. Thomson's model changed over the course of its initial publication, finally becoming a model with much more mobility containing electrons revolving in the dense field of positive charge rather than a static structure. Thomson attempted unsuccessfully to reshape his model to account for some of the major spectral lines experimentally known for several elements. 1897 Corpuscles inside atoms In a paper titled Cathode Rays, Thomson demonstrated that cathode rays are not light but made of negatively charged particles which he called corpuscles. He observed that cathode rays can be deflected by electric and magnetic fields, which does not happen with light rays. In a few paragraphs near the end of this long paper Thomson discusses the possibility that atoms were made of these corpuscles, calling them primordial atoms. Thomson believed that the intense electric field around the cathode caused the surrounding gas molecules to split up into their component corpuscles, thereby generating cathode rays. Thomson thus showed evidence that atoms were divisible, though he did not attempt to describe their structure at this point. Thomson notes that he was not the first scientist to propose that atoms are divisible, making reference to William Prout who in 1815 found that the atomic weights of various elements were multiples of hydrogen's atomic weight and hypothesised that all atoms were made of hydrogen atoms fused together. Prout's hypothesis was dismissed by chemists when by the 1830s it was found that some elements seemed to have a non-integer atomic weight—e.g. chlorine has an atomic weight of about 35.45. But the idea continued to intrigue scientists. The discrepancies were eventually explained with the discovery of isotopes in 1912. 
A few months after Thomson's paper appeared, George FitzGerald suggested that the corpuscle identified by Thomson from cathode rays and proposed as parts of an atom was a "free electron", as described by physicist Joseph Larmor and Hendrik Lorentz. While Thomson did not adopt the terminology, the connection convinced other scientists that cathode rays were particles, an important step in their eventual acceptance of an atomic model based on sub-atomic particles. In 1899 Thomson reiterated his atomic model in a paper that showed that negative electricity created by ultraviolet light landing on a metal (known now as the photoelectric effect) has the same mass-to-charge ratio as cathode rays; then he applied his previous method for determining the charge on ions to the negative electric particles created by ultraviolet light. He estimated that the electron's mass was 0.0014 times that of the hydrogen ion (as a fraction: ). In the conclusion of this paper he writes: 1904 Mechanical model of the atom Thomson provided his first detailed description of the atom in his 1904 paper On the Structure of the Atom. Thomson starts with a short description of his model ... the atoms of the elements consist of a number of negatively electrified corpuscles enclosed in a sphere of uniform positive electrification, ... Primarily focused on the electrons, Thomson adopted the positive sphere from Kelvin's atom model proposed a year earlier. He then gives a detailed mechanical analysis of such a system, distributing the electrons uniformly around a ring. The attraction of the positive electrification is balanced by the mutual repulsion of the electrons. His analysis focuses on stability, looking for cases where small changes in position are countered by restoring forces. After discussing his many formulae for stability he turned to analysing patterns in the number of electrons in various concentric rings of stable configurations. These regular patterns, Thomson argued, are analogous to the periodic law of chemistry behind the structure of the periodic table. This concept, that a model based on subatomic particles could account for chemical trends, encouraged interest in Thomson's model and influenced future work even if the details of Thomson's electron assignments turned out to be incorrect. Thomson at this point believed that all the mass of the atom was carried by the electrons. This would mean that even a small atom would have to contain thousands of electrons, and the positive electrification that encapsulated them was without mass. 1905 lecture on electron arrangements In a lecture delivered to the Royal Institution of Great Britain in 1905, Thomson explained that it was too computationally difficult for him to calculate the movements of large numbers of electrons in the positive sphere, so he proposed a practical experiment. This involved magnetised pins pushed into cork discs and set afloat in a basin of water. The pins were oriented such that they repelled each other. Above the centre of the basin was suspended an electromagnet that attracted the pins. The equilibrium arrangement the pins took informed Thomson on what arrangements the electrons in an atom might take. For instance, he observed that while five pins would arrange themselves in a stable pentagon around the centre, six pins could not form a stable hexagon. Instead, one pin would move to the centre and the other five would form a pentagon around the centre pin, and this arrangement was stable. 
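A rough simulation of this floating-magnet analogy: points in a plane repel one another (the magnetised pins) while being pulled toward the origin (the electromagnet). The force laws and constants here are arbitrary modelling choices, not values from Thomson's experiment.

```python
import numpy as np

def relax(n, steps=20_000, lr=1e-3, seed=1):
    """Gradient descent on repulsive pins with a central pull."""
    rng = np.random.default_rng(seed)
    p = rng.normal(scale=0.5, size=(n, 2))
    for _ in range(steps):
        diff = p[:, None, :] - p[None, :, :]
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, np.inf)                     # no self-force
        repel = (diff / dist[..., None]**3).sum(axis=1)    # inverse-square
        p += lr * (repel - p)                              # central pull: -p
    return p

pts = relax(6)
print(np.sort(np.round(np.linalg.norm(pts, axis=1), 2)))
# Typically one point settles near the centre with five on a ring around it,
# echoing Thomson's 5-around-1 observation (descent from a random start can
# occasionally stick in a metastable ring of six).
```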
As he added more pins, they would arrange themselves in concentric rings around the centre. The experiment functioned in two dimensions instead of three, but Thomson inferred the electrons in the atom arranged themselves in concentric shells, and they could move within these shells but did not move from one shell to another except when electrons were added to or subtracted from the atom. 1906 Estimating electrons per atom Before 1906 Thomson considered the atomic weight to be due to the mass of the electrons (which he continued to call "corpuscles"). Based on his own estimates of the electron mass, an atom would need tens of thousands of electrons to account for the mass. In 1906 he used three different methods—X-ray scattering, beta ray absorption, and optical properties of gases—to estimate that "number of corpuscles is not greatly different from the atomic weight". This reduced the number of electrons to tens or at most a couple of hundred, and that in turn meant that the positive sphere in Thomson's model contained most of the mass of the atom. This meant that Thomson's mechanical stability work from 1904 and the comparison to the periodic table were no longer valid. Moreover, the alpha particle, so important to the next advance in atomic theory by Rutherford, would no longer be viewed as an atom containing thousands of electrons. In 1907, Thomson published The Corpuscular Theory of Matter which reviewed his ideas on the atom's structure and proposed further avenues of research. In Chapter 6, he further elaborates on his experiment using magnetised pins in water, providing an expanded table. For instance, if 59 pins were placed in the pool, they would arrange themselves in concentric rings of the order 20-16-13-8-2 (from outermost to innermost). In Chapter 7, Thomson summarised his 1906 results on the number of electrons in an atom. He included one important correction: he replaced the beta-particle analysis with one based on the cathode ray experiments of August Becker, giving a result in better agreement with other approaches to the problem. Experiments by other scientists in this field had shown that atoms contain far fewer electrons than Thomson previously thought. Thomson now believed the number of electrons in an atom was a small multiple of its atomic weight: "the number of corpuscles in an atom of any element is proportional to the atomic weight of the element — it is a multiple, and not a large one, of the atomic weight of the element." This meant that almost all of the atom's mass had to be carried by the positive sphere, whatever it was made of. Thomson in this book estimated that a hydrogen atom is 1,700 times heavier than an electron (the current measurement is 1,837). Thomson noted that no scientist had yet found a positively charged particle smaller than a hydrogen ion. He also wrote that the positive charge of an atom is a multiple of a basic unit of positive charge, equal to the negative charge of an electron. Thomson refused to jump to the conclusion that the basic unit of positive charge has a mass equal to that of the hydrogen ion, arguing that scientists first had to know how many electrons an atom contains. For all he could tell, a hydrogen ion might still contain a few electrons—perhaps two electrons and three units of positive charge. 1910 Multiple scattering Thomson's difficulty with beta scattering in 1906 led him to renewed interest in the topic. He encouraged J. 
Arnold Crowther to experiment with beta scattering through thin foils and, in 1910, Thomson produced a new theory of beta scattering. The two innovations in this paper were the introduction of scattering from the positive sphere of the atom and the analysis that multiple or compound scattering was critical to the final results. This theory and Crowther's experimental results would be confronted by Rutherford's theory and Geiger and Marsden's new experiments with alpha particles. Another innovation in Thomson's 1910 paper was that he modelled how an atom might deflect an incoming beta particle if the positive charge of the atom existed in discrete units of equal but arbitrary size, spread evenly throughout the atom, separated by empty space, with each unit having a positive charge equal to the electron's negative charge. Thomson therefore came close to deducing the existence of the proton, which was something Rutherford eventually did. In Rutherford's model of the atom, the protons are clustered in a very small nucleus, but in Thomson's alternative model, the positive units were spread throughout the atom. Thomson's 1910 scattering model In his 1910 paper "On the Scattering of rapidly moving Electrified Particles", Thomson presented equations that modelled how beta particles scatter in a collision with an atom. His work was based on beta scattering studies by James Crowther. Deflection by the positive sphere Thomson typically assumed the positive charge in the atom was uniformly distributed throughout its volume, encapsulating the electrons. In his 1910 paper, Thomson presented the following equation which isolated the effect of this positive sphere: $\bar\theta_1 = \frac{\pi}{4}\cdot\frac{k q_e q_g}{m v^2 R}$, where k is the Coulomb constant, qe is the charge of the beta particle, qg is the charge of the positive sphere, m is the mass of the beta particle, v is its speed, and R is the radius of the sphere. Because the atom is many thousands of times heavier than the beta particle, no correction for recoil is needed. Thomson did not explain how this equation was developed, but the historian John L. Heilbron provided an educated guess he called a "straight-line" approximation. Consider a beta particle passing through the positive sphere with its initial trajectory at a lateral distance b from the centre. The path is assumed to have a very small deflection and therefore is treated here as a straight line. Inside a sphere of uniformly distributed positive charge the force exerted on the beta particle at any point along its path through the sphere would be directed along the radius with magnitude: $F = \frac{k q_e q_g}{R^3}\,r$. The component of force perpendicular to the trajectory and thus deflecting the path of the particle would be: $F_y = \frac{k q_e q_g}{R^3}\,b$. The lateral change in momentum py is therefore $p_y = F_y\,\frac{L}{v} = \frac{k q_e q_g\,b\,L}{R^3\,v}$. The resulting angular deflection, $\theta$, is given by $\tan\theta = \frac{p_y}{p_x}$, where px is the average horizontal momentum taken to be equal to the incoming momentum $mv$. Since we already know the deflection is very small, we can treat $\tan\theta$ as being equal to $\theta$. To find the average deflection angle $\bar\theta_1$, the angle for each value of b and the corresponding chord length L are added across the face of the sphere, then divided by the cross-sectional area: $\bar\theta_1 = \frac{1}{\pi R^2}\int_0^R \theta\,2\pi b\,db$, with $L = 2\sqrt{R^2 - b^2}$ per the Pythagorean theorem. Evaluating the integral gives $\bar\theta_1 = \frac{\pi}{4}\cdot\frac{k q_e q_g}{m v^2 R}$. This matches Thomson's formula in his 1910 paper. Deflection by the electrons Thomson modelled the collisions between a beta particle and the electrons of an atom by calculating the deflection of one collision then multiplying by a factor for the number of collisions as the particle crosses the atom. For the electrons within an arbitrary distance s of the beta particle's path, their mean distance will be . 
Therefore, the average deflection per electron will be where qe is the elementary charge, k is the Coulomb constant, m and v are the mass and velocity of the beta particle. The factor for the number of collisions was known to be the square root of the number of possible electrons along the path. The number of electrons depends upon the density of electrons along the particle path times the path length L. The net deflection caused by all the electrons within this arbitrary cylinder of effect around the beta particle's path is where N0 is the number of electrons per unit volume and $\pi s^2 L$ is the volume of this cylinder. Since Thomson calculated the deflection would be very small, he treats L as a straight line. Therefore $L = 2\sqrt{R^2 - b^2}$, where b is the distance of this chord from the centre. The mean of is given by the integral We can now replace in the equation for to obtain the mean deflection : where N is the number of electrons in the atom, equal to $\tfrac{4}{3}\pi R^3 N_0$. Deflection by the positive charge in discrete units In his 1910 paper, Thomson proposed an alternative model in which the positive charge exists in discrete units separated by empty space, with those units being evenly distributed throughout the atom's volume. In this concept, the average scattering angle of the beta particle is given by: where σ is the ratio of the volume occupied by the positive charge to the volume of the whole atom. Thomson did not explain how he arrived at this equation. Net deflection To find the combined effect of the positive charge and the electrons on the beta particle's path, Thomson provided the following equation: Demise of the plum pudding model Thomson probed the structure of atoms through beta particle scattering, whereas his former student Ernest Rutherford was interested in alpha particle scattering. Beta particles are electrons emitted by radioactive decay, whereas alpha particles are essentially helium nuclei, also emitted in the process of decay. Alpha particles have considerably more momentum than beta particles and Rutherford found that matter scatters alpha particles in ways that Thomson's plum pudding model could not predict. Between 1908 and 1913, Ernest Rutherford, Hans Geiger, and Ernest Marsden collaborated on a series of experiments in which they bombarded thin metal foils with a beam of alpha particles and measured the intensity versus scattering angle of the particles. They found that the metal foil could scatter alpha particles by more than 90°. This should not have been possible according to the Thomson model: the scattering into large angles should have been negligible. The odds of a beta particle being scattered by more than 90° under such circumstances are astronomically small, and since alpha particles typically have much more momentum than beta particles, their deflection should be smaller still. The Thomson models simply could not produce electrostatic forces of sufficient strength to cause such large deflection. The charges in the Thomson model were too diffuse. This led Rutherford to discard the Thomson model in favour of a new model where the positive charge of the atom is concentrated in a tiny nucleus. Rutherford went on to make more compelling discoveries. In Thomson's model, the positive charge sphere was just an abstract component, but Rutherford found something concrete to attribute the positive charge to: particles he dubbed "protons". Whereas Thomson believed that the electron count was roughly correlated to the atomic weight, Rutherford showed that (in a neutral atom) it is exactly equal to the atomic number. 
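Returning to the straight-line estimate for the positive sphere above, a short numerical check of the area-weighted average deflection, using unit-free test values (assumptions of this sketch, not Thomson's numbers):

```python
import numpy as np

# Average theta(b) over the sphere's cross-section and compare with the
# closed form (pi/4) * k*qe*qg / (m v^2 R) derived in the text.
k, qe, qg, m, v, R = 1.0, 1.0, 1.0, 1.0, 100.0, 1.0

def theta(b):
    chord = 2.0 * np.sqrt(R**2 - b**2)               # path length inside
    p_y = (k * qe * qg * b / R**3) * (chord / v)     # sideways impulse
    return p_y / (m * v)                             # small-angle deflection

b = np.linspace(0.0, R, 20_001)
avg_numeric = np.trapz(theta(b) * 2 * np.pi * b, b) / (np.pi * R**2)
avg_closed = (np.pi / 4) * k * qe * qg / (m * v**2 * R)

print(avg_numeric, avg_closed)    # agree to several decimal places
```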
Thomson hypothesised that the arrangement of the electrons in the atom somehow determined the spectral lines of a chemical element. He was on the right track, but it had nothing to do with how electrons circulated in a sphere of positive charge. Scientists eventually discovered that it had to do with how electrons absorb and release energy in discrete quantities, moving through energy levels which correspond to emission and absorption spectra. Thomson had not incorporated quantum mechanics, at the time a very new field of physics, into his atomic model. Niels Bohr and Erwin Schrödinger later incorporated quantum mechanics into the atomic model. Rutherford's nuclear model Rutherford's 1911 paper on alpha particle scattering showed that Thomson's scattering model could not explain the large angle scattering, and it showed that multiple scattering was not necessary to explain the data. However, in the years immediately following its publication, few scientists took note. The scattering model predictions were not considered definitive evidence against Thomson's plum pudding model. Thomson and Rutherford had pioneered scattering as a technique to probe atoms, but its reliability and value were unproven. Before Rutherford's paper, the alpha particle was considered an atom, not a compact mass. It was not clear why it should be a good probe. Moreover, Rutherford's paper did not discuss the atomic electrons, which are vital to practical problems like chemistry or atomic spectroscopy. Rutherford's nuclear model would only become widely accepted after the work of Niels Bohr. Mathematical Thomson problem The Thomson problem in mathematics seeks the optimal distribution of equal point charges on the surface of a sphere. Unlike the original Thomson atomic model, the sphere in this purely mathematical model carries no charge of its own, and mutual repulsion causes all the point charges to move to the surface of the sphere. There is still no general solution to Thomson's original problem of how electrons arrange themselves within a sphere of positive charge. Origin of the nickname The first known writer to compare Thomson's model to a plum pudding was an anonymous reporter in an article for the British pharmaceutical magazine The Chemist and Druggist in August 1906. The analogy was never used by Thomson or his colleagues. It seems to have been a conceit of popular science writers to make the model easier to understand for the layman. References Bibliography Foundational quantum physics Atoms Electron Periodic table Obsolete theories in physics 1904 in science
Plum pudding model
[ "Physics", "Chemistry" ]
4,521
[ "Electron", "Periodic table", "Molecular physics", "Theoretical physics", "Foundational quantum physics", "Quantum mechanics", "Atoms", "Matter", "Obsolete theories in physics" ]
2,862
https://en.wikipedia.org/wiki/AI-complete
In the field of artificial intelligence (AI), tasks that are hypothesized to require artificial general intelligence to solve are informally known as AI-complete or AI-hard. Calling a problem AI-complete reflects the belief that it cannot be solved by a simple specific algorithm. In the past, problems supposed to be AI-complete included computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem. AI-complete problems were notably considered useful for testing the presence of humans, as CAPTCHAs aim to do, and in computer security to circumvent brute-force attacks. History The term was coined by Fanya Montalvo by analogy with NP-complete and NP-hard in complexity theory, which formally describes the most famous class of difficult problems. Early uses of the term are in Erik Mueller's 1987 PhD dissertation and in Eric Raymond's 1991 Jargon File. Expert systems, which were popular in the 1980s, were able to solve very simple and/or restricted versions of AI-complete problems, but never in their full generality. When AI researchers attempted to "scale up" their systems to handle more complicated, real-world situations, the programs tended to become excessively brittle without commonsense knowledge or a rudimentary understanding of the situation: they would fail as unexpected circumstances outside their original problem context began to appear. When human beings are dealing with new situations in the world, they are helped by their awareness of the general context: they know what the things around them are, why they are there, what they are likely to do and so on. They can recognize unusual situations and adjust accordingly. Expert systems lacked this adaptability and were brittle when facing new situations. DeepMind published a paper in May 2022 in which they trained a single model to do several things at the same time. The model, named Gato, can "play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens." Similarly, some tasks once considered to be AI-complete, like machine translation, are among the capabilities of large language models. AI-complete problems AI-complete problems have been hypothesized to include: AI peer review (composite natural language understanding, automated reasoning, automated theorem proving, formalized logic expert system) Bongard problems Computer vision (and subproblems such as object recognition) Natural language understanding (and subproblems such as text mining, machine translation, and word-sense disambiguation) Autonomous driving Dealing with unexpected circumstances while solving any real world problem, whether navigation, planning, or even the kind of reasoning done by expert systems. Formalization Computational complexity theory deals with the relative computational difficulty of computable functions. By definition, it does not cover problems whose solution is unknown or has not been characterized formally. Since many AI problems have no formalization yet, conventional complexity theory does not enable a formal definition of AI-completeness. Research Roman Yampolskiy suggests that a problem is AI-Complete if it has two properties: It is in the set of AI problems (Human Oracle-solvable). Any AI problem can be converted into it by some polynomial time algorithm. 
On the other hand, a problem is AI-Hard if and only if there is an AI-Complete problem that is polynomial time Turing-reducible to it. This also gives as a consequence the existence of AI-Easy problems, which are solvable in polynomial time by a deterministic Turing machine with an oracle for some problem. Yampolskiy has also hypothesized that the Turing Test is a defining feature of AI-completeness. Groppe and Jain classify problems which require artificial general intelligence to reach human-level machine performance as AI-complete, while only restricted versions of AI-complete problems can be solved by current AI systems. For Šekrst, getting a polynomial solution to AI-complete problems would not necessarily be equal to solving artificial general intelligence, while the lack of computational complexity research remains a limiting factor towards achieving it. For Kwee-Bintoro and Velez, solving AI-complete problems would have strong repercussions on society. See also ASR-complete List of unsolved problems in computer science Synthetic intelligence References Artificial intelligence Computational problems
AI-complete
[ "Mathematics" ]
908
[ "Mathematical problems", "Computational problems" ]
2,889
https://en.wikipedia.org/wiki/Amorphous%20solid
In condensed matter physics and materials science, an amorphous solid (or non-crystalline solid) is a solid that lacks the long-range order that is characteristic of a crystal. The terms "glass" and "glassy solid" are sometimes used synonymously with amorphous solid; however, these terms refer specifically to amorphous materials that undergo a glass transition. Examples of amorphous solids include glasses, metallic glasses, and certain types of plastics and polymers. Etymology The term comes from the Greek a ("without") and morphé ("shape, form"). Structure Amorphous materials have an internal structure of molecular-scale structural blocks that can be similar to the basic structural units in the crystalline phase of the same compound. Unlike in crystalline materials, however, no long-range regularity exists: amorphous materials cannot be described by the repetition of a finite unit cell. Statistical measures, such as the atomic density function and radial distribution function, are more useful in describing the structure of amorphous solids. Although amorphous materials lack long range order, they exhibit localized order on small length scales. By convention, short range order extends only to the nearest neighbor shell, typically only 1-2 atomic spacings. Medium range order may extend beyond the short range order by 1-2 nm. Fundamental properties of amorphous solids Glass transition at high temperatures The freezing from the liquid state into an amorphous solid, known as the glass transition, is considered one of the important unsolved problems of physics. Universal low-temperature properties of amorphous solids At very low temperatures (below 1-10 K), a large family of amorphous solids shows various similar low-temperature properties. Although there are various theoretical models, neither the glass transition nor the low-temperature properties of glassy solids are well understood on the fundamental physics level. The study of amorphous solids is an important area of condensed matter physics, aiming to understand these substances both at the high temperatures of the glass transition and at low temperatures towards absolute zero. From the 1970s, low-temperature properties of amorphous solids were studied experimentally in great detail. For all of these substances, specific heat has a (nearly) linear dependence as a function of temperature, and thermal conductivity has nearly quadratic temperature dependence. These properties are conventionally called anomalous, being very different from the properties of crystalline solids. On the phenomenological level, many of these properties were described by a collection of tunnelling two-level systems. Nevertheless, the microscopic theory of these properties is still missing after more than 50 years of research. Remarkably, the dimensionless internal friction of these materials is nearly universal. This quantity is a dimensionless ratio (up to a numerical constant) of the phonon wavelength to the phonon mean free path. Since the theory of tunnelling two-level states (TLSs) does not address the origin of the density of TLSs, this theory cannot explain the universality of internal friction, which in turn is proportional to the density of scattering TLSs. The theoretical significance of this important and unsolved problem was highlighted by Anthony Leggett. Nano-structured materials Amorphous materials will have some degree of short-range order at the atomic-length scale due to the nature of intermolecular chemical bonding. 
Furthermore, in very small crystals, short-range order encompasses a large fraction of the atoms; nevertheless, relaxation at the surface, along with interfacial effects, distorts the atomic positions and decreases structural order. Even the most advanced structural characterization techniques, such as X-ray diffraction and transmission electron microscopy, can have difficulty distinguishing amorphous and crystalline structures at short-size scales. Characterization of amorphous solids Due to the lack of long-range order, standard crystallographic techniques are often inadequate in determining the structure of amorphous solids. A variety of electron, X-ray, and computation-based techniques have been used to characterize amorphous materials. Multi-modal analysis is very common for amorphous materials. X-ray and neutron diffraction Unlike crystalline materials, which exhibit strong Bragg diffraction, the diffraction patterns of amorphous materials are characterized by broad and diffuse peaks. As a result, detailed analysis and complementary techniques are required to extract real space structural information from the diffraction patterns of amorphous materials. It is useful to obtain diffraction data from both X-ray and neutron sources as they have different scattering properties and provide complementary data. Pair distribution function analysis can be performed on diffraction data to determine the probability of finding a pair of atoms separated by a certain distance. Another type of analysis that is done with diffraction data of amorphous materials is radial distribution function analysis, which measures the number of atoms found at varying radial distances away from an arbitrary reference atom. From these techniques, the local order of an amorphous material can be elucidated. X-ray absorption fine-structure spectroscopy X-ray absorption fine-structure spectroscopy is an atomic-scale probe, making it useful for studying materials lacking in long-range order. Spectra obtained using this method provide information on the oxidation state, coordination number, and species surrounding the atom in question, as well as the distances at which they are found. Atomic electron tomography The atomic electron tomography technique is performed in transmission electron microscopes capable of reaching sub-Angstrom resolution. A collection of 2D images taken at numerous different tilt angles is acquired from the sample in question and then used to reconstruct a 3D image. After image acquisition, a significant amount of processing must be done to correct for issues such as drift, noise, and scan distortion. High-quality analysis and processing using atomic electron tomography results in a 3D reconstruction of an amorphous material detailing the atomic positions of the different species that are present. Fluctuation electron microscopy Fluctuation electron microscopy is another transmission electron microscopy-based technique that is sensitive to the medium-range order of amorphous materials. Structural fluctuations arising from different forms of medium-range order can be detected with this method. Fluctuation electron microscopy experiments can be done in conventional or scanning transmission electron microscope mode. Computational techniques Simulation and modeling techniques are often combined with experimental methods to characterize structures of amorphous materials. Commonly used computational techniques include density functional theory, molecular dynamics, and reverse Monte Carlo. 
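The radial distribution function analysis described above is straightforward to sketch in code. The following Python fragment is a minimal illustration of my own, not taken from any cited software; the function name, the cubic periodic box, and the random test configuration are all assumptions made for the example.

```python
import numpy as np

def radial_distribution(positions, box_length, r_max, n_bins=100):
    """Histogram of pair distances, normalised to the ideal-gas expectation.

    positions: (N, 3) array of atomic coordinates in a cubic periodic box.
    r_max must not exceed half the box length, or the minimum-image
    convention below breaks down. Returns bin centres r and g(r).
    """
    n = len(positions)
    # Minimum-image pair separations under periodic boundary conditions
    diff = positions[:, None, :] - positions[None, :, :]
    diff -= box_length * np.round(diff / box_length)
    dist = np.linalg.norm(diff, axis=-1)
    dist = dist[np.triu_indices(n, k=1)]           # unique pairs only

    counts, edges = np.histogram(dist, bins=n_bins, range=(0.0, r_max))
    r = 0.5 * (edges[:-1] + edges[1:])
    shell_vol = 4.0 * np.pi * r**2 * np.diff(edges)
    density = n / box_length**3
    ideal = 0.5 * n * density * shell_vol          # expected pair counts
    return r, counts / ideal

# Example: a fully random configuration gives g(r) fluctuating around 1
rng = np.random.default_rng(0)
r, g = radial_distribution(rng.uniform(0.0, 10.0, size=(500, 3)), 10.0, r_max=5.0)
```

For a random configuration g(r) hovers around 1; for a real amorphous solid the first few neighbor shells appear as broad peaks that decay with distance, in contrast to the sharp long-range peaks of a crystal.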
Uses and observations Amorphous thin films Amorphous phases are important constituents of thin films. Thin films are solid layers of a few nanometres to tens of micrometres thickness that are deposited onto a substrate. So-called structure zone models were developed to describe the microstructure of thin films as a function of the homologous temperature Th, which is the ratio of deposition temperature to melting temperature. According to these models, a necessary condition for the occurrence of amorphous phases is that Th has to be smaller than 0.3; that is, the deposition temperature must be below 30% of the melting temperature. Superconductivity Regarding their applications, amorphous metallic layers played an important role in the discovery of superconductivity in amorphous metals by Buckel and Hilsch. The superconductivity of amorphous metals, including amorphous metallic thin films, is now understood to be due to phonon-mediated Cooper pairing. The role of structural disorder can be rationalized based on the strong-coupling Eliashberg theory of superconductivity. Thermal protection Amorphous solids typically exhibit higher localization of heat carriers compared to crystalline solids, giving rise to low thermal conductivity. Products for thermal protection, such as thermal barrier coatings and insulation, rely on materials with ultralow thermal conductivity. Technological uses Today, optical coatings made from TiO2, SiO2, Ta2O5 etc. (and combinations of these) in most cases consist of amorphous phases of these compounds. Much research is carried out into thin amorphous films as gas-separating membrane layers. The technologically most important thin amorphous film is probably represented by the few nm thin SiO2 layers serving as the insulator above the conducting channel of a metal-oxide semiconductor field-effect transistor (MOSFET). Also, hydrogenated amorphous silicon (a-Si:H) is of technical significance for thin-film solar cells. Pharmaceutical use In the pharmaceutical industry, some amorphous drugs have been shown to offer higher bioavailability than their crystalline counterparts as a result of the higher solubility of the amorphous phase. However, certain compounds can undergo precipitation in their amorphous form in vivo and can then decrease each other's bioavailability if administered together. Studies of GDC-0810 amorphous solid dispersions (ASDs) show a strong interrelationship between microstructure, physical properties, and dissolution performance. In soils Amorphous materials in soil strongly influence bulk density, aggregate stability, plasticity, and water holding capacity of soils. The low bulk density and high void ratios are mostly due to glass shards and other porous minerals not becoming compacted. Andisol soils contain the highest amounts of amorphous materials. Phase Amorphous phases were a phenomenon of particular interest for the study of thin-film growth. The growth of polycrystalline films is often preceded by an initial amorphous layer, the thickness of which may amount to only a few nm. The most investigated example is represented by the unoriented molecules of thin polycrystalline silicon films. Wedge-shaped polycrystals were identified by transmission electron microscopy to grow out of the amorphous phase only after the latter has exceeded a certain thickness, the precise value of which depends on deposition temperature, background pressure, and various other process parameters. 
The phenomenon has been interpreted in the framework of Ostwald's rule of stages, which predicts that the formation of phases proceeds, with increasing condensation time, towards increasing stability. Notes References Further reading Phases of matter Unsolved problems in physics
Amorphous solid
[ "Physics", "Chemistry" ]
2,061
[ "Amorphous solids", "Unsolved problems in physics", "Phases of matter", "Matter" ]
2,961
https://en.wikipedia.org/wiki/Convex%20uniform%20honeycomb
In geometry, a convex uniform honeycomb is a uniform tessellation which fills three-dimensional Euclidean space with non-overlapping convex uniform polyhedral cells. Twenty-eight such honeycombs are known: the familiar cubic honeycomb and 7 truncations thereof; the alternated cubic honeycomb and 4 truncations thereof; 10 prismatic forms based on the uniform plane tilings (11 if including the cubic honeycomb); 5 modifications of some of the above by elongation and/or gyration. They can be considered the three-dimensional analogue of the uniform tilings of the plane. The Voronoi diagram of any lattice forms a convex uniform honeycomb in which the cells are zonohedra. History 1900: Thorold Gosset enumerated the list of semiregular convex polytopes with regular cells (Platonic solids) in his publication On the Regular and Semi-Regular Figures in Space of n Dimensions, including one regular cubic honeycomb, and two semiregular forms with tetrahedra and octahedra. 1905: Alfredo Andreini enumerated 25 of these tessellations. 1991: Norman Johnson's manuscript Uniform Polytopes identified the list of 28. 1994: Branko Grünbaum, in his paper Uniform tilings of 3-space, also independently enumerated all 28, after discovering errors in Andreini's publication. He found that the 1905 paper, which listed 25, had 1 wrong and 4 missing. Grünbaum states in this paper that Norman Johnson deserves priority for achieving the same enumeration in 1991. He also mentions that I. Alexeyev of Russia had contacted him regarding a putative enumeration of these forms, but that Grünbaum was unable to verify this at the time. 2006: George Olshevsky, in his manuscript Uniform Panoploid Tetracombs, along with repeating the derived list of 11 convex uniform tilings and 28 convex uniform honeycombs, presents a further derived list of 143 convex uniform tetracombs (honeycombs of uniform 4-polytopes in 4-space). Only 14 of the convex uniform polyhedra appear in these patterns: three of the five Platonic solids (the tetrahedron, cube, and octahedron), six of the thirteen Archimedean solids (the ones with reflective tetrahedral or octahedral symmetry), and five of the infinite family of prisms (the 3-, 4-, 6-, 8-, and 12-gonal ones; the 4-gonal prism duplicates the cube). The icosahedron, snub cube, and square antiprism appear in some alternations, but those honeycombs cannot be realised with all edges unit length. Names This set can be called the regular and semiregular honeycombs. It has been called the Archimedean honeycombs by analogy with the convex uniform (non-regular) polyhedra, commonly called Archimedean solids. Recently, Conway suggested naming the set the Architectonic tessellations and the dual honeycombs the Catoptric tessellations. The individual honeycombs are listed with names given to them by Norman Johnson. (Some of the terms used below are defined in Uniform 4-polytope#Geometric derivations for 46 nonprismatic Wythoffian uniform 4-polytopes) For cross-referencing, they are given with list indices from Andreini (1–22), Williams (1–2, 9–19), Johnson (11–19, 21–25, 31–34, 41–49, 51–52, 61–65), and Grünbaum (1–28). Coxeter uses δ4 for a cubic honeycomb, hδ4 for an alternated cubic honeycomb, qδ4 for a quarter cubic honeycomb, with subscripts for other forms based on the ring patterns of the Coxeter diagram. 
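The earlier statement that the Voronoi diagram of any lattice forms a convex uniform honeycomb can be illustrated computationally. The sketch below is a hypothetical aside of mine (the lattice block size and the use of SciPy are my choices, not anything from the sources listed here): it builds a patch of the body-centred cubic lattice and extracts the Voronoi cell of the central point, which is a truncated octahedron, the cell of the bitruncated cubic honeycomb.

```python
import numpy as np
from scipy.spatial import Voronoi

# Build a small block of BCC lattice points (corner + body-centre sites)
pts = []
for i in range(-2, 3):
    for j in range(-2, 3):
        for k in range(-2, 3):
            pts.append((i, j, k))
            pts.append((i + 0.5, j + 0.5, k + 0.5))
pts = np.array(pts)

vor = Voronoi(pts)
centre = int(np.argmin(np.linalg.norm(pts, axis=1)))   # the point at the origin
region = vor.regions[vor.point_region[centre]]
assert -1 not in region        # cell is bounded: the origin is well interior
verts = vor.vertices[region]

# A truncated octahedron has 24 vertices
print(len(verts))              # -> 24
```

Applying the same construction to the simple cubic lattice returns the cubic honeycomb itself, with 8 vertices per cell.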
Compact Euclidean uniform tessellations (by their infinite Coxeter group families) The fundamental infinite Coxeter groups for 3-space are: The C̃3, [4,3,4], cubic, (8 unique forms plus one alternation) The B̃3, [4,31,1], alternated cubic, (11 forms, 3 new) The cyclic group Ã3, [(3,3,3,3)] or [3[4]], (5 forms, one new) There is a correspondence between all three families. Removing one mirror from C̃3 produces B̃3, and removing one mirror from B̃3 produces Ã3. This allows multiple constructions of the same honeycombs. If cells are colored based on unique positions within each Wythoff construction, these different symmetries can be shown. In addition there are 5 special honeycombs which don't have pure reflectional symmetry and are constructed from reflectional forms with elongation and gyration operations. The total number of unique honeycombs above is 18. The prismatic stacks from infinite Coxeter groups for 3-space are: The C̃2×Ĩ1, [4,4,2,∞] prismatic group, (2 new forms) The G̃2×Ĩ1, [6,3,2,∞] prismatic group, (7 unique forms) The Ã2×Ĩ1, [(3,3,3),2,∞] prismatic group, (No new forms) The Ĩ1×Ĩ1×Ĩ1, [∞,2,∞,2,∞] prismatic group, (These all become a cubic honeycomb) In addition there is one special elongated form of the triangular prismatic honeycomb. The total number of unique prismatic honeycombs above (excluding the cubic counted previously) is 10. Combining these counts, 18 and 10, gives the total of 28 uniform honeycombs. The C̃3, [4,3,4] group (cubic) The regular cubic honeycomb, represented by Schläfli symbol {4,3,4}, offers seven unique derived uniform honeycombs via truncation operations. (One redundant form, the runcinated cubic honeycomb, is included for completeness though identical to the cubic honeycomb.) The reflectional symmetry is the affine Coxeter group [4,3,4]. There are four index 2 subgroups that generate alternations: [1+,4,3,4], [(4,3,4,2+)], [4,3+,4], and [4,3,4]+, with the first two generating repeated forms, and the last two being nonuniform. B̃3, [4,31,1] group The B̃3, [4,31,1] group offers 11 derived forms via truncation operations, four being unique uniform honeycombs. There are 3 index 2 subgroups that generate alternations: [1+,4,31,1], [4,(31,1)+], and [4,31,1]+. The first generates a repeated honeycomb, and the last two are nonuniform but included for completeness. The honeycombs from this group are called alternated cubic because the first form can be seen as a cubic honeycomb with alternate vertices removed, reducing cubic cells to tetrahedra and creating octahedron cells in the gaps. Nodes are indexed left to right as 0,1,0',3 with 0' being below and interchangeable with 0. The alternate cubic names given are based on this ordering. Ã3, [3[4]] group There are 5 forms constructed from the Ã3, [3[4]] Coxeter group, of which only the quarter cubic honeycomb is unique. There is one index 2 subgroup [3[4]]+ which generates the snub form, which is not uniform, but included for completeness. Nonwythoffian forms (gyrated and elongated) Three more uniform honeycombs are generated by breaking one or another of the above honeycombs where its faces form a continuous plane, then rotating alternate layers by 60 or 90 degrees (gyration) and/or inserting a layer of prisms (elongation). The elongated and gyroelongated alternated cubic tilings have the same vertex figure, but are not alike. In the elongated form, each prism meets a tetrahedron at one triangular end and an octahedron at the other. In the gyroelongated form, prisms that meet tetrahedra at both ends alternate with prisms that meet octahedra at both ends. 
The gyroelongated triangular prismatic tiling has the same vertex figure as one of the plain prismatic tilings; the two may be derived from the gyrated and plain triangular prismatic tilings, respectively, by inserting layers of cubes. Prismatic stacks Eleven prismatic tilings are obtained by stacking the eleven uniform plane tilings, shown below, in parallel layers. (One of these honeycombs is the cubic, shown above.) The vertex figure of each is an irregular bipyramid whose faces are isosceles triangles. The C̃2×Ĩ1(∞), [4,4,2,∞], prismatic group There are only 3 unique honeycombs from the square tiling, but all 6 tiling truncations are listed below for completeness, and tiling images are shown by colors corresponding to each form. The G̃2×Ĩ1(∞), [6,3,2,∞] prismatic group Enumeration of Wythoff forms All nonprismatic Wythoff constructions by Coxeter groups are given below, along with their alternations. Uniform solutions are indexed with Branko Grünbaum's listing. Green backgrounds are shown on repeated honeycombs, with the relations expressed in the extended symmetry diagrams. Examples The alternated cubic honeycomb is of special importance since its vertices form a cubic close-packing of spheres. The space-filling truss of packed octahedra and tetrahedra was apparently first discovered by Alexander Graham Bell and independently re-discovered by Buckminster Fuller (who called it the octet truss and patented it in the 1940s). Octet trusses are now among the most common types of truss used in construction. Frieze forms If cells are allowed to be uniform tilings, more uniform honeycombs can be defined: Families: C̃2×A1: [4,4,2] Cubic slab honeycombs (3 forms) G̃2×A1: [6,3,2] Tri-hexagonal slab honeycombs (8 forms) Ã2×A1: [(3,3,3),2] Triangular slab honeycombs (No new forms) Ĩ1×A1×A1: [∞,2,2] Cubic column honeycombs (1 form) I2(p)×Ĩ1: [p,2,∞] Polygonal column honeycombs (analogous to duoprisms: these look like a single infinite tower of p-gonal prisms, with the remaining space filled with apeirogonal prisms) Ĩ1×Ĩ1×A1: [∞,2,∞,2] (same as the cubic slab honeycomb family [4,4,2]) The first two forms shown above are semiregular (uniform with only regular facets), and were listed by Thorold Gosset in 1900 respectively as the 3-ic semi-check and tetroctahedric semi-check. Scaliform honeycomb A scaliform honeycomb is vertex-transitive, like a uniform honeycomb, with regular polygon faces, while cells and higher elements are only required to be orbiforms, equilateral, with their vertices lying on hyperspheres. For 3D honeycombs, this allows a subset of Johnson solids along with the uniform polyhedra. Some scaliforms can be generated by an alternation process, leaving, for example, pyramid and cupola gaps. Hyperbolic forms There are 9 Coxeter group families of compact uniform honeycombs in hyperbolic 3-space, generated as Wythoff constructions, and represented by ring permutations of the Coxeter-Dynkin diagrams for each family. From these 9 families, there are a total of 76 unique honeycombs generated: [3,5,3]: 9 forms [5,3,4]: 15 forms [5,3,5]: 9 forms [5,31,1]: 11 forms (7 overlap with the [5,3,4] family, 4 are unique) [(4,3,3,3)]: 9 forms [(4,3,4,3)]: 6 forms [(5,3,3,3)]: 9 forms [(5,3,4,3)]: 9 forms [(5,3,5,3)]: 6 forms Several non-Wythoffian forms outside the list of 76 are known; it is not known how many there are. Paracompact hyperbolic forms There are also 23 paracompact Coxeter groups of rank 4. 
These families can produce uniform honeycombs with unbounded facets or vertex figures, including ideal vertices at infinity. References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, (2008) The Symmetries of Things, (Chapter 21, Naming the Archimedean and Catalan polyhedra and tilings, Architectonic and Catoptric tessellations, p 292–298, includes all the nonprismatic forms) Branko Grünbaum, (1994) Uniform tilings of 3-space. Geombinatorics 4, 49–56. Norman Johnson (1991) Uniform Polytopes, Manuscript (Chapter 5: Polyhedra packing and space filling) Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380–407, MR 2,10] (1.9 Uniform space-fillings) A. Andreini, (1905) Sulle reti di poliedri regolari e semiregolari e sulle corrispondenti reti correlative (On the regular and semiregular nets of polyhedra and on the corresponding correlative nets), Mem. Società Italiana della Scienze, Ser. 3, 14, 75–129. PDF D. M. Y. Sommerville, (1930) An Introduction to the Geometry of n Dimensions. New York, E. P. Dutton. 196 pp. (Dover Publications edition, 1958) Chapter X: The Regular Polytopes Chapter 5. Joining polyhedra Crystallography of Quasicrystals: Concepts, Methods and Structures by Walter Steurer, Sofia Deloudi (2009), p. 54–55. 12 packings of 2 or more uniform polyhedra with cubic symmetry External links Uniform Honeycombs in 3-Space VRML models Elementary Honeycombs Vertex transitive space filling honeycombs with non-uniform cells. Uniform partitions of 3-space, their relatives and embedding, 1999 The Uniform Polyhedra Virtual Reality Polyhedra The Encyclopedia of Polyhedra octet truss animation Review: A. F. Wells, Three-dimensional nets and polyhedra, H. S. M. Coxeter (Source: Bull. Amer. Math. Soc. Volume 84, Number 3 (1978), 466–470.) Honeycombs (geometry)
Convex uniform honeycomb
[ "Physics", "Chemistry", "Materials_science" ]
3,284
[ "Tessellation", "Crystallography", "Honeycombs (geometry)", "Symmetry" ]
3,211
https://en.wikipedia.org/wiki/Atom%20probe
The atom probe was introduced at the 14th Field Emission Symposium in 1967 by Erwin Wilhelm Müller and J. A. Panitz. It combined a field ion microscope with a mass spectrometer having a single particle detection capability and, for the first time, an instrument could “... determine the nature of one single atom seen on a metal surface and selected from neighboring atoms at the discretion of the observer”. Atom probes are unlike conventional optical or electron microscopes in that the magnification comes from a highly curved electric field, rather than from the manipulation of radiation paths. The method is destructive in nature, removing ions from a sample surface in order to image and identify them, generating magnifications sufficient to observe individual atoms as they are removed from the sample surface. Through coupling of this magnification method with time of flight mass spectrometry, ions evaporated by application of electric pulses can have their mass-to-charge ratio computed. Through successive evaporation of material, layers of atoms are removed from a specimen, allowing for probing not only of the surface, but also through the material itself. Computer methods are used to rebuild a three-dimensional view of the sample, prior to it being evaporated, providing atomic scale information on the structure of a sample, as well as information on the atomic species present. The instrument allows the three-dimensional reconstruction of up to billions of atoms from a sharp tip (corresponding to specimen volumes of 10,000-10,000,000 nm3). Overview Atom probe samples are shaped to implicitly provide a highly curved electric potential to induce the resultant magnification, as opposed to direct use of lensing, such as via magnetic lenses. Furthermore, in normal operation (as opposed to field ionization modes) the atom probe does not utilize a secondary source to probe the sample. Rather, the sample is evaporated in a controlled manner (field evaporation) and the evaporated ions are impacted onto a detector, which is typically 10 to 100 cm away. The samples are required to have a needle geometry and are produced by techniques similar to TEM sample preparation: electropolishing, or focused ion beam methods. Since 2006, commercial systems with laser pulsing have become available and this has expanded applications from metallic-only specimens into semiconducting, insulating (such as ceramic), and even geological materials. Preparation is done, often by hand, to manufacture a tip radius sufficient to induce a high electric field, with radii on the order of 100 nm. To conduct an atom probe experiment, a very sharp needle-shaped specimen is placed in an ultra high vacuum chamber. After introduction into the vacuum system, the sample is reduced to cryogenic temperatures (typically 20-100 K) and manipulated such that the needle's point is aimed towards an ion detector. A high voltage is applied to the specimen, and either a laser pulse is applied to the specimen or a voltage pulse (typically 1-2 kV) with pulse repetition rates in the hundreds of kilohertz range is applied to a counter electrode. The application of the pulse to the sample allows for individual atoms at the sample surface to be ejected as an ion from the sample surface at a known time. Typically the pulse amplitude and the high voltage on the specimen are computer controlled to encourage only one atom to ionize at a time, but multiple ionizations are possible. 
The delay between application of the pulse and detection of the ion(s) at the detector allows for the computation of a mass-to-charge ratio. Whilst the uncertainty in the atomic mass computed by time-of-flight methods in atom probe is sufficiently small to allow for detection of individual isotopes within a material, this uncertainty may still, in some cases, confound definitive identification of atomic species. Effects such as superposition of differing ions with multiple electrons removed, or the presence of complex species formation during evaporation, may cause two or more species to have sufficiently close times of flight to make definitive identification impossible. History Field ion microscopy Field ion microscopy is a modification of field emission microscopy where a stream of tunneling electrons is emitted from the apex of a sharp needle-like tip cathode when subjected to a sufficiently high electric field (~3-6 V/nm). The needle is oriented towards a phosphor screen to create a projected image of the work function at the tip apex. The image resolution is limited to 2-2.5 nm, due to quantum mechanical effects and lateral variations in the electron velocity. In field ion microscopy, the tip is cooled by a cryogen and its polarity is reversed. When an imaging gas (usually hydrogen or helium) is introduced at low pressures (< 0.1 Pascal), gas ions in the high electric field at the tip apex are field ionized and produce a projected image of protruding atoms at the tip apex. The image resolution is determined primarily by the temperature of the tip, but even at 78 Kelvin atomic resolution is achieved. 10-cm Atom Probe The 10-cm Atom Probe, invented in 1973 by J. A. Panitz, was a “new and simple atom probe which permits rapid, in depth species identification or the more usual atom-by atom analysis provided by its predecessors ... in an instrument having a volume of less than two liters in which tip movement is unnecessary and the problems of evaporation pulse stability and alignment common to previous designs have been eliminated.” This was accomplished by combining a time of flight (TOF) mass spectrometer with a proximity focussed, dual channel plate detector, an 11.8 cm drift region and a 38° field of view. An FIM image or a desorption image of the atoms removed from the apex of a field emitter tip could be obtained. The 10-cm Atom Probe has been called the progenitor of later atom probes including the commercial instruments. Imaging Atom Probe The Imaging Atom-Probe (IAP) was introduced in 1974 by J. A. Panitz. It incorporated the features of the 10-cm Atom-Probe yet “... departs completely from [previous] atom probe philosophy. Rather than attempt to determine the identity of a surface species producing a preselected ion-image spot, we wish to determine the complete crystallographic distribution of a surface species of preselected mass-to-charge ratio. Now suppose that instead of operating the [detector] continuously, it is turned on for a short time coincidentally with the arrival of a preselected species of interest by applying a gate pulse a time T after the evaporation pulse has reached the specimen. If the duration of the gate pulse is shorter than the travel time between adjacent species, only that surface species having the unique travel time T will be detected and its complete crystallographic distribution displayed.” It was patented in 1975 as the Field Desorption Spectrometer. The Imaging Atom-Probe moniker was coined by A. J. 
Waugh in 1978 and the instrument was described in detail by J. A. Panitz in the same year. Atom Probe Tomography (APT) Modern day atom probe tomography uses a position-sensitive detector (informally, "a FIM in a box") to deduce the lateral location of atoms. The idea of the APT, inspired by J. A. Panitz's Field Desorption Spectrometer patent, was developed by Mike Miller starting in 1983 and culminated with the first prototype in 1986. Various refinements were made to the instrument, including the use of a so-called position-sensitive (PoS) detector by Alfred Cerezo, Terence Godfrey, and George D. W. Smith at Oxford University in 1988. The Tomographic Atom Probe (TAP), developed by researchers at the University of Rouen in France in 1993, introduced a multichannel timing system and multianode array. Both instruments (PoSAP and TAP) were commercialized by Oxford Nanoscience and CAMECA respectively. Since then, there have been many refinements to increase the field of view, mass and position resolution, and data acquisition rate of the instrument. The Local Electrode Atom Probe was first introduced in 2003 by Imago Scientific Instruments. In 2005, the commercialization of the pulsed laser atom probe (PLAP) expanded the avenues of research from highly conductive materials (metals) to poor conductors (semiconductors like silicon) and even insulating materials. AMETEK acquired CAMECA in 2007 and Imago Scientific Instruments (Madison, WI) in 2010, making the company the sole commercial developer of APTs with more than 110 instruments installed around the world in 2019. The first few decades of work with APT focused on metals. However, with the introduction of laser-pulsed atom probe systems, applications have expanded to semiconductors, ceramic and geologic materials, with some work on biomaterials. The most advanced study of biological material to date using APT involved analyzing the chemical structure of teeth of the radula of the chiton Chaetopleura apiculata. In this study, the use of APT showed chemical maps of organic fibers in the surrounding nano-crystalline magnetite in the chiton teeth, fibers which were often co-located with sodium or magnesium. This has been furthered to study elephant tusks, dentin and human enamel. Theory Field evaporation Field evaporation is an effect that can occur when an atom bonded at the surface of a material is in the presence of a sufficiently high and appropriately directed electric field, where the electric field is the differential of electric potential (voltage) with respect to distance. Once this condition is met, local bonding at the specimen surface can be overcome by the field, allowing for evaporation of an atom from the surface to which it is otherwise bonded. Ion flight Whether evaporated from the material itself, or ionised from the gas, the ions that are evaporated are accelerated by electrostatic force, acquiring most of their energy within a few tip-radii of the sample. Subsequently, the accelerative force on any given ion is controlled by the electrostatic equation F = n·e·E, where n is the ionisation state of the ion, e is the fundamental electric charge, and E is the local electric field. This can be equated with the mass of the ion, m, via Newton's law (F = ma): m·a = n·e·E. Relativistic effects in the ion flight are usually ignored, as realisable ion speeds are only a very small fraction of the speed of light. Assuming that the ion is accelerated during a very short interval, the ion can be assumed to be travelling at constant velocity. 
As the ion will travel from the tip at voltage V1 to some nominal ground potential, the speed at which the ion is travelling can be estimated by the energy transferred into the ion during (or near) ionisation. Therefore, the ion speed can be computed with the following equation, which relates kinetic energy to the energy gained from the electric field, the net positive charge arising from the loss of electrons: (1/2)·m·U² = n·e·V1, where U is the ion velocity. Solving for U, the following relation is found: U = √(2·n·e·V1/m). Say that at a certain ionization voltage, a singly charged hydrogen ion acquires a resulting velocity of 1.4x10^6 ms−1 at 10 kV. A singly charged deuterium ion under the same conditions would have acquired roughly 1.4x10^6/1.41 ms−1. If a detector were placed at a distance of 1 m, the ion flight times would be 1/1.4x10^6 and 1.41/1.4x10^6 s. Thus, the time of the ion arrival can be used to infer the ion type itself, if the evaporation time is known. The above equation can be re-arranged, given a known flight distance F for the ion and a known flight time t, to give m/n = 2·e·V1·(t/F)², and thus one can substitute these values to obtain the mass-to-charge for the ion. Thus for an ion which traverses a 1 m flight path, across a time of 2000 ns, given an initial accelerating voltage of 5000 V (V in SI units is kg·m^2·s^-3·A^-1) and noting that one amu is 1.66×10−27 kg, the mass-to-charge ratio (more correctly the mass-to-ionisation value ratio) becomes ~3.86 amu/charge. The number of electrons removed, and thus the net positive charge on the ion, is not known directly, but can be inferred from the histogram (spectrum) of observed ions. Magnification The magnification in an atom probe is due to the projection of ions radially away from the small, sharp tip. Subsequently, in the far-field, the ions will be greatly magnified. This magnification is sufficient to observe field variations due to individual atoms, thus allowing in field ion and field evaporation modes for the imaging of single atoms. The standard projection model for the atom probe is an emitter geometry that is based upon a revolution of a conic section, such as a sphere, hyperboloid or paraboloid. For these tip models, solutions to the field may be approximated or obtained analytically. The magnification for a spherical emitter is inversely proportional to the radius of the tip; given a projection directly onto a spherical screen, the following equation can be obtained geometrically: M = rscreen/rtip, where rscreen is the radius of the detection screen from the tip centre, and rtip the tip radius. Practical tip-to-screen distances may range from several centimeters to several meters, with increased detector area required at larger distances to subtend the same field of view. Practically speaking, the usable magnification will be limited by several effects, such as lateral vibration of the atoms prior to evaporation. Whilst the magnification of both the field ion and atom probe microscopes is extremely high, the exact magnification is dependent upon conditions specific to the examined specimen, so unlike for conventional electron microscopes, there is often little direct control on magnification, and furthermore, obtained images may have strongly variable magnifications due to fluctuations in the shape of the electric field at the surface. Reconstruction The computational conversion of the ion sequence data, as obtained from a position-sensitive detector, to a three-dimensional visualisation of atomic types is termed "reconstruction". 
Reconstruction algorithms are typically geometrically based and have several literature formulations. Most models for reconstruction assume that the tip is a spherical object, and use empirical corrections to stereographic projection to convert detector positions back to a 2D surface embedded in 3D space, R3. By sweeping this surface through R3 as a function of the ion sequence input data, such as via ion-ordering, a volume is generated into which the 2D detector positions can be computed and placed in three-dimensional space. Typically the sweep takes the simple form of an advancement of the surface, such that the surface is expanded in a symmetric manner about its advancement axis, with the advancement rate set by a volume attributed to each ion detected and identified. This causes the final reconstructed volume to assume a rounded-conical shape, similar to a badminton shuttlecock. The detected events thus become point cloud data with attributed experimentally measured or derived values, such as ion time of flight or detector data. This form of data manipulation allows for rapid computer visualisation and analysis, with data presented as point clouds with additional information, such as each ion's mass to charge (as computed from the velocity equation above), voltage, or other auxiliary measured quantities or computations therefrom. Data features The canonical feature of atom probe data is its high spatial resolution in the direction through the material, which has been attributed to an ordered evaporation sequence. This data can therefore image near atomically sharp buried interfaces with the associated chemical information. The data obtained from the evaporative process is, however, not without artefacts formed by the physical evaporation or ionisation process. A key feature of the evaporation or field ion images is that the data density is highly inhomogeneous, due to the corrugation of the specimen surface at the atomic scale. This corrugation gives rise to strong electric field gradients in the near-tip zone (on the order of an atomic radius or less from the tip), which during ionisation deflects ions away from the electric field normal. The resultant deflection means that in these regions of high curvature, atomic terraces are belied by a strong anisotropy in the detection density. Where this occurs due to a few atoms on a surface, it is usually referred to as a "pole", as these are coincident with the crystallographic axes of the specimen (FCC, BCC, HCP, etc.). Where the edges of an atomic terrace cause deflection, a low density line is formed and is termed a "zone line". These poles and zone-lines, whilst inducing fluctuations in data density in the reconstructed datasets, which can prove problematic during post-analysis, are critical for determining information such as angular magnification, as the crystallographic relationships between features are typically well known. When reconstructing the data, owing to the evaporation of successive layers of material from the sample, the lateral and in-depth reconstruction values are highly anisotropic. Determination of the exact resolution of the instrument is of limited use, as the resolution of the device is set by the physical properties of the material under analysis. Systems Many designs have been constructed since the method's inception. Initial field ion microscopes, precursors to modern atom probes, were usually glass-blown devices developed by individual research laboratories. 
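Returning to the worked numbers in the Ion flight section above, the time-of-flight relation is easy to encode. The helper below is an assumed illustration of mine, not part of any atom probe software package; it reproduces the ~3.86 amu per charge figure quoted earlier for a 1 m flight path, a 2000 ns flight time and a 5000 V accelerating voltage.

```python
# Worked check of the time-of-flight relation m/n = 2*e*V1*(t/F)**2
# derived in the Ion flight section, using the numbers quoted there.

E_CHARGE = 1.602176634e-19    # C, fundamental charge e
AMU      = 1.66053906660e-27  # kg per atomic mass unit

def mass_to_charge_amu(voltage, flight_time, flight_distance):
    """Mass per ionisation state n, in amu, from (1/2) m U^2 = n e V1
    with the constant speed U = flight_distance / flight_time."""
    m_over_n = 2.0 * E_CHARGE * voltage * (flight_time / flight_distance) ** 2
    return m_over_n / AMU

print(mass_to_charge_amu(5000.0, 2000e-9, 1.0))   # ~3.86 amu/charge
```

Actual instruments refine this with flight-path and voltage calibration corrections before binning values into a mass spectrum; the sketch shows only the core conversion.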
System layout At a minimum, an atom probe will consist of several key pieces of equipment. A vacuum system for maintaining the low pressures (~10^−8 to 10^−10 Pa) required, typically a classic 3-chambered UHV design. A system for the manipulation of samples inside the vacuum, including sample viewing systems. A cooling system to reduce atomic motion, such as a helium refrigeration circuit, providing sample temperatures as low as 15 K. A high voltage system to raise the sample standing voltage near the threshold for field evaporation. A high voltage pulsing system, used to create timed field evaporation events. A counter electrode that can be a simple disk shape (as in earlier generation atom probes), or a cone-shaped Local Electrode. The voltage pulse (negative) is typically applied to the counter electrode. A detection system for single energetic ions that includes XY position and TOF information. Optionally, an atom probe may also include laser-optical systems for laser beam targeting and pulsing, if using laser-evaporation methods. In-situ reaction systems, heaters, or plasma treatment may also be employed for some studies, as well as pure noble gas introduction for FIM. Performance Collectable ion volumes were previously limited to several thousand, or tens of thousands, of ionic events. Subsequent electronics and instrumentation development has increased the rate of data accumulation, with datasets of hundreds of millions of atoms (dataset volumes of 10^7 nm3). Data collection times vary considerably depending upon the experimental conditions and the number of ions collected. Experiments take from a few minutes to many hours to complete. Applications Metallurgy Atom probe has typically been employed in the chemical analysis of alloy systems at the atomic level. This has arisen as a result of voltage-pulsed atom probes providing good chemical and sufficient spatial information in these materials. Metal samples from large-grained alloys may be simple to fabricate, particularly from wire samples, with hand-electropolishing techniques giving good results. Subsequently, atom probe has been used in the analysis of the chemical composition of a wide range of alloys. Such data is critical in determining the effect of alloy constituents in a bulk material, and in the identification of solid-state reaction features, such as solid phase precipitates. Such information may not be amenable to analysis by other means (e.g. TEM) owing to the difficulty in generating a three-dimensional dataset with composition. Semiconductors Semiconductor materials are often analysable in atom probe; however, sample preparation may be more difficult, and interpretation of results may be more complex, particularly if the semiconductor contains phases which evaporate at differing electric field strengths. Applications such as ion implantation may be used to identify the distribution of dopants inside a semiconducting material, which is increasingly critical in the correct design of modern nanometre-scale electronics. Limitations Materials implicitly control achievable spatial resolution. Specimen geometry during the analysis is uncontrolled, yet controls projection behaviour, hence there is little control over the magnification. This induces distortions into the computer-generated 3D dataset. Features of interest might evaporate in a physically different manner to the bulk sample, altering projection geometry and the magnification of the reconstructed volume. This yields strong spatial distortions in the final image. 
Volume selectability can be limited. Site-specific preparation methods, e.g. using focused ion beam preparation, although more time-consuming, may be used to bypass such limitations. Ion overlap in some samples (e.g. between oxygen and sulfur) can result in ambiguous analysed species. This may be mitigated by selection of experiment temperature or laser input energy to influence the ionisation number (+, ++, 3+, etc.) of the ionised groups. Data analysis can be used in some cases to statistically recover overlaps. Low molecular weight gases (hydrogen and helium) may be difficult to remove from the analysis chamber, and may be adsorbed and emitted from the specimen, even though not present in the original specimen. This may also limit identification of hydrogen in some samples. For this reason, deuterated samples have been used to overcome limitations. Results may be contingent on the parameters used to convert the 2D detected data into 3D. In more problematic materials, correct reconstruction may not be possible, due to limited knowledge of the true magnification, particularly if zone or pole regions cannot be observed. References Further reading Michael K. Miller, George D.W. Smith, Alfred Cerezo, Mark G. Hetherington (1996) Atom Probe Field Ion Microscopy, Monographs on the Physics and Chemistry of Materials, Oxford: Oxford University Press. Michael K. Miller (2000) Atom Probe Tomography: Analysis at the Atomic Level. New York: Kluwer Academic. Baptiste Gault, Michael P. Moody, Julie M. Cairney, Simon P. Ringer (2012) Atom Probe Microscopy, Springer Series in Materials Science, Vol. 160, New York: Springer. David J. Larson, Ty J. Prosa, Robert M. Ulfig, Brian P. Geiser, Thomas F. Kelly (2013) Local Electrode Atom Probe Tomography - A User's Guide, Springer Characterization & Evaluation of Materials, New York: Springer. External links Video demonstrating Field Ion images, and pulsed ion evaporation www.atomprobe.com - A CAMECA provided community resource with contact information and an interactive FAQ MyScope Atom Probe Tomography - An online learning environment for those who want to learn about atom probe provided by Microscopy Australia Scientific techniques Microscopes Nanotechnology
Atom probe
[ "Chemistry", "Materials_science", "Technology", "Engineering" ]
4,793
[ "Materials science", "Measuring instruments", "Microscopes", "Microscopy", "Nanotechnology" ]
3,370
https://en.wikipedia.org/wiki/Boron%20nitride
Boron nitride is a thermally and chemically resistant refractory compound of boron and nitrogen with the chemical formula BN. It exists in various crystalline forms that are isoelectronic to a similarly structured carbon lattice. The hexagonal form corresponding to graphite is the most stable and softest among BN polymorphs, and is therefore used as a lubricant and an additive to cosmetic products. The cubic (zincblende aka sphalerite structure) variety analogous to diamond is called c-BN; it is softer than diamond, but its thermal and chemical stability is superior. The rare wurtzite BN modification is similar to lonsdaleite but slightly softer than the cubic form. Because of excellent thermal and chemical stability, boron nitride ceramics are used in high-temperature equipment and metal casting. Boron nitride has potential use in nanotechnology. History Boron nitride was discovered in 1842 by W. H. Balmain, a chemistry teacher at the Liverpool Institute, via reduction of boric acid with charcoal in the presence of potassium cyanide. Structure Boron nitride exists in multiple forms that differ in the arrangement of the boron and nitrogen atoms, giving rise to varying bulk properties of the material. Amorphous form (a-BN) The amorphous form of boron nitride (a-BN) is non-crystalline, lacking any long-range regularity in the arrangement of its atoms. It is analogous to amorphous carbon. All other forms of boron nitride are crystalline. Hexagonal form (h-BN) The most stable crystalline form is the hexagonal one, also called h-BN, α-BN, g-BN, graphitic boron nitride and "white graphene". Hexagonal boron nitride (point group = D3h; space group = P63/mmc) has a layered structure similar to graphite. Within each layer, boron and nitrogen atoms are bound by strong covalent bonds, whereas the layers are held together by weak van der Waals forces. The interlayer "registry" of these sheets differs, however, from the pattern seen for graphite, because the atoms are eclipsed, with boron atoms lying over and above nitrogen atoms. This registry reflects the local polarity of the B–N bonds, as well as interlayer N-donor/B-acceptor characteristics. Likewise, many metastable forms consisting of differently stacked polytypes exist. Therefore, h-BN and graphite are very close neighbors, and the material can accommodate carbon as a substituent element to form BNCs. BC6N hybrids have been synthesized, where carbon substitutes for some B and N atoms. Hexagonal boron nitride monolayer is analogous to graphene, having a honeycomb lattice structure of nearly the same dimensions. Unlike graphene, which is black and an electrical conductor, h-BN monolayer is white and an insulator. It has been proposed for use as an atomically flat insulating substrate or a tunneling dielectric barrier in 2D electronics. Cubic form (c-BN) Cubic boron nitride has a crystal structure analogous to that of diamond. Consistent with diamond being less stable than graphite, the cubic form is less stable than the hexagonal form, but the conversion rate between the two is negligible at room temperature, as it is for diamond. The cubic form has the sphalerite crystal structure (space group = F-43m), the same as that of diamond (with ordered B and N atoms), and is also called β-BN or c-BN. Wurtzite form (w-BN) The wurtzite form of boron nitride (w-BN; point group = C6v; space group = P63mc) has the same structure as lonsdaleite, a rare hexagonal polymorph of carbon. As in the cubic form, the boron and nitrogen atoms are grouped into tetrahedra. 
In the wurtzite form, the boron and nitrogen atoms are grouped into 6-membered rings. In the cubic form all rings are in the chair configuration, whereas in w-BN the rings between 'layers' are in the boat configuration. Earlier optimistic reports predicted that the wurtzite form is very strong; one simulation estimated its strength to be potentially 18% higher than that of diamond. Since only small amounts of the mineral exist in nature, this has not yet been experimentally verified. Its hardness is 46 GPa, slightly harder than commercial borides but softer than the cubic form of boron nitride. Properties Physical The partly ionic structure of the BN layers in h-BN reduces covalency and electrical conductivity, whereas the interlayer interaction increases, resulting in a higher hardness of h-BN relative to graphite. The reduced electron delocalization in hexagonal BN is also indicated by its lack of color and its large band gap. The very different bonding – strong covalent within the basal planes (planes where boron and nitrogen atoms are covalently bonded) and weak between them – causes a high anisotropy of most properties of h-BN. For example, the hardness and the electrical and thermal conductivity are much higher within the planes than perpendicular to them. By contrast, the properties of c-BN and w-BN are more homogeneous and isotropic. Those materials are extremely hard: the hardness of bulk c-BN is slightly lower than that of diamond, while that of w-BN is reported to be even higher. Polycrystalline c-BN with grain sizes on the order of 10 nm is also reported to have a Vickers hardness comparable to or higher than that of diamond. Because of its much better thermal stability and its inertness toward transition metals, c-BN surpasses diamond in mechanical applications such as the machining of steel. The thermal conductivity of BN is among the highest of all electric insulators (see table). Boron nitride can be doped p-type with beryllium and n-type with boron, sulfur, or silicon, or by co-doping with carbon and nitrogen. Both hexagonal and cubic BN are wide-gap semiconductors with a band-gap energy corresponding to the UV region. When a voltage is applied to h-BN or c-BN, it emits UV light in the range 215–250 nm, and the material can therefore potentially be used in light-emitting diodes (LEDs) or lasers. Little is known about the melting behavior of boron nitride. It decomposes at 2973 °C at normal pressure, but melts at elevated pressure. Thermal stability Hexagonal and cubic BN (and probably w-BN) show remarkable chemical and thermal stabilities. For example, h-BN is stable to decomposition at temperatures up to 1000 °C in air, 1400 °C in vacuum, and 2800 °C in an inert atmosphere. The reactivity of h-BN and c-BN is relatively similar, and the data for c-BN are summarized in the table below. The thermal stability of c-BN can be summarized as follows: In air or oxygen: a protective layer prevents further oxidation to ~1300 °C; no conversion to the hexagonal form at 1400 °C. In nitrogen: some conversion to h-BN at 1525 °C after 12 h. In vacuum: conversion to h-BN at 1550–1600 °C. Chemical stability Boron nitride is not attacked by the usual acids, but it is soluble in alkaline molten salts and nitrides, such as molten LiOH, KOH, and NaOH, which are therefore used to etch BN. Thermal conductivity The theoretical thermal conductivity of hexagonal boron nitride nanoribbons (BNNRs) can approach 1700–2000 W/(m⋅K), which is of the same order of magnitude as the experimentally measured value for graphene and comparable to theoretical calculations for graphene nanoribbons. 
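As a quick cross-check of the ultraviolet emission figures quoted above, photon wavelength and energy are related by λ = hc/E, with hc ≈ 1239.84 eV·nm. A minimal Python sketch (the constant is standard; the wavelengths are the ones quoted in the text):

HC_EV_NM = 1239.84  # h*c expressed in eV*nm
for wavelength_nm in (215.0, 250.0):
    # convert each quoted emission wavelength to a photon energy
    print(f"{wavelength_nm:.0f} nm -> {HC_EV_NM / wavelength_nm:.2f} eV")

The quoted 215–250 nm range thus corresponds to photon energies of roughly 5.0–5.8 eV, consistent with the wide band gaps of h-BN and c-BN.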
Thermal transport in BNNRs is, moreover, anisotropic: the thermal conductivity of zigzag-edged BNNRs is about 20% larger than that of armchair-edged nanoribbons at room temperature. Mechanical properties BN nanosheets consist of hexagonal boron nitride (h-BN) and are stable up to 800 °C in air. Monolayer BN has a structure similar to that of graphene and has exceptional strength; it is used as a high-temperature lubricant and as a substrate in electronic devices. The anisotropy of the Young's modulus and Poisson's ratio depends on the system size. h-BN also exhibits strongly anisotropic strength and toughness, and it maintains these over a range of vacancy defects, showing that the anisotropy is independent of the defect type. Natural occurrence In 2009, the cubic form (c-BN) was reported in Tibet, and the name qingsongite was proposed. The substance was found as dispersed micron-sized inclusions in chromium-rich rocks. In 2013, the International Mineralogical Association affirmed the mineral and the name. Synthesis Preparation and reactivity of hexagonal BN Hexagonal boron nitride is obtained by treating boron trioxide (B2O3) or boric acid (B(OH)3) with ammonia (NH3) or urea (CO(NH2)2) in an inert atmosphere: B2O3 + 2 NH3 → 2 BN + 3 H2O (T = 900 °C) B(OH)3 + NH3 → BN + 3 H2O (T = 900 °C) B2O3 + CO(NH2)2 → 2 BN + CO2 + 2 H2O (T > 1000 °C) B2O3 + 3 CaB6 + 10 N2 → 20 BN + 3 CaO (T > 1500 °C) The resulting disordered (amorphous) material contains 92–95% BN and 5–8% B2O3. The remaining B2O3 can be evaporated in a second step at higher temperatures in order to achieve a BN concentration >98%. Such annealing also crystallizes BN, the size of the crystallites increasing with the annealing temperature. h-BN parts can be fabricated inexpensively by hot-pressing with subsequent machining. The parts are made from boron nitride powders with boron oxide added for better compressibility. Thin films of boron nitride can be obtained by chemical vapor deposition from borazine. ZYP Coatings has also developed boron nitride coatings that may be painted onto a surface. Combustion of boron powder in a nitrogen plasma at 5500 °C yields ultrafine boron nitride used for lubricants and toners. Boron nitride reacts with iodine fluoride to give nitrogen triiodide (NI3) in low yield. Boron nitride reacts with the nitrides of lithium, alkaline earth metals and lanthanides to form nitridoborates. For example: Li3N + BN → Li3BN2 Intercalation of hexagonal BN Various species, such as ammonia or alkali metals, can be intercalated into hexagonal BN, i.e. inserted between its layers. Preparation of cubic BN c-BN is prepared analogously to the preparation of synthetic diamond from graphite. Direct conversion of hexagonal boron nitride to the cubic form has been observed at pressures between 5 and 18 GPa and temperatures between 1730 and 3230 °C, parameters similar to those for direct graphite-diamond conversion. The addition of a small amount of boron oxide can lower the required pressure to 4–7 GPa and the temperature to 1500 °C. As in diamond synthesis, to further reduce the conversion pressures and temperatures, a catalyst is added, such as lithium, potassium, or magnesium, their nitrides, their fluoronitrides, water with ammonium compounds, or hydrazine. Other industrial synthesis methods, again borrowed from diamond growth, use crystal growth in a temperature gradient, or an explosive shock wave. The shock wave method is used to produce the material called heterodiamond, a superhard compound of boron, carbon, and nitrogen. Low-pressure deposition of thin films of cubic boron nitride is possible. As in diamond growth, the major problem is to suppress the growth of the hexagonal phases (h-BN or graphite, respectively). 
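Returning to the boric acid route to h-BN described above, a rough mass balance is easy to sketch. A minimal Python example, assuming complete conversion via B(OH)3 + NH3 → BN + 3 H2O (real syntheses leave residual boron oxide, as noted above):

M_BORIC_ACID = 61.83  # g/mol, B(OH)3
M_BN = 24.82          # g/mol, BN
# one mole of boric acid yields one mole of BN, so the feed ratio is just
# the ratio of molar masses
print(f"{M_BORIC_ACID / M_BN:.2f} kg of boric acid per kg of BN")  # ~2.49 kg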
In diamond growth the hexagonal phase is suppressed by adding hydrogen gas; for c-BN, boron trifluoride is used instead. Ion beam deposition, plasma-enhanced chemical vapor deposition, pulsed laser deposition, reactive sputtering, and other physical vapor deposition methods are used as well. Preparation of wurtzite BN Wurtzite BN can be obtained via static high-pressure or dynamic shock methods. The limits of its stability are not well defined. Both c-BN and w-BN are formed by compressing h-BN, but the formation of w-BN occurs at much lower temperatures, close to 1700 °C. Production statistics Whereas the production and consumption figures for the raw materials used for BN synthesis, namely boric acid and boron trioxide, are well known (see boron), the corresponding numbers for boron nitride are not listed in statistical reports. An estimate for the 1999 world production is 300 to 350 metric tons. The major producers and consumers of BN are located in the United States, Japan, China and Germany. In 2000, prices varied from about $75–120/kg for standard industrial-quality h-BN up to about $200–400/kg for high-purity BN grades. Applications Hexagonal BN Hexagonal BN (h-BN) is the most widely used polymorph. It is a good lubricant at both low and high temperatures (up to 900 °C, even in an oxidizing atmosphere). h-BN lubricant is particularly useful when the electrical conductivity or chemical reactivity of graphite (the alternative lubricant) would be problematic. In internal combustion engines, where graphite could be oxidized and turn into carbon sludge, h-BN, with its superior thermal stability, can be added to engine lubricants. As with all nanoparticle suspensions, settling of the particles is a problem. Settling can clog engine oil filters, which limits solid-lubricant applications in combustion engines to automotive racing, where engine rebuilding is common. Since carbon has appreciable solubility in certain alloys (such as steels), which may lead to degradation of properties, BN is often superior for high-temperature and/or high-pressure applications. Another advantage of h-BN over graphite is that its lubricity does not require water or gas molecules trapped between the layers. Therefore, h-BN lubricants can be used in vacuum, for example in space applications. The lubricating properties of fine-grained h-BN are used in cosmetics, paints, dental cements, and pencil leads. Hexagonal BN was first used in cosmetics around 1940 in Japan. Because of its high price, however, h-BN was soon abandoned for this application. Its use was revitalized in the late 1990s with the optimization of h-BN production processes, and currently h-BN is used by nearly all leading producers of cosmetic products for foundations, make-up, eye shadows, blushers, kohl pencils, lipsticks and other skincare products. Because of its excellent thermal and chemical stability, boron nitride ceramics and coatings are used in high-temperature equipment. h-BN can be included in ceramics, alloys, resins, plastics, rubbers, and other materials, giving them self-lubricating properties. Such materials are suitable for the construction of, e.g., bearings, and for use in steelmaking. Many quantum devices use multilayer h-BN as a substrate material. It can also be used as a dielectric in resistive random-access memories. Hexagonal BN is used in the xerographic process and in laser printers as a charge-leakage barrier layer of the photo drum. 
In the automotive industry, h-BN mixed with a binder (boron oxide) is used for sealing oxygen sensors, which provide feedback for adjusting fuel flow. This application exploits the unique temperature stability and insulating properties of h-BN. Parts can be made by hot pressing from four commercial grades of h-BN. Grade HBN contains a boron oxide binder; it is usable up to 550–850 °C in an oxidizing atmosphere and up to 1600 °C in vacuum, but, due to the boron oxide content, it is sensitive to water. Grade HBR uses a calcium borate binder and is usable at 1600 °C. Grades HBC and HBT contain no binder and can be used up to 3000 °C. Boron nitride nanosheets (h-BN) can be deposited by catalytic decomposition of borazine at a temperature of ~1100 °C in a chemical vapor deposition setup, over areas up to about 10 cm2. Owing to their hexagonal atomic structure, small lattice mismatch with graphene (~2%), and high uniformity, they are used as substrates for graphene-based devices. BN nanosheets are also excellent proton conductors. Their high proton transport rate, combined with their high electrical resistance, may lead to applications in fuel cells and water electrolysis. h-BN has been used since the mid-2000s as a bullet and bore lubricant in precision target rifle applications as an alternative to molybdenum disulfide coating, commonly referred to as "moly". It is claimed to increase effective barrel life, increase the intervals between bore cleanings, and decrease the deviation in point of impact between clean-bore first shots and subsequent shots. h-BN is used as a release agent in molten metal and glass applications. For example, ZYP Coatings developed and currently produces a line of paintable h-BN coatings that are used by producers and processors of molten aluminium, other non-ferrous metals, and glass. Because h-BN is nonwetting and lubricious toward these molten materials, the coated surface (e.g., a mold or crucible) does not stick to the material. Cubic BN Cubic boron nitride (CBN or c-BN) is widely used as an abrasive. Its usefulness arises from its insolubility in iron, nickel, and related alloys at high temperatures, whereas diamond is soluble in these metals. Polycrystalline c-BN (PCBN) abrasives are therefore used for machining steel, whereas diamond abrasives are preferred for aluminum alloys, ceramics, and stone. When in contact with oxygen at high temperatures, BN forms a passivation layer of boron oxide. Boron nitride binds well with metals due to the formation of interlayers of metal borides or nitrides. Materials with cubic boron nitride crystals are often used in the tool bits of cutting tools. For grinding applications, softer binders such as resin, porous ceramics and soft metals are used. Ceramic binders can be used as well. Commercial products are known under the names "Borazon" (by Hyperion Materials & Technologies), and "Elbor" or "Cubonite" (by Russian vendors). Unlike diamond, large c-BN pellets can be produced in a simple process (called sintering) of annealing c-BN powders in a nitrogen flow at temperatures slightly below the BN decomposition temperature. This ability of c-BN and h-BN powders to fuse allows cheap production of large BN parts. As with diamond, the combination of very high thermal conductivity and electrical resistivity in c-BN is ideal for heat spreaders. 
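For reference, the ~2% lattice mismatch with graphene quoted earlier in this section can be reproduced from the in-plane lattice constants of the two materials. The constants below are standard literature values, used here as assumptions:

A_GRAPHENE = 2.461  # angstrom, in-plane lattice constant of graphene
A_HBN = 2.504       # angstrom, in-plane lattice constant of h-BN
mismatch = (A_HBN - A_GRAPHENE) / A_GRAPHENE
print(f"lattice mismatch ~ {mismatch:.1%}")  # ~1.7%, i.e. the "~2%" quoted above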
As cubic boron nitride consists of light atoms and is very robust chemically and mechanically, it is one of the popular materials for X-ray membranes: the low mass results in small X-ray absorption, and the good mechanical properties allow the use of thin membranes, further reducing the absorption. Amorphous BN Layers of amorphous boron nitride (a-BN) are used in some semiconductor devices, e.g. MOSFETs. They can be prepared by chemical decomposition of trichloroborazine with caesium, or by thermal chemical vapor deposition (CVD) methods. Thermal CVD can also be used to deposit h-BN layers or, at high temperatures, c-BN. Other forms of boron nitride Atomically thin boron nitride Hexagonal boron nitride can be exfoliated into monolayer or few-layer sheets. Because of its structural analogy to graphene, atomically thin boron nitride is sometimes called white graphene. Mechanical properties Atomically thin boron nitride is one of the strongest electrically insulating materials. Monolayer boron nitride has an average Young's modulus of 0.865 TPa and a fracture strength of 70.5 GPa, and, in contrast to graphene, whose strength decreases dramatically with increased thickness, few-layer boron nitride sheets have a strength similar to that of monolayer boron nitride. Thermal conductivity Atomically thin boron nitride has one of the highest thermal conductivity coefficients (751 W/(m·K) at room temperature) among semiconductors and electrical insulators, and its thermal conductivity increases with reduced thickness, owing to reduced interlayer coupling. Thermal stability The air stability of graphene shows a clear thickness dependence: monolayer graphene is reactive to oxygen at 250 °C, strongly doped at 300 °C, and etched at 450 °C; in contrast, bulk graphite is not oxidized until 800 °C. Atomically thin boron nitride has much better oxidation resistance than graphene. Monolayer boron nitride is not oxidized until 700 °C and can withstand up to 850 °C in air; bilayer and trilayer boron nitride nanosheets have slightly higher oxidation onset temperatures. The excellent thermal stability, high impermeability to gas and liquid, and electrical insulation make atomically thin boron nitride a potential coating material for preventing surface oxidation and corrosion of metals and other two-dimensional (2D) materials, such as black phosphorus. Better surface adsorption Atomically thin boron nitride has been found to have better surface adsorption capabilities than bulk hexagonal boron nitride. According to theoretical and experimental studies, atomically thin boron nitride as an adsorbent undergoes conformational changes upon surface adsorption of molecules, which increases the adsorption energy and efficiency. The synergistic effect of the atomic thickness, high flexibility, stronger surface adsorption capability, electrical insulation, impermeability, and high thermal and chemical stability of BN nanosheets can increase the Raman sensitivity by up to two orders of magnitude, while attaining a long-term stability and reusability not readily achievable with other materials. Dielectric properties Atomically thin hexagonal boron nitride is an excellent dielectric substrate for graphene, molybdenum disulfide (MoS2), and many other 2D material-based electronic and photonic devices. 
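Returning to the mechanical figures above, a naive linear-elastic estimate of the fracture strain follows from dividing the fracture strength by the Young's modulus. Real 2D materials are strongly nonlinear near failure, so this is an order-of-magnitude sketch only:

E_MODULUS = 865.0  # GPa, i.e. the 0.865 TPa quoted above
STRENGTH = 70.5    # GPa, quoted fracture strength
# linear-elastic strain at the fracture stress
print(f"fracture strain ~ {STRENGTH / E_MODULUS:.1%}")  # ~8%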
Electric force microscopy (EFM) studies show that the electric-field screening in atomically thin boron nitride depends only weakly on thickness, which is in line with the smooth decay of the electric field inside few-layer boron nitride revealed by first-principles calculations. Raman characteristics Raman spectroscopy has been a useful tool for studying a variety of 2D materials, and the Raman signature of high-quality atomically thin boron nitride was first reported in 2011 by Gorbachev et al. and by Li et al. However, the two reported Raman results for monolayer boron nitride did not agree with each other. Cai et al., therefore, conducted systematic experimental and theoretical studies to reveal the intrinsic Raman spectrum of atomically thin boron nitride. These studies revealed that atomically thin boron nitride without interaction with a substrate has a G band frequency similar to that of bulk hexagonal boron nitride, but that strain induced by the substrate can cause Raman shifts. Nevertheless, the Raman intensity of the G band of atomically thin boron nitride can be used to estimate layer thickness and sample quality. Boron nitride nanomesh Boron nitride nanomesh is a nanostructured two-dimensional material. It consists of a single BN layer, which self-assembles into a highly regular mesh after high-temperature exposure of a clean rhodium or ruthenium surface to borazine under ultra-high vacuum. The nanomesh looks like an assembly of hexagonal pores. The distance between two pore centers is 3.2 nm and the pore diameter is ~2 nm. Other terms for this material are boronitrene and white graphene. The boron nitride nanomesh is stable in air and in contact with some liquids, and it withstands temperatures of up to 800 °C. Boron nitride nanotubes Boron nitride tubules were first made in 1989 by Shore and Dolan. This work was patented in 1989 and published in a 1989 thesis (Dolan) and then in Science in 1993. The 1989 work was also the first preparation of amorphous BN from B-trichloroborazine and cesium metal. Boron nitride nanotubes were predicted in 1994 and experimentally discovered in 1995. They can be imagined as a rolled-up sheet of hexagonal boron nitride. Structurally, the BN nanotube is a close analog of the carbon nanotube, namely a long cylinder with a diameter of several to hundreds of nanometers and a length of many micrometers, except that the carbon atoms are replaced by alternating nitrogen and boron atoms. However, the properties of BN nanotubes are very different: whereas carbon nanotubes can be metallic or semiconducting depending on the rolling direction and radius, a BN nanotube is an electrical insulator with a bandgap of ~5.5 eV, basically independent of tube chirality and morphology. In addition, a layered BN structure is much more thermally and chemically stable than a graphitic carbon structure. Boron nitride aerogel Boron nitride aerogel is an aerogel made of highly porous BN. It typically consists of a mixture of deformed BN nanotubes and nanosheets. It can have a density as low as 0.6 mg/cm3 and a specific surface area as high as 1050 m2/g, and it therefore has potential applications as an absorbent, a catalyst support and a gas storage medium. BN aerogels are highly hydrophobic and can absorb up to 160 times their weight in oil. They are resistant to oxidation in air at temperatures up to 1200 °C, and hence can be reused after the absorbed oil is burned off by flame. BN aerogels can be prepared by template-assisted chemical vapor deposition using borazine as the feed gas. 
Composites containing BN The addition of boron nitride to silicon nitride ceramics improves the thermal shock resistance of the resulting material. For the same purpose, BN is also added to silicon nitride-alumina and titanium nitride-alumina ceramics. Other materials being reinforced with BN include alumina and zirconia, borosilicate glasses, glass ceramics, enamels, and composite ceramics with titanium boride-boron nitride, titanium boride-aluminium nitride-boron nitride, and silicon carbide-boron nitride compositions. Zirconia-Stabilized Boron Nitride (ZSBN) is produced by adding zirconia to BN, enhancing its thermal shock resistance and mechanical strength through a sintering process. It offers better performance characteristics, including superior corrosion and erosion resistance over a wide temperature range. Its unique combination of thermal conductivity, lubricity, mechanical strength, and stability makes it suitable for various applications, including cutting tools and wear-resistant coatings, thermal and electrical insulation, aerospace and defense, and high-temperature components. Pyrolytic boron nitride (PBN) Pyrolytic boron nitride (PBN), also known as chemical-vapour-deposited boron nitride (CVD-BN), is a high-purity ceramic material characterized by exceptional chemical resistance and mechanical strength at high temperatures. Pyrolytic boron nitride is typically prepared through the thermal decomposition of boron trichloride and ammonia vapors on graphite substrates at 1900 °C. Pyrolytic boron nitride generally has a hexagonal structure similar to that of hexagonal boron nitride (h-BN), though it can exhibit stacking faults or deviations from the ideal lattice. Pyrolytic boron nitride shows some remarkable attributes, including exceptional chemical inertness, high dielectric strength, excellent thermal shock resistance, non-wettability, non-toxicity, oxidation resistance, and minimal outgassing. Due to a highly ordered planar texture similar to that of pyrolytic graphite (PG), it exhibits anisotropic properties, such as a lower dielectric constant perpendicular to the crystal plane and a higher bending strength along the crystal plane. PBN is widely manufactured into crucibles for growing compound-semiconductor crystals, output windows and dielectric rods for traveling-wave tubes, and high-temperature jigs and insulators. Health issues Boron nitride (along with NbN and BNC) is generally considered to be non-toxic and does not exhibit chemical activity in biological systems. Due to its excellent safety profile and lubricious properties, boron nitride finds widespread use in various applications, including cosmetics and food processing equipment. See also Beta carbon nitride Borazon Borocarbonitrides Boron suboxide Superhard materials Wide-bandgap semiconductors Notes References External links National Pollutant Inventory: Boron and Compounds Materials Safety Data Sheet at University of Oxford Boron compounds Ceramic materials Nitrides III-V semiconductors Non-petroleum based lubricants Dry lubricants Abrasives Superhard materials Neutron poisons Monolayers III-V compounds Boron–nitrogen compounds Zincblende crystal structure Wurtzite structure type
Boron nitride
[ "Physics", "Chemistry", "Engineering" ]
6,224
[ "Monolayers", "Inorganic compounds", "Semiconductor materials", "Materials", "Superhard materials", "Ceramic materials", "III-V semiconductors", "Ceramic engineering", "III-V compounds", "Atoms", "Matter" ]
3,755
https://en.wikipedia.org/wiki/Boron
Boron is a chemical element with the symbol B and atomic number 5. In its crystalline form it is a brittle, dark, lustrous metalloid; in its amorphous form it is a brown powder. As the lightest element of the boron group, it has three valence electrons for forming covalent bonds, resulting in many compounds such as boric acid, the mineral sodium borate, and the ultra-hard crystals of boron carbide and boron nitride. Boron is synthesized entirely by cosmic ray spallation and supernovae and not by stellar nucleosynthesis, so it is a low-abundance element in the Solar System and in the Earth's crust. It constitutes about 0.001 percent by weight of Earth's crust. It is concentrated on Earth by the water-solubility of its more common naturally occurring compounds, the borate minerals. These are mined industrially as evaporites, such as borax and kernite. The largest known deposits are in Turkey, the largest producer of boron minerals. Elemental boron is found in small amounts in meteoroids, but chemically uncombined boron is not otherwise found naturally on Earth. Several allotropes exist: amorphous boron is a brown powder; crystalline boron is silvery to black, extremely hard (9.3 on the Mohs scale), and a poor electrical conductor at room temperature (room-temperature electrical conductivity of 1.5 × 10−6 Ω−1·cm−1). The primary use of the element itself is as boron filaments, with applications similar to carbon fibers in some high-strength materials. Boron is primarily used in chemical compounds. About half of all boron consumed globally is used as an additive in fiberglass for insulation and structural materials. The next leading use is in polymers and ceramics in high-strength, lightweight structural and heat-resistant materials. Borosilicate glass is prized for being stronger and more resistant to thermal shock than ordinary soda-lime glass. As sodium perborate, boron is used as a bleach. A small amount is used as a dopant in semiconductors, and in reagent intermediates for the synthesis of organic fine chemicals. A few boron-containing organic pharmaceuticals are used or are under study. Natural boron is composed of two stable isotopes, one of which (boron-10) has a number of uses as a neutron-capturing agent. Borates have low toxicity in mammals (similar to table salt) but are more toxic to arthropods and are occasionally used as insecticides. Boron-containing organic antibiotics are known. Although only traces are required, boron is an essential plant nutrient. History The word boron was coined from borax, the mineral from which it was isolated, by analogy with carbon, which boron resembles chemically. Borax in its mineral form (then known as tincal) first saw use as a glaze, beginning in China circa 300 AD. Some crude borax traveled westward and was apparently mentioned by the alchemist Jabir ibn Hayyan around 700 AD. Marco Polo brought some glazes back to Italy in the 13th century. Georgius Agricola, around 1600, reported the use of borax as a flux in metallurgy. In 1777, boric acid was recognized in the hot springs (soffioni) near Florence, Italy, at which point it became known as sal sedativum, with ostensible medical benefits. The mineral was named sassolite, after Sasso Pisano in Italy. Sasso was the main source of European borax from 1827 to 1872, when American sources replaced it. Boron compounds were rarely used until the late 1800s, when Francis Marion Smith's Pacific Coast Borax Company first popularized and produced them in volume at low cost. 
Boron was not recognized as an element until it was isolated by Sir Humphry Davy and by Joseph Louis Gay-Lussac and Louis Jacques Thénard. In 1808 Davy observed that an electric current sent through a solution of borates produced a brown precipitate on one of the electrodes. In his subsequent experiments, he used potassium to reduce boric acid instead of electrolysis. He produced enough boron to confirm a new element and named it boracium. Gay-Lussac and Thénard used iron to reduce boric acid at high temperatures. By oxidizing boron with air, they showed that boric acid is its oxidation product. Jöns Jacob Berzelius identified it as an element in 1824. Pure boron was arguably first produced by the American chemist Ezekiel Weintraub in 1909. Characteristics of the element Isotopes Boron has two naturally occurring and stable isotopes, 11B (80.1%) and 10B (19.9%). The mass difference results in a wide range of δ11B values, which are defined as the fractional difference between the 11B/10B ratio of a sample and that of a standard and are traditionally expressed in parts per thousand; values in natural waters range from −16 to +59. There are 13 known isotopes of boron; the shortest-lived isotope is 7B, which decays through proton emission and alpha decay with a half-life of 3.5×10−22 s. Isotopic fractionation of boron is controlled by the exchange reactions of the boron species B(OH)3 and [B(OH)4]−. Boron isotopes are also fractionated during mineral crystallization, during H2O phase changes in hydrothermal systems, and during hydrothermal alteration of rock. The latter effect results in preferential removal of the [10B(OH)4]− ion onto clays. It results in solutions enriched in 11B(OH)3 and therefore may be responsible for the large 11B enrichment in seawater relative to both oceanic crust and continental crust; this difference may act as an isotopic signature. The exotic isotope 17B exhibits a nuclear halo, i.e. its radius is appreciably larger than that predicted by the liquid drop model. NMR spectroscopy Both 10B and 11B possess nuclear spin. The nuclear spin of 10B is 3 and that of 11B is 3/2. These isotopes are therefore of use in nuclear magnetic resonance spectroscopy, and spectrometers specially adapted to detecting the boron-11 nucleus are available commercially. The 10B and 11B nuclei also cause splitting in the resonances of attached nuclei. Allotropes Boron forms four major allotropes: α-rhombohedral and β-rhombohedral (α-R and β-R), γ-orthorhombic (γ) and β-tetragonal (β-T). All four phases are stable at ambient conditions, and β-rhombohedral is the most common and stable. An α-tetragonal phase also exists (α-T), but is very difficult to produce without significant contamination. Most of the phases are based on B12 icosahedra, but the γ phase can be described as a rocksalt-type arrangement of the icosahedra and B2 atomic pairs. It can be produced by compressing other boron phases to 12–20 GPa and heating to 1500–1800 °C; it remains stable after the temperature and pressure are released. The β-T phase is produced at similar pressures, but at higher temperatures of 1800–2200 °C. The α-T and β-T phases might coexist at ambient conditions, with the β-T phase being the more stable. Compressing boron above 160 GPa produces a boron phase with an as-yet unknown structure, and this phase is a superconductor at temperatures below 6–12 K. Atomic structure Atomic boron is the lightest element having an electron in a p-orbital in its ground state. 
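Returning to the isotope data above, the δ11B notation can be written out explicitly, and the quoted natural abundances reproduce boron's standard atomic weight. A minimal Python sketch, assuming the reference ratio is that of the conventional boric acid standard (NIST SRM 951) and taking isotope masses from standard mass tables:

def delta_11b(ratio_sample, ratio_standard):
    # per-mil deviation of the sample 11B/10B ratio from the standard ratio
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

m10, m11 = 10.012937, 11.009305  # isotope masses in u (standard table values)
abund10, abund11 = 0.199, 0.801  # the natural abundances quoted above
print(f"mean atomic mass ~ {abund10 * m10 + abund11 * m11:.2f} u")  # ~10.81 u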
Boron's first three ionization energies are higher than those of the heavier group III elements, reflecting its comparatively nonmetallic character. Chemistry of the element Preparation Elemental boron is rare and poorly studied because the pure material is extremely difficult to prepare. Most studies of "boron" involve samples that contain small amounts of carbon. Very pure boron is produced with difficulty because of contamination by carbon or other elements that resist removal. Some early routes to elemental boron involved the reduction of boric oxide with metals such as magnesium or aluminium. However, the product was often contaminated with borides of those metals. Pure boron can be prepared by reducing volatile boron halides with hydrogen at high temperatures. Ultrapure boron for use in the semiconductor industry is produced by the decomposition of diborane at high temperatures and is then further purified by the zone melting or Czochralski processes. Reactions of the element Crystalline boron is a hard, black material with a melting point above 2000 °C. Crystalline boron is chemically inert and resistant to attack by boiling hydrofluoric or hydrochloric acid. When finely divided, it is attacked slowly by hot concentrated hydrogen peroxide, hot concentrated nitric acid, hot sulfuric acid or a hot mixture of sulfuric and chromic acids. Since elemental boron is very rare, its chemical reactions are of little practical significance. The elemental form is not typically used as a precursor to compounds. Instead, the extensive inventory of boron compounds is produced from borates. When exposed to air under normal conditions, a protective oxide or hydroxide layer forms on the surface of boron, which prevents further corrosion. The rate of oxidation of boron depends on the crystallinity, particle size, purity and temperature. At higher temperatures boron burns to form boron trioxide: 4 B + 3 O2 → 2 B2O3 Chemical compounds General trends In some ways, boron is comparable to carbon in its capability to form stable covalently bonded molecular networks; even nominally disordered (amorphous) boron contains boron icosahedra, which are bonded randomly to each other without long-range order. In terms of chemical behavior, boron resembles silicon. Aluminium, the heavier congener of boron, does not behave analogously to boron: it is far more electropositive, it is larger, and it tends not to form homoatomic Al-Al bonds. In the most familiar compounds, boron has the formal oxidation state III. These include the common oxides, sulfides, nitrides, and halides, as well as organic derivatives. Boron compounds often violate the octet rule. Halides Boron forms the complete series of trihalides, i.e. BX3 (X = F, Cl, Br, I). The trifluoride is produced by treating borate salts with hydrogen fluoride, while the trichloride is produced by carbothermic reduction of boron oxides in the presence of chlorine gas: B2O3 + 3 C + 3 Cl2 → 2 BCl3 + 3 CO The trihalides adopt planar trigonal structures, in contrast to the behavior of the aluminium trihalides. All charge-neutral boron halides violate the octet rule and hence are typically Lewis acidic. For example, boron trifluoride (BF3) combines eagerly with fluoride sources to give the tetrafluoroborate anion, BF4−. Boron trifluoride is used in the petrochemical industry as a catalyst. The halides react with water to form boric acid. Other boron halides include those with B-B bonding, such as B2F4 and B4Cl4. 
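As a rough energetic sketch of the combustion reaction above, one can take the standard enthalpy of formation of B2O3 to be about −1273 kJ/mol (a literature value quoted from memory, used here as an assumption):

DHF_B2O3 = -1273.0  # kJ/mol, assumed standard enthalpy of formation of B2O3
M_B = 10.811        # g/mol
# 4 B + 3 O2 -> 2 B2O3 releases 2*|dHf| per 4 mol of boron
print(f"~{-2 * DHF_B2O3 / (4 * M_B):.0f} kJ per gram of boron burned")  # ~59 kJ/g

For comparison, burning carbon to CO2 releases about 33 kJ/g, which is one reason boron-rich fuels were once studied as high-energy additives (see the "Zip fuel" work mentioned below).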
Oxide derivatives Boron-containing minerals exist exclusively as oxides of B(III), often in association with other elements. More than one hundred borate minerals are known. These minerals resemble silicates in some respects, although boron is often found not only in tetrahedral coordination with oxygen, but also in a trigonal planar configuration. The borates can be subdivided into two classes, the anhydrous and the far more common hydrates. The hydrates contain B-OH groups and sometimes water of crystallization. A typical motif is exemplified by the tetraborate anions of the common mineral borax. The formal negative charge of the tetrahedral borate center is balanced by sodium (Na+). Some idea of the complexity of these materials is provided by the inventory of zinc borates, which are common wood preservatives and fire retardants: 4ZnO·B2O3·H2O, ZnO·B2O3·1.12H2O, ZnO·B2O3·2H2O, 6ZnO·5B2O3·3H2O, 2ZnO·3B2O3·7H2O, 2ZnO·3B2O3·3H2O, 3ZnO·5B2O3·14H2O, and ZnO·5B2O3·4.5H2O. As illustrated by the preceding examples, borate anions tend to condense by the formation of B-O-B bonds. Borosilicates, with B-O-Si linkages, and borophosphates, with B-O-P linkages, are also well represented in both minerals and synthetic compounds. Related to the oxides are the alkoxides and the boronic acids, with the formulas B(OR)3 and RB(OH)2, respectively. Boron forms a wide variety of such compounds, some of which are used in the synthesis of pharmaceuticals. These developments, especially the Suzuki reaction, were recognized with the 2010 Nobel Prize in Chemistry, awarded in part to Akira Suzuki. Hydrides Boranes and borohydrides are neutral and anionic compounds of boron and hydrogen, respectively. Sodium borohydride is the progenitor of the boranes. Sodium borohydride is obtained by hydrogenation of trimethylborate: B(OCH3)3 + 4 NaH → NaBH4 + 3 NaOCH3 Sodium borohydride is a white, fairly air-stable salt. Sodium borohydride converts to diborane by treatment with boron trifluoride: 3 NaBH4 + 4 BF3 → 2 B2H6 + 3 NaBF4 Diborane is the dimer of the elusive parent called borane, BH3. Having a formula akin to ethane's (C2H6), diborane adopts a very different structure, featuring a pair of bridging H atoms. This unusual structure, which was deduced only in the 1940s, was an early indication of the many surprises provided by boron chemistry. Pyrolysis of diborane gives boron hydride clusters, such as pentaborane(9) and decaborane (B10H14). A large number of anionic boron hydrides are also known, e.g. [B12H12]2−. In these cluster compounds, boron has a coordination number greater than four. The analysis of the bonding in these polyhedral clusters earned William N. Lipscomb the 1976 Nobel Prize in Chemistry for "studies on the structure of boranes illuminating problems of chemical bonding". Not only are their structures unusual, but many of the boranes are also extremely reactive. For example, a widely used procedure for pentaborane states that it will "spontaneously inflame or explode in air". Organoboron compounds A large number of organoboron compounds, species with B-C bonds, are known. Many organoboron compounds are produced by hydroboration, the addition of B-H bonds across C=C bonds. Diborane is traditionally used for such reactions, as illustrated by the preparation of trioctylborane: B2H6 + 6 CH2=CHC6H13 → 2 B(C8H17)3 This regiochemistry, i.e. the tendency of B to attach to the terminal carbon, is explained by the polarization of the B-H bonds in boranes, which is indicated as Bδ+-Hδ−. Hydroboration opened the door to many subsequent reactions, several of which are useful in the synthesis of complex organic compounds. 
The significance of these methods was recognized by the award of the Nobel Prize in Chemistry to H. C. Brown in 1979. Even complicated boron hydrides, such as decaborane, undergo hydroboration. Like the volatile boranes, the alkyl boranes ignite spontaneously in air. In the 1950s, several studies examined the use of boranes as energy-increasing "Zip fuel" additives for jet fuel. Triorganoboron(III) compounds are trigonal planar and exhibit weak Lewis acidity; the adducts they form with Lewis bases are tetrahedral. This behavior contrasts with that of triorganoaluminium compounds (see trimethylaluminium), which are tetrahedral with bridging alkyl groups. Nitrides The boron nitrides follow the pattern of avoiding B-B and N-N bonds: generally, only B-N bonding is observed. The boron nitrides exhibit structures analogous to various allotropes of carbon, including graphite, diamond, and nanotubes. This similarity reflects the fact that a B-N pair has eight valence electrons, as does a pair of carbon atoms. In cubic boron nitride (tradename Borazon), the boron and nitrogen atoms are tetrahedral, just like carbon in diamond. Cubic boron nitride, among other applications, is used as an abrasive, as its hardness is comparable with that of diamond. Hexagonal boron nitride (h-BN) is the BN analogue of graphite, consisting of sheets of alternating B and N atoms. These sheets stack with boron and nitrogen in registry between the sheets. Graphite and h-BN have very different properties, although both are lubricants, as these planes slip past each other easily. However, h-BN is a relatively poor electrical and thermal conductor in the planar directions. Molecular analogues of the boron nitrides are represented by borazine, (BH)3(NH)3. Carbides Boron carbide is a ceramic material. It is obtained by carbothermal reduction of B2O3 in an electric furnace: 2 B2O3 + 7 C → B4C + 6 CO Boron carbide's structure is only approximately reflected in its formula of B4C, and it shows a clear depletion of carbon from this suggested stoichiometric ratio. This is due to its very complex structure. The substance can be viewed as having the empirical formula B12C3 (i.e., with B12 icosahedra being a motif), but with less carbon, as the suggested C3 units are replaced with C-B-C chains, and some smaller (B6) octahedra are present as well (see the boron carbide article for structural analysis). Its repeating polymeric, semi-crystalline structure gives boron carbide great structural strength for its weight. Borides Binary metal-boron compounds, the metal borides, contain only boron and a metal. They are metallic and very hard, with high melting points. TiB2, ZrB2, and HfB2 have melting points above 3000 °C. Some metal borides find specialized applications as hard materials for cutting tools. Occurrence Boron is rare in the Universe and the Solar System. The amount of boron formed in the Big Bang is negligible. Boron is not generated in the normal course of stellar nucleosynthesis and is destroyed in stellar interiors. In the high-oxygen environment of the Earth's surface, boron is always found fully oxidized to borate. Boron does not appear on Earth in elemental form. Extremely small traces of elemental boron were detected in lunar regolith. Although boron is a relatively rare element in the Earth's crust, representing only 0.001% of the crust mass, it can be highly concentrated by the action of water, in which many borates are soluble. It is found naturally combined in compounds such as borax and boric acid (sometimes found in volcanic spring waters). 
About a hundred borate minerals are known. Production Economically important sources of boron are the minerals colemanite, rasorite (kernite), ulexite and tincal. Together these constitute 90% of mined boron-containing ore. The largest global borax deposits known, many still untapped, are in Central and Western Turkey, including the provinces of Eskişehir, Kütahya and Balıkesir. Global proven boron mineral mining reserves exceed one billion metric tonnes, against a yearly production of about four million tonnes. Turkey and the United States are the largest producers of boron products. Turkey produces about half of the global yearly demand through Eti Mine Works, a Turkish state-owned mining and chemicals company focusing on boron products. It holds a government monopoly on the mining of borate minerals in Turkey, which possesses 72% of the world's known deposits. In 2012, it held a 47% share of production of global borate minerals, ahead of its main competitor, Rio Tinto Group. Almost a quarter (23%) of global boron production comes from the Rio Tinto Borax Mine (also known as the U.S. Borax Boron Mine) near Boron, California. Market trend The average cost of crystalline elemental boron is US$5/g. Elemental boron is chiefly used in making boron fibers, where it is deposited by chemical vapor deposition on a tungsten core (see below). Boron fibers are used in lightweight composite applications, such as high-strength tapes. This use is a very small fraction of total boron use. Boron is introduced into semiconductors as boron compounds, by ion implantation. Estimated global consumption of boron (almost entirely as boron compounds) was about 4 million tonnes of B2O3 in 2012. As compounds such as borax and kernite, its cost was US$377/tonne in 2019. Increasing demand for boric acid has led a number of producers to invest in additional capacity. Turkey's state-owned Eti Mine Works opened a new boric acid plant with a production capacity of 100,000 tonnes per year at Emet in 2003. Rio Tinto Group increased the capacity of its boron plant from 260,000 tonnes per year in 2003 to 310,000 tonnes per year by May 2005, with plans to grow this to 366,000 tonnes per year in 2006. Chinese boron producers have been unable to meet rapidly growing demand for high-quality borates. This has led to imports of sodium tetraborate (borax) growing a hundredfold between 2000 and 2005 and to boric acid imports increasing by 28% per year over the same period. The rise in global demand has been driven by high growth rates in glass fiber, fiberglass and borosilicate glassware production. A rapid increase in the manufacture of reinforcement-grade boron-containing fiberglass in Asia has offset the development of boron-free reinforcement-grade fiberglass in Europe and the US. The recent rises in energy prices may lead to greater use of insulation-grade fiberglass, with consequent growth in boron consumption. Roskill Consulting Group forecasts that world demand for boron will grow by 3.4% per year to reach 21 million tonnes by 2010. The highest growth in demand is expected to be in Asia, where demand could rise by an average of 5.7% per year. Applications Nearly all boron ore extracted from the Earth is refined into boric acid and sodium tetraborate pentahydrate. In the United States, 70% of the boron is used for the production of glass and ceramics. 
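For scale, the reserve and production figures quoted earlier in this section imply a static reserves-to-production ratio of roughly 250 years, a crude measure that ignores demand growth and reserve revisions:

reserves_tonnes = 1.0e9    # "exceed one billion metric tonnes"
production_tonnes = 4.0e6  # "about four million tonnes" per year
print(f"~{reserves_tonnes / production_tonnes:.0f} years")  # ~250 years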
The major global industrial-scale use of boron compounds (about 46% of end-use) is in the production of glass fiber for boron-containing insulating and structural fiberglasses, especially in Asia. Boron is added to the glass as borax pentahydrate or boron oxide, to influence the strength or fluxing qualities of the glass fibers. Another 10% of global boron production goes into borosilicate glass as used in high-strength glassware. About 15% of global boron is used in boron ceramics, including the super-hard materials discussed below. Agriculture consumes 11% of global boron production, and bleaches and detergents about 6%. Boronated fiberglass Fiberglass is a fiber-reinforced polymer whose glass fibers sometimes contain borosilicate, borax, or boron oxide, added to increase the strength of the glass. The highly boronated glass E-glass (named for "electrical" use) is an alumino-borosilicate glass. Another common high-boron glass, C-glass, also has a high boron oxide content and is used for glass staple fibers and insulation. D-glass is a borosilicate glass named for its low dielectric constant. Because of the ubiquitous use of fiberglass in construction and insulation, boron-containing fiberglasses consume over half the global production of boron and are the single largest commercial boron market. Borosilicate glass Borosilicate glass, which is typically 12–15% B2O3, 80% SiO2, and 2% Al2O3, has a low coefficient of thermal expansion, giving it a good resistance to thermal shock. Schott AG's "Duran" and Corning's trademarked Pyrex are two major brand names for this glass, used both in laboratory glassware and in consumer cookware and bakeware, chiefly for this resistance. Elemental boron fiber Boron fibers (boron filaments) are high-strength, lightweight materials that are used chiefly for advanced aerospace structures as a component of composite materials, as well as in limited-production consumer and sporting goods such as golf clubs and fishing rods. The fibers can be produced by chemical vapor deposition of boron on a tungsten filament. Boron fibers and sub-millimeter-sized crystalline boron springs are also produced by laser-assisted chemical vapor deposition. Translation of the focused laser beam allows the production of even complex helical structures. Such structures show good mechanical properties (elastic modulus 450 GPa, fracture strain 3.7%, fracture stress 17 GPa) and can be applied as reinforcement of ceramics or in micromechanical systems. Boron carbide ceramic Boron carbide's ability to absorb neutrons without forming long-lived radionuclides (especially when doped with extra boron-10) makes the material attractive as an absorbent for the neutron radiation arising in nuclear power plants. Nuclear applications of boron carbide include shielding, control rods and shut-down pellets. Within control rods, boron carbide is often powdered to increase its surface area. High-hardness and abrasive compounds Boron carbide and cubic boron nitride powders are widely used as abrasives. Boron nitride is a material isoelectronic with carbon. Similar to carbon, it has both hexagonal (soft, graphite-like h-BN) and cubic (hard, diamond-like c-BN) forms. h-BN is used as a high-temperature component and lubricant. c-BN, also known under the commercial name Borazon, is a superior abrasive. Its hardness is only slightly lower than that of diamond, but its chemical stability is superior. Heterodiamond (also called BCN) is another diamond-like boron compound. 
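Incidentally, the borosilicate composition quoted above translates into a modest elemental boron content, since B2O3 is only about 31% boron by mass. A quick Python check using standard molar masses:

M_B, M_O = 10.811, 15.999
frac_b = 2 * M_B / (2 * M_B + 3 * M_O)  # boron mass fraction of B2O3, ~0.31
for pct in (12, 15):
    # the 12-15% B2O3 range quoted for borosilicate glass
    print(f"{pct}% B2O3 -> ~{pct * frac_b:.1f}% elemental B by mass")
# 12% B2O3 -> ~3.7% B; 15% B2O3 -> ~4.7% B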
Metallurgy Boron is added to boron steels at the level of a few parts per million to increase hardenability. Higher percentages are added to steels used in the nuclear industry because of boron's neutron absorption ability. Boron can also increase the surface hardness of steels and alloys through boriding. Additionally, metal borides are used for coating tools through chemical vapor deposition or physical vapor deposition. Implantation of boron ions into metals and alloys, through ion implantation or ion beam deposition, results in a spectacular increase in surface resistance and microhardness. Laser alloying has also been successfully used for the same purpose. These borides are an alternative to diamond-coated tools, and their (treated) surfaces have properties similar to those of the bulk boride. For example, rhenium diboride can be produced at ambient pressures, but it is rather expensive because of the rhenium. The hardness of ReB2 exhibits considerable anisotropy because of its hexagonal layered structure. Its value is comparable to that of tungsten carbide, silicon carbide, titanium diboride or zirconium diboride. Similarly, AlMgB14 + TiB2 composites possess high hardness and wear resistance and are used in either bulk form or as coatings for components exposed to high temperatures and wear loads. Detergent formulations and bleaching agents Borax is used in various household laundry and cleaning products. It is also present in some tooth-bleaching formulas. Sodium perborate serves as a source of active oxygen in many detergents, laundry detergents, cleaning products, and laundry bleaches. However, despite its name, "Borateem" laundry bleach no longer contains any boron compounds, using sodium percarbonate instead as a bleaching agent. Insecticides and antifungals Zinc borates and boric acid, popularized as fire retardants, are widely used as wood preservatives and insecticides. Boric acid is also used as a domestic insecticide. Semiconductors Boron is a useful dopant for such semiconductors as silicon, germanium, and silicon carbide. Having one fewer valence electron than the host atom, it creates a hole, resulting in p-type conductivity. The traditional method of introducing boron into semiconductors is atomic diffusion at high temperatures. This process uses either solid (B2O3), liquid (BBr3), or gaseous boron sources (B2H6 or BF3). However, after the 1970s, it was mostly replaced by ion implantation, which relies mostly on BF3 as a boron source. Boron trichloride gas is also an important chemical in the semiconductor industry, not for doping, however, but rather for the plasma etching of metals and their oxides. Triethylborane is also injected into vapor deposition reactors as a boron source. Examples include the plasma deposition of boron-containing hard carbon films and silicon nitride–boron nitride films, and the doping of diamond films with boron. Magnets Boron is a component of neodymium magnets (Nd2Fe14B), which are among the strongest types of permanent magnets. These magnets are found in a variety of electromechanical and electronic devices, such as magnetic resonance imaging (MRI) medical imaging systems and compact, relatively small motors and actuators. As examples, computer HDDs (hard disk drives) and CD (compact disc) and DVD (digital versatile disc) players rely on neodymium magnet motors to deliver intense rotary power in a remarkably compact package. In mobile phones, 'Neo' magnets provide the magnetic field that allows tiny speakers to deliver appreciable audio power. 
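As a minimal sketch of what the doping described above does numerically: assuming full ionization of the boron acceptors and a typical textbook hole mobility of about 450 cm2/(V·s) for silicon at this doping level (both assumptions, not figures from the text), the resistivity follows from ρ = 1/(q·p·μp):

Q = 1.602e-19  # C, elementary charge
MU_P = 450.0   # cm^2/(V*s), assumed hole mobility in silicon
N_A = 1e16     # per cm^3, assumed boron acceptor density (so p ~ N_A)
print(f"resistivity ~ {1.0 / (Q * N_A * MU_P):.1f} ohm*cm")  # ~1.4 ohm*cm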
Shielding and neutron absorber in nuclear reactors Boron shielding is used as a control for nuclear reactors, taking advantage of boron's high cross-section for neutron capture. In pressurized water reactors, a variable concentration of boric acid in the cooling water is used as a neutron poison to compensate for the variable reactivity of the fuel. When new fuel rods are inserted, the concentration of boric acid is maximal; it is reduced over the fuel's lifetime. Other nonmedical uses Because of its distinctive green flame, amorphous boron is used in pyrotechnic flares. Some anti-corrosion systems contain borax. Sodium borates are used as a flux for soldering silver and gold, and, with ammonium chloride, for welding ferrous metals. They are also fire-retarding additives to plastics and rubber articles. Boric acid (also known as orthoboric acid), H3BO3, is used in the production of textile fiberglass and flat-panel displays and in many PVAc- and PVOH-based adhesives. Triethylborane is the substance that ignites the JP-7 fuel of the Pratt & Whitney J58 turbojet/ramjet engines powering the Lockheed SR-71 Blackbird. It was also used to ignite the F-1 engines of the Saturn V rocket utilized by NASA's Apollo and Skylab programs from 1967 until 1973. Today SpaceX uses it to ignite the engines of its Falcon 9 rocket. Triethylborane is suitable for this because of its pyrophoric properties, especially the fact that it burns at a very high temperature. Triethylborane is also an industrial initiator in radical reactions, where it is effective even at low temperatures. Borates are used as environmentally benign wood preservatives. Pharmaceutical and biological applications Boron plays a role in pharmaceutical and biological applications, as it is found in various antibiotics produced by bacteria, such as the boromycins, aplasmomycins, borophycins, and tartrolons. These antibiotics have shown inhibitory effects on the growth of certain bacteria, fungi, and protozoa. Boron is also being studied for its potential medicinal applications, including its incorporation into biologically active molecules for therapies such as boron neutron capture therapy for brain tumors. Some boron-containing biomolecules may act as signaling molecules interacting with cell surfaces, suggesting a role in cellular communication. Boric acid has antiseptic, antifungal, and antiviral properties and, for these reasons, is applied as a water clarifier in swimming pool water treatment. Mild solutions of boric acid have been used as eye antiseptics. Boron appears as an active element in the organic pharmaceutical bortezomib (marketed as Velcade and Cytomib), a member of a new class of drugs called proteasome inhibitors, used for treating myeloma and one form of lymphoma (it is currently in experimental trials against other types of lymphoma). The boron atom in bortezomib binds the catalytic site of the 26S proteasome with high affinity and specificity. A number of potential boronated pharmaceuticals using boron-10 have been prepared for use in boron neutron capture therapy (BNCT). Some boron compounds show promise in treating arthritis, though none have as yet been generally approved for the purpose. Tavaborole (marketed as Kerydin) is an aminoacyl-tRNA synthetase inhibitor that is used to treat toenail fungus. It gained FDA approval in July 2014. Dioxaborolane chemistry enables radioactive fluoride (18F) labeling of antibodies or red blood cells, which allows for positron emission tomography (PET) imaging of cancer and hemorrhages, respectively. 
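Returning briefly to boron's role as a neutron absorber described at the start of this section, a back-of-envelope attenuation estimate shows why thin boron layers suffice. This sketch assumes a thermal (2200 m/s) absorption cross-section for natural boron of about 767 barns and a density of 2.34 g/cm3, both standard tabulated values used here as assumptions:

import math

N_AVOGADRO = 6.022e23
M_B = 10.811                 # g/mol
SIGMA_A = 767e-24            # cm^2 per atom, assumed thermal absorption cross-section
n = 2.34 * N_AVOGADRO / M_B  # atom density, ~1.3e23 per cm^3
sigma_macro = n * SIGMA_A    # macroscopic cross-section, ~100 per cm
for t_cm in (0.01, 0.1):
    frac = math.exp(-sigma_macro * t_cm)
    print(f"{t_cm * 10:.1f} mm transmits ~{frac:.1e} of a thermal beam")

Roughly, 1 mm of natural boron attenuates a thermal-neutron beam by more than four orders of magnitude.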
In a related imaging approach, a Human-Derived, Genetic, Positron-emitting and Fluorescent (HD-GPF) reporter system uses a non-immunogenic human protein, PSMA, together with a small molecule that is both positron-emitting (via boron-bound 18F) and fluorescent, for dual-modality PET and fluorescence imaging of genome-modified cells, e.g. cancer, CRISPR/Cas9-edited, or CAR T-cells, in an entire mouse. The dual-modality PSMA-targeting small molecule has been tested in humans, where it located primary and metastatic prostate cancer, enabled fluorescence-guided removal of cancer, and detected single cancer cells in tissue margins. Research MgB2 Magnesium diboride (MgB2) is a superconductor with a transition temperature of 39 K. MgB2 wires are produced with the powder-in-tube process and applied in superconducting magnets. A project at CERN to make MgB2 cables has resulted in superconducting test cables able to carry 20,000 amperes for extremely high-current distribution applications, such as the contemplated high-luminosity version of the Large Hadron Collider. Commercial isotope enrichment Because of its high neutron cross-section, boron-10 is often used to control fission in nuclear reactors as a neutron-capturing substance. Several industrial-scale enrichment processes have been developed; however, only the fractionated vacuum distillation of the dimethyl ether adduct of boron trifluoride (DME-BF3) and column chromatography of borates are being used. Radiation-hardened semiconductors Cosmic radiation will produce secondary neutrons if it hits spacecraft structures. Those neutrons will be captured in 10B, if it is present in the spacecraft's semiconductors, producing a gamma ray, an alpha particle, and a lithium ion. Those resultant decay products may then irradiate nearby semiconductor "chip" structures, causing data loss (bit flipping, or single event upset). In radiation-hardened semiconductor designs, one countermeasure is to use depleted boron, which is greatly enriched in 11B and contains almost no 10B. This is useful because 11B is largely immune to radiation damage. Depleted boron is a byproduct of the nuclear industry (see above). Proton-boron fusion 11B is also a candidate as a fuel for aneutronic fusion. When struck by a proton with an energy of about 500 keV, it produces three alpha particles and 8.7 MeV of energy. Most other fusion reactions involving hydrogen and helium produce penetrating neutron radiation, which weakens reactor structures and induces long-term radioactivity, thereby endangering operating personnel. The alpha particles from 11B fusion can be turned directly into electric power, and all radiation stops as soon as the reactor is turned off. Enriched boron (boron-10) The 10B isotope is useful for capturing thermal neutrons (see neutron cross section). The nuclear industry enriches natural boron to nearly pure 10B. The less-valuable by-product, depleted boron, is nearly pure 11B. Enriched boron (10B) is used both in radiation shielding and as the primary nuclide in neutron capture therapy of cancer. In the latter ("boron neutron capture therapy" or BNCT), a compound containing 10B is incorporated into a pharmaceutical which is selectively taken up by a malignant tumor and the tissues near it. The patient is then treated with a beam of low-energy neutrons at a relatively low neutron radiation dose. 
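The energetics of both reactions mentioned above, the p + 11B fusion reaction and the 10B neutron capture used in BNCT, can be checked from atomic masses. A minimal sketch; the masses (in u) are standard mass-table values quoted here from memory, so treat them as assumptions:

U_TO_MEV = 931.494  # energy equivalent of 1 u
m = {"n": 1.008665, "H1": 1.007825, "He4": 4.002602,
     "Li7": 7.016003, "B10": 10.012937, "B11": 11.009305}
q_fusion = (m["H1"] + m["B11"] - 3 * m["He4"]) * U_TO_MEV
q_capture = (m["B10"] + m["n"] - m["Li7"] - m["He4"]) * U_TO_MEV
print(f"p + 11B -> 3 alpha:     Q ~ {q_fusion:.2f} MeV")   # ~8.7 MeV, as quoted above
print(f"10B + n -> 7Li + alpha: Q ~ {q_capture:.2f} MeV")  # ~2.8 MeV, shared by the ion products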
The neutrons, however, trigger energetic, short-range secondary radiation (alpha particles and lithium-7 heavy ions), the products of the boron-neutron nuclear reaction; this ion radiation bombards the tumor, especially from inside the tumor cells. In nuclear reactors, 10B is used for reactivity control and in emergency shutdown systems. It can serve either function in the form of borosilicate control rods or as boric acid. In pressurized water reactors, boric acid containing 10B is added to the reactor coolant after the plant is shut down for refueling. When the plant is started up again, the boric acid is slowly filtered out over many months as fissile material is used up and the fuel becomes less reactive. Nuclear fusion Boron has been investigated for possible applications in nuclear fusion research. It is commonly used for conditioning the walls in fusion reactors by depositing boron coatings on plasma-facing components and walls to reduce the release of hydrogen and impurities from the surfaces. It is also used to dissipate energy in the fusion plasma boundary to suppress excessive energy bursts and heat fluxes to the walls. Neutron capture therapy In neutron capture therapy (BNCT) for malignant brain tumors, boron is being investigated as a means of selectively targeting and destroying tumor cells. The goal is to deliver higher concentrations of the non-radioactive boron isotope (10B) to the tumor cells than to the surrounding normal tissues. When these 10B-containing cells are irradiated with low-energy thermal neutrons, they undergo nuclear capture reactions, releasing high linear energy transfer (LET) particles such as α-particles and lithium-7 nuclei within a limited path length. These high-LET particles can destroy the adjacent tumor cells without causing significant harm to nearby normal cells. Boron acts as a selective agent due to its ability to absorb thermal neutrons and produce short-range physical effects primarily affecting the targeted tissue region. This binary approach allows for precise tumor cell killing while sparing healthy tissues. The effective delivery of boron involves administering boron compounds or carriers capable of accumulating selectively in tumor cells compared to surrounding tissue. Sodium borocaptate (BSH) and boronophenylalanine (BPA) have been used clinically, but research continues to identify more optimal carriers. Accelerator-based neutron sources have also been developed recently as an alternative to reactor-based sources, leading to improved efficiency and enhanced clinical outcomes in BNCT. By employing the properties of boron isotopes and targeted irradiation techniques, BNCT offers a potential approach to treating malignant brain tumors by selectively killing cancer cells while minimizing the damage caused by traditional radiation therapies. BNCT has shown promising results in clinical trials for various other malignancies, including glioblastoma, head and neck cancer, cutaneous melanoma, hepatocellular carcinoma, lung cancer, and extramammary Paget's disease. The treatment involves a nuclear reaction between the nonradioactive boron-10 isotope and low-energy thermal or somewhat higher-energy epithermal neutrons to generate α particles and lithium nuclei that selectively destroy DNA in tumor cells. The primary challenge lies in developing efficient boron agents with higher boron content and specific targeting properties tailored for BNCT.
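Two figures quoted above invite quick numerical checks: the roughly 8.7 MeV released in proton-boron fusion, and the strength of 10B as a thermal-neutron absorber that underlies both reactor control and BNCT. The Python sketch below uses handbook values (atomic masses, the ~3840 barn capture cross-section of 10B at 2200 m/s, and the density of solid boron); none of these numbers appear in the text and all are assumptions of the example.

```python
import math

# --- Part 1: Q-value of p + 11B -> 3 alpha -------------------------------
# Atomic masses in unified atomic mass units (handbook values, supplied here
# by hand).
M_H1, M_B11, M_HE4 = 1.0078250, 11.0093054, 4.0026032
U_TO_MEV = 931.494  # energy equivalent of 1 u in MeV

q = (M_H1 + M_B11 - 3 * M_HE4) * U_TO_MEV
print(f"Q(p + 11B -> 3 alpha) = {q:.2f} MeV")  # ~8.7 MeV, as quoted above

# --- Part 2: thermal-neutron attenuation in natural boron ----------------
SIGMA_B10 = 3840e-24   # 10B capture cross-section at 2200 m/s, cm^2 (assumed)
RHO, MOLAR, FRAC_B10 = 2.34, 10.81, 0.199  # g/cm^3, g/mol, 10B abundance
AVOGADRO = 6.022e23

n_b10 = RHO * AVOGADRO / MOLAR * FRAC_B10  # 10B atoms per cm^3
sigma_macro = n_b10 * SIGMA_B10            # macroscopic cross-section, 1/cm

# Fraction of a thermal-neutron beam transmitted through thickness x: exp(-Sigma*x)
for x_mm in (0.1, 0.5, 1.0):
    transmitted = math.exp(-sigma_macro * x_mm / 10)  # convert mm to cm
    print(f"{x_mm} mm natural boron transmits {transmitted:.1e} of a thermal beam")
```

The sub-millimetre attenuation lengths this yields are why even thin boron-containing layers make effective thermal-neutron shields.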
Integration of tumor-targeting strategies with BNCT could potentially establish it as a practical personalized treatment option for different types of cancers. Ongoing research explores new boron compounds, optimization strategies, theranostic agents, and radiobiological advances to overcome limitations and cost-effectively improve patient outcomes. Biological role Boron is an essential plant nutrient, required primarily for maintaining the integrity of cell walls. However, soil concentrations greater than 1.0 ppm lead to marginal and tip necrosis in leaves as well as poor overall growth performance. Levels as low as 0.8 ppm produce these same symptoms in plants that are particularly sensitive to boron in the soil. Nearly all plants, even those somewhat tolerant of soil boron, will show at least some symptoms of boron toxicity when soil boron content is greater than 1.8 ppm. When this content exceeds 2.0 ppm, few plants will perform well and some may not survive. Some boron-containing antibiotics exist in nature. The first one found was boromycin, isolated from Streptomyces in the 1960s. Others are tartrolons, a group of antibiotics discovered in the 1990s in culture broth of the myxobacterium Sorangium cellulosum. In 2013, chemist and synthetic biologist Steve Benner suggested that the conditions on Mars three billion years ago were much more favorable to the stability of RNA and to the formation of the oxygen-containing boron and molybdenum catalysts found in life. According to Benner's theory, primitive life, which is widely believed to have originated from RNA, first formed on Mars before migrating to Earth. In human health It is thought that boron plays several essential roles in animals, including humans, but the exact physiological role is poorly understood. Boron deficiency has only been clearly established in livestock; in humans, boron deficiency may affect bone mineral density, though additional research on its effects on bone health is needed. Boron is not classified as an essential human nutrient because research has not established a clear biological function for it. The U.S. Food and Nutrition Board (FNB) found the existing data insufficient to derive a Recommended Dietary Allowance (RDA), Adequate Intake (AI), or Estimated Average Requirement (EAR) for boron, and the U.S. Food and Drug Administration (FDA) has not established a daily value for boron for food and dietary supplement labeling purposes. While low boron status can be detrimental to health, probably increasing the risk of osteoporosis, poor immune function, and cognitive decline, high boron levels are associated with cell damage and toxicity. Still, studies suggest that boron may exert beneficial effects on reproduction and development, calcium metabolism, bone formation, brain function, insulin and energy substrate metabolism, immunity, and steroid hormone (including estrogen) and vitamin D function, among other functions. A small human trial published in 1987 reported on postmenopausal women first made boron deficient and then repleted with 3 mg/day. Boron supplementation markedly reduced urinary calcium excretion and elevated the serum concentrations of 17 beta-estradiol and testosterone. Environmental boron appears to be inversely correlated with arthritis.
The exact mechanism by which boron exerts its physiological effects is not fully understood, but may involve interactions with adenosine monophosphate (AMP) and S-adenosyl methionine (SAM-e), two compounds involved in important cellular functions. Furthermore, boron appears to inhibit cyclic ADP-ribose, thereby affecting the release of calcium ions from the endoplasmic reticulum and affecting various biological processes. Some studies suggest that boron may reduce levels of inflammatory biomarkers. Congenital endothelial dystrophy type 2, a rare form of corneal dystrophy, is linked to mutations in the SLC4A11 gene, which encodes a transporter reportedly regulating the intracellular concentration of boron. In humans, boron is usually consumed with food that contains boron, such as fruits, leafy vegetables, and nuts. Foods that are particularly rich in boron include avocados, dried fruits such as raisins, peanuts, pecans, prune juice, grape juice, wine, and chocolate powder. According to 2-day food records from the respondents to the Third National Health and Nutrition Examination Survey (NHANES III), adult dietary intake was recorded at 0.9 to 1.4 mg/day. Health issues and toxicity Elemental boron, boron oxide, boric acid, borates, and many organoboron compounds are relatively nontoxic to humans and animals (with toxicity similar to that of table salt). The LD50 (dose at which there is 50% mortality) for animals is about 6 g per kg of body weight. Substances with an LD50 above 2 g/kg are considered nontoxic. An intake of 4 g/day of boric acid was reported without incident, but larger intakes are considered toxic over more than a few doses. Intakes of more than 0.5 grams per day for 50 days cause minor digestive and other problems suggestive of toxicity. Boric acid is more toxic to insects than to mammals, and is routinely used as an insecticide. However, it has been used in neutron capture therapy alongside other boron compounds, such as sodium borocaptate and boronophenylalanine, with reported low toxicity levels. The boranes (boron-hydrogen compounds) and similar gaseous compounds are quite poisonous. Boron itself is thus not an element that is intrinsically poisonous; rather, the toxicity of these compounds depends on structure (for another example of this phenomenon, see phosphine). The boranes are also highly flammable and require special care when handling; some combinations of boranes and other compounds are highly explosive. Sodium borohydride presents a fire hazard owing to its reducing nature and the liberation of hydrogen on contact with acid. Boron halides are corrosive. Boron is necessary for plant growth, but an excess of boron is toxic to plants, occurring particularly in acidic soil. It presents as a yellowing from the tip inwards of the oldest leaves and as black spots in barley leaves, but it can be confused with other stresses such as magnesium deficiency in other plants. See also Allotropes of boron Boron deficiency Boron oxide Boron nitride Boron neutron capture therapy Boronic acid Hydroboration-oxidation reaction Suzuki coupling Notes References External links Boron at The Periodic Table of Videos (University of Nottingham) J. B. Calvert: Boron, 2004, private website (archived version) Chemical elements Metalloids Neutron poisons Pyrotechnic fuels Rocket fuels Nuclear fusion fuels Dietary minerals Reducing agents Articles containing video clips Chemical elements with rhombohedral structure
Boron
[ "Physics", "Chemistry" ]
10,163
[ "Chemical elements", "Redox", "Reducing agents", "Atoms", "Matter" ]
3,756
https://en.wikipedia.org/wiki/Bromine
Bromine is a chemical element; it has symbol Br and atomic number 35. It is a volatile red-brown liquid at room temperature that evaporates readily to form a similarly coloured vapour. Its properties are intermediate between those of chlorine and iodine. Isolated independently by two chemists, Carl Jacob Löwig (in 1825) and Antoine Jérôme Balard (in 1826), its name was derived from the Ancient Greek bromos ("stench"), referring to its sharp and pungent smell. Elemental bromine is very reactive and thus does not occur as a free element in nature. Instead, it can be isolated from colourless soluble crystalline mineral halide salts analogous to table salt, a property it shares with the other halogens. While it is rather rare in the Earth's crust, the high solubility of the bromide ion (Br−) has caused its accumulation in the oceans. Commercially the element is easily extracted from brine evaporation ponds, mostly in the United States and Israel. The mass of bromine in the oceans is about one three-hundredth that of chlorine. At standard conditions for temperature and pressure it is a liquid; the only other element that is liquid under these conditions is mercury. At high temperatures, organobromine compounds readily dissociate to yield free bromine atoms, a process that stops free-radical chemical chain reactions. This effect makes organobromine compounds useful as fire retardants, and more than half the bromine produced worldwide each year is put to this purpose. The same property causes ultraviolet sunlight to dissociate volatile organobromine compounds in the atmosphere to yield free bromine atoms, causing ozone depletion. As a result, many organobromine compounds, such as the pesticide methyl bromide, are no longer used. Bromine compounds are still used in well drilling fluids, in photographic film, and as an intermediate in the manufacture of organic chemicals. Large amounts of bromide salts are toxic from the action of soluble bromide ions, causing bromism. However, bromine is beneficial for human eosinophils, and is an essential trace element for collagen development in all animals. Hundreds of known organobromine compounds are generated by terrestrial and marine plants and animals, and some serve important biological roles. As a pharmaceutical, the simple bromide ion (Br−) has inhibitory effects on the central nervous system, and bromide salts were once a major medical sedative, before their replacement by shorter-acting drugs. They retain niche uses as antiepileptics. History Bromine was discovered independently by two chemists, Carl Jacob Löwig and Antoine Balard, in 1825 and 1826, respectively. Löwig isolated bromine from a mineral water spring in his hometown Bad Kreuznach in 1825. Löwig used a solution of the mineral salt saturated with chlorine and extracted the bromine with diethyl ether. After evaporation of the ether, a brown liquid remained. With this liquid as a sample of his work he applied for a position in the laboratory of Leopold Gmelin in Heidelberg. The publication of the results was delayed and Balard published his results first. Balard found bromine chemicals in the ash of seaweed from the salt marshes of Montpellier. The seaweed was used to produce iodine, but also contained bromine. Balard distilled the bromine from a solution of seaweed ash saturated with chlorine.
The properties of the resulting substance were intermediate between those of chlorine and iodine; thus he tried to prove that the substance was iodine monochloride (ICl), but after failing to do so he was sure that he had found a new element and named it muride, derived from the Latin word muria ("brine"). After the French chemists Louis Nicolas Vauquelin, Louis Jacques Thénard, and Joseph-Louis Gay-Lussac approved the experiments of the young pharmacist Balard, the results were presented at a lecture of the Académie des Sciences and published in Annales de Chimie et Physique. In his publication, Balard stated that he changed the name from muride to brôme on the proposal of M. Anglada. The name brôme (bromine) derives from the Greek bromos ("stench"). Other sources claim that the French chemist and physicist Joseph-Louis Gay-Lussac suggested the name brôme for the characteristic smell of the vapors. Bromine was not produced in large quantities until 1858, when the discovery of salt deposits in Stassfurt enabled its production as a by-product of potash. Apart from some minor medical applications, the first commercial use was the daguerreotype. In 1840, bromine was discovered to have some advantages over the previously used iodine vapor to create the light-sensitive silver halide layer in daguerreotypy. By 1864, a 25% solution of liquid bromine in 0.75 molar aqueous potassium bromide was widely used to treat gangrene during the American Civil War, before the publications of Joseph Lister and Louis Pasteur. Potassium bromide and sodium bromide were used as anticonvulsants and sedatives in the late 19th and early 20th centuries, but were gradually superseded by chloral hydrate and then by the barbiturates. In the early years of the First World War, bromine compounds such as xylyl bromide were used as poison gas. Properties Bromine is the third halogen, being a nonmetal in group 17 of the periodic table. Its properties are thus similar to those of fluorine, chlorine, and iodine, and tend to be intermediate between those of chlorine and iodine, the two neighbouring halogens. Bromine has the electron configuration [Ar]3d¹⁰4s²4p⁵, with the seven electrons in the fourth and outermost shell acting as its valence electrons. Like all halogens, it is thus one electron short of a full octet, and is hence a strong oxidising agent, reacting with many elements in order to complete its outer shell. Corresponding to periodic trends, it is intermediate in electronegativity between chlorine and iodine (F: 3.98, Cl: 3.16, Br: 2.96, I: 2.66), and is less reactive than chlorine and more reactive than iodine. It is also a weaker oxidising agent than chlorine, but a stronger one than iodine. Conversely, the bromide ion is a weaker reducing agent than iodide, but a stronger one than chloride. These similarities led to chlorine, bromine, and iodine together being classified as one of the original triads of Johann Wolfgang Döbereiner, whose work foreshadowed the periodic law for chemical elements. It is intermediate in atomic radius between chlorine and iodine, and this leads to many of its atomic properties being similarly intermediate in value between chlorine and iodine, such as first ionisation energy, electron affinity, enthalpy of dissociation of the X2 molecule (X = Cl, Br, I), ionic radius, and X–X bond length. The volatility of bromine accentuates its very penetrating, choking, and unpleasant odour.
All four stable halogens experience intermolecular van der Waals forces of attraction, and their strength increases together with the number of electrons among all homonuclear diatomic halogen molecules. Thus, the melting and boiling points of bromine are intermediate between those of chlorine and iodine. As a result of the increasing molecular weight of the halogens down the group, the density and the heats of fusion and vaporisation of bromine are again intermediate between those of chlorine and iodine, although all their heats of vaporisation are fairly low (leading to high volatility) thanks to their diatomic molecular structure. The halogens darken in colour as the group is descended: fluorine is a very pale yellow gas, chlorine is greenish-yellow, and bromine is a reddish-brown volatile liquid that freezes at −7.2 °C and boils at 58.8 °C. (Iodine is a shiny black solid.) This trend occurs because the wavelengths of visible light absorbed by the halogens increase down the group. Specifically, the colour of a halogen, such as bromine, results from the electron transition between the highest occupied antibonding π molecular orbital and the lowest vacant antibonding σ molecular orbital. The colour fades at low temperatures, so that solid bromine at −195 °C is pale yellow. Liquid bromine is infrared-transparent. Like solid chlorine and iodine, solid bromine crystallises in the orthorhombic crystal system, in a layered arrangement of Br2 molecules. The Br–Br distance is 227 pm (close to the gaseous Br–Br distance of 228 pm) and the Br···Br distance between molecules is 331 pm within a layer and 399 pm between layers (compare the van der Waals radius of bromine, 195 pm). This structure means that bromine is a very poor conductor of electricity, with a conductivity of around 5 × 10⁻¹³ Ω⁻¹ cm⁻¹ just below the melting point, although this is higher than the essentially undetectable conductivity of chlorine. At a pressure of 55 GPa (roughly 540,000 times atmospheric pressure) bromine undergoes an insulator-to-metal transition. At 75 GPa it changes to a face-centered orthorhombic structure. At 100 GPa it changes to a body-centered orthorhombic monatomic form. Isotopes Bromine has two stable isotopes, 79Br and 81Br. These are its only two natural isotopes, with 79Br making up 51% of natural bromine and 81Br making up the remaining 49%. Both have nuclear spin 3/2− and thus may be used for nuclear magnetic resonance, although 81Br is more favourable. The roughly 1:1 distribution of the two isotopes in nature is helpful in the identification of bromine-containing compounds using mass spectrometry. Other bromine isotopes are all radioactive, with half-lives too short to occur in nature. Of these, the most important are 80Br (t1/2 = 17.7 min), 80mBr (t1/2 = 4.421 h), and 82Br (t1/2 = 35.28 h), which may be produced from the neutron activation of natural bromine. The most stable bromine radioisotope is 77Br (t1/2 = 57.04 h). The primary decay mode of isotopes lighter than 79Br is electron capture to isotopes of selenium; that of isotopes heavier than 81Br is beta decay to isotopes of krypton; and 80Br may decay by either mode, to stable 80Se or 80Kr. Br isotopes from 87Br and heavier undergo beta decay with neutron emission and are of practical importance because they are fission products. Chemistry and compounds Bromine is intermediate in reactivity between chlorine and iodine, and is one of the most reactive elements.
Bond energies to bromine tend to be lower than those to chlorine but higher than those to iodine, and bromine is a weaker oxidising agent than chlorine but a stronger one than iodine. This can be seen from the standard electrode potentials of the X2/X− couples (F2, +2.866 V; Cl2, +1.395 V; Br2, +1.087 V; I2, +0.615 V; At2, approximately +0.3 V). Bromination often leads to higher oxidation states than iodination but lower or equal oxidation states to chlorination. Bromine tends to react with compounds including M–M, M–H, or M–C bonds to form M–Br bonds. Hydrogen bromide The simplest compound of bromine is hydrogen bromide, HBr. It is mainly used in the production of inorganic bromides and alkyl bromides, and as a catalyst for many reactions in organic chemistry. Industrially, it is mainly produced by the reaction of hydrogen gas with bromine gas at 200–400 °C with a platinum catalyst. However, reduction of bromine with red phosphorus is a more practical way to produce hydrogen bromide in the laboratory: 2 P + 6 H2O + 3 Br2 → 6 HBr + 2 H3PO3 and H3PO3 + H2O + Br2 → 2 HBr + H3PO4. At room temperature, hydrogen bromide is a colourless gas, like all the hydrogen halides apart from hydrogen fluoride, since hydrogen cannot form strong hydrogen bonds to the large and only mildly electronegative bromine atom; however, weak hydrogen bonding is present in solid crystalline hydrogen bromide at low temperatures, similar to the hydrogen fluoride structure, before disorder begins to prevail as the temperature is raised. Aqueous hydrogen bromide is known as hydrobromic acid, which is a strong acid (pKa = −9) because the hydrogen bonds to bromine are too weak to inhibit dissociation. The HBr/H2O system also involves many hydrates HBr·nH2O for n = 1, 2, 3, 4, and 6, which are essentially salts of bromine anions and hydronium cations. Hydrobromic acid forms an azeotrope with boiling point 124.3 °C at 47.63 g HBr per 100 g solution; thus hydrobromic acid cannot be concentrated beyond this point by distillation. Unlike hydrogen fluoride, anhydrous liquid hydrogen bromide is difficult to work with as a solvent, because its boiling point is low, it has a small liquid range, its dielectric constant is low and it does not dissociate appreciably into H2Br+ and HBr2− ions – the latter, in any case, are much less stable than the bifluoride ions (HF2−) due to the very weak hydrogen bonding between hydrogen and bromine, though its salts with very large and weakly polarising cations such as Cs+ and NR4+ (R = Me, Et, Bu) may still be isolated. Anhydrous hydrogen bromide is a poor solvent, only able to dissolve small molecular compounds such as nitrosyl chloride and phenol, or salts with very low lattice energies such as tetraalkylammonium halides. Other binary bromides Nearly all elements in the periodic table form binary bromides. The exceptions are decidedly in the minority and stem in each case from one of three causes: extreme inertness and reluctance to participate in chemical reactions (the noble gases, with the exception of xenon in the very unstable XeBr2); extreme nuclear instability hampering chemical investigation before decay and transmutation (many of the heaviest elements beyond bismuth); and having an electronegativity higher than bromine's (oxygen, nitrogen, fluorine, and chlorine), so that the resultant binary compounds are formally not bromides but rather oxides, nitrides, fluorides, or chlorides of bromine. (Nonetheless, nitrogen tribromide is named as a bromide as it is analogous to the other nitrogen trihalides.)
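Returning briefly to hydrobromic acid: the azeotropic composition quoted above (47.63 g HBr per 100 g of solution) converts to other concentration units with a few lines of arithmetic. The sketch below uses standard molar masses; the unit conversion itself is the point.

```python
# Convert the constant-boiling composition of hydrobromic acid
# (47.63 g HBr per 100 g solution) into mole fraction and molality.
M_HBR = 80.91   # g/mol, standard value
M_H2O = 18.015  # g/mol, standard value

grams_hbr = 47.63
grams_h2o = 100.0 - grams_hbr

mol_hbr = grams_hbr / M_HBR
mol_h2o = grams_h2o / M_H2O

x_hbr = mol_hbr / (mol_hbr + mol_h2o)        # mole fraction
molality = mol_hbr / (grams_h2o / 1000)      # mol HBr per kg water

print(f"mole fraction HBr = {x_hbr:.3f}")    # ~0.168
print(f"molality = {molality:.1f} mol/kg")   # ~11.2 mol/kg
```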
Bromination of metals with Br2 tends to yield lower oxidation states than chlorination with Cl2 when a variety of oxidation states is available. Bromides can be made by reaction of an element or its oxide, hydroxide, or carbonate with hydrobromic acid, and then dehydrated by mildly high temperatures combined with either low pressure or anhydrous hydrogen bromide gas. These methods work best when the bromide product is stable to hydrolysis; otherwise, the possibilities include high-temperature oxidative bromination of the element with bromine or hydrogen bromide, or high-temperature bromination of a metal oxide or other halide by bromine, a volatile metal bromide, carbon tetrabromide, or an organic bromide. For example, niobium(V) oxide reacts with carbon tetrabromide at 370 °C to form niobium(V) bromide. Another method is halogen exchange in the presence of excess "halogenating reagent", for example: FeCl3 + BBr3 (excess) → FeBr3 + BCl3. When a lower bromide is wanted, either a higher halide may be reduced using hydrogen or a metal as a reducing agent, or thermal decomposition or disproportionation may be used, as follows: 3 WBr5 + Al → 3 WBr4 + AlBr3; EuBr3 + ½ H2 → EuBr2 + HBr; 2 TaBr4 → TaBr3 + TaBr5. Most metal bromides with the metal in low oxidation states (+1 to +3) are ionic. Nonmetals tend to form covalent molecular bromides, as do metals in high oxidation states from +3 and above. Both ionic and covalent bromides are known for metals in oxidation state +3 (e.g. scandium bromide is mostly ionic, but aluminium bromide is not). Silver bromide is very insoluble in water and is thus often used as a qualitative test for bromine. Bromine halides The halogens form many binary, diamagnetic interhalogen compounds with stoichiometries XY, XY3, XY5, and XY7 (where X is heavier than Y), and bromine is no exception. Bromine forms a monofluoride and monochloride, as well as a trifluoride and pentafluoride. Some cationic and anionic derivatives are also characterised, such as BrF2+, BrF4+, BrF6+, BrF2−, and BrF4−. Apart from these, some pseudohalides are also known, such as cyanogen bromide (BrCN), bromine thiocyanate (BrSCN), and bromine azide (BrN3). The pale-brown bromine monofluoride (BrF) is unstable at room temperature, disproportionating quickly and irreversibly into bromine, bromine trifluoride, and bromine pentafluoride. It thus cannot be obtained pure. It may be synthesised by the direct reaction of the elements, or by the comproportionation of bromine and bromine trifluoride at high temperatures. Bromine monochloride (BrCl), a red-brown gas, quite readily dissociates reversibly into bromine and chlorine at room temperature and thus also cannot be obtained pure, though it can be made by the reversible direct reaction of its elements in the gas phase or in carbon tetrachloride. Bromine monofluoride in ethanol readily leads to the monobromination of the aromatic compounds PhX (para-bromination occurs for X = Me, Bu, OMe, Br; meta-bromination occurs for the deactivating X = –CO2Et, –CHO, –NO2); this is due to heterolytic fission of the Br–F bond, leading to rapid electrophilic bromination by Br+. At room temperature, bromine trifluoride (BrF3) is a straw-coloured liquid. It may be formed by directly fluorinating bromine at room temperature and is purified through distillation. It reacts violently with water and explodes on contact with flammable materials, but is a less powerful fluorinating reagent than chlorine trifluoride.
It reacts vigorously with boron, carbon, silicon, arsenic, antimony, iodine, and sulfur to give fluorides, and will also convert most metals and many metal compounds to fluorides; as such, it is used to oxidise uranium to uranium hexafluoride in the nuclear power industry. Refractory oxides tend to be only partially fluorinated, but here the derivatives KBrF4 and BrF2SbF6 remain reactive. Bromine trifluoride is a useful nonaqueous ionising solvent, since it readily dissociates to form BrF2+ and BrF4− and thus conducts electricity. Bromine pentafluoride (BrF5) was first synthesised in 1930. It is produced on a large scale by direct reaction of bromine with excess fluorine at temperatures higher than 150 °C, and on a small scale by the fluorination of potassium bromide at 25 °C. It also reacts violently with water and is a very strong fluorinating agent, although chlorine trifluoride is still stronger. Polybromine compounds Although dibromine is a strong oxidising agent with a high first ionisation energy, very strong oxidisers such as peroxydisulfuryl fluoride (S2O6F2) can oxidise it to form the cherry-red Br2+ cation. A few other bromine cations are known, namely the brown Br3+ and dark brown Br5+. The tribromide anion, Br3−, has also been characterised; it is analogous to triiodide. Bromine oxides and oxoacids Bromine oxides are not as well-characterised as chlorine oxides or iodine oxides, as they are all fairly unstable: it was once thought that they could not exist at all. Dibromine monoxide is a dark-brown solid which, while reasonably stable at −60 °C, decomposes at its melting point of −17.5 °C; it is useful in bromination reactions and may be made from the low-temperature decomposition of bromine dioxide in a vacuum. It oxidises iodine to iodine pentoxide and benzene to 1,4-benzoquinone; in alkaline solutions, it gives the hypobromite anion. So-called "bromine dioxide", a pale yellow crystalline solid, may be better formulated as bromine perbromate, BrOBrO3. It is thermally unstable above −40 °C, violently decomposing to its elements at 0 °C. Dibromine trioxide, syn-BrOBrO2, is also known; it is the anhydride of hypobromous acid and bromic acid. It is an orange crystalline solid which decomposes above −40 °C; if heated too rapidly, it explodes around 0 °C. A few other unstable radical oxides are also known, as are some poorly characterised oxides, such as dibromine pentoxide, tribromine octoxide, and bromine trioxide. The four oxoacids, hypobromous acid (HOBr), bromous acid (HOBrO), bromic acid (HOBrO2), and perbromic acid (HOBrO3), are better studied due to their greater stability, though they are only so in aqueous solution. When bromine dissolves in aqueous solution, the following reactions occur: Br2 + H2O ⇌ HOBr + H+ + Br− (K = 7.2 × 10⁻⁹ mol² l⁻²) and Br2 + 2 OH− ⇌ OBr− + H2O + Br− (K = 2 × 10⁸ mol⁻¹ l). Hypobromous acid is unstable to disproportionation. The hypobromite ions thus formed disproportionate readily to give bromide and bromate: 3 BrO− ⇌ 2 Br− + BrO3− (K = 10¹⁵). Bromous acid and bromites are very unstable, although the strontium and barium bromites are known. More important are the bromates, which are prepared on a small scale by oxidation of bromide by aqueous hypochlorite, and are strong oxidising agents. Unlike chlorates, which very slowly disproportionate to chloride and perchlorate, the bromate anion is stable to disproportionation in both acidic and alkaline solutions. Bromic acid is a strong acid.
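The first hydrolysis constant quoted above can be used to estimate how far bromine actually hydrolyses in water. The sketch below solves the cubic equilibrium condition by bisection for an illustrative 0.1 M solution; the starting concentration is an assumption of the example, not a figure from the text.

```python
# Br2 + H2O <=> HOBr + H+ + Br-, with K = 7.2e-9 mol^2 l^-2 (quoted above).
# If x is the common concentration of HOBr, H+ and Br-, then
#   x**3 / (c0 - x) = K.
K = 7.2e-9
c0 = 0.1  # initial Br2 concentration in mol/l (illustrative choice)

lo, hi = 0.0, c0
for _ in range(100):  # bisection on f(x) = x**3 - K*(c0 - x), monotonic on [0, c0]
    mid = (lo + hi) / 2
    if mid**3 - K * (c0 - mid) > 0:
        hi = mid
    else:
        lo = mid

x = (lo + hi) / 2
print(f"[HOBr] = [H+] = [Br-] = {x:.2e} mol/l ({100 * x / c0:.1f}% hydrolysed)")
```

Under these assumptions less than one percent hydrolyses, consistent with bromine water remaining largely molecular Br2.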
Bromides and bromates may comproportionate to bromine as follows: BrO3− + 5 Br− + 6 H+ → 3 Br2 + 3 H2O. There were many failed attempts to obtain perbromates and perbromic acid, leading to some rationalisations as to why they should not exist, until 1968 when the BrO4− anion was first synthesised from the radioactive beta decay of unstable 83SeO42−. Today, perbromates are produced by the oxidation of alkaline bromate solutions by fluorine gas. Excess bromate and fluoride are precipitated as silver bromate and calcium fluoride, and the perbromic acid solution may be purified. The perbromate ion is fairly inert at room temperature but is thermodynamically extremely oxidising, with extremely strong oxidising agents needed to produce it, such as fluorine or xenon difluoride. The Br–O bond in BrO4− is fairly weak, which corresponds to the general reluctance of the 4p elements arsenic, selenium, and bromine to attain their group oxidation state, as they come after the scandide contraction characterised by the poor shielding afforded by the radial-nodeless 3d orbitals. Organobromine compounds Like the other carbon–halogen bonds, the C–Br bond is a common functional group that forms part of core organic chemistry. Formally, compounds with this functional group may be considered organic derivatives of the bromide anion. Due to the difference in electronegativity between bromine (2.96) and carbon (2.55), the carbon atom in a C–Br bond is electron-deficient and thus electrophilic. The reactivity of organobromine compounds resembles, but is intermediate between, the reactivity of organochlorine and organoiodine compounds. For many applications, organobromides represent a compromise of reactivity and cost. Organobromides are typically produced by additive or substitutive bromination of other organic precursors. Bromine itself can be used, but due to its toxicity and volatility, safer brominating reagents are normally used, such as N-bromosuccinimide. The principal reactions for organobromides include dehydrobromination, Grignard reactions, reductive coupling, and nucleophilic substitution. Organobromides are the most common organohalides in nature, even though the concentration of bromide is only 0.3% of that for chloride in sea water, because of the easy oxidation of bromide to the equivalent of Br+, a potent electrophile. The enzyme bromoperoxidase catalyzes this reaction. The oceans are estimated to release 1–2 million tons of bromoform and 56,000 tons of bromomethane annually. An old qualitative test for the presence of the alkene functional group is that alkenes turn brown aqueous bromine solutions colourless, forming a bromohydrin with some of the dibromoalkane also produced. The reaction passes through a short-lived, strongly electrophilic bromonium intermediate. This is an example of a halogen addition reaction. Occurrence and production Bromine is significantly less abundant in the crust than fluorine or chlorine, comprising only 2.5 parts per million of the Earth's crustal rocks, and then only as bromide salts. It is the 46th most abundant element in Earth's crust. It is significantly more abundant in the oceans, resulting from long-term leaching. There, it makes up 65 parts per million, corresponding to a ratio of about one bromine atom for every 660 chlorine atoms. Salt lakes and brine wells may have higher bromine concentrations: for example, the Dead Sea contains 0.4% bromide ions. It is from these sources that bromine extraction is mostly economically feasible. Bromine is the tenth most abundant element in seawater.
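The "one bromine atom for every 660 chlorine atoms" figure above follows directly from the mass concentrations and atomic masses. The chloride level of seawater (about 19,000 ppm by mass) is a typical handbook value assumed here, since it is not stated in the text.

```python
# Atom ratio of Cl to Br in seawater from mass concentrations.
BR_PPM = 65.0      # bromine, ppm by mass (from the text)
CL_PPM = 19000.0   # chloride, ppm by mass (typical handbook value, assumed)
M_BR = 79.904      # g/mol
M_CL = 35.453      # g/mol

mol_br = BR_PPM / M_BR
mol_cl = CL_PPM / M_CL
print(f"Cl atoms per Br atom: {mol_cl / mol_br:.0f}")  # ~660
```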
The main sources of bromine production are Israel and Jordan. The element is liberated by halogen exchange, using chlorine gas to oxidise Br− to Br2. This is then removed with a blast of steam or air, and is then condensed and purified. Today, bromine is transported in large-capacity metal drums or lead-lined tanks that can hold hundreds of kilograms or even tonnes of bromine. The bromine industry is about one-hundredth the size of the chlorine industry. Laboratory production is unnecessary because bromine is commercially available and has a long shelf life. Applications A wide variety of organobromine compounds are used in industry. Some are prepared from bromine and others are prepared from hydrogen bromide, which is obtained by burning hydrogen in bromine. Flame retardants Brominated flame retardants represent a commodity of growing importance, and make up the largest commercial use of bromine. When the brominated material burns, the flame retardant produces hydrobromic acid, which interferes with the radical chain reaction of the oxidation reaction of the fire. The mechanism is that the highly reactive hydrogen radicals, oxygen radicals, and hydroxyl radicals react with hydrobromic acid to form less reactive bromine radicals (i.e., free bromine atoms). Bromine atoms may also react directly with other radicals to help terminate the free-radical chain reactions that characterise combustion. To make brominated polymers and plastics, bromine-containing compounds can be incorporated into the polymer during polymerisation. One method is to include a relatively small amount of brominated monomer during the polymerisation process. For example, vinyl bromide can be used in the production of polyethylene, polyvinyl chloride or polypropylene. Specific highly brominated molecules can also be added that participate in the polymerisation process. For example, tetrabromobisphenol A can be added to polyesters or epoxy resins, where it becomes part of the polymer. Epoxies used in printed circuit boards are normally made from such flame-retardant resins, indicated by the FR in the abbreviation of the products (FR-4 and FR-2). In some cases, the bromine-containing compound may be added after polymerisation. For example, decabromodiphenyl ether can be added to the final polymers. A number of gaseous or highly volatile brominated halomethane compounds are non-toxic and make superior fire suppressant agents by this same mechanism, and are particularly effective in enclosed spaces such as submarines, airplanes, and spacecraft. However, they are expensive and their production and use have been greatly curtailed due to their effect as ozone-depleting agents. They are no longer used in routine fire extinguishers, but retain niche uses in aerospace and military automatic fire suppression applications. They include bromochloromethane (Halon 1011, CH2BrCl), bromochlorodifluoromethane (Halon 1211, CBrClF2), and bromotrifluoromethane (Halon 1301, CBrF3). Other uses Silver bromide is used, either alone or in combination with silver chloride and silver iodide, as the light-sensitive constituent of photographic emulsions. Ethylene bromide was an additive in gasolines containing lead anti-knock agents. It scavenges lead by forming volatile lead bromide, which is exhausted from the engine. This application accounted for 77% of the bromine use in 1966 in the US. This application has declined since the 1970s due to environmental regulations (see below).
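The Halon designations above are systematic rather than arbitrary: the four digits give the number of carbon, fluorine, chlorine, and bromine atoms in turn, with hydrogen filling any remaining valences. A minimal decoder reproduces the three formulas just listed.

```python
# Decode a four-digit Halon number into a molecular formula.
# Digits: carbon, fluorine, chlorine, bromine; hydrogen fills the
# remaining valences of a saturated carbon skeleton (2*C + 2 bonds).
def halon_formula(code: str) -> str:
    c, f, cl, br = (int(d) for d in code)
    h = 2 * c + 2 - (f + cl + br)  # hydrogens left on a saturated chain
    parts = [("C", c), ("H", h), ("Br", br), ("Cl", cl), ("F", f)]
    return "".join(f"{sym}{n if n > 1 else ''}" for sym, n in parts if n > 0)

for code in ("1011", "1211", "1301"):
    print(f"Halon {code}: {halon_formula(code)}")
# Halon 1011: CH2BrCl; Halon 1211: CBrClF2; Halon 1301: CBrF3
```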
Brominated vegetable oil (BVO), a complex mixture of plant-derived triglycerides that have been reacted to contain atoms of the element bromine bonded to the molecules, is used primarily to help emulsify citrus-flavored soft drinks, preventing them from separating during distribution. Poisonous bromomethane was widely used as a pesticide to fumigate soil and to fumigate housing, by the tenting method. Ethylene bromide was similarly used. These volatile organobromine compounds are all now regulated as ozone depletion agents. The Montreal Protocol on Substances that Deplete the Ozone Layer scheduled the phase-out of the ozone-depleting chemical by 2005, and organobromide pesticides are no longer used (in housing fumigation they have been replaced by compounds such as sulfuryl fluoride, which contains neither the chlorine nor the bromine organics that harm ozone). Before the Montreal Protocol, in 1991 for example, an estimated 35,000 tonnes of the chemical were used to control nematodes, fungi, weeds, and other soil-borne diseases. In pharmacology, inorganic bromide compounds, especially potassium bromide, were frequently used as general sedatives in the 19th and early 20th century. Bromides in the form of simple salts are still used as anticonvulsants in both veterinary and human medicine, although the latter use varies from country to country. For example, the U.S. Food and Drug Administration (FDA) does not approve bromide for the treatment of any disease, and sodium bromide was removed from over-the-counter sedative products like Bromo-Seltzer in 1975. Commercially available organobromine pharmaceuticals include the vasodilator nicergoline, the sedative brotizolam, the anticancer agent pipobroman, and the antiseptic merbromin. Otherwise, organobromine compounds are rarely pharmaceutically useful, in contrast to the situation for organofluorine compounds. Several drugs are produced as the bromide (or the equivalent hydrobromide) salts, but in such cases bromide serves as an innocuous counterion of no biological significance. Other uses of organobromine compounds include high-density drilling fluids, dyes (such as Tyrian purple and the indicator bromothymol blue), and pharmaceuticals. Bromine itself, as well as some of its compounds, is used in water treatment, and is the precursor of a variety of inorganic compounds with an enormous number of applications (e.g. silver bromide for photography). Zinc–bromine batteries are hybrid flow batteries used for stationary electrical power backup and storage, from household scale to industrial scale. Bromine is used in cooling towers (in place of chlorine) for controlling bacteria, algae, fungi, and zebra mussels. Because it has similar antiseptic qualities to chlorine, bromine can be used in the same manner as chlorine as a disinfectant or antimicrobial in applications such as swimming pools. Bromine came into this use in the United States during World War II, due to a predicted shortage of chlorine. However, bromine is usually not used outdoors for these applications because it is more expensive than chlorine and lacks a stabilizer to protect it from the sun. For indoor pools, it can be a good option, as it is effective over a wider pH range. It is also more stable in a heated pool or hot tub. Biological role and toxicity A 2014 study suggests that bromine (in the form of the bromide ion) is a necessary cofactor in the biosynthesis of collagen IV, making the element essential to basement membrane architecture and tissue development in animals.
Nevertheless, no clear deprivation symptoms or syndromes have been documented in mammals. In other biological functions, bromine may be non-essential but still beneficial when it takes the place of chlorine. For example, in the presence of hydrogen peroxide, H2O2, formed by the eosinophil, and either chloride, iodide, thiocyanate, or bromide ions, eosinophil peroxidase provides a potent mechanism by which eosinophils kill multicellular parasites (such as the nematode worms involved in filariasis) and some bacteria (such as tuberculosis bacteria). Eosinophil peroxidase is a haloperoxidase that preferentially uses bromide over chloride for this purpose, generating hypobromite (hypobromous acid), although the use of chloride is possible. α-Haloesters are generally thought of as highly reactive and consequently toxic intermediates in organic synthesis. Nevertheless, mammals, including humans, cats, and rats, appear to biosynthesize traces of an α-bromoester, 2-octyl 4-bromo-3-oxobutanoate, which is found in their cerebrospinal fluid and appears to play a yet unclarified role in inducing REM sleep. Neutrophil myeloperoxidase can use H2O2 and Br− to brominate deoxycytidine, which could result in DNA mutations. Marine organisms are the main source of organobromine compounds, and it is in these organisms that bromine is more firmly shown to be essential. More than 1600 such organobromine compounds had been identified by 1999. The most abundant is methyl bromide (CH3Br), of which an estimated 56,000 tonnes are produced by marine algae each year. The essential oil of the Hawaiian alga Asparagopsis taxiformis consists of 80% bromoform. Most of such organobromine compounds in the sea are made by the action of a unique algal enzyme, vanadium bromoperoxidase. The bromide anion is not very toxic: a normal daily intake is 2 to 8 milligrams. However, high levels of bromide chronically impair the membranes of neurons, which progressively impairs neuronal transmission, leading to toxicity, known as bromism. Bromide has an elimination half-life of 9 to 12 days, which can lead to excessive accumulation. Doses of 0.5 to 1 gram per day of bromide can lead to bromism. Historically, the therapeutic dose of bromide was about 3 to 5 grams, which explains why chronic toxicity (bromism) was once so common. While significant and sometimes serious disturbances occur to neurologic, psychiatric, dermatological, and gastrointestinal functions, death from bromism is rare. Bromism is caused by a neurotoxic effect on the brain which results in somnolence, psychosis, seizures and delirium. Elemental bromine (Br2) is toxic and causes chemical burns on human flesh. Inhaling bromine gas results in similar irritation of the respiratory tract, causing coughing, choking, shortness of breath, and death if inhaled in large enough amounts. Chronic exposure may lead to frequent bronchial infections and a general deterioration of health. As a strong oxidising agent, bromine is incompatible with most organic and inorganic compounds. Caution is required when transporting bromine; it is commonly carried in steel tanks lined with lead, supported by strong metal frames. The Occupational Safety and Health Administration (OSHA) of the United States has set a permissible exposure limit (PEL) for bromine at a time-weighted average (TWA) of 0.1 ppm. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of TWA 0.1 ppm and a short-term limit of 0.3 ppm.
The bromine concentration that is immediately dangerous to life or health (IDLH) is 3 ppm. Bromine is classified as an extremely hazardous substance in the United States, as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities. References General and cited references Chemical elements Diatomic nonmetals Gases with color Halogens Oxidizing agents Reactive nonmetals
Bromine
[ "Physics", "Chemistry", "Materials_science" ]
8,256
[ "Chemical elements", "Redox", "Diatomic nonmetals", "Nonmetals", "Oxidizing agents", "Reactive nonmetals", "Atoms", "Matter" ]
3,757
https://en.wikipedia.org/wiki/Barium
Barium is a chemical element; it has symbol Ba and atomic number 56. It is the fifth element in group 2 and is a soft, silvery alkaline earth metal. Because of its high chemical reactivity, barium is never found in nature as a free element. The most common minerals of barium are barite (barium sulfate, BaSO4) and witherite (barium carbonate, BaCO3). The name barium originates from the alchemical derivative "baryta", from Greek barys, meaning 'heavy'. Baric is the adjectival form of barium. Barium was identified as a new element in 1772, but not reduced to a metal until 1808 with the advent of electrolysis. Barium has few industrial applications. Historically, it was used as a getter for vacuum tubes and in oxide form as the emissive coating on indirectly heated cathodes. It is a component of YBCO (high-temperature superconductors) and electroceramics, and is added to steel and cast iron to reduce the size of carbon grains within the microstructure. Barium compounds are added to fireworks to impart a green color. Barium sulfate is used as an insoluble additive to oil well drilling fluid. In a purer form it is used as an X-ray radiocontrast agent for imaging the human gastrointestinal tract. Water-soluble barium compounds are poisonous and have been used as rodenticides. Characteristics Physical properties Barium is a soft, silvery-white metal, with a slight golden shade when ultrapure. The silvery-white color of barium metal rapidly vanishes upon oxidation in air, yielding a dark gray layer containing the oxide. Barium has a medium specific weight and high electrical conductivity. Because barium is difficult to purify, many of its properties have not been accurately determined. At room temperature and pressure, barium metal adopts a body-centered cubic structure, with a barium–barium distance of 503 picometers, expanding with heating at a rate of approximately 1.8 × 10⁻⁵/°C. It is a soft metal with a Mohs hardness of 1.25. Its melting temperature of 727 °C is intermediate between those of the lighter strontium (777 °C) and the heavier radium (700 °C); however, its boiling point exceeds that of strontium. The density (3.62 g/cm3) is again intermediate between those of strontium (2.64 g/cm3) and radium (≈5 g/cm3). Chemical reactivity Barium is chemically similar to magnesium, calcium, and strontium, but more reactive. Its compounds are almost invariably found in the +2 oxidation state. As expected for a highly electropositive metal, barium's reactions with chalcogens are highly exothermic (they release energy). Barium reacts with atmospheric oxygen in air at room temperature. For this reason, metallic barium is often stored under oil or in an inert atmosphere. Reactions with other nonmetals, such as carbon, nitrogen, phosphorus, silicon, and hydrogen, proceed upon heating. Reactions with water and alcohols are also exothermic and release hydrogen gas: Ba + 2 ROH → Ba(OR)2 + H2↑ (R is an alkyl group or a hydrogen atom) Barium reacts with ammonia to form the electride [Ba(NH3)6](e−)2, which near room temperature gives the amide Ba(NH2)2. The metal is readily attacked by acids. Sulfuric acid is a notable exception, because passivation stops the reaction by forming the insoluble barium sulfate on the surface. Barium combines with several other metals, including aluminium, zinc, lead, and tin, forming intermetallic phases and alloys. Compounds Barium salts are typically white when solid and colorless when dissolved. They are denser than the strontium or calcium analogs, except for the halides.
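As a cross-check on the physical data above, barium's density can be recomputed from its body-centered cubic structure (two atoms per cubic cell). The sketch assumes the quoted 503 pm barium-barium distance is the cubic lattice constant a; the close agreement with the quoted 3.62 g/cm3 supports that reading.

```python
# Density of bcc barium from the lattice constant.
A_CM = 503e-10        # lattice constant in cm (503 pm; assumed to be a)
M_BA = 137.327        # molar mass of barium, g/mol
AVOGADRO = 6.022e23

# A bcc unit cell contains 2 atoms, so density = 2*M / (N_A * a^3).
density = 2 * M_BA / (AVOGADRO * A_CM**3)
print(f"calculated density = {density:.2f} g/cm^3")  # ~3.6, close to 3.62 quoted
```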
Barium hydroxide ("baryta") was known to alchemists, who produced it by heating barium carbonate. Unlike calcium hydroxide, it absorbs very little CO2 in aqueous solutions and is therefore insensitive to atmospheric fluctuations. This property is used in calibrating pH equipment. Barium compounds burn with a green to pale green flame, which is an efficient test to detect a barium compound. The color results from spectral lines at 455.4, 493.4, 553.6, and 611.1 nm. Organobarium compounds are a growing field of knowledge: recently discovered are dialkylbariums and alkylhalobariums. Isotopes Barium found in the Earth's crust is a mixture of seven primordial nuclides, barium-130, 132, and 134 through 138. Barium-130 undergoes very slow radioactive decay to xenon-130 by double beta plus decay, with a half-life of (0.5–2.7)×1021 years (about 1011 times the age of the universe). Its abundance is ≈0.1% that of natural barium. Theoretically, barium-132 can similarly undergo double beta decay to xenon-132; this decay has not been detected. The radioactivity of these isotopes is so weak that they pose no danger to life. Of the stable isotopes, barium-138 composes 71.7% of all barium; other isotopes have decreasing abundance with decreasing mass number. In total, barium has 40 known isotopes, ranging in mass between 114 and 153. The most stable artificial radioisotope is barium-133 with a half-life of approximately 10.51 years. Five other isotopes have half-lives longer than a day. Barium also has 10 meta states, of which barium-133m1 is the most stable with a half-life of about 39 hours. History Alchemists in the early Middle Ages knew about some barium minerals. Smooth pebble-like stones of mineral baryte were found in volcanic rock near Bologna, Italy, and so were called "Bologna stones". Alchemists were attracted to them because after exposure to light they would glow for years. The phosphorescent properties of baryte heated with organics were described by V. Casciorolus in 1602. Carl Scheele determined that baryte contained a new element in 1772, but could not isolate barium, only barium oxide. Johan Gottlieb Gahn also isolated barium oxide two years later in similar studies. Oxidized barium was at first called "barote" by Guyton de Morveau, a name that was changed by Antoine Lavoisier to baryte (in French) or baryta (in Latin). Also in the 18th century, English mineralogist William Withering noted a heavy mineral in the lead mines of Cumberland, now known to be witherite. Barium was first isolated by electrolysis of molten barium salts in 1808 by Sir Humphry Davy in England. Davy, by analogy with calcium, named "barium" after baryta, with the "-ium" ending signifying a metallic element. Robert Bunsen and Augustus Matthiessen obtained pure barium by electrolysis of a molten mixture of barium chloride and ammonium chloride. The production of pure oxygen in the Brin process was a large-scale application of barium peroxide in the 1880s, before it was replaced by electrolysis and fractional distillation of liquefied air in the early 1900s. In this process barium oxide reacts at with air to form barium peroxide, which decomposes above by releasing oxygen: 2 BaO + O2 ⇌ 2 BaO2 Barium sulfate was first applied as a radiocontrast agent in X-ray imaging of the digestive system in 1908. Occurrence and production The abundance of barium is 0.0425% in the Earth's crust and 13 μg/L in sea water. The primary commercial source of barium is baryte (also called barytes or heavy spar), a barium sulfate mineral. 
Baryte deposits occur in many parts of the world. Another commercial source, far less important than baryte, is witherite, barium carbonate. The main deposits are located in Britain, Romania, and the former USSR. The baryte reserves are estimated between 0.7 and 2 billion tonnes. The highest production, 8.3 million tonnes, was achieved in 1981, but only 7–8% was used for barium metal or compounds. Baryte production has risen since the second half of the 1990s, from 5.6 million tonnes in 1996 to 7.6 in 2005 and 7.8 in 2011. China accounts for more than 50% of this output, followed by India (14% in 2011), Morocco (8.3%), the US (8.2%), Iran and Kazakhstan (2.6% each), and Turkey (2.5%). The mined ore is washed, crushed, classified, and separated from quartz. If the quartz penetrates too deeply into the ore, or the iron, zinc, or lead content is abnormally high, then froth flotation is used. The product is a 98% pure baryte (by mass); the purity should be no less than 95%, with a minimal content of iron and silicon dioxide. It is then reduced by carbon to barium sulfide: BaSO4 + 2 C → BaS + 2 CO2 The water-soluble barium sulfide is the starting point for other compounds: treating BaS with oxygen produces the sulfate, with nitric acid the nitrate, with aqueous carbon dioxide the carbonate, and so on. The nitrate can be thermally decomposed to yield the oxide. Barium metal is produced by reduction with aluminium at about 1,100 °C. The intermetallic compound BaAl4 is produced first: 3 BaO + 14 Al → 3 BaAl4 + Al2O3 BaAl4 is an intermediate that reacts with barium oxide to produce the metal. Note that not all barium is reduced: 8 BaO + BaAl4 → 7 Ba↑ + 2 BaAl2O4 The remaining barium oxide reacts with the aluminium oxide formed in the first step: BaO + Al2O3 → BaAl2O4 and the overall reaction is 4 BaO + 2 Al → 3 Ba↑ + BaAl2O4 Barium vapor is condensed and packed into molds in an atmosphere of argon. This method is used commercially, yielding ultrapure barium. Commonly sold barium is about 99% pure, with the main impurities being strontium and calcium (up to 0.8% and 0.25%) and other contaminants contributing less than 0.1%. A similar reaction with silicon at about 1,200 °C yields barium and barium metasilicate. Electrolysis is not used because barium readily dissolves in molten halides and the product is rather impure. Gemstone The barium mineral benitoite (barium titanium silicate) occurs as a very rare blue fluorescent gemstone, and is the official state gem of California. Barium in seawater Barium exists in seawater as the Ba2+ ion, with an average oceanic concentration of 109 nmol/kg. Barium also exists in the ocean as BaSO4, or barite. Barium has a nutrient-like profile, with a residence time of 10,000 years. Barium shows a relatively consistent concentration in upper-ocean seawater, excepting regions of high river inputs and regions with strong upwelling. There is little depletion of barium concentrations in the upper ocean for an ion with a nutrient-like profile, thus lateral mixing is important. Barium isotopic values show basin-scale balances instead of local or short-term processes. Applications Metal and alloys Barium, as a metal or when alloyed with aluminium, is used to remove unwanted gases (gettering) from vacuum tubes, such as TV picture tubes. Barium is suitable for this purpose because of its low vapor pressure and reactivity towards oxygen, nitrogen, carbon dioxide, and water; it can even partly remove noble gases by dissolving them in the crystal lattice.
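Stepping back to the production chemistry above, a quick mass balance over the overall aluminothermic equation (4 BaO + 2 Al → 3 Ba + BaAl2O4, with standard molar masses) shows what the process consumes per kilogram of barium metal.

```python
# Feedstock per kilogram of barium from the overall aluminothermic reaction
# 4 BaO + 2 Al -> 3 Ba + BaAl2O4. Molar masses are standard values.
M_BA, M_O, M_AL = 137.327, 15.999, 26.982

m_bao = 4 * (M_BA + M_O)   # g of BaO consumed per formula unit of reaction
m_al = 2 * M_AL            # g of Al consumed
m_ba = 3 * M_BA            # g of Ba produced

print(f"per kg of barium: {m_bao / m_ba:.2f} kg BaO and "
      f"{m_al / m_ba * 1000:.0f} g Al")  # ~1.49 kg BaO and ~131 g Al
```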
This gettering application is gradually disappearing due to the rising popularity of tubeless LCD, LED, and plasma sets. Other uses of elemental barium are minor and include an additive to silumin (aluminium–silicon alloys) that refines their structure, as well as bearing alloys; lead–tin soldering alloys, to increase the creep resistance; alloys with nickel for spark plugs; additives to steel and cast iron as an inoculant; and alloys with calcium, manganese, silicon, and aluminium as high-grade steel deoxidizers. Barium sulfate and baryte Barium sulfate (the mineral baryte, BaSO4) is important to the petroleum industry as a drilling fluid in oil and gas wells. The precipitate of the compound (called "blanc fixe", from the French for "permanent white") is used in paints and varnishes; as a filler in ringing ink, plastics, and rubbers; as a paper coating pigment; and in nanoparticles, to improve physical properties of some polymers, such as epoxies. Barium sulfate has a low toxicity and a relatively high density of ca. 4.5 g/cm3 (and thus opacity to X-rays). For this reason it is used as a radiocontrast agent in X-ray imaging of the digestive system ("barium meals" and "barium enemas"). Lithopone, a pigment that contains barium sulfate and zinc sulfide, is a permanent white with good covering power that does not darken when exposed to sulfides. Other barium compounds Other compounds of barium find only niche applications, limited by the toxicity of Ba2+ ions (barium carbonate is a rat poison), which is not a problem for the insoluble BaSO4. Barium oxide coating on the electrodes of fluorescent lamps facilitates the release of electrons. Owing to its high atomic density, barium carbonate increases the refractive index and luster of glass and reduces X-ray leakage from cathode-ray tube (CRT) TV sets. Barium, typically as barium nitrate, imparts a yellow or "apple" green color to fireworks; for brilliant green, barium chloride is used. Barium peroxide is a catalyst in the aluminothermic reaction (thermite) for welding rail tracks. It is also used as a green flare in tracer ammunition and as a bleaching agent. Barium titanate is a promising electroceramic. Barium fluoride is used for optics in infrared applications because of its wide transparency range of 0.15–12 micrometers. YBCO was the first high-temperature superconductor cooled by liquid nitrogen, with a transition temperature of about 93 K, greater than the boiling point of nitrogen (77 K). Ferrite, a type of sintered ceramic composed of iron oxide (Fe2O3) and barium oxide (BaO), is both electrically nonconductive and ferrimagnetic, and can be temporarily or permanently magnetized. Palaeoceanography The lateral mixing of barium is caused by water mass mixing and ocean circulation. Global ocean circulation reveals a strong correlation between dissolved barium and silicic acid. The large-scale ocean circulation combined with remineralization of barium shows a similar correlation between dissolved barium and ocean alkalinity. Dissolved barium's correlation with silicic acid can be seen both vertically and spatially. Particulate barium shows a strong correlation with particulate organic carbon, or POC. Barium is becoming more popular as a base for palaeoceanographic proxies. With both dissolved and particulate barium's links with silicic acid and POC, it can be used to determine historical variations in the biological pump, carbon cycle, and global climate.
The barium particulate barite (BaSO4), as one of many proxies, can be used to provide a host of historical information on processes in different oceanic settings (water column, sediments, and hydrothermal sites). In each setting there are differences in the isotopic and elemental composition of the barite particulate. Barite in the water column, known as marine or pelagic barite, reveals information on seawater chemistry variation over time. Barite in sediments, known as diagenetic or cold-seep barite, gives information about sedimentary redox processes. Barite formed via hydrothermal activity at hydrothermal vents, known as hydrothermal barite, reveals alterations in the condition of the Earth's crust around those vents. Toxicity Soluble barium compounds have an LD50 near 10 mg/kg (oral, rats). Symptoms include "convulsions... paralysis of the peripheral nerve system ... severe inflammation of the gastrointestinal tract". The insoluble sulfate is nontoxic and is not classified as dangerous goods in transport regulations. Little is known about the long-term effects of barium exposure. The US EPA considers it unlikely that barium is carcinogenic when consumed orally. Inhaled dust containing insoluble barium compounds can accumulate in the lungs, causing a benign condition called baritosis. See also Han purple and Han blue – synthetic barium copper silicate pigments developed and used in ancient and imperial China References External links Barium at The Periodic Table of Videos (University of Nottingham) Elementymology & Elements Multidict 3-D Holographic Display Using Strontium Barium Niobate Chemical elements Alkaline earth metals Toxicology Reducing agents Chemical elements with body-centered cubic structure
Barium
[ "Physics", "Chemistry", "Environmental_science" ]
3,724
[ "Toxicology", "Chemical elements", "Redox", "Reducing agents", "Atoms", "Matter" ]
3,758
https://en.wikipedia.org/wiki/Berkelium
Berkelium is a synthetic chemical element; it has symbol Bk and atomic number 97. It is a member of the actinide and transuranium element series. It is named after the city of Berkeley, California, the location of the Lawrence Berkeley National Laboratory (then the University of California Radiation Laboratory) where it was discovered in December 1949. Berkelium was the fifth transuranium element discovered, after neptunium, plutonium, curium and americium. The major isotope of berkelium, 249Bk, is synthesized in minute quantities in dedicated high-flux nuclear reactors, mainly at the Oak Ridge National Laboratory in Tennessee, United States, and at the Research Institute of Atomic Reactors in Dimitrovgrad, Russia. The longest-lived and second-most important isotope, 247Bk, can be synthesized via irradiation of 244Cm with high-energy alpha particles. Just over one gram of berkelium has been produced in the United States since 1967. There is no practical application of berkelium outside scientific research, which is mostly directed at the synthesis of heavier transuranium elements and superheavy elements. A 22-milligram batch of berkelium-249 was prepared during a 250-day irradiation period and then purified for a further 90 days at Oak Ridge in 2009. This sample was used to synthesize the new element tennessine for the first time in 2009 at the Joint Institute for Nuclear Research, Russia, after it was bombarded with calcium-48 ions for 150 days. This was the culmination of the Russia–US collaboration on the synthesis of the heaviest elements on the periodic table. Berkelium is a soft, silvery-white, radioactive metal. The berkelium-249 isotope emits low-energy electrons and thus is relatively safe to handle. It decays with a half-life of 330 days to californium-249, which is a strong emitter of ionizing alpha particles. This gradual transformation is an important consideration when studying the properties of elemental berkelium and its chemical compounds, since the formation of californium brings not only chemical contamination, but also free-radical effects and self-heating from the emitted alpha particles. Characteristics Physical Berkelium is a soft, silvery-white, radioactive actinide metal. In the periodic table, it is located to the right of the actinide curium, to the left of the actinide californium and below the lanthanide terbium, with which it shares many similarities in physical and chemical properties. Its density of 14.78 g/cm3 lies between those of curium (13.52 g/cm3) and californium (15.1 g/cm3), as does its melting point of 986 °C, below that of curium (1340 °C) but higher than that of californium (900 °C). Berkelium is relatively soft and has one of the lowest bulk moduli among the actinides, at about 20 GPa. Berkelium(III) (Bk3+) ions show two sharp fluorescence peaks at 652 nanometers (red light) and 742 nanometers (deep red – near-infrared) due to internal transitions at the f-electron shell. The relative intensity of these peaks depends on the excitation power and temperature of the sample. This emission can be observed, for example, after dispersing berkelium ions in a silicate glass, by melting the glass in the presence of berkelium oxide or halide. Between 70 K and room temperature, berkelium behaves as a Curie–Weiss paramagnetic material with an effective magnetic moment of 9.69 Bohr magnetons (μB) and a Curie temperature of 101 K. This magnetic moment is almost equal to the theoretical value of 9.72 μB calculated within the simple atomic L-S coupling model. 
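For orientation, the quoted theoretical moment can be reproduced with the Landé formula. The short derivation below assumes a 5f8 configuration with a 7F6 ground term (S = 3, L = 3, J = 6, the same term as for the isoelectronic Tb3+ ion); this assignment is made here for illustration and is not stated in the text above:

g_J = 1 + \frac{J(J+1) + S(S+1) - L(L+1)}{2J(J+1)} = 1 + \frac{42 + 12 - 12}{2 \cdot 42} = \frac{3}{2}

\mu_{\mathrm{eff}} = g_J \sqrt{J(J+1)}\,\mu_B = \frac{3}{2}\sqrt{42}\,\mu_B \approx 9.72\,\mu_B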
Upon cooling to about 34 K, berkelium undergoes a transition to an antiferromagnetic state. The enthalpy of dissolution in hydrochloric acid at standard conditions is −600 kJ/mol, from which the standard enthalpy of formation (ΔfH°) of aqueous Bk3+ ions is obtained as −601 kJ/mol. The standard electrode potential of the Bk3+/Bk couple is −2.01 V. The ionization potential of a neutral berkelium atom is 6.23 eV. Allotropes At ambient conditions, berkelium assumes its most stable α form, which has hexagonal symmetry, space group P63/mmc, and lattice parameters a = 341 pm and c = 1107 pm. The crystal has a double-hexagonal close packing structure with the layer sequence ABAC and so is isotypic (having a similar structure) with α-lanthanum and the α-forms of actinides beyond curium. This crystal structure changes with pressure and temperature. When compressed at room temperature to 7 GPa, α-berkelium transforms to the β modification, which has face-centered cubic (fcc) symmetry and space group Fm-3m. This transition occurs without change in volume, but the enthalpy increases by 3.66 kJ/mol. Upon further compression to 25 GPa, berkelium transforms to an orthorhombic γ-berkelium structure similar to that of α-uranium. This transition is accompanied by a 12% volume decrease and delocalization of the electrons at the 5f electron shell. No further phase transitions are observed up to 57 GPa. Upon heating, α-berkelium transforms into another phase with an fcc lattice (but slightly different from β-berkelium), space group Fm-3m and a lattice constant of 500 pm; this fcc structure is equivalent to the closest packing with the sequence ABC. This phase is metastable and will gradually revert to the original α-berkelium phase at room temperature. The temperature of the phase transition is believed to be quite close to the melting point. Chemical Like all actinides, berkelium dissolves in various aqueous inorganic acids, liberating gaseous hydrogen and converting into the Bk(III) state. This trivalent oxidation state (+3) is the most stable, especially in aqueous solutions, but tetravalent (+4), pentavalent (+5), and possibly divalent (+2) berkelium compounds are also known. The existence of divalent berkelium salts is uncertain and has only been reported in mixed lanthanum(III) chloride–strontium chloride melts. A similar behavior is observed for the lanthanide analogue of berkelium, terbium. Aqueous solutions of Bk3+ ions are green in most acids. The color of Bk4+ ions is yellow in hydrochloric acid and orange-yellow in sulfuric acid. Berkelium does not react rapidly with oxygen at room temperature, possibly due to the formation of a protective oxide surface layer. However, it reacts with molten metals, hydrogen, halogens, chalcogens and pnictogens to form various binary compounds. Isotopes Nineteen isotopes and six nuclear isomers (excited states of an isotope) of berkelium have been characterized, with mass numbers ranging from 233 to 253 (except 235 and 237). All of them are radioactive. The longest half-lives are observed for 247Bk (1,380 years), 248Bk (over 300 years), and 249Bk (330 days); the half-lives of the other isotopes range from microseconds to several days. The isotope which is the easiest to synthesize is berkelium-249. This emits mostly soft β-particles, which are inconvenient for detection. Its alpha radiation is rather weak (1.45% with respect to the β-radiation), but is sometimes used to detect this isotope. The second important berkelium isotope, berkelium-247, is an alpha-emitter, as are most actinide isotopes. 
Occurrence All berkelium isotopes have a half-life far too short to be primordial. Therefore, any primordial berkelium − that is, berkelium present on the Earth during its formation − has decayed by now. On Earth, berkelium is mostly concentrated in certain areas, which were used for the atmospheric nuclear weapons tests between 1945 and 1980, as well as at the sites of nuclear incidents, such as the Chernobyl disaster, the Three Mile Island accident and the 1968 Thule Air Base B-52 crash. Analysis of the debris at the testing site of the United States' first thermonuclear weapon, Ivy Mike (1 November 1952, Enewetak Atoll), revealed high concentrations of various actinides, including berkelium. For reasons of military secrecy, this result was not published until 1956. Among the berkelium isotopes, nuclear reactors produce mostly berkelium-249. During storage and before fuel disposal, most of it beta-decays to californium-249. The latter has a half-life of 351 years, which is relatively long compared to the half-lives of other isotopes produced in the reactor, and is therefore undesirable in the disposal products. The transuranium elements from americium to fermium, including berkelium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so. Berkelium is also one of the elements that have theoretically been detected in Przybylski's Star. History Although very small amounts of berkelium were possibly produced in previous nuclear experiments, it was first intentionally synthesized, isolated and identified in December 1949 by Glenn T. Seaborg, Albert Ghiorso, Stanley Gerald Thompson, and Kenneth Street Jr. They used the 60-inch cyclotron at the University of California, Berkeley. Similar to the nearly simultaneous discovery of americium (element 95) and curium (element 96) in 1944, the new elements berkelium and californium (element 98) were both produced in 1949–1950. The name choice for element 97 followed the previous tradition of the Californian group of drawing an analogy between the newly discovered actinide and the lanthanide element positioned above it in the periodic table. Previously, americium had been named after a continent, like its analogue europium, and curium honored the scientists Marie and Pierre Curie, just as the lanthanide above it, gadolinium, was named after the explorer of the rare-earth elements, Johan Gadolin. Thus the discovery report by the Berkeley group reads: "It is suggested that element 97 be given the name berkelium (symbol Bk) after the city of Berkeley in a manner similar to that used in naming its chemical homologue terbium (atomic number 65) whose name was derived from the town of Ytterby, Sweden, where the rare earth minerals were first found." This tradition ended with berkelium, though, as the naming of the next discovered actinide, californium, was not related to its lanthanide analogue dysprosium; instead, it was named after the place of discovery. The most difficult steps in the synthesis of berkelium were its separation from the final products and the production of sufficient quantities of americium for the target material. First, americium (241Am) nitrate solution was coated on a platinum foil, the solution was evaporated and the residue converted by annealing to americium dioxide (AmO2). This target was irradiated with 35 MeV alpha particles for 6 hours in the 60-inch cyclotron at the Lawrence Radiation Laboratory, University of California, Berkeley. 
The (α,2n) reaction induced by the irradiation yielded the 243Bk isotope and two free neutrons: 241Am + 4He → 243Bk + 2 n After the irradiation, the coating was dissolved with nitric acid and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The product was centrifuged and re-dissolved in nitric acid. To separate berkelium from the unreacted americium, this solution was added to a mixture of ammonium persulfate and ammonium sulfate and heated to convert all the dissolved americium into the oxidation state +6. Unoxidized residual americium was precipitated by the addition of hydrofluoric acid as americium(III) fluoride (AmF3). This step yielded a mixture of the accompanying product curium and the expected element 97 in the form of trifluorides. The mixture was converted to the corresponding hydroxides by treating it with potassium hydroxide, and after centrifugation, was dissolved in perchloric acid. Further separation was carried out in the presence of a citric acid/ammonium buffer solution in a weakly acidic medium (pH ≈ 3.5), using ion exchange at elevated temperature. The chromatographic separation behavior was unknown for element 97 at the time, but was anticipated by analogy with terbium. The first results were disappointing because no alpha-particle emission signature could be detected from the elution product. With further analysis, searching for characteristic X-rays and conversion electron signals, a berkelium isotope was eventually detected. Its mass number was uncertain between 243 and 244 in the initial report, but was later established as 243. Synthesis and extraction Preparation of isotopes Berkelium is produced by bombarding the lighter actinides uranium (238U) or plutonium (239Pu) with neutrons in a nuclear reactor. In the more common case of uranium fuel, plutonium is produced first by neutron capture (the so-called (n,γ) reaction or neutron fusion) followed by beta-decay: 238U (n,γ)→ 239U (β−, 23.5 min)→ 239Np (β−, 2.3565 d)→ 239Pu (the times are half-lives) Plutonium-239 is further irradiated by a source that has a high neutron flux, several times higher than that of a conventional nuclear reactor, such as the 85-megawatt High Flux Isotope Reactor (HFIR) at the Oak Ridge National Laboratory in Tennessee, US. The higher flux promotes fusion reactions involving not one but several neutrons, converting 239Pu through successive captures to 244Cm and then to 249Cm. Curium-249 has a short half-life of 64 minutes, and thus its further conversion to 250Cm has a low probability. Instead, it transforms by beta-decay into 249Bk: 249Cm (β−, 64.15 min)→ 249Bk (β−, 330 d)→ 249Cf The thus-produced 249Bk has a long half-life of 330 days and thus can capture another neutron. However, the product, 250Bk, again has a relatively short half-life of 3.212 hours and thus does not yield any heavier berkelium isotopes. It instead decays to the californium isotope 250Cf: 249Bk (n,γ)→ 250Bk (β−, 3.212 h)→ 250Cf Although 247Bk is the most stable isotope of berkelium, its production in nuclear reactors is very difficult because its potential progenitor 247Cm has never been observed to undergo beta decay. Thus, 249Bk is the most accessible isotope of berkelium, which still is available only in small quantities (only 0.66 grams have been produced in the US over the period 1967–1983) at a high price of the order of 185 USD per microgram. 
It is the only berkelium isotope available in bulk quantities, and thus the only berkelium isotope whose properties can be extensively studied. The isotope 248Bk was first obtained in 1956 by bombarding a mixture of curium isotopes with 25 MeV α-particles. Although its direct detection was hindered by strong signal interference with 245Bk, the existence of a new isotope was proven by the growth of the decay product 248Cf, which had been previously characterized. The half-life of 248Bk was initially estimated as 23 ± 5 hours, though later 1965 work gave a half-life in excess of 300 years (which may be due to an isomeric state). Berkelium-247 was produced during the same year by irradiating 244Cm with alpha-particles. Berkelium-242 was synthesized in 1979 by bombarding 235U with 11B, 238U with 10B, 232Th with 14N or 232Th with 15N. It converts by electron capture to 242Cm with a half-life of about 7 minutes. A search for an initially suspected isotope 241Bk was then unsuccessful; 241Bk has since been synthesized. Separation The fact that berkelium readily assumes the +4 oxidation state in solids and is relatively stable in this state in liquids greatly assists the separation of berkelium from many other actinides. These are inevitably produced in relatively large amounts during the nuclear synthesis and often favor the +3 state. This fact was not yet known in the initial experiments, which used a more complex separation procedure. Various inorganic oxidation agents can be applied to Bk(III) solutions to convert it to the +4 state, such as bromates (BrO3−), bismuthates (BiO3−), chromates (CrO42− and Cr2O72−), silver(I) thiolate, lead(IV) oxide (PbO2), ozone (O3), or photochemical oxidation procedures. More recently, it has been discovered that some organic and bio-inspired molecules, such as the chelator called 3,4,3-LI(1,2-HOPO), can also oxidize Bk(III) and stabilize Bk(IV) under mild conditions. Bk(IV) is then extracted with ion exchange, extraction chromatography or liquid–liquid extraction using HDEHP (bis-(2-ethylhexyl) phosphoric acid), amines, tributyl phosphate or various other reagents. These procedures separate berkelium from most trivalent actinides and lanthanides, except for the lanthanide cerium (lanthanides are absent in the irradiation target but are created in various nuclear fission decay chains). A more detailed procedure adopted at the Oak Ridge National Laboratory was as follows: the initial mixture of actinides is processed with ion exchange using lithium chloride reagent, then precipitated as hydroxides, filtered and dissolved in nitric acid. It is then treated with high-pressure elution from cation exchange resins, and the berkelium phase is oxidized and extracted using one of the procedures described above. Reduction of the thus-obtained Bk(IV) to the +3 oxidation state yields a solution that is nearly free of other actinides (but contains cerium). Berkelium and cerium are then separated with another round of ion-exchange treatment. Bulk metal preparation In order to characterize chemical and physical properties of solid berkelium and its compounds, a program was initiated in 1952 at the Material Testing Reactor, Arco, Idaho, US. It resulted in the preparation of an eight-gram plutonium-239 target and in the first production of macroscopic quantities (0.6 micrograms) of berkelium by Burris B. Cunningham and Stanley Gerald Thompson in 1958, after a continuous reactor irradiation of this target for six years. 
This irradiation method was and still is the only way of producing weighable amounts of the element, and most solid-state studies of berkelium have been conducted on microgram or submicrogram-sized samples. The world's major irradiation sources are the 85-megawatt High Flux Isotope Reactor at the Oak Ridge National Laboratory in Tennessee, USA, and the SM-2 loop reactor at the Research Institute of Atomic Reactors (NIIAR) in Dimitrovgrad, Russia, which are both dedicated to the production of transcurium elements (atomic number greater than 96). These facilities have similar power and flux levels, and are expected to have comparable production capacities for transcurium elements, although the quantities produced at NIIAR are not publicly reported. In a "typical processing campaign" at Oak Ridge, tens of grams of curium are irradiated to produce decigram quantities of californium, milligram quantities of berkelium-249 and einsteinium, and picogram quantities of fermium. In total, just over one gram of berkelium-249 has been produced at Oak Ridge since 1967. The first berkelium metal sample, weighing 1.7 micrograms, was prepared in 1971 by the reduction of berkelium(III) fluoride with lithium vapor at 1000 °C; the fluoride was suspended on a tungsten wire above a tantalum crucible containing molten lithium. Later, metal samples weighing up to 0.5 milligrams were obtained with this method. Similar results are obtained with berkelium(IV) fluoride. Berkelium metal can also be produced by the reduction of berkelium oxide with thorium or lanthanum. Compounds Oxides Two oxides of berkelium are known, with the berkelium oxidation state of +3 (Bk2O3) and +4 (BkO2). Berkelium(IV) oxide is a brown solid, while berkelium(III) oxide is a yellow-green solid with a melting point of 1920 °C and is formed from BkO2 by reduction with molecular hydrogen: 2 BkO2 + H2 → Bk2O3 + H2O Upon heating to 1200 °C, the sesquioxide undergoes a phase change; it undergoes another phase change at 1750 °C. Such three-phase behavior is typical for the actinide sesquioxides. Berkelium(II) oxide, BkO, has been reported as a brittle gray solid but its exact chemical composition remains uncertain. Halides In halides, berkelium assumes the oxidation states +3 and +4. The +3 state is the most stable, especially in solutions, while the tetravalent halides (such as the fluoride BkF4) are only known in the solid phase. The coordination of the berkelium atom in its trivalent fluoride and chloride is tricapped trigonal prismatic, with a coordination number of 9. In the trivalent bromide, it is bicapped trigonal prismatic (coordination 8) or octahedral (coordination 6), and in the iodide it is octahedral. Berkelium(IV) fluoride (BkF4) is a yellow-green ionic solid and is isotypic with uranium tetrafluoride or zirconium tetrafluoride. Berkelium(III) fluoride (BkF3) is also a yellow-green solid, but it has two crystalline structures. The most stable phase at low temperatures is isotypic with yttrium(III) fluoride, while upon heating to between 350 and 600 °C, it transforms to the structure found in lanthanum trifluoride. Visible amounts of berkelium(III) chloride (BkCl3) were first isolated and characterized in 1962, and weighed only 3 billionths of a gram. It can be prepared by introducing hydrogen chloride vapors into an evacuated quartz tube containing berkelium oxide at a temperature of about 500 °C. This green solid has a melting point of 600 °C, and is isotypic with uranium(III) chloride. Upon heating to nearly its melting point, BkCl3 converts into an orthorhombic phase. Two forms of berkelium(III) bromide (BkBr3) are known: one with berkelium having coordination 6, and one with coordination 8. 
The latter is less stable and transforms to the former phase upon heating to about 350 °C. An important phenomenon for radioactive solids has been studied on these two crystal forms: the structure of fresh and aged 249BkBr3 samples was probed by X-ray diffraction over a period longer than 3 years, so that various fractions of berkelium-249 had beta-decayed to californium-249. No change in structure was observed upon the transformation of 249BkBr3 into 249CfBr3. However, other differences were noted for 249BkBr3 and 249CfBr3. For example, the latter could be reduced with hydrogen to 249CfBr2, but the former could not – this result was reproduced on individual 249BkBr3 and 249CfBr3 samples, as well as on the samples containing both bromides. The intergrowth of californium in berkelium occurs at a rate of 0.22% per day and is an intrinsic obstacle in studying berkelium properties. Besides the chemical contamination, 249Cf, being an alpha emitter, brings undesirable self-damage of the crystal lattice and the resulting self-heating. The chemical effect, however, can be avoided by performing measurements as a function of time and extrapolating the obtained results. Other inorganic compounds The pnictides of berkelium-249 of the type BkX are known for the elements nitrogen, phosphorus, arsenic and antimony. They crystallize in the rock-salt structure and are prepared by the reaction of either berkelium(III) hydride (BkH3) or metallic berkelium with these elements at elevated temperature (about 600 °C) under high vacuum. Berkelium(III) sulfide, Bk2S3, is prepared by either treating berkelium oxide with a mixture of hydrogen sulfide and carbon disulfide vapors at 1130 °C, or by directly reacting metallic berkelium with elemental sulfur. These procedures yield brownish-black crystals. Berkelium(III) and berkelium(IV) hydroxides are both stable in 1 molar solutions of sodium hydroxide. Berkelium(III) phosphate (BkPO4) has been prepared as a solid, which shows strong fluorescence under excitation with a green light. Berkelium hydrides are produced by reacting berkelium metal with hydrogen gas at temperatures about 250 °C. They are non-stoichiometric with the nominal formula BkH2+x (0 < x < 1). Several other salts of berkelium are known, including an oxysulfide (Bk2O2S) and hydrated nitrate, chloride, sulfate and oxalate salts. Thermal decomposition of the hydrated sulfate at about 600 °C in an argon atmosphere (to avoid oxidation to the dioxide) yields crystals of the oxysulfate (Bk2O2SO4). This compound is thermally stable to at least 1000 °C in inert atmosphere. Organoberkelium compounds Berkelium forms a trigonal (η5–C5H5)3Bk metallocene complex with three cyclopentadienyl rings, which can be synthesized by reacting berkelium(III) chloride with molten beryllocene (Be(C5H5)2) at about 70 °C. It has an amber color and a density of 2.47 g/cm3. The complex is stable to heating to at least 250 °C, and sublimates without melting at about 350 °C. The high radioactivity of berkelium gradually destroys the compound (within a period of weeks). One cyclopentadienyl ring in (η5–C5H5)3Bk can be substituted by chlorine to yield [Bk(C5H5)2Cl]2. The optical absorption spectra of this compound are very similar to those of (η5–C5H5)3Bk. Applications There is currently no use for any isotope of berkelium outside basic scientific research. Berkelium-249 is a common target nuclide to prepare still heavier transuranium elements and superheavy elements, such as lawrencium, rutherfordium and bohrium. 
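The 0.22% per day californium in-growth quoted above is just the decay arithmetic of the 330-day half-life of berkelium-249; a minimal check in C (nothing assumed beyond the half-life given in the text; the computed rate, about 0.21% per day, matches the quoted figure to rounding):

#include <math.h>
#include <stdio.h>

int main(void) {
    const double half_life = 330.0;           /* 249Bk half-life, days */
    double lambda = log(2.0) / half_life;     /* decay constant, per day */
    /* Fraction of 249Bk converted to 249Cf per day, and after one year. */
    printf("per day : %.2f%%\n", 100.0 * (1.0 - exp(-lambda * 1.0)));
    printf("per year: %.1f%%\n", 100.0 * (1.0 - exp(-lambda * 365.0)));
    return 0;
}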
It is also useful as a source of the isotope californium-249, which is used for studies on the chemistry of californium in preference to the more radioactive californium-252 that is produced in neutron bombardment facilities such as the HFIR. A 22-milligram batch of berkelium-249 was prepared in a 250-day irradiation and then purified for 90 days at Oak Ridge in 2009. This target yielded the first 6 atoms of tennessine at the Joint Institute for Nuclear Research (JINR), Dubna, Russia, after bombarding it with calcium ions in the U400 cyclotron for 150 days. This synthesis was a culmination of the Russia–US collaboration between JINR and Lawrence Livermore National Laboratory on the synthesis of elements 113 to 118, which was initiated in 1989. Nuclear fuel cycle The nuclear fission properties of berkelium are different from those of the neighboring actinides curium and californium, and they suggest that berkelium would perform poorly as a fuel in a nuclear reactor. Specifically, berkelium-249 has a moderately large neutron capture cross section of 710 barns for thermal neutrons and a resonance integral of 1200 barns, but a very low fission cross section for thermal neutrons. In a thermal reactor, much of it will therefore be converted to berkelium-250, which quickly decays to californium-250. In principle, berkelium-249 can sustain a nuclear chain reaction in a fast breeder reactor. Its critical mass is relatively high at 192 kg, which can be reduced with a water or steel reflector but would still exceed the world production of this isotope. Berkelium-247 can maintain a chain reaction both in a thermal-neutron and in a fast-neutron reactor; however, its production is rather complex and thus the availability is much lower than its critical mass, which is about 75.7 kg for a bare sphere, 41.2 kg with a water reflector and 35.2 kg with a steel reflector (30 cm thickness). Health issues Little is known about the effects of berkelium on the human body, and analogies with other elements may not be drawn because of different radiation products (electrons for berkelium and alpha particles, neutrons, or both for most other actinides). The low energy of electrons emitted from berkelium-249 (less than 126 keV) hinders its detection, due to signal interference with other decay processes, but also makes this isotope relatively harmless to humans as compared to other actinides. However, berkelium-249 transforms with a half-life of only 330 days to the strong alpha-emitter californium-249, which is rather dangerous and has to be handled in a glovebox in a dedicated laboratory. Most available berkelium toxicity data originate from research on animals. Upon ingestion by rats, only about 0.01% of berkelium ends in the blood stream. From there, about 65% goes to the bones, where it remains for about 50 years, 25% to the lungs (biological half-life about 20 years), 0.035% to the testicles or 0.01% to the ovaries, where berkelium stays indefinitely. The balance of about 10% is excreted. In all these organs berkelium might promote cancer, and in the skeleton, its radiation can damage red blood cells. The maximum permissible amount of berkelium-249 in the human skeleton is 0.4 nanograms. References Bibliography External links Berkelium at The Periodic Table of Videos (University of Nottingham) Chemical elements Chemical elements with double hexagonal close-packed structure Actinides Synthetic elements
Berkelium
[ "Physics", "Chemistry" ]
6,514
[ "Matter", "Chemical elements", "Synthetic materials", "Synthetic elements", "Atoms", "Radioactivity" ]
3,821
https://en.wikipedia.org/wiki/Binary-coded%20decimal
In computing and electronic systems, binary-coded decimal (BCD) is a class of binary encodings of decimal numbers where each digit is represented by a fixed number of bits, usually four or eight. Sometimes, special bit patterns are used for a sign or other indications (e.g. error or overflow). In byte-oriented systems (i.e. most modern computers), the term unpacked BCD usually implies a full byte for each digit (often including a sign), whereas packed BCD typically encodes two digits within a single byte by taking advantage of the fact that four bits are enough to represent the range 0 to 9. The precise four-bit encoding, however, may vary for technical reasons (e.g. Excess-3). The ten states representing a BCD digit are sometimes called tetrades (the nibble typically needed to hold them is also known as a tetrade) while the unused, don't care-states are named pseudo-tetrad(e)s, pseudo-decimals, or pseudo-decimal digits. BCD's main virtue, in comparison to binary positional systems, is its more accurate representation and rounding of decimal quantities, as well as its ease of conversion into conventional human-readable representations. Its principal drawbacks are a slight increase in the complexity of the circuits needed to implement basic arithmetic as well as slightly less dense storage. BCD was used in many early decimal computers, and is implemented in the instruction set of machines such as the IBM System/360 series and its descendants, Digital Equipment Corporation's VAX, the Burroughs B1700, and the Motorola 68000-series processors. BCD per se is not as widely used as in the past, and is unavailable or limited in newer instruction sets (e.g., ARM; x86 in long mode). However, decimal fixed-point and decimal floating-point formats are still important and continue to be used in financial, commercial, and industrial computing, where the subtle conversion and fractional rounding errors that are inherent in binary floating point formats cannot be tolerated. Background BCD takes advantage of the fact that any one decimal numeral can be represented by a four-bit pattern. An obvious way of encoding digits is Natural BCD (NBCD), where each decimal digit is represented by its corresponding four-bit binary value, as shown in the following table. This is also called "8421" encoding. This scheme can also be referred to as Simple Binary-Coded Decimal (SBCD) or BCD 8421, and is the most common encoding. Others include the so-called "4221" and "7421" encoding – named after the weighting used for the bits – and "Excess-3". For example, the BCD digit 6 is 0110 in 8421 notation, 1100 or 1010 in 4221 (two encodings are possible), 0110 in 7421, while in Excess-3 it is 1001 (6 + 3 = 9). The following table represents decimal digits from 0 to 9 in various BCD encoding systems. In the headers, the "8421" indicates the weight of each bit. In the fifth column ("BCD 84−2−1"), two of the weights are negative. Both ASCII and EBCDIC character codes for the digits, which are examples of zoned BCD, are also shown. As most computers deal with data in 8-bit bytes, it is possible to use one of the following methods to encode a BCD number: Unpacked: Each decimal digit is encoded into one byte, with four bits representing the number and the remaining bits having no significance. Packed: Two decimal digits are encoded into a single byte, with one digit in the least significant nibble (bits 0 through 3) and the other numeral in the most significant nibble (bits 4 through 7). 
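To make the weighted codes described above concrete, here is a small C sketch (an illustrative helper, not from any standard library) that evaluates a 4-bit pattern under a given weight vector; it confirms the encodings of the digit 6 listed earlier:

#include <stdio.h>

/* Value of a 4-bit code b3 b2 b1 b0 under per-bit weights w[0..3] (w[0] applies to b3). */
static int weighted_value(unsigned code, const int w[4]) {
    int v = 0;
    for (int i = 0; i < 4; ++i)
        if (code & (1u << (3 - i)))   /* test bit b3 first */
            v += w[i];
    return v;
}

int main(void) {
    const int w8421[4] = {8, 4, 2, 1};
    const int w4221[4] = {4, 2, 2, 1};
    const int w7421[4] = {7, 4, 2, 1};
    printf("0110 in 8421 = %d\n", weighted_value(0x6, w8421));  /* 6 */
    printf("1100 in 4221 = %d\n", weighted_value(0xC, w4221));  /* 6 */
    printf("1010 in 4221 = %d\n", weighted_value(0xA, w4221));  /* 6, the second possible encoding */
    printf("0110 in 7421 = %d\n", weighted_value(0x6, w7421));  /* 6 */
    return 0;
}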
As an example, encoding the decimal number 91 using unpacked BCD results in the following binary pattern of two bytes: Decimal: 9 1 Binary : 0000 1001 0000 0001 In packed BCD, the same number would fit into a single byte: Decimal: 9 1 Binary : 1001 0001 Hence the numerical range for one unpacked BCD byte is zero through nine inclusive, whereas the range for one packed BCD byte is zero through ninety-nine inclusive. To represent numbers larger than the range of a single byte, any number of contiguous bytes may be used. For example, to represent the decimal number 12345 in packed BCD, using big-endian format, a program would encode as follows: Decimal: 0 1 2 3 4 5 Binary : 0000 0001 0010 0011 0100 0101 Here, the most significant nibble of the most significant byte has been encoded as zero, so the number is stored as 012345 (but formatting routines might replace or remove leading zeros). Packed BCD is more efficient in storage usage than unpacked BCD; encoding the same number (with the leading zero) in unpacked format would consume twice the storage. Shifting and masking operations are used to pack or unpack a packed BCD digit. Other bitwise operations are used to convert a numeral to its equivalent bit pattern or reverse the process. Packed BCD In packed BCD (or packed decimal), each nibble represents a decimal digit. Packed BCD has been in use since at least the 1960s and is implemented in all IBM mainframe hardware since then. Most implementations are big endian, i.e. with the more significant digit in the upper half of each byte, and with the leftmost byte (residing at the lowest memory address) containing the most significant digits of the packed decimal value. The lower nibble of the rightmost byte is usually used as the sign flag, although some unsigned representations lack a sign flag. As an example, a 4-byte value consists of 8 nibbles, wherein the upper 7 nibbles store the digits of a 7-digit decimal value, and the lowest nibble indicates the sign of the decimal integer value. Standard sign values are 1100 (hex C) for positive (+) and 1101 (D) for negative (−). This convention comes from the zone field for EBCDIC characters and the signed overpunch representation. Other allowed signs are 1010 (A) and 1110 (E) for positive and 1011 (B) for negative. IBM System/360 processors will use the 1010 (A) and 1011 (B) signs if the A bit is set in the PSW, for the ASCII-8 standard that never passed. Most implementations also provide unsigned BCD values with a sign nibble of 1111 (F). ILE RPG uses 1111 (F) for positive and 1101 (D) for negative. These match the EBCDIC zone for digits without a sign overpunch. In packed BCD, the number 127 is represented by 0001 0010 0111 1100 (127C) and −127 is represented by 0001 0010 0111 1101 (127D). Burroughs systems used 1101 (D) for negative, and any other value is considered a positive sign value (the processors will normalize a positive sign to 1100 (C)). No matter how many bytes wide a word is, there is always an even number of nibbles because each byte has two of them. Therefore, a word of n bytes can contain up to (2n)−1 decimal digits, which is always an odd number of digits. A decimal number with d digits requires (d + 1)/2 bytes of storage space. For example, a 4-byte (32-bit) word can hold seven decimal digits plus a sign and can represent values from −9,999,999 to +9,999,999. 
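As a concrete illustration of those shifting and masking operations, here is a minimal C sketch (a hypothetical helper; the C/D sign-nibble convention follows the description above):

#include <stdint.h>
#include <stdio.h>

/* Pack a magnitude of up to seven digits and a sign into signed packed BCD
   (sign nibble 0xC for +, 0xD for -, in the lowest nibble as described above). */
static uint32_t pack_bcd(uint32_t magnitude, int negative) {
    uint32_t bcd = negative ? 0xD : 0xC;   /* sign goes in the lowest nibble */
    int shift = 4;
    do {
        bcd |= (magnitude % 10) << shift;  /* mask the next digit into place */
        magnitude /= 10;
        shift += 4;
    } while (magnitude != 0);
    return bcd;
}

int main(void) {
    printf("+127     -> %X\n", pack_bcd(127, 0));      /* 127C */
    printf("-127     -> %X\n", pack_bcd(127, 1));      /* 127D */
    printf("-1234567 -> %X\n", pack_bcd(1234567, 1));  /* 1234567D */
    return 0;
}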
Thus the number −1,234,567 is 7 digits wide and is encoded as: 0001 0010 0011 0100 0101 0110 0111 1101 1 2 3 4 5 6 7 − As with character strings, the first byte of the packed decimal (the one with the most significant two digits) is usually stored in the lowest address in memory, independent of the endianness of the machine. In contrast, a 4-byte binary two's complement integer can represent values from −2,147,483,648 to +2,147,483,647. While packed BCD does not make optimal use of storage (using about 20% more memory than binary notation to store the same numbers), conversion to ASCII, EBCDIC, or the various encodings of Unicode is made trivial, as no arithmetic operations are required. The extra storage requirements are usually offset by the need for the accuracy and compatibility with calculator or hand calculation that fixed-point decimal arithmetic provides. Denser packings of BCD exist which avoid the storage penalty and also need no arithmetic operations for common conversions. Packed BCD is supported in the COBOL programming language as the "COMPUTATIONAL-3" (an IBM extension adopted by many other compiler vendors) or "PACKED-DECIMAL" (part of the 1985 COBOL standard) data type. It is supported in PL/I as "FIXED DECIMAL". Besides the IBM System/360 and later compatible mainframes, packed BCD is implemented in the native instruction set of the original VAX processors from Digital Equipment Corporation and some models of the SDS Sigma series mainframes, and is the native format for the Burroughs Medium Systems line of mainframes (descended from the 1950s Electrodata 200 series). Ten's complement representations for negative numbers offer an alternative approach to encoding the sign of packed (and other) BCD numbers. In this case, positive numbers always have a most significant digit between 0 and 4 (inclusive), while negative numbers are represented by the 10's complement of the corresponding positive number. As a result, this system allows for 32-bit packed BCD numbers to range from −50,000,000 to +49,999,999, and −1 is represented as 99999999. (As with two's complement binary numbers, the range is not symmetric about zero.) Fixed-point packed decimal Fixed-point decimal numbers are supported by some programming languages (such as COBOL and PL/I). These languages allow the programmer to specify an implicit decimal point in front of one of the digits. For example, a packed decimal value encoded with the bytes 12 34 56 7C represents the fixed-point value +1,234.567 when the implied decimal point is located between the fourth and fifth digits: 12 34 56 7C 12 34.56 7+ The decimal point is not actually stored in memory, as the packed BCD storage format does not provide for it. Its location is simply known to the compiler, and the generated code acts accordingly for the various arithmetic operations. Higher-density encodings If a decimal digit requires four bits, then three decimal digits require 12 bits. However, since 2^10 (1,024) is greater than 10^3 (1,000), if three decimal digits are encoded together, only 10 bits are needed. Two such encodings are Chen–Ho encoding and densely packed decimal (DPD). The latter has the advantage that subsets of the encoding encode two digits in the optimal seven bits and one digit in four bits, as in regular BCD. Zoned decimal Some implementations, for example IBM mainframe systems, support zoned decimal numeric representations. Each decimal digit is stored in one byte, with the lower four bits encoding the digit in BCD form. 
The upper four bits, called the "zone" bits, are usually set to a fixed value so that the byte holds a character value corresponding to the digit. EBCDIC systems use a zone value of 1111 (hex F); this yields bytes in the range F0 to F9 (hex), which are the EBCDIC codes for the characters "0" through "9". Similarly, ASCII systems use a zone value of 0011 (hex 3), giving character codes 30 to 39 (hex). For signed zoned decimal values, the rightmost (least significant) zone nibble holds the sign digit, which is the same set of values that are used for signed packed decimal numbers (see above). Thus a zoned decimal value encoded as the hex bytes F1 F2 D3 represents the signed decimal value −123: F1 F2 D3 1 2 −3 The characters produced by the EBCDIC zoned decimal conversion vary depending on the local character code page setting. Fixed-point zoned decimal Some languages (such as COBOL and PL/I) directly support fixed-point zoned decimal values, assigning an implicit decimal point at some location between the decimal digits of a number. For example, given a six-byte signed zoned decimal value with an implied decimal point to the right of the fourth digit, the hex bytes F1 F2 F7 F9 F5 C0 represent the value +1,279.50: F1 F2 F7 F9 F5 C0 1 2 7 9. 5 +0 Operations with BCD Addition It is possible to perform addition by first adding in binary, and then converting to BCD afterwards. Conversion of the simple sum of two digits can be done by adding 6 (that is, 16 − 10) when the five-bit result of adding a pair of digits has a value greater than 9. The reason for adding 6 is that there are 16 possible 4-bit BCD values (since 2^4 = 16), but only 10 values are valid (0000 through 1001). For example: 1001 + 1000 = 10001 9 + 8 = 17 10001 is the binary, not decimal, representation of the desired result, but the most significant 1 (the "carry") cannot fit in a 4-bit binary number. In BCD as in decimal, there cannot exist a value greater than 9 (1001) per digit. To correct this, 6 (0110) is added to the total, and then the result is treated as two nibbles: 10001 + 0110 = 00010111 => 0001 0111 17 + 6 = 23 1 7 The two nibbles of the result, 0001 and 0111, correspond to the digits "1" and "7". This yields "17" in BCD, which is the correct result. This technique can be extended to adding multiple digits by adding in groups from right to left, propagating the second digit as a carry, always comparing the 5-bit result of each digit-pair sum to 9. Some CPUs provide a half-carry flag to facilitate BCD arithmetic adjustments following binary addition and subtraction operations. The Intel 8080, the Zilog Z80 and the CPUs of the x86 family provide the opcode DAA (Decimal Adjust Accumulator). Subtraction Subtraction is done by adding the ten's complement of the subtrahend to the minuend. To represent the sign of a number in BCD, the number 0000 is used to represent a positive number, and 1001 is used to represent a negative number. The remaining 14 combinations are invalid signs. To illustrate signed BCD subtraction, consider the following problem: 357 − 432. In signed BCD, 357 is 0000 0011 0101 0111. The ten's complement of 432 can be obtained by taking the nine's complement of 432, and then adding one. So, 999 − 432 = 567, and 567 + 1 = 568. By preceding 568 in BCD by the negative sign code, the number −432 can be represented. So, −432 in signed BCD is 1001 0101 0110 1000. 
Now that both numbers are represented in signed BCD, they can be added together: 0000 0011 0101 0111 0 3 5 7 + 1001 0101 0110 1000 9 5 6 8 = 1001 1000 1011 1111 9 8 11 15 Since BCD is a form of decimal representation, several of the digit sums above are invalid. In the event that an invalid entry (any BCD digit greater than 1001) exists, 6 is added to generate a carry bit and cause the sum to become a valid entry. So, adding 6 to the invalid entries results in the following: 1001 1000 1011 1111 9 8 11 15 + 0000 0000 0110 0110 0 0 6 6 = 1001 1001 0010 0101 9 9 2 5 Thus the result of the subtraction is 1001 1001 0010 0101 (−925). To confirm the result, note that the first digit is 9, which means negative. This seems to be correct since 357 − 432 should result in a negative number. The remaining nibbles are BCD, so 1001 0010 0101 is 925. The ten's complement of 925 is 1000 − 925 = 75, so the calculated answer is −75. If there are a different number of nibbles being added together (such as 1053 − 2), the number with the fewer digits must first be prefixed with zeros before taking the ten's complement or subtracting. So, with 1053 − 2, 2 would have to first be represented as 0002 in BCD, and the ten's complement of 0002 would have to be calculated. BCD in computers IBM IBM used the terms Binary-Coded Decimal Interchange Code (BCDIC, sometimes just called BCD), for 6-bit alphanumeric codes that represented numbers, upper-case letters and special characters. Some variation of BCDIC alphamerics is used in most early IBM computers, including the IBM 1620 (introduced in 1959), IBM 1400 series, and non-decimal architecture members of the IBM 700/7000 series. The IBM 1400 series are character-addressable machines, each location being six bits labeled B, A, 8, 4, 2 and 1, plus an odd parity check bit (C) and a word mark bit (M). For encoding digits 1 through 9, B and A are zero and the digit value represented by standard 4-bit BCD in bits 8 through 1. For most other characters bits B and A are derived simply from the "12", "11", and "0" "zone punches" in the punched card character code, and bits 8 through 1 from the 1 through 9 punches. A "12 zone" punch set both B and A, an "11 zone" set B, and a "0 zone" (a 0 punch combined with any others) set A. Thus the letter A, which is (12,1) in the punched card format, is encoded (B,A,1). The currency symbol $, (11,8,3) in the punched card, was encoded in memory as (B,8,2,1). This allows the circuitry that converts between the punched card format and the internal storage format to be very simple, with only a few special cases. One important special case is digit 0, represented by a lone 0 punch in the card, and (8,2) in core memory. The memory of the IBM 1620 is organized into 6-bit addressable digits, the usual 8, 4, 2, 1 plus F, used as a flag bit and C, an odd parity check bit. BCD alphamerics are encoded using digit pairs, with the "zone" in the even-addressed digit and the "digit" in the odd-addressed digit, the "zone" being related to the 12, 11, and 0 "zone punches" as in the 1400 series. Input/output translation hardware converted between the internal digit pairs and the external standard 6-bit BCD codes. In the decimal-architecture IBM 7070, IBM 7072, and IBM 7074, alphamerics are encoded using digit pairs (using two-out-of-five code in the digits, not BCD) of the 10-digit word, with the "zone" in the left digit and the "digit" in the right digit. 
Input/output translation hardware converted between the internal digit pairs and the external standard 6-bit BCD codes. With the introduction of System/360, IBM expanded 6-bit BCD alphamerics to 8-bit EBCDIC, allowing the addition of many more characters (e.g., lowercase letters). A variable length packed BCD numeric data type is also implemented, providing machine instructions that perform arithmetic directly on packed decimal data. On the IBM 1130 and 1800, packed BCD is supported in software by IBM's Commercial Subroutine Package. Today, BCD data is still heavily used in IBM databases such as IBM Db2 and processors such as z/Architecture and POWER6 and later Power ISA processors. In these products, the BCD is usually zoned BCD (as in EBCDIC or ASCII), packed BCD (two decimal digits per byte), or "pure" BCD encoding (one decimal digit stored as BCD in the low four bits of each byte). All of these are used within hardware registers and processing units, and in software. Other computers The Digital Equipment Corporation VAX series includes instructions that can perform arithmetic directly on packed BCD data and convert between packed BCD data and other integer representations. The VAX's packed BCD format is compatible with that on IBM System/360 and IBM's later compatible processors. The MicroVAX and later VAX implementations dropped this ability from the CPU but retained code compatibility with earlier machines by implementing the missing instructions in an operating system-supplied software library. This is invoked automatically via exception handling when the defunct instructions are encountered, so that programs using them can execute without modification on the newer machines. Many processors have hardware support for BCD-encoded integer arithmetic; examples include the 6502, the Motorola 68000 series, and the x86 series. The Intel x86 architecture supports a unique 18-digit (ten-byte) BCD format that can be loaded into and stored from the floating point registers, from where computations can be performed. In more recent computers such capabilities are almost always implemented in software rather than the CPU's instruction set, but BCD numeric data are still extremely common in commercial and financial applications. There are tricks for implementing packed BCD and zoned decimal add-or-subtract operations using short but difficult-to-understand sequences of word-parallel logic and binary arithmetic operations. For example, the following code (written in C) computes an unsigned 8-digit packed BCD addition using 32-bit binary operations:

#include <stdint.h>

uint32_t BCDadd(uint32_t a, uint32_t b)
{
    uint32_t t1, t2;                 // unsigned 32-bit intermediate values

    t1 = a + 0x06666666;
    t2 = t1 ^ b;                     // sum without carry propagation
    t1 = t1 + b;                     // provisional sum
    t2 = t1 ^ t2;                    // all the binary carry bits
    t2 = ~t2 & 0x11111110;           // just the BCD carry bits
    t2 = (t2 >> 2) | (t2 >> 3);      // correction
    return t1 - t2;                  // corrected BCD sum
}

BCD in electronics BCD is common in electronic systems where a numeric value is to be displayed, especially in systems consisting solely of digital logic, and not containing a microprocessor. By employing BCD, the manipulation of numerical data for display can be greatly simplified by treating each digit as a separate single sub-circuit. This matches much more closely the physical reality of display hardware; a designer might choose to use a series of separate identical seven-segment displays to build a metering circuit, for example. 
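Continuing the metering-circuit example, a per-digit BCD to seven-segment decoder reduces to a 10-entry lookup table. The C sketch below is illustrative only; the segment bit assignment (bit 0 = segment a through bit 6 = segment g) is an assumption of this sketch, not a convention fixed by the text:

#include <stdio.h>

/* BCD digit -> seven-segment pattern, bit order g f e d c b a (bit 0 = a). */
static const unsigned char SEG7[10] = {
    0x3F, /* 0 */  0x06, /* 1 */  0x5B, /* 2 */  0x4F, /* 3 */  0x66, /* 4 */
    0x6D, /* 5 */  0x7D, /* 6 */  0x07, /* 7 */  0x7F, /* 8 */  0x6F  /* 9 */
};

int main(void) {
    /* Each nibble of a packed BCD value drives its own display digit. */
    unsigned value = 0x42;                               /* packed BCD for decimal 42 */
    printf("tens segments: 0x%02X\n", SEG7[(value >> 4) & 0xF]);
    printf("ones segments: 0x%02X\n", SEG7[value & 0xF]);
    return 0;
}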
If the numeric quantity were stored and manipulated as pure binary, interfacing with such a display would require complex circuitry. Therefore, in cases where the calculations are relatively simple, working throughout with BCD can lead to an overall simpler system than converting to and from binary. Most pocket calculators do all their calculations in BCD. The same argument applies when hardware of this type uses an embedded microcontroller or other small processor. Often, representing numbers internally in BCD format results in smaller code, since a conversion from or to binary representation can be expensive on such limited processors. For these applications, some small processors feature dedicated arithmetic modes, which assist when writing routines that manipulate BCD quantities. Comparison with pure binary Advantages Scaling by a power of 10 is simple. Rounding at a decimal digit boundary is simpler. Addition and subtraction in decimal do not require rounding. The alignment of two decimal numbers (for example 1.3 + 27.08) is a simple, exact shift. Conversion to a character form or for display (e.g., to a text-based format such as XML, or to drive signals for a seven-segment display) is a simple per-digit mapping, and can be done in linear (O(n)) time. Conversion from pure binary involves relatively complex logic that spans digits, and for large numbers, no linear-time conversion algorithm is known. Many non-integral values, such as decimal 0.2, have an infinite place-value representation in binary (.001100110011...) but have a finite place-value in binary-coded decimal (0.0010). Consequently, a system based on binary-coded decimal representations of decimal fractions avoids errors representing and calculating such values. This is useful in financial calculations. Disadvantages Practical existing implementations of BCD are typically slower than operations on binary representations, especially on embedded systems, due to limited processor support for native BCD operations. Some operations are more complex to implement. Adders require extra logic to cause them to wrap and generate a carry early. Also, 15 to 20 per cent more circuitry is needed for BCD add compared to pure binary. Multiplication requires the use of algorithms that are somewhat more complex than shift-mask-add (a binary multiplication, requiring binary shifts and adds or the equivalent, is required per digit or group of digits). Standard BCD requires four bits per digit, roughly 20 per cent more space than a binary encoding (the ratio of 4 bits to log2 10 ≈ 3.32 bits is 1.204). When packed so that three digits are encoded in ten bits, the storage overhead is greatly reduced, at the expense of an encoding that is unaligned with the 8-bit byte boundaries common on existing hardware, resulting in slower implementations on these systems. Representational variations Various BCD implementations exist that employ other representations for numbers. Programmable calculators manufactured by Texas Instruments, Hewlett-Packard, and others typically employ a floating-point BCD format, typically with two or three digits for the (decimal) exponent. The extra bits of the sign digit may be used to indicate special numeric values, such as infinity, underflow/overflow, and error (a blinking display). Signed variations Signed decimal values may be represented in several ways. 
The COBOL programming language, for example, supports five zoned decimal formats, each of which encodes the numeric sign in a different way. Telephony binary-coded decimal (TBCD) 3GPP developed TBCD, an expansion to BCD where the remaining (unused) bit combinations are used to add specific telephony characters, with digits similar to those found in the original design of telephone keypads. The mentioned 3GPP document defines TBCD-STRING with swapped nibbles in each byte. Bits, octets and digits are indexed from 1 (bits from the right, digits and octets from the left): bits 8765 of octet n encoding digit 2n bits 4321 of octet n encoding digit 2(n – 1) + 1 This means that the number 1234 would become 21 43 in TBCD (illustrated in the sketch below). This format is used in modern mobile telephony to send dialed numbers, as well as operator ID (the MCC/MNC tuple), IMEI, IMSI (SUPI), etc. Alternative encodings If errors in representation and computation are more important than the speed of conversion to and from display, a scaled binary representation may be used, which stores a decimal number as a binary-encoded integer and a binary-encoded signed decimal exponent. For example, 0.2 can be represented as 2 × 10^−1 (the binary-encoded integer 2 with a decimal exponent of −1). This representation allows rapid multiplication and division, but may require shifting by a power of 10 during addition and subtraction to align the decimal points. It is appropriate for applications with a fixed number of decimal places that do not then require this adjustment, particularly financial applications where 2 or 4 digits after the decimal point are usually enough. Indeed, this is almost a form of fixed point arithmetic since the position of the radix point is implied. The Hertz and Chen–Ho encodings provide Boolean transformations for converting groups of three BCD-encoded digits to and from 10-bit values that can be efficiently encoded in hardware with only 2 or 3 gate delays. Densely packed decimal (DPD) is a similar scheme that is used for most of the significand, except the lead digit, for one of the two alternative decimal encodings specified in the IEEE 754-2008 floating-point standard. Application The BIOS in many personal computers stores the date and time in BCD because the MC146818 real-time clock chip used in the original IBM PC AT motherboard provided the time encoded in BCD. This form is easily converted into ASCII for display. The Atari 8-bit computers use a BCD format for floating point numbers. The MOS Technology 6502 processor has a BCD mode for the addition and subtraction instructions. The Psion Organiser 1 handheld computer's manufacturer-supplied software also uses BCD to implement floating point; later Psion models use binary exclusively. Early models of the PlayStation 3 store the date and time in BCD. This led to a worldwide outage of the console on 1 March 2010. The last two digits of the year stored as BCD were misinterpreted as 16, causing an error in the unit's date, rendering most functions inoperable. This has been referred to as the Year 2010 problem. Legal history In the 1972 case Gottschalk v. Benson, the U.S. Supreme Court overturned a lower court's decision that had allowed a patent for converting BCD-encoded numbers to binary on a computer. The decision noted that a patent "would wholly pre-empt the mathematical formula and in practical effect would be a patent on the algorithm itself". This was a landmark judgement that determined the patentability of software and algorithms. 
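Returning to the TBCD layout described above, a minimal C sketch of the nibble swapping follows (a hypothetical helper; the 0xF filler nibble for odd-length numbers is an assumption based on common practice, not taken from the text):

#include <stdint.h>
#include <stdio.h>

/* Encode a string of decimal digits into TBCD: digit 2(n-1)+1 goes into the
   low nibble of octet n, digit 2n into the high nibble (swapped-nibble layout). */
static size_t tbcd_encode(const char *digits, uint8_t *out) {
    size_t n = 0;
    for (size_t i = 0; digits[i] != '\0'; i += 2) {
        uint8_t low  = (uint8_t)(digits[i] - '0');
        uint8_t high = digits[i + 1] != '\0' ? (uint8_t)(digits[i + 1] - '0')
                                             : 0xF;   /* assumed filler */
        out[n++] = (uint8_t)((high << 4) | low);
    }
    return n;
}

int main(void) {
    uint8_t buf[8];
    size_t n = tbcd_encode("1234", buf);
    for (size_t i = 0; i < n; ++i)
        printf("%02X ", buf[i]);   /* prints: 21 43 */
    printf("\n");
    return 0;
}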
See also Bi-quinary coded decimal Binary-coded ternary (BCT) Binary integer decimal (BID) Bitmask Chen–Ho encoding Decimal computer Densely packed decimal (DPD) Double dabble, an algorithm for converting binary numbers to BCD Year 2000 problem External links Convert BCD to decimal, binary and hexadecimal and vice versa BCD for Java Computer arithmetic Numeral systems Non-standard positional numeral systems Binary arithmetic Articles with example C code
Binary-coded decimal
[ "Mathematics" ]
6,565
[ "Mathematical objects", "Computer arithmetic", "Numeral systems", "Arithmetic", "Binary arithmetic", "Numbers" ]
3,876
https://en.wikipedia.org/wiki/Binomial%20distribution
In probability theory and statistics, the binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a sequence of n independent experiments, each asking a yes–no question, and each with its own Boolean-valued outcome: success (with probability p) or failure (with probability $q = 1 - p$). A single success/failure experiment is also called a Bernoulli trial or Bernoulli experiment, and a sequence of outcomes is called a Bernoulli process; for a single trial, i.e., $n = 1$, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the binomial test of statistical significance.

The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with replacement from a population of size N. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for N much larger than n, the binomial distribution remains a good approximation, and is widely used.

Definitions

Probability mass function

If the random variable X follows the binomial distribution with parameters $n \in \mathbb{N}$ and $p \in [0, 1]$, we write $X \sim B(n, p)$. The probability of getting exactly k successes in n independent Bernoulli trials (with the same rate p) is given by the probability mass function:

$f(k; n, p) = \Pr(X = k) = \binom{n}{k} p^k (1-p)^{n-k}$

for $k = 0, 1, 2, \ldots, n$, where

$\binom{n}{k} = \frac{n!}{k!(n-k)!}$

is the binomial coefficient. The formula can be understood as follows: $p^k (1-p)^{n-k}$ is the probability of obtaining the sequence of n independent Bernoulli trials in which k trials are "successes" and the remaining $n - k$ trials result in "failure". Since the trials are independent with probabilities remaining constant between them, any sequence of n trials with k successes (and $n - k$ failures) has the same probability of being achieved (regardless of positions of successes within the sequence). There are $\binom{n}{k}$ such sequences, since the binomial coefficient counts the number of ways to choose the positions of the k successes among the n trials. The binomial distribution is concerned with the probability of obtaining any of these sequences, meaning the probability of obtaining one of them ($p^k (1-p)^{n-k}$) must be added $\binom{n}{k}$ times, hence $\Pr(X = k) = \binom{n}{k} p^k (1-p)^{n-k}$.

In creating reference tables for binomial distribution probability, usually the table is filled in up to $n/2$ values. This is because for $k > n/2$, the probability can be calculated by its complement as

$f(k; n, p) = f(n - k; n, 1 - p).$

Looking at the expression $f(k; n, p)$ as a function of k, there is a k value that maximizes it. This value can be found by calculating

$\frac{f(k+1; n, p)}{f(k; n, p)} = \frac{(n-k)p}{(k+1)(1-p)}$

and comparing it to 1. There is always an integer M that satisfies

$(n+1)p - 1 \le M < (n+1)p.$

$f(k; n, p)$ is monotone increasing for $k < M$ and monotone decreasing for $k > M$, with the exception of the case where $(n+1)p$ is an integer. In this case, there are two values for which f is maximal: $(n+1)p$ and $(n+1)p - 1$. M is the most probable outcome (that is, the most likely, although this can still be unlikely overall) of the Bernoulli trials and is called the mode. Equivalently, $M = \lceil (n+1)p \rceil - 1$. Taking the floor function, we obtain $M = \lfloor (n+1)p \rfloor$ when $(n+1)p$ is not an integer.

Example

Suppose a biased coin comes up heads with probability 0.3 when tossed. The probability of seeing exactly 4 heads in 6 tosses is

$f(4; 6, 0.3) = \binom{6}{4} (0.3)^4 (0.7)^2 = 0.059535.$

Cumulative distribution function

The cumulative distribution function can be expressed as:

$F(k; n, p) = \Pr(X \le k) = \sum_{i=0}^{\lfloor k \rfloor} \binom{n}{i} p^i (1-p)^{n-i},$

where $\lfloor k \rfloor$ is the "floor" under k, i.e. the greatest integer less than or equal to k.

It can also be represented in terms of the regularized incomplete beta function, as follows:

$F(k; n, p) = I_{1-p}(n - k, k + 1),$

which is equivalent to the cumulative distribution function of the beta distribution evaluated at $1 - p$, and is also related to the cumulative distribution function of the F-distribution. Some closed-form bounds for the cumulative distribution function are given below.
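As a check on the formulas above, the probability mass function and cumulative distribution function can be computed with only the Python standard library; the sketch below reproduces the biased-coin example and verifies the mode formula for one arbitrarily chosen case.

from math import comb, floor

def binom_pmf(k: int, n: int, p: float) -> float:
    """Pr(X = k) for X ~ B(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(k: int, n: int, p: float) -> float:
    """Pr(X <= k)."""
    return sum(binom_pmf(i, n, p) for i in range(k + 1))

# The biased-coin example: exactly 4 heads in 6 tosses with p = 0.3.
print(binom_pmf(4, 6, 0.3))            # 0.059535

# The mode equals floor((n+1)p) when (n+1)p is not an integer.
n, p = 20, 0.3
mode = max(range(n + 1), key=lambda k: binom_pmf(k, n, p))
assert mode == floor((n + 1) * p)      # floor(21 * 0.3) = 6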
Properties

Expected value and variance

If $X \sim B(n, p)$, that is, X is a binomially distributed random variable, n being the total number of experiments and p the probability of each experiment yielding a successful result, then the expected value of X is:

$\operatorname{E}[X] = np.$

This follows from the linearity of the expected value along with the fact that X is the sum of n identical Bernoulli random variables, each with expected value p. In other words, if $X_1, \ldots, X_n$ are identical (and independent) Bernoulli random variables with parameter p, then $X = X_1 + \cdots + X_n$ and

$\operatorname{E}[X] = \operatorname{E}[X_1] + \cdots + \operatorname{E}[X_n] = np.$

The variance is:

$\operatorname{Var}(X) = np(1-p).$

This similarly follows from the fact that the variance of a sum of independent random variables is the sum of the variances.

Higher moments

The first 6 central moments, defined as $\mu_c = \operatorname{E}\left[(X - np)^c\right]$, are given by

$\mu_1 = 0,$
$\mu_2 = np(1-p),$
$\mu_3 = np(1-p)(1-2p),$
$\mu_4 = np(1-p)\big(1 + (3n-6)p(1-p)\big),$
$\mu_5 = np(1-p)(1-2p)\big(1 + (10n-12)p(1-p)\big),$
$\mu_6 = np(1-p)\big(1 - 30p(1-p)(1 - 4p(1-p)) + 5np(1-p)(5 - 26p(1-p)) + 15n^2 p^2 (1-p)^2\big).$

The non-central moments satisfy

$\operatorname{E}[X] = np, \qquad \operatorname{E}[X^2] = np(1-p) + n^2 p^2,$

and in general

$\operatorname{E}[X^c] = \sum_{k=0}^{c} \left\{ {c \atop k} \right\} n^{\underline{k}} p^k,$

where $\left\{ {c \atop k} \right\}$ are the Stirling numbers of the second kind, and $n^{\underline{k}} = n(n-1)\cdots(n-k+1)$ is the kth falling power of n. A simple bound follows by bounding the binomial moments via the higher Poisson moments:

$\operatorname{E}[X^c] \le \left(\frac{c}{\ln(c/(np) + 1)}\right)^c \le (np)^c \exp\left(\frac{c^2}{2np}\right).$

This shows that if $c = O(\sqrt{np})$, then $\operatorname{E}[X^c]$ is at most a constant factor away from $\operatorname{E}[X]^c.$

Mode

Usually the mode of a binomial B(n, p) distribution is equal to $\lfloor (n+1)p \rfloor$, where $\lfloor \cdot \rfloor$ is the floor function. However, when $(n+1)p$ is an integer and p is neither 0 nor 1, then the distribution has two modes: $(n+1)p$ and $(n+1)p - 1$. When p is equal to 0 or 1, the mode will be 0 and n correspondingly. These cases can be summarized as follows:

mode = 0 if p = 0,
mode = $\lfloor (n+1)p \rfloor$ if $(n+1)p$ is 0 or a noninteger,
mode = $(n+1)p$ and $(n+1)p - 1$ if $(n+1)p \in \{1, \ldots, n\}$,
mode = n if p = 1.

Proof: Let

$f(k) = \binom{n}{k} p^k (1-p)^{n-k}.$

For $p = 0$ only $f(0)$ has a nonzero value, with $f(0) = 1$. For $p = 1$ we find $f(n) = 1$ and $f(k) = 0$ for $k \ne n$. This proves that the mode is 0 for $p = 0$ and n for $p = 1$.

Let $0 < p < 1$. We find

$\frac{f(k+1)}{f(k)} = \frac{(n-k)p}{(k+1)(1-p)}.$

From this follows

$k > (n+1)p - 1 \Rightarrow f(k+1) < f(k),$
$k = (n+1)p - 1 \Rightarrow f(k+1) = f(k),$
$k < (n+1)p - 1 \Rightarrow f(k+1) > f(k).$

So when $(n+1)p - 1$ is an integer, then both $(n+1)p - 1$ and $(n+1)p$ are modes. In the case that $(n+1)p - 1$ is not an integer, then only $\lfloor (n+1)p \rfloor$ is a mode.

Median

In general, there is no single formula to find the median for a binomial distribution, and it may even be non-unique. However, several special results have been established:

If np is an integer, then the mean, median, and mode coincide and equal np.
Any median m must lie within the interval $\lfloor np \rfloor \le m \le \lceil np \rceil$.
A median m cannot lie too far away from the mean: $|m - np| \le \min\{\ln 2, \max\{p, 1-p\}\}$.
The median is unique and equal to $m = \operatorname{round}(np)$ when $|m - np| \le \min\{p, 1-p\}$ (except for the case when $p = 1/2$ and n is odd).
When p is a rational number (with the exception of $p = 1/2$ with n odd) the median is unique.
When $p = 1/2$ and n is odd, any number m in the interval $(n-1)/2 \le m \le (n+1)/2$ is a median of the binomial distribution. If $p = 1/2$ and n is even, then $m = n/2$ is the unique median.

Tail bounds

For $k \le np$, upper bounds can be derived for the lower tail of the cumulative distribution function $F(k; n, p) = \Pr(X \le k)$, the probability that there are at most k successes. Since $\Pr(X \ge k) = F(n-k; n, 1-p)$, these bounds can also be seen as bounds for the upper tail of the cumulative distribution function for $k \ge np$.

Hoeffding's inequality yields the simple bound

$F(k; n, p) \le \exp\left(-2n\left(p - \frac{k}{n}\right)^2\right),$

which is however not very tight. In particular, for $p = 1$, we have that $F(k; n, p) = 0$ (for fixed k, n with $k < n$), but Hoeffding's bound evaluates to a positive constant.

A sharper bound can be obtained from the Chernoff bound:

$F(k; n, p) \le \exp\left(-n D\left(\frac{k}{n} \,\Big\|\, p\right)\right),$

where $D(a \| p)$ is the relative entropy (or Kullback–Leibler divergence) between an a-coin and a p-coin (i.e. between the Bernoulli(a) and Bernoulli(p) distribution):

$D(a \| p) = a \ln\frac{a}{p} + (1-a) \ln\frac{1-a}{1-p}.$

Asymptotically, this bound is reasonably tight.

One can also obtain lower bounds on the tail $F(k; n, p)$, known as anti-concentration bounds. By approximating the binomial coefficient with Stirling's formula it can be shown that

$F(k; n, p) \ge \frac{1}{\sqrt{8n\frac{k}{n}\left(1 - \frac{k}{n}\right)}} \exp\left(-n D\left(\frac{k}{n} \,\Big\|\, p\right)\right),$

which implies the simpler but looser bound

$F(k; n, p) \ge \frac{1}{\sqrt{2n}} \exp\left(-n D\left(\frac{k}{n} \,\Big\|\, p\right)\right).$

For $p = 1/2$ and $k \ge 3n/8$, with n even, it is possible to make the denominator constant:

$F\left(k; n, \tfrac12\right) \ge \frac{1}{15} \exp\left(-16n\left(\frac12 - \frac{k}{n}\right)^2\right).$

Statistical inference

Estimation of parameters

When n is known, the parameter p can be estimated using the proportion of successes:

$\hat p = \frac{x}{n},$

where x is the number of observed successes. This estimator is found using maximum likelihood estimation and also the method of moments. This estimator is unbiased and uniformly minimum-variance, proven using the Lehmann–Scheffé theorem, since it is based on a minimal sufficient and complete statistic (i.e., x). It is also consistent both in probability and in MSE. This statistic is asymptotically normal thanks to the central limit theorem, because it is the same as taking the mean over Bernoulli samples. It has a variance of $\operatorname{Var}(\hat p) = \frac{p(1-p)}{n}$, a property which is used in various ways, such as in Wald's confidence intervals.
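A short simulation illustrates the properties of the estimator just described: across repeated samples, the average of the estimates is close to p (unbiasedness) and their empirical variance is close to p(1−p)/n. The sample size, p, and seed below are arbitrary.

import random
random.seed(42)

n, p, repeats = 50, 0.3, 20_000

def sample_phat() -> float:
    """Draw x ~ B(n, p) as a sum of Bernoulli trials and return x/n."""
    x = sum(random.random() < p for _ in range(n))
    return x / n

phats = [sample_phat() for _ in range(repeats)]
mean_phat = sum(phats) / repeats
var_phat = sum((v - mean_phat) ** 2 for v in phats) / repeats

print(mean_phat)   # close to p = 0.3
print(var_phat)    # close to p(1-p)/n = 0.0042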
A closed-form Bayes estimator for p also exists when using the Beta distribution as a conjugate prior distribution. When using a general $\operatorname{Beta}(\alpha, \beta)$ as a prior, the posterior mean estimator is:

$\hat p_b = \frac{x + \alpha}{n + \alpha + \beta}.$

The Bayes estimator is asymptotically efficient and as the sample size approaches infinity ($n \to \infty$), it approaches the MLE solution. The Bayes estimator is biased (how much depends on the priors), admissible and consistent in probability. Using the Bayesian estimator with the Beta distribution can be used with Thompson sampling.

For the special case of using the standard uniform distribution as a non-informative prior, $\operatorname{Beta}(\alpha = 1, \beta = 1) = U(0, 1)$, the posterior mean estimator becomes:

$\hat p_b = \frac{x + 1}{n + 2}.$

(A posterior mode should just lead to the standard estimator.) This method is called the rule of succession, which was introduced in the 18th century by Pierre-Simon Laplace.

When relying on Jeffreys prior, the prior is $\operatorname{Beta}(\alpha = \tfrac12, \beta = \tfrac12)$, which leads to the estimator:

$\hat p_{\text{Jeffreys}} = \frac{x + \tfrac12}{n + 1}.$

When estimating p with very rare events and a small n (e.g., if x = 0), then using the standard estimator leads to $\hat p = 0$, which sometimes is unrealistic and undesirable. In such cases there are various alternative estimators. One way is to use the Bayes estimator, leading to:

$\hat p_b = \frac{1}{n + 2}.$

Another method is to use the upper bound of the confidence interval obtained using the rule of three:

$\hat p_{\text{rule of 3}} = \frac{3}{n}.$

Confidence intervals for the parameter p

Even for quite large values of n, the actual distribution of the mean is significantly nonnormal. Because of this problem several methods to estimate confidence intervals have been proposed. In the equations for confidence intervals below, the variables have the following meaning:

$n_1$ is the number of successes out of n, the total number of trials,
$\hat p = \frac{n_1}{n}$ is the proportion of successes,
$z$ is the $1 - \tfrac12\alpha$ quantile of a standard normal distribution (i.e., probit) corresponding to the target error rate $\alpha$. For example, for a 95% confidence level the error $\alpha = 0.05$, so $1 - \tfrac12\alpha = 0.975$ and $z = 1.96$.

Wald method

$\hat p \pm z \sqrt{\frac{\hat p (1 - \hat p)}{n}}.$

A continuity correction of $\frac{0.5}{n}$ may be added.

Agresti–Coull method

$\tilde p \pm z \sqrt{\frac{\tilde p (1 - \tilde p)}{n + z^2}}.$

Here the estimate of p is modified to

$\tilde p = \frac{n_1 + \frac{z^2}{2}}{n + z^2}.$

This method works well when n is not small and the number of successes is not close to 0 or n; for the extreme cases, use the Wilson (score) method below.

Arcsine method

$\sin^2\left(\arcsin\left(\sqrt{\hat p}\right) \pm \frac{z}{2\sqrt{n}}\right).$

Wilson (score) method

The notation in the formula below differs from the previous formulas in two respects: Firstly, $z_x$ has a slightly different interpretation in the formula below: it has its ordinary meaning of "the xth quantile of the standard normal distribution", rather than being a shorthand for "the $(1 - x)$th quantile". Secondly, this formula does not use a plus-minus to define the two bounds. Instead, one may use $z = z_{\alpha/2}$ to get the lower bound, or use $z = z_{1 - \alpha/2}$ to get the upper bound. For example: for a 95% confidence level the error $\alpha = 0.05$, so one gets the lower bound by using $z = z_{\alpha/2} = z_{0.025} = -1.96$, and one gets the upper bound by using $z = z_{1 - \alpha/2} = z_{0.975} = 1.96$.

$\frac{\hat p + \frac{z^2}{2n} + z\sqrt{\frac{\hat p (1 - \hat p)}{n} + \frac{z^2}{4n^2}}}{1 + \frac{z^2}{n}}.$

Comparison

The so-called "exact" (Clopper–Pearson) method is the most conservative. (Exact does not mean perfectly accurate; rather, it indicates that the estimates will not be less conservative than the true value.) The Wald method, although commonly recommended in textbooks, is the most biased.
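Of the interval methods above, the Wilson score interval is straightforward to implement directly. In this sketch the quantile z = 1.96 for a 95% interval is hard-coded, and the function name and example counts are illustrative.

from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion."""
    phat = successes / n
    centre = phat + z**2 / (2 * n)
    margin = z * sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2))
    denom = 1 + z**2 / n
    return ((centre - margin) / denom, (centre + margin) / denom)

print(wilson_interval(7, 50))   # roughly (0.07, 0.26)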
Related distributions

Sums of binomials

If $X \sim B(n, p)$ and $Y \sim B(m, p)$ are independent binomial variables with the same probability p, then $X + Y$ is again a binomial variable; its distribution is $Z = X + Y \sim B(n + m, p)$:

$\Pr(Z = k) = \sum_{i=0}^{k} \binom{n}{i} \binom{m}{k-i} p^k (1-p)^{n+m-k} = \binom{n+m}{k} p^k (1-p)^{n+m-k}.$

A binomially distributed random variable $X \sim B(n, p)$ can be considered as the sum of n Bernoulli distributed random variables. So the sum of two binomially distributed random variables $X \sim B(n, p)$ and $Y \sim B(m, p)$ is equivalent to the sum of $n + m$ Bernoulli distributed random variables, which means $Z = X + Y \sim B(n + m, p)$. This can also be proven directly using the addition rule.

However, if X and Y do not have the same probability p, then the variance of the sum will be smaller than the variance of a binomial variable distributed as $B(n + m, \bar p)$, where $\bar p$ is the average success probability.

Poisson binomial distribution

The binomial distribution is a special case of the Poisson binomial distribution, which is the distribution of a sum of n independent non-identical Bernoulli trials $B(p_i)$.

Ratio of two binomial distributions

This result was first derived by Katz and coauthors in 1978. Let $X \sim B(n, p_1)$ and $Y \sim B(m, p_2)$ be independent. Let $T = (X/n)/(Y/m)$. Then $\log(T)$ is approximately normally distributed with mean $\log(p_1/p_2)$ and variance

$\frac{1/p_1 - 1}{n} + \frac{1/p_2 - 1}{m}.$

Conditional binomials

If $X \sim B(n, p)$ and $Y \mid X \sim B(X, q)$ (the conditional distribution of Y, given X), then Y is a simple binomial random variable with distribution $Y \sim B(n, pq)$.

For example, imagine throwing n balls to a basket $U_X$ and taking the balls that hit and throwing them to another basket $U_Y$. If p is the probability to hit $U_X$ then $X \sim B(n, p)$ is the number of balls that hit $U_X$. If q is the probability to hit $U_Y$ then the number of balls that hit $U_Y$ is $Y \sim B(X, q)$ and therefore $Y \sim B(n, pq)$.

Since $X \sim B(n, p)$ and $Y \mid X \sim B(X, q)$, by the law of total probability,

$\Pr[Y = m] = \sum_{k=m}^{n} \Pr[Y = m \mid X = k] \Pr[X = k] = \sum_{k=m}^{n} \binom{n}{k} \binom{k}{m} p^k q^m (1-p)^{n-k} (1-q)^{k-m}.$

Since $\binom{n}{k} \binom{k}{m} = \binom{n}{m} \binom{n-m}{k-m},$ the equation above can be expressed as

$\Pr[Y = m] = \sum_{k=m}^{n} \binom{n}{m} \binom{n-m}{k-m} p^k q^m (1-p)^{n-k} (1-q)^{k-m}.$

Factoring $p^k = p^m p^{k-m}$ and pulling all the terms that don't depend on k out of the sum now yields

$\Pr[Y = m] = \binom{n}{m} p^m q^m \left(\sum_{k=m}^{n} \binom{n-m}{k-m} p^{k-m} (1-p)^{n-k} (1-q)^{k-m}\right).$

After substituting $i = k - m$ in the expression above, we get

$\Pr[Y = m] = \binom{n}{m} (pq)^m \left(\sum_{i=0}^{n-m} \binom{n-m}{i} (p - pq)^i (1-p)^{n-m-i}\right).$

Notice that the sum (in the parentheses) above equals $(p - pq + 1 - p)^{n-m} = (1 - pq)^{n-m}$ by the binomial theorem. Substituting this in finally yields

$\Pr[Y = m] = \binom{n}{m} (pq)^m (1 - pq)^{n-m}$

and thus $Y \sim B(n, pq)$ as desired.
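The conditional-binomial identity just derived is easy to test by simulation: drawing Y through the two-stage process gives a mean close to npq. The parameters and trial count below are arbitrary.

import random
random.seed(0)

n, p, q, trials = 30, 0.6, 0.5, 20_000

def binom_draw(m: int, prob: float) -> int:
    """Sum of m Bernoulli(prob) draws."""
    return sum(random.random() < prob for _ in range(m))

ys = [binom_draw(binom_draw(n, p), q) for _ in range(trials)]
print(sum(ys) / trials)   # close to n*p*q = 9.0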
Bernoulli distribution

The Bernoulli distribution is a special case of the binomial distribution, where $n = 1$. Symbolically, $X \sim B(1, p)$ has the same meaning as $X \sim \operatorname{Bernoulli}(p)$. Conversely, any binomial distribution, $B(n, p)$, is the distribution of the sum of n independent Bernoulli trials, $\operatorname{Bernoulli}(p)$, each with the same probability p.

Normal approximation

If n is large enough, then the skew of the distribution is not too great. In this case a reasonable approximation to $B(n, p)$ is given by the normal distribution

$\mathcal{N}(np,\, np(1-p)),$

and this basic approximation can be improved in a simple way by using a suitable continuity correction. The basic approximation generally improves as n increases (at least 20) and is better when p is not near to 0 or 1. Various rules of thumb may be used to decide whether n is large enough, and p is far enough from the extremes of zero or one:

One rule is that the normal approximation is adequate if the absolute value of the skewness is strictly less than 0.3; that is, if

$\frac{|1 - 2p|}{\sqrt{np(1-p)}} = \frac{1}{\sqrt{n}} \left|\sqrt{\frac{1-p}{p}} - \sqrt{\frac{p}{1-p}}\right| < 0.3.$

This can be made precise using the Berry–Esseen theorem.

A stronger rule states that the normal approximation is appropriate only if everything within 3 standard deviations of its mean is within the range of possible values; that is, only if

$\mu \pm 3\sigma = np \pm 3\sqrt{np(1-p)} \in (0, n).$

This 3-standard-deviation rule is equivalent to the following conditions, which also imply the first rule above:

$n > 9\left(\frac{1-p}{p}\right) \quad\text{and}\quad n > 9\left(\frac{p}{1-p}\right).$

The rule is totally equivalent to requesting that

$np - 3\sqrt{np(1-p)} > 0 \quad\text{and}\quad np + 3\sqrt{np(1-p)} < n.$

Moving terms around yields:

$np > 3\sqrt{np(1-p)} \quad\text{and}\quad n(1-p) > 3\sqrt{np(1-p)}.$

Since $0 < p < 1$, we can apply the square power and divide by the respective factors $np^2$ and $n(1-p)^2$, to obtain the desired conditions:

$n > 9\left(\frac{1-p}{p}\right) \quad\text{and}\quad n > 9\left(\frac{p}{1-p}\right).$

Notice that these conditions automatically imply that $n > 9$. On the other hand, apply again the square root and divide by 3,

$\frac{\sqrt n}{3} > \sqrt{\frac{1-p}{p}} > 0 \quad\text{and}\quad \frac{\sqrt n}{3} > \sqrt{\frac{p}{1-p}} > 0.$

Subtracting the second set of inequalities from the first one yields:

$\frac{\sqrt n}{3} > \sqrt{\frac{1-p}{p}} - \sqrt{\frac{p}{1-p}} > -\frac{\sqrt n}{3}$

and so, the desired first rule is satisfied,

$\left|\sqrt{\frac{1-p}{p}} - \sqrt{\frac{p}{1-p}}\right| < \frac{\sqrt n}{3}.$

Another commonly used rule is that both values $np$ and $n(1-p)$ must be greater than or equal to 5. However, the specific number varies from source to source, and depends on how good an approximation one wants. In particular, if one uses 9 instead of 5, the rule implies the results stated in the previous paragraphs.

Assume that both values $np$ and $n(1-p)$ are greater than 9. Since $0 < p < 1$, we easily have that

$np \ge 9 > 9(1-p) \quad\text{and}\quad n(1-p) \ge 9 > 9p.$

We only have to divide now by the respective factors $p$ and $1-p$, to deduce the alternative form of the 3-standard-deviation rule:

$n > 9\left(\frac{1-p}{p}\right) \quad\text{and}\quad n > 9\left(\frac{p}{1-p}\right).$

The following is an example of applying a continuity correction. Suppose one wishes to calculate $\Pr(X \le 8)$ for a binomial random variable X. If Y has a distribution given by the normal approximation, then $\Pr(X \le 8)$ is approximated by $\Pr(Y \le 8.5)$. The addition of 0.5 is the continuity correction; the uncorrected normal approximation gives considerably less accurate results.

This approximation, known as the de Moivre–Laplace theorem, is a huge time-saver when undertaking calculations by hand (exact calculations with large n are very onerous); historically, it was the first use of the normal distribution, introduced in Abraham de Moivre's book The Doctrine of Chances in 1738. Nowadays, it can be seen as a consequence of the central limit theorem since $B(n, p)$ is a sum of n independent, identically distributed Bernoulli variables with parameter p. This fact is the basis of a hypothesis test, a "proportion z-test", for the value of p using $x/n$, the sample proportion and estimator of p, in a common test statistic.

For example, suppose one randomly samples n people out of a large population and asks them whether they agree with a certain statement. The proportion of people who agree will of course depend on the sample. If groups of n people were sampled repeatedly and truly randomly, the proportions would follow an approximate normal distribution with mean equal to the true proportion p of agreement in the population and with standard deviation

$\sigma = \sqrt{\frac{p(1-p)}{n}}.$

Poisson approximation

The binomial distribution converges towards the Poisson distribution as the number of trials goes to infinity while the product $np$ converges to a finite limit. Therefore, the Poisson distribution with parameter $\lambda = np$ can be used as an approximation to the binomial distribution $B(n, p)$ if n is sufficiently large and p is sufficiently small. According to rules of thumb, this approximation is good if $n \ge 20$ and $p \le 0.05$ such that $np \le 1$, or if $n > 50$ and $p < 0.1$ such that $np < 5$, or if $n \ge 100$ and $np \le 10$. Concerning the accuracy of Poisson approximation, see Novak, ch. 4, and references therein.

Limiting distributions

Poisson limit theorem: As n approaches $\infty$ and p approaches 0 with the product $np$ held fixed, the distribution $B(n, p)$ approaches the Poisson distribution with expected value $\lambda = np$.

de Moivre–Laplace theorem: As n approaches $\infty$ while p remains fixed, the distribution of

$\frac{X - np}{\sqrt{np(1-p)}}$

approaches the normal distribution with expected value 0 and variance 1. This result is sometimes loosely stated by saying that the distribution of X is asymptotically normal with expected value $np$ and variance $np(1-p)$. This result is a specific case of the central limit theorem.
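The rules of thumb above can be illustrated numerically with the standard library (statistics.NormalDist is available from Python 3.8). The parameters below were deliberately chosen with small p, and the commented values are approximate.

from math import comb, exp, factorial
from statistics import NormalDist

n, p, k = 100, 0.04, 8

exact = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Normal approximation with continuity correction: Pr(Y <= k + 0.5).
normal = NormalDist(n * p, (n * p * (1 - p)) ** 0.5).cdf(k + 0.5)

# Poisson approximation with lambda = n*p.
lam = n * p
pois = sum(exp(-lam) * lam**i / factorial(i) for i in range(k + 1))

print(exact, normal, pois)
# Roughly 0.98, 0.99, 0.98: with p this small, the Poisson value tracks
# the exact CDF more closely than the normal approximation does.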
Beta distribution

The binomial distribution and beta distribution are different views of the same model of repeated Bernoulli trials. The binomial distribution is the PMF of k successes given n independent events, each with a probability p of success. Mathematically, when $\alpha = k + 1$ and $\beta = n - k + 1$, the beta distribution and the binomial distribution are related by a factor of $n + 1$:

$\operatorname{Beta}(p; k+1, n-k+1) = (n+1) B(k; n, p).$

Beta distributions also provide a family of prior probability distributions for binomial distributions in Bayesian inference: given a uniform prior, the posterior distribution for the probability of success p given n independent events with k observed successes is a beta distribution, $\operatorname{Beta}(k+1, n-k+1)$.

Computational methods

Random number generation

Methods for random number generation where the marginal distribution is a binomial distribution are well-established. One way to generate random variate samples from a binomial distribution is to use an inversion algorithm. To do so, one must calculate the probability $\Pr(X = k)$ for all values k from 0 through n. (These probabilities should sum to a value close to one, in order to encompass the entire sample space.) Then by using a pseudorandom number generator to generate samples uniformly between 0 and 1, one can transform the calculated samples into discrete numbers by using the probabilities calculated in the first step.

History

This distribution was derived by Jacob Bernoulli. He considered the case where $p = r/(r+s)$, where p is the probability of success and r and s are positive integers. Blaise Pascal had earlier considered the case where $p = 1/2$, tabulating the corresponding binomial coefficients in what is now recognized as Pascal's triangle.

See also Logistic regression Multinomial distribution Negative binomial distribution Beta-binomial distribution Binomial measure, an example of a multifractal measure. Statistical mechanics Piling-up lemma, the resulting probability when XOR-ing independent Boolean variables References Further reading External links Interactive graphic: Univariate Distribution Relationships Binomial distribution formula calculator Difference of two binomial variables: X-Y or |X-Y| Querying the binomial probability distribution in WolframAlpha Confidence (credible) intervals for binomial probability, p: online calculator available at causaScientia.org Discrete distributions Factorial and binomial topics Conjugate prior distributions Exponential family distributions
Binomial distribution
[ "Mathematics" ]
4,007
[ "Factorial and binomial topics", "Combinatorics" ]
3,878
https://en.wikipedia.org/wiki/Biostatistics
Biostatistics (also known as biometry) is a branch of statistics that applies statistical methods to a wide range of topics in biology. It encompasses the design of biological experiments, the collection and analysis of data from those experiments and the interpretation of the results.

History

Biostatistics and genetics

Biostatistical modeling forms an important part of numerous modern biological theories. Genetics studies have, since their beginning, used statistical concepts to understand observed experimental results. Some genetics scientists even contributed statistical advances with the development of methods and tools. Gregor Mendel started the genetics studies investigating segregation patterns in families of peas and used statistics to explain the collected data. In the early 1900s, after the rediscovery of Mendel's Mendelian inheritance work, there were gaps in understanding between genetics and evolutionary Darwinism. Francis Galton tried to expand Mendel's discoveries with human data and proposed a different model with fractions of the heredity coming from each ancestor, composing an infinite series. He called this the theory of the "Law of Ancestral Heredity". His ideas were strongly disputed by William Bateson, who followed Mendel's conclusions that genetic inheritance was exclusively from the parents, half from each of them. This led to a vigorous debate between the biometricians, who supported Galton's ideas, such as Raphael Weldon, Arthur Dukinfield Darbishire and Karl Pearson, and the Mendelians, who supported Bateson's (and Mendel's) ideas, such as Charles Davenport and Wilhelm Johannsen. Later, biometricians could not reproduce Galton's conclusions in different experiments, and Mendel's ideas prevailed. By the 1930s, models built on statistical reasoning had helped to resolve these differences and to produce the neo-Darwinian modern evolutionary synthesis. Solving these differences also made it possible to define the concept of population genetics and brought together genetics and evolution. The three leading figures in the establishment of population genetics and this synthesis all relied on statistics and developed its use in biology.

Ronald Fisher worked alongside statistician Betty Allan developing several basic statistical methods in support of his work studying the crop experiments at Rothamsted Research, published in Fisher's books Statistical Methods for Research Workers (1925) and The Genetical Theory of Natural Selection (1930), as well as Allan's scientific papers. Fisher went on to make many contributions to genetics and statistics. Some of them include the ANOVA, p-value concepts, Fisher's exact test and Fisher's equation for population dynamics. He is credited with the sentence "Natural selection is a mechanism for generating an exceedingly high degree of improbability". Sewall G. Wright developed F-statistics and methods of computing them and defined the inbreeding coefficient. J. B. S. Haldane's book, The Causes of Evolution, reestablished natural selection as the premier mechanism of evolution by explaining it in terms of the mathematical consequences of Mendelian genetics. He also developed the theory of primordial soup.

These and other biostatisticians, mathematical biologists, and statistically inclined geneticists helped bring together evolutionary biology and genetics into a consistent, coherent whole that could begin to be quantitatively modeled.
In parallel to this overall development, the pioneering work of D'Arcy Thompson in On Growth and Form also helped to add quantitative discipline to biological study. Despite the fundamental importance and frequent necessity of statistical reasoning, there may nonetheless have been a tendency among biologists to distrust or deprecate results which are not qualitatively apparent. One anecdote describes Thomas Hunt Morgan banning the Friden calculator from his department at Caltech, saying "Well, I am like a guy who is prospecting for gold along the banks of the Sacramento River in 1849. With a little intelligence, I can reach down and pick up big nuggets of gold. And as long as I can do that, I'm not going to let any people in my department waste scarce resources in placer mining."

Research planning

Any research in the life sciences is proposed to answer a scientific question we might have. To answer this question with high certainty, we need accurate results. The correct definition of the main hypothesis and the research plan will reduce errors when making decisions in understanding a phenomenon. The research plan might include the research question, the hypothesis to be tested, the experimental design, data collection methods, data analysis perspectives and costs involved. It is essential to carry out the study based on the three basic principles of experimental statistics: randomization, replication, and local control.

Research question

The research question will define the objective of a study. The research will be headed by the question, so it needs to be concise; at the same time, it should be focused on interesting and novel topics that may improve science and knowledge in that field. To define the way to ask the scientific question, an exhaustive literature review might be necessary, so that the research can usefully add value to the scientific community.

Hypothesis definition

Once the aim of the study is defined, the possible answers to the research question can be proposed, transforming this question into a hypothesis. The main proposal is called the null hypothesis (H0) and is usually based on permanent knowledge about the topic or an obvious occurrence of the phenomenon, sustained by a deep literature review. We can say it is the standard expected answer for the data under the situation in test. In general, H0 assumes no association between treatments. On the other hand, the alternative hypothesis is the denial of H0. It assumes some degree of association between the treatment and the outcome. The hypothesis is framed by the research question and its expected and unexpected answers.

As an example, consider groups of similar animals (mice, for example) under two different diet systems. The research question would be: what is the best diet? In this case, H0 would be that there is no difference between the two diets in mouse metabolism (H0: μ1 = μ2) and the alternative hypothesis would be that the diets have different effects on the animals' metabolism (H1: μ1 ≠ μ2).

The hypothesis is defined by the researcher, according to his/her interests in answering the main question. Besides that, the alternative hypothesis can be more than one hypothesis. It can assume not only differences across observed parameters, but also their degree of difference (i.e., higher or lower).

Sampling

Usually, a study aims to understand the effect of a phenomenon over a population. In biology, a population is defined as all the individuals of a given species, in a specific area at a given time.
In biostatistics, this concept is extended to a variety of possible collections of study: a population is not only the individuals, but the total of one specific component of their organisms, such as the whole genome, or all the sperm cells for animals, or the total leaf area for a plant, for example. It is not possible to take measures from all the elements of a population. Because of that, the sampling process is very important for statistical inference. Sampling is defined as randomly obtaining a representative part of the entire population, in order to make posterior inferences about the population. The sample should capture as much of the variability across the population as possible. The sample size is determined by several things, from the scope of the research to the resources available. In clinical research, the trial type, such as non-inferiority, equivalence, or superiority, is key in determining sample size.

Experimental design

Experimental designs sustain those basic principles of experimental statistics. There are three basic experimental designs to randomly allocate treatments in all plots of the experiment: completely randomized design, randomized block design, and factorial designs. Treatments can be arranged in many ways inside the experiment. In agriculture, the correct experimental design is the root of a good study, and the arrangement of treatments within the study is essential because the environment largely affects the plots (plants, livestock, microorganisms). These main arrangements can be found in the literature under the names of "lattices", "incomplete blocks", "split plot", "augmented blocks", and many others. All of the designs might include control plots, determined by the researcher, to provide an error estimation during inference.

In clinical studies, the samples are usually smaller than in other biological studies, and in most cases, the environment effect can be controlled or measured. It is common to use randomized controlled clinical trials, where results are usually compared with observational study designs such as case–control or cohort studies.

Data collection

Data collection methods must be considered in research planning, because they highly influence the sample size and experimental design. Data collection varies according to the type of data. For qualitative data, collection can be done with structured questionnaires or by observation, considering presence or intensity of disease, using score criteria to categorize levels of occurrence. For quantitative data, collection is done by measuring numerical information using instruments. In agriculture and biology studies, yield data and its components can be obtained by metric measures. However, pest and disease injuries in plants are obtained by observation, considering score scales for levels of damage. Especially in genetic studies, modern methods for data collection in the field and laboratory should be considered, such as high-throughput platforms for phenotyping and genotyping. These tools allow bigger experiments, making it possible to evaluate many plots in less time than human-based data collection alone. Finally, all collected data of interest must be stored in an organized data frame for further analysis.

Analysis and data interpretation

Descriptive tools

Data can be represented through tables or graphical representations, such as line charts, bar charts, histograms, and scatter plots. Also, measures of central tendency and variability can be very useful to describe an overview of the data.
Some examples follow:

Frequency tables

One type of table is the frequency table, which consists of data arranged in rows and columns, where the frequency is the number of occurrences or repetitions of data. Frequency can be:

Absolute: represents the number of times that a determined value appears;
Relative: obtained by dividing the absolute frequency by the total number of values.

In the next example, we have the number of genes in ten operons of the same organism.

Line graph

Line graphs represent the variation of a value over another metric, such as time. In general, values are represented in the vertical axis, while the time variation is represented in the horizontal axis.

Bar chart

A bar chart is a graph that shows categorical data as bars presenting heights (vertical bar) or widths (horizontal bar) proportional to the values they represent. Bar charts provide an image that could also be represented in a tabular format. In the bar chart example, we have the birth rate in Brazil for the December months from 2010 to 2016. The sharp fall in December 2016 reflects the impact of the Zika virus outbreak on the birth rate in Brazil.

Histograms

The histogram (or frequency distribution) is a graphical representation of a dataset tabulated and divided into uniform or non-uniform classes. It was first introduced by Karl Pearson.

Scatter plot

A scatter plot is a mathematical diagram that uses Cartesian coordinates to display values of a dataset. A scatter plot shows the data as a set of points, each one presenting the value of one variable determining the position on the horizontal axis and another variable on the vertical axis. They are also called scatter graph, scatter chart, scattergram, or scatter diagram.

Mean

The arithmetic mean is the sum of a collection of values ($x_1 + x_2 + \cdots + x_n$) divided by the number of items of this collection ($n$):

$\bar x = \frac{1}{n} \sum_{i=1}^{n} x_i.$

Median

The median is the value in the middle of a dataset.

Mode

The mode is the value of a set of data that appears most often.

Box plot

The box plot is a method for graphically depicting groups of numerical data. The maximum and minimum values are represented by the lines, and the interquartile range (IQR) represents 25–75% of the data. Outliers may be plotted as circles.

Correlation coefficients

Although correlations between two different kinds of data can be suggested by graphs, such as scatter plots, it is necessary to validate this through numerical information. For this reason, correlation coefficients are required. They provide a numerical value that reflects the strength of an association.

Pearson correlation coefficient

The Pearson correlation coefficient is a measure of association between two variables, X and Y. This coefficient, usually represented by ρ (rho) for the population and r for the sample, assumes values between −1 and 1, where ρ = 1 represents a perfect positive correlation, ρ = −1 represents a perfect negative correlation, and ρ = 0 is no linear correlation.
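Most of the descriptive measures above are single calls in Python's standard statistics module; the data below are made up for the illustration, and statistics.correlation requires Python 3.10 or later.

from statistics import mean, median, mode, correlation

x = [2, 4, 4, 5, 7, 9]
y = [1, 3, 4, 4, 6, 8]

print(mean(x))            # arithmetic mean: about 5.17
print(median(x))          # middle value: 4.5
print(mode(x))            # most frequent value: 4
print(correlation(x, y))  # Pearson's r, close to 1 for these data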
Inferential statistics

Inferential statistics is used to make inferences about an unknown population, by estimation and/or hypothesis testing. In other words, it is desirable to obtain parameters to describe the population of interest, but since the data are limited, it is necessary to make use of a representative sample in order to estimate them. With that, it is possible to test previously defined hypotheses and apply the conclusions to the entire population. The standard error of the mean is a measure of variability that is crucial for making inferences.

Hypothesis testing

Hypothesis testing is essential for making inferences about populations, aiming to answer research questions, as set out in the "Research planning" section. Authors have defined four steps to follow:

The hypothesis to be tested: as stated earlier, we have to work with the definition of a null hypothesis (H0), which is going to be tested, and an alternative hypothesis. But they must be defined before the experiment implementation.
Significance level and decision rule: a decision rule depends on the level of significance, or in other words, the acceptable error rate (α). It is easier to think of it as a critical value that determines statistical significance when a test statistic is compared with it. So, α also has to be predefined before the experiment.
Experiment and statistical analysis: this is when the experiment is really implemented following the appropriate experimental design, data are collected and the most suitable statistical tests are evaluated.
Inference: the inference is made when the null hypothesis is rejected or not rejected, based on the evidence that the comparison of p-values and α brings. Note that the failure to reject H0 just means that there is not enough evidence to support its rejection, not that this hypothesis is true.

Confidence intervals

A confidence interval is a range of values that can contain the true parameter value at a given level of confidence. The first step is to estimate the best unbiased estimate of the population parameter. The upper value of the interval is obtained by adding to this estimate the product of the standard error of the mean and the quantile corresponding to the confidence level. The calculation of the lower value is similar, but a subtraction must be applied instead of a sum.

Statistical considerations

Power and statistical error

When testing a hypothesis, there are two types of statistical errors possible: type I error and type II error.

The type I error or false positive is the incorrect rejection of a true null hypothesis.
The type II error or false negative is the failure to reject a false null hypothesis.

The significance level denoted by α is the type I error rate and should be chosen before performing the test. The type II error rate is denoted by β and the statistical power of the test is 1 − β.

p-value

The p-value is the probability of obtaining results as extreme as or more extreme than those observed, assuming the null hypothesis (H0) is true. It is also called the calculated probability. It is common to confuse the p-value with the significance level (α), but α is a predefined threshold for calling results significant. If p is less than α, the null hypothesis (H0) is rejected.
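As a concrete instance of the four steps above, the sketch below runs a two-sided exact binomial test of H0: p = 0.5 with α fixed in advance. It assumes SciPy is available (scipy.stats.binomtest, present since SciPy 1.7), and the coin-toss counts are invented.

from scipy.stats import binomtest

alpha = 0.05                    # significance level, fixed before the experiment
result = binomtest(k=38, n=50, p=0.5, alternative="two-sided")

print(result.pvalue)            # roughly 0.0004
if result.pvalue < alpha:
    print("reject H0")          # evidence against p = 0.5
else:
    print("fail to reject H0")  # not evidence that H0 is true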
Multiple testing

In multiple tests of the same hypothesis, the probability of the occurrence of false positives (the familywise error rate) increases, and strategies are used to control this occurrence. This is commonly achieved by using a more stringent threshold to reject null hypotheses. The Bonferroni correction defines an acceptable global significance level, denoted by α*, and each test is individually compared with a value of α = α*/m. This ensures that the familywise error rate in all m tests is less than or equal to α*. When m is large, the Bonferroni correction may be overly conservative. An alternative to the Bonferroni correction is to control the false discovery rate (FDR). The FDR controls the expected proportion of the rejected null hypotheses (the so-called discoveries) that are false (incorrect rejections). This procedure ensures that, for independent tests, the false discovery rate is at most q*. Thus, the FDR is less conservative than the Bonferroni correction and has more power, at the cost of more false positives.
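A common way to control the FDR at a level q* is the Benjamini–Hochberg step-up procedure; the sketch below implements it on invented p-values, and the helper name is illustrative.

def benjamini_hochberg(pvalues, q_star=0.05):
    """Return a reject/keep flag per hypothesis, controlling the FDR at q_star."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    cutoff = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank / m * q_star:   # largest rank passing the step-up test
            cutoff = rank
    rejected = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= cutoff:
            rejected[i] = True
    return rejected

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
print(benjamini_hochberg(pvals))   # only the two smallest p-values are rejected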
Mis-specification and robustness checks

The main hypothesis being tested (e.g., no association between treatments and outcomes) is often accompanied by other technical assumptions (e.g., about the form of the probability distribution of the outcomes) that are also part of the null hypothesis. When the technical assumptions are violated in practice, then the null may be frequently rejected even if the main hypothesis is true. Such rejections are said to be due to model mis-specification. Verifying whether the outcome of a statistical test does not change when the technical assumptions are slightly altered (so-called robustness checks) is the main way of combating mis-specification.

Model selection criteria

Model selection criteria are used to select the model that best approximates the true model. Akaike's information criterion (AIC) and the Bayesian information criterion (BIC) are examples of asymptotically efficient criteria.

Developments and big data

Recent developments have made a large impact on biostatistics. Two important changes have been the ability to collect data on a high-throughput scale, and the ability to perform much more complex analysis using computational techniques. This comes from developments in areas such as sequencing technologies, bioinformatics and machine learning (machine learning in bioinformatics).

Use in high-throughput data

New biomedical technologies like microarrays, next-generation sequencers (for genomics) and mass spectrometry (for proteomics) generate enormous amounts of data, allowing many tests to be performed simultaneously. Careful analysis with biostatistical methods is required to separate the signal from the noise. For example, a microarray could be used to measure many thousands of genes simultaneously, determining which of them have different expression in diseased cells compared to normal cells. However, only a fraction of genes will be differentially expressed.

Multicollinearity often occurs in high-throughput biostatistical settings. Due to high intercorrelation between the predictors (such as gene expression levels), the information of one predictor might be contained in another one. It could be that only 5% of the predictors are responsible for 90% of the variability of the response. In such a case, one could apply the biostatistical technique of dimension reduction (for example via principal component analysis). Classical statistical techniques like linear or logistic regression and linear discriminant analysis do not work well for high dimensional data (i.e. when the number of observations n is smaller than the number of features or predictors p: n < p). As a matter of fact, one can get quite high R2-values despite very low predictive power of the statistical model. These classical statistical techniques (esp. least squares linear regression) were developed for low dimensional data (i.e. where the number of observations n is much larger than the number of predictors p: n >> p). In cases of high dimensionality, one should always consider an independent validation test set and the corresponding residual sum of squares (RSS) and R2 of the validation test set, not those of the training set. Often, it is useful to pool information from multiple predictors together. For example, Gene Set Enrichment Analysis (GSEA) considers the perturbation of whole (functionally related) gene sets rather than of single genes. These gene sets might be known biochemical pathways or otherwise functionally related genes. The advantage of this approach is that it is more robust: it is more likely that a single gene is found to be falsely perturbed than it is that a whole pathway is falsely perturbed. Furthermore, one can integrate the accumulated knowledge about biochemical pathways (like the JAK-STAT signaling pathway) using this approach.

Bioinformatics advances in databases, data mining, and biological interpretation

The development of biological databases enables storage and management of biological data, with the possibility of ensuring access for users around the world. They are useful for researchers to deposit data, to retrieve information and files (raw or processed) originating from other experiments, or to index scientific articles, as in PubMed. Another possibility is to search for a desired term (a gene, a protein, a disease, an organism, and so on) and check all results related to this search. There are databases dedicated to SNPs (dbSNP), to knowledge on gene characterization and their pathways (KEGG) and to the description of gene function, classifying it by cellular component, molecular function and biological process (Gene Ontology). In addition to databases that contain specific molecular information, there are others that are ample in the sense that they store information about an organism or group of organisms. As an example of a database directed towards just one organism, but that contains much data about it, there is the Arabidopsis thaliana genetic and molecular database – TAIR. Phytozome, in turn, stores the assemblies and annotation files of dozens of plant genomes, also containing visualization and analysis tools. Moreover, there is an interconnection between some databases for information exchange/sharing, and a major initiative was the International Nucleotide Sequence Database Collaboration (INSDC), which relates data from DDBJ, EMBL-EBI, and NCBI.

Nowadays, the increase in size and complexity of molecular datasets has led to the use of powerful statistical methods provided by computer-science algorithms developed in the machine learning field. Therefore, data mining and machine learning allow detection of patterns in data with a complex structure, such as biological data, by using methods of supervised and unsupervised learning, regression, detection of clusters and association rule mining, among others. To indicate some of them, self-organizing maps and k-means are examples of cluster algorithms; neural network implementations and support vector machine models are examples of common machine learning algorithms.

Collaborative work among molecular biologists, bioinformaticians, statisticians and computer scientists is important to perform an experiment correctly, going from planning, passing through data generation and analysis, and ending with biological interpretation of the results.

Use of computationally intensive methods

On the other hand, the advent of modern computer technology and relatively cheap computing resources have enabled computer-intensive biostatistical methods like bootstrapping and re-sampling methods. In recent times, random forests have gained popularity as a method for performing statistical classification. Random forest techniques generate a panel of decision trees.
Decision trees have the advantage that you can draw them and interpret them (even with a basic understanding of mathematics and statistics). Random forests have thus been used for clinical decision support systems.

Applications

Public health

Public health, including epidemiology, health services research, nutrition, environmental health and health care policy and management. In these medical contexts, it is important to consider the design and analysis of clinical trials. As one example, there is the assessment of the severity state of a patient with a prognosis of an outcome of a disease.

With new technologies and genetics knowledge, biostatistics is now also used for systems medicine, which consists of a more personalized medicine. For this, an integration of data from different sources is made, including conventional patient data, clinico-pathological parameters, molecular and genetic data, as well as data generated by additional new-omics technologies.

Quantitative genetics

The study of population genetics and statistical genetics in order to link variation in genotype with variation in phenotype. In other words, it is desirable to discover the genetic basis of a measurable trait, a quantitative trait, that is under polygenic control. A genome region that is responsible for a continuous trait is called a quantitative trait locus (QTL). The study of QTLs became feasible by using molecular markers and measuring traits in populations, but their mapping requires obtaining a population from an experimental crossing, like an F2 or recombinant inbred strains/lines (RILs). To scan for QTL regions in a genome, a gene map based on linkage has to be built. Some of the best-known QTL mapping algorithms are Interval Mapping, Composite Interval Mapping, and Multiple Interval Mapping.

However, QTL mapping resolution is impaired by the amount of recombination assayed, a problem for species in which it is difficult to obtain large offspring. Furthermore, allele diversity is restricted to individuals originating from contrasting parents, which limits studies of allele diversity when we have a panel of individuals representing a natural population. For this reason, the genome-wide association study (GWAS) was proposed in order to identify QTLs based on linkage disequilibrium, that is, the non-random association between traits and molecular markers. It was leveraged by the development of high-throughput SNP genotyping.

In animal and plant breeding, the use of markers in selection, mainly molecular ones, contributed to the development of marker-assisted selection. While QTL mapping is limited due to resolution, GWAS does not have enough power for rare variants of small effect that are also influenced by the environment. So, the concept of genomic selection (GS) arises in order to use all molecular markers in the selection and allow the prediction of the performance of candidates in this selection. The proposal is to genotype and phenotype a training population, and to develop a model that can obtain the genomic estimated breeding values (GEBVs) of individuals belonging to a population that has been genotyped but not phenotyped, called the testing population. This kind of study could also include a validation population, following the concept of cross-validation, in which the real phenotype results measured in this population are compared with the phenotype results based on the prediction, which is used to check the accuracy of the model.
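As a toy illustration of genomic selection, the sketch below fits ridge regression (a simple stand-in for the mixed-model methods actually used, such as GBLUP) on a simulated, genotyped-and-phenotyped training population and predicts GEBVs for a genotyped-only testing set; all data, the penalty value, and the variable names are invented.

import numpy as np

rng = np.random.default_rng(1)
n_train, n_test, n_markers = 200, 50, 500

# Simulated marker matrix with allele counts 0/1/2 and an additive genetic model.
X = rng.integers(0, 3, size=(n_train + n_test, n_markers)).astype(float)
true_effects = rng.normal(0.0, 0.1, n_markers)
y = X @ true_effects + rng.normal(0.0, 1.0, n_train + n_test)

X_train, y_train = X[:n_train], y[:n_train]
X_test, y_test = X[n_train:], y[n_train:]

lam = 10.0                                    # ridge penalty (an assumed value)
beta = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_markers),
                       X_train.T @ y_train)

gebv = X_test @ beta                          # predicted breeding values
print(np.corrcoef(gebv, y_test)[0, 1])        # prediction accuracy (correlation)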
As a summary, some points about the application of quantitative genetics are:

It has been used in agriculture to improve crops (plant breeding) and livestock (animal breeding).
In biomedical research, this work can assist in finding candidate gene alleles that can cause or influence predisposition to diseases in human genetics.

Expression data

Studies of the differential expression of genes from RNA-Seq data, as for RT-qPCR and microarrays, demand comparison of conditions. The goal is to identify genes which have a significant change in abundance between different conditions. Experiments are then designed appropriately, with replicates for each condition/treatment, randomization and blocking, when necessary. In RNA-Seq, the quantification of expression uses the information of mapped reads that are summarized in some genetic unit, such as exons that are part of a gene sequence. While microarray results can be approximated by a normal distribution, RNA-Seq count data are better explained by other distributions. The first distribution used was the Poisson one, but it underestimates the sample error, leading to false positives. Currently, biological variation is considered by methods that estimate a dispersion parameter of a negative binomial distribution. Generalized linear models are used to perform the tests for statistical significance, and as the number of genes is high, multiple-testing correction has to be considered. Some examples of other analyses on genomics data come from microarray or proteomics experiments, often concerning diseases or disease stages.

Other studies

Ecology, ecological forecasting
Biological sequence analysis
Systems biology for gene network inference or pathway analysis
Clinical research and pharmaceutical development
Population dynamics, especially in regard to fisheries science
Phylogenetics and evolution
Pharmacodynamics
Pharmacokinetics
Neuroimaging

Tools

There are many tools that can be used to do statistical analysis on biological data. Most of them are useful in other areas of knowledge, covering a large number of applications (alphabetical). Here are brief descriptions of some of them:

ASReml: a package developed by VSNi that can also be used in the R environment. It is developed to estimate variance components under a general linear mixed model using restricted maximum likelihood (REML). Models with fixed effects and random effects, and nested or crossed ones, are allowed. It gives the possibility to investigate different variance-covariance matrix structures.

CycDesigN: a computer package developed by VSNi that helps researchers create experimental designs and analyze data coming from a design present in one of three classes handled by CycDesigN. These classes are resolvable, non-resolvable, partially replicated and crossover designs. It includes less-used designs, such as the Latinized ones, e.g., the t-Latinized design.

Orange: a programming interface for high-level data processing, data mining and data visualization. It includes tools for gene expression and genomics.

R: an open source environment and programming language dedicated to statistical computing and graphics. It is an implementation of the S language maintained by CRAN. In addition to its functions to read data tables, take descriptive statistics, and develop and evaluate models, its repository contains packages developed by researchers around the world. This allows the development of functions written to deal with the statistical analysis of data that comes from specific applications.
In the case of bioinformatics, for example, there are packages located in the main repository (CRAN) and in others, such as Bioconductor. It is also possible to use packages under development that are shared on hosting services such as GitHub.

SAS: data analysis software widely used across universities, government services and industry. Developed by a company with the same name (SAS Institute), it uses the SAS language for programming.

PLA 3.0: biostatistical analysis software for regulated environments (e.g., drug testing) which supports quantitative response assays (parallel-line, parallel-logistics, slope-ratio) and dichotomous assays (quantal response, binary assays). It also supports weighting methods for combination calculations and the automatic data aggregation of independent assay data.

Weka: Java software for machine learning and data mining, including tools and methods for visualization, clustering, regression, association rules, and classification. There are tools for cross-validation and bootstrapping, and a module for algorithm comparison. Weka can also be run from other programming languages such as Perl or R.

Python (programming language): image analysis, deep learning, machine learning
SQL databases
NoSQL
NumPy: numerical Python
SciPy
SageMath
LAPACK: linear algebra
MATLAB
Apache Hadoop
Apache Spark
Amazon Web Services

Scope and training programs

Almost all educational programmes in biostatistics are at postgraduate level. They are most often found in schools of public health, affiliated with schools of medicine, forestry, or agriculture, or as a focus of application in departments of statistics.

In the United States, where several universities have dedicated biostatistics departments, many other top-tier universities integrate biostatistics faculty into statistics or other departments, such as epidemiology. Thus, departments carrying the name "biostatistics" may exist under quite different structures. For instance, relatively new biostatistics departments have been founded with a focus on bioinformatics and computational biology, whereas older departments, typically affiliated with schools of public health, will have more traditional lines of research involving epidemiological studies and clinical trials as well as bioinformatics. In larger universities around the world, where both a statistics and a biostatistics department exist, the degree of integration between the two departments may range from the bare minimum to very close collaboration. In general, the difference between a statistics program and a biostatistics program is twofold: (i) statistics departments will often host theoretical/methodological research which is less common in biostatistics programs, and (ii) statistics departments have lines of research that may include biomedical applications but also other areas such as industry (quality control), business and economics, and biological areas other than medicine.
Specialized journals Biostatistics International Journal of Biostatistics Journal of Epidemiology and Biostatistics Biostatistics and Public Health Biometrics Biometrika Biometrical Journal Communications in Biometry and Crop Science Statistical Applications in Genetics and Molecular Biology Statistical Methods in Medical Research Pharmaceutical Statistics Statistics in Medicine See also Bioinformatics Epidemiological method Epidemiology Group size measures Health indicator Mathematical and theoretical biology References External links The International Biometric Society The Collection of Biostatistics Research Archive Guide to Biostatistics (MedPageToday.com) Biomedical Statistics Bioinformatics
Biostatistics
[ "Engineering", "Biology" ]
6,812
[ "Bioinformatics", "Biological engineering" ]
3,954
https://en.wikipedia.org/wiki/Biochemistry
Biochemistry, or biological chemistry, is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis that allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs as well as organism structure and function. Biochemistry is closely related to molecular biology, the study of the molecular mechanisms of biological phenomena.

Much of biochemistry deals with the structures, functions, and interactions of biological macromolecules such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition, and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles and methods have been combined with problem-solving approaches from engineering to manipulate living systems in order to produce useful tools for research, industrial processes, and diagnosis and control of disease: the discipline of biotechnology.

History

At its most comprehensive definition, biochemistry can be seen as a study of the components and composition of living things and how they come together to become life. In this sense, the history of biochemistry may therefore go back as far as the ancient Greeks. However, biochemistry as a specific scientific discipline began sometime in the 19th century, or a little earlier, depending on which aspect of biochemistry is being focused on. Some argued that the beginning of biochemistry may have been the discovery of the first enzyme, diastase (now called amylase), in 1833 by Anselme Payen, while others considered Eduard Buchner's first demonstration of a complex biochemical process, alcoholic fermentation in cell-free extracts, in 1897 to be the birth of biochemistry. Some might also point as its beginning to the influential 1842 work by Justus von Liebig, Animal chemistry, or, Organic chemistry in its applications to physiology and pathology, which presented a chemical theory of metabolism, or even earlier to the 18th century studies on fermentation and respiration by Antoine Lavoisier. Many other pioneers in the field who helped to uncover the layers of complexity of biochemistry have been proclaimed founders of modern biochemistry. Emil Fischer, who studied the chemistry of proteins, and F.
Gowland Hopkins, who studied enzymes and the dynamic nature of biochemistry, represent two examples of early biochemists. The term "biochemistry" was first used when Vinzenz Kletzinsky (1826–1882) had his "Compendium der Biochemie" printed in Vienna in 1858; it derived from a combination of biology and chemistry. In 1877, Felix Hoppe-Seyler used the term (Biochemie in German) as a synonym for physiological chemistry in the foreword to the first issue of Zeitschrift für Physiologische Chemie (Journal of Physiological Chemistry), where he argued for the setting up of institutes dedicated to this field of study. The German chemist Carl Neuberg, however, is often cited as having coined the word in 1903, while some credited it to Franz Hofmeister. It was once generally believed that life and its materials had some essential property or substance (often referred to as the "vital principle") distinct from any found in non-living matter, and it was thought that only living beings could produce the molecules of life. In 1828, Friedrich Wöhler published a paper on his serendipitous urea synthesis from potassium cyanate and ammonium sulfate; some regarded that as a direct overthrow of vitalism and the establishment of organic chemistry. However, the Wöhler synthesis has sparked controversy, as some reject the death of vitalism at his hands. Since then, biochemistry has advanced, especially since the mid-20th century, with the development of new techniques such as chromatography, X-ray diffraction, dual polarisation interferometry, NMR spectroscopy, radioisotopic labeling, electron microscopy and molecular dynamics simulations. These techniques allowed for the discovery and detailed analysis of many molecules and metabolic pathways of the cell, such as glycolysis and the Krebs cycle (citric acid cycle), and led to an understanding of biochemistry on a molecular level. Another significant historic event in biochemistry is the discovery of the gene, and its role in the transfer of information in the cell. In the 1950s, James D. Watson, Francis Crick, Rosalind Franklin and Maurice Wilkins were instrumental in solving DNA structure and suggesting its relationship with the genetic transfer of information. In 1958, George Beadle and Edward Tatum received the Nobel Prize for work in fungi showing that one gene produces one enzyme. In 1988, Colin Pitchfork was the first person convicted of murder with DNA evidence, which led to the growth of forensic science. More recently, Andrew Z. Fire and Craig C. Mello received the 2006 Nobel Prize for discovering the role of RNA interference (RNAi) in the silencing of gene expression. Starting materials: the chemical elements of life Around two dozen chemical elements are essential to various kinds of biological life. Most rare elements on Earth are not needed by life (exceptions being selenium and iodine), while a few common ones (aluminum and titanium) are not used. Most organisms share element needs, but there are a few differences between plants and animals. For example, ocean algae use bromine, but land plants and animals do not seem to need any. All animals require sodium, but it is not an essential element for plants. Plants need boron and silicon, but animals may not (or may need ultra-small amounts). Just six elements—carbon, hydrogen, nitrogen, oxygen, calcium and phosphorus—make up almost 99% of the mass of living cells, including those in the human body (see composition of the human body for a complete list). 
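The "almost 99%" figure can be checked against commonly quoted textbook values for elemental composition by mass; the specific percentages below are assumptions drawn from standard human-body composition tables rather than figures given in this article. A minimal sketch in Python:

    # Commonly quoted textbook mass fractions (percent) of the six major
    # elements in the human body; the exact figures vary by source and
    # are assumed here for illustration only.
    major_elements = {
        "oxygen": 65.0, "carbon": 18.5, "hydrogen": 9.5,
        "nitrogen": 3.2, "calcium": 1.5, "phosphorus": 1.0,
    }
    print(f"total: {sum(major_elements.values()):.1f}%")  # total: 98.7%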
In addition to the six major elements that compose most of the human body, humans require smaller amounts of possibly 18 more. Biomolecules The 4 main classes of molecules in biochemistry (often called biomolecules) are carbohydrates, lipids, proteins, and nucleic acids. Many biological molecules are polymers: in this terminology, monomers are relatively small molecules that are linked together to create large macromolecules known as polymers. When monomers are linked together to synthesize a biological polymer, they undergo a process called dehydration synthesis. Different macromolecules can assemble in larger complexes, often needed for biological activity. Carbohydrates Two of the main functions of carbohydrates are energy storage and providing structure. One of the common sugars known as glucose is a carbohydrate, but not all carbohydrates are sugars. There are more carbohydrates on Earth than any other known type of biomolecule; they are used to store energy and genetic information, as well as play important roles in cell to cell interactions and communications. The simplest type of carbohydrate is a monosaccharide, which among other properties contains carbon, hydrogen, and oxygen, mostly in a ratio of 1:2:1 (generalized formula CnH2nOn, where n is at least 3). Glucose (C6H12O6) is one of the most important carbohydrates; others include fructose (C6H12O6), the sugar commonly associated with the sweet taste of fruits, and deoxyribose (C5H10O4), a component of DNA. A monosaccharide can switch between the acyclic (open-chain) form and a cyclic form. The open-chain form can be turned into a ring of carbon atoms bridged by an oxygen atom created from the carbonyl group of one end and the hydroxyl group of another. The cyclic molecule has a hemiacetal or hemiketal group, depending on whether the linear form was an aldose or a ketose. In these cyclic forms, the ring usually has 5 or 6 atoms. These forms are called furanoses and pyranoses, respectively—by analogy with furan and pyran, the simplest compounds with the same carbon-oxygen ring (although they lack the carbon-carbon double bonds of these two molecules). For example, the aldohexose glucose may form a hemiacetal linkage between the carbonyl group on carbon 1 and the hydroxyl group on carbon 4, yielding a molecule with a 5-membered ring, called glucofuranose. The same reaction can take place between carbons 1 and 5 to form a molecule with a 6-membered ring, called glucopyranose. Cyclic forms with a 7-atom ring (known as septanoses) are rare. Two monosaccharides can be joined by a glycosidic bond into a disaccharide through a dehydration reaction, during which a molecule of water is released. The reverse reaction, in which the glycosidic bond of a disaccharide is broken into two monosaccharides, is termed hydrolysis. The best-known disaccharide is sucrose or ordinary sugar, which consists of a glucose molecule and a fructose molecule joined together. Another important disaccharide is lactose, found in milk, consisting of a glucose molecule and a galactose molecule. Lactose may be hydrolysed by lactase, and deficiency in this enzyme results in lactose intolerance. When a few (around three to six) monosaccharides are joined, it is called an oligosaccharide (oligo- meaning "few"). These molecules tend to be used as markers and signals, as well as having some other uses. Many monosaccharides joined together form a polysaccharide. They can be joined in one long linear chain, or they may be branched. 
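The "mostly in a ratio of 1:2:1" qualifier above can be made concrete: glucose and fructose fit the generalized formula CnH2nOn exactly, while deoxyribose (having lost one oxygen, hence "deoxy") does not. A small Python check, using only the formulas quoted in the text:

    # Check which of the sugars named above fit the generalized
    # monosaccharide formula CnH2nOn (a 1:2:1 C:H:O ratio).
    sugars = {
        "glucose":     (6, 12, 6),   # C6H12O6
        "fructose":    (6, 12, 6),   # C6H12O6
        "deoxyribose": (5, 10, 4),   # C5H10O4
    }
    for name, (c, h, o) in sugars.items():
        fits = (h == 2 * c) and (o == c)
        print(f"{name:12s} C{c}H{h}O{o}  fits CnH2nOn: {fits}")
    # glucose      C6H12O6  fits CnH2nOn: True
    # fructose     C6H12O6  fits CnH2nOn: True
    # deoxyribose  C5H10O4  fits CnH2nOn: False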
Two of the most common polysaccharides are cellulose and glycogen, both consisting of repeating glucose monomers. Cellulose is an important structural component of plant cell walls, and glycogen is used as a form of energy storage in animals. Sugars can be characterized as having reducing or non-reducing ends. A reducing end of a carbohydrate is a carbon atom that can be in equilibrium with the open-chain aldehyde (aldose) or keto form (ketose). If the joining of monomers takes place at such a carbon atom, the free hydroxy group of the pyranose or furanose form is exchanged with an OH-side-chain of another sugar, yielding a full acetal. This prevents opening of the chain to the aldehyde or keto form and renders the modified residue non-reducing. Lactose contains a reducing end at its glucose moiety, whereas the galactose moiety forms a full acetal with the C4-OH group of glucose. Saccharose (sucrose) does not have a reducing end because of full acetal formation between the aldehyde carbon of glucose (C1) and the keto carbon of fructose (C2). Lipids Lipids comprise a diverse range of molecules, and the category is to some extent a catchall for relatively water-insoluble or nonpolar compounds of biological origin, including waxes, fatty acids, fatty-acid derived phospholipids, sphingolipids, glycolipids, and terpenoids (e.g., retinoids and steroids). Some lipids are linear, open-chain aliphatic molecules, while others have ring structures. Some are aromatic (with a cyclic [ring] and planar [flat] structure) while others are not. Some are flexible, while others are rigid. Lipids are usually made from one molecule of glycerol combined with other molecules. In triglycerides, the main group of bulk lipids, there is one molecule of glycerol and three fatty acids. Fatty acids are considered the monomer in that case, and may be saturated (no double bonds in the carbon chain) or unsaturated (one or more double bonds in the carbon chain). Most lipids have some polar character in addition to being largely nonpolar. In general, the bulk of their structure is nonpolar or hydrophobic ("water-fearing"), meaning that it does not interact well with polar solvents like water. Another part of their structure is polar or hydrophilic ("water-loving") and will tend to associate with polar solvents like water. This makes them amphiphilic molecules (having both hydrophobic and hydrophilic portions). In the case of cholesterol, the polar group is a mere –OH (hydroxyl or alcohol). In the case of phospholipids, the polar groups are considerably larger and more polar, as described below. Lipids are an integral part of our daily diet. Most oils and milk products used for cooking and eating, such as butter, cheese, and ghee, are composed of fats. Vegetable oils are rich in various polyunsaturated fatty acids (PUFA). Lipid-containing foods undergo digestion within the body and are broken into fatty acids and glycerol, the final degradation products of fats and lipids. Lipids, especially phospholipids, are also used in various pharmaceutical products, either as co-solubilizers (e.g. in parenteral infusions) or else as drug carrier components (e.g. in a liposome or transfersome). Proteins Proteins are very large molecules—macro-biopolymers—made from monomers called amino acids. An amino acid consists of an alpha carbon atom attached to an amino group, –NH2, a carboxylic acid group, –COOH (although these exist as –NH3+ and –COO− under physiologic conditions), a simple hydrogen atom, and a side chain commonly denoted as "–R". 
The side chain "R" is different for each amino acid of which there are 20 standard ones. It is this "R" group that makes each amino acid different, and the properties of the side chains greatly influence the overall three-dimensional conformation of a protein. Some amino acids have functions by themselves or in a modified form; for instance, glutamate functions as an important neurotransmitter. Amino acids can be joined via a peptide bond. In this dehydration synthesis, a water molecule is removed and the peptide bond connects the nitrogen of one amino acid's amino group to the carbon of the other's carboxylic acid group. The resulting molecule is called a dipeptide, and short stretches of amino acids (usually, fewer than thirty) are called peptides or polypeptides. Longer stretches merit the title proteins. As an example, the important blood serum protein albumin contains 585 amino acid residues. Proteins can have structural and/or functional roles. For instance, movements of the proteins actin and myosin ultimately are responsible for the contraction of skeletal muscle. One property many proteins have is that they specifically bind to a certain molecule or class of molecules—they may be extremely selective in what they bind. Antibodies are an example of proteins that attach to one specific type of molecule. Antibodies are composed of heavy and light chains. Two heavy chains would be linked to two light chains through disulfide linkages between their amino acids. Antibodies are specific through variation based on differences in the N-terminal domain. The enzyme-linked immunosorbent assay (ELISA), which uses antibodies, is one of the most sensitive tests modern medicine uses to detect various biomolecules. Probably the most important proteins, however, are the enzymes. Virtually every reaction in a living cell requires an enzyme to lower the activation energy of the reaction. These molecules recognize specific reactant molecules called substrates; they then catalyze the reaction between them. By lowering the activation energy, the enzyme speeds up that reaction by a rate of 1011 or more; a reaction that would normally take over 3,000 years to complete spontaneously might take less than a second with an enzyme. The enzyme itself is not used up in the process and is free to catalyze the same reaction with a new set of substrates. Using various modifiers, the activity of the enzyme can be regulated, enabling control of the biochemistry of the cell as a whole. The structure of proteins is traditionally described in a hierarchy of four levels. The primary structure of a protein consists of its linear sequence of amino acids; for instance, "alanine-glycine-tryptophan-serine-glutamate-asparagine-glycine-lysine-...". Secondary structure is concerned with local morphology (morphology being the study of structure). Some combinations of amino acids will tend to curl up in a coil called an α-helix or into a sheet called a β-sheet; some α-helixes can be seen in the hemoglobin schematic above. Tertiary structure is the entire three-dimensional shape of the protein. This shape is determined by the sequence of amino acids. In fact, a single change can change the entire structure. The alpha chain of hemoglobin contains 146 amino acid residues; substitution of the glutamate residue at position 6 with a valine residue changes the behavior of hemoglobin so much that it results in sickle-cell disease. 
Finally, quaternary structure is concerned with the structure of a protein with multiple peptide subunits, like hemoglobin with its four subunits. Not all proteins have more than one subunit. Ingested proteins are usually broken up into single amino acids or dipeptides in the small intestine and then absorbed. They can then be joined to form new proteins. Intermediate products of glycolysis, the citric acid cycle, and the pentose phosphate pathway can be used to form all twenty amino acids, and most bacteria and plants possess all the necessary enzymes to synthesize them. Humans and other mammals, however, can synthesize only half of them. They cannot synthesize isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan, and valine. Because they must be ingested, these are the essential amino acids. Mammals do possess the enzymes to synthesize alanine, asparagine, aspartate, cysteine, glutamate, glutamine, glycine, proline, serine, and tyrosine, the nonessential amino acids. While they can synthesize arginine and histidine, they cannot produce them in sufficient amounts for young, growing animals, and so these are often considered essential amino acids. If the amino group is removed from an amino acid, it leaves behind a carbon skeleton called an α-keto acid. Enzymes called transaminases can easily transfer the amino group from one amino acid (making it an α-keto acid) to another α-keto acid (making it an amino acid). This is important in the biosynthesis of amino acids, as for many of the pathways, intermediates from other biochemical pathways are converted to the α-keto acid skeleton, and then an amino group is added, often via transamination. The amino acids may then be linked together to form a protein. A similar process is used to break down proteins: the protein is first hydrolyzed into its component amino acids. Free ammonia (NH3), existing as the ammonium ion (NH4+) in blood, is toxic to life forms. A suitable method for excreting it must therefore exist. Different tactics have evolved in different animals, depending on the animals' needs. Unicellular organisms release the ammonia into the environment. Likewise, bony fish can release ammonia into the water where it is quickly diluted. In general, mammals convert ammonia into urea, via the urea cycle. In order to determine whether two proteins are related, or in other words to decide whether they are homologous or not, scientists use sequence-comparison methods. Methods like sequence alignments and structural alignments are powerful tools that help scientists identify homologies between related molecules. The relevance of finding homologies among proteins goes beyond forming an evolutionary pattern of protein families. By finding how similar two protein sequences are, we acquire knowledge about their structure and therefore their function. Nucleic acids Nucleic acids, so called because of their prevalence in cellular nuclei, are a family of biopolymers. They are complex, high-molecular-weight biochemical macromolecules that can convey genetic information in all living cells and viruses. The monomers are called nucleotides, and each consists of three components: a nitrogenous heterocyclic base (either a purine or a pyrimidine), a pentose sugar, and a phosphate group. The most common nucleic acids are deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). 
The phosphate group and the sugar of each nucleotide bond with each other to form the backbone of the nucleic acid, while the sequence of nitrogenous bases stores the information. The most common nitrogenous bases are adenine, cytosine, guanine, thymine, and uracil. The nitrogenous bases of each strand of a nucleic acid will form hydrogen bonds with certain other nitrogenous bases in a complementary strand of nucleic acid. Adenine pairs with thymine (in DNA) or uracil (in RNA); thymine binds only with adenine; and cytosine and guanine can bind only with one another. Adenine-thymine and adenine-uracil pairs are held together by two hydrogen bonds, while cytosine-guanine pairs form three. Aside from the genetic material of the cell, nucleic acids often play a role as second messengers, as well as forming the base molecule for adenosine triphosphate (ATP), the primary energy-carrier molecule found in all living organisms. Also, the nitrogenous bases possible in the two nucleic acids are different: adenine, cytosine, and guanine occur in both RNA and DNA, while thymine occurs only in DNA and uracil occurs in RNA. Metabolism Carbohydrates as energy source Glucose is an energy source in most life forms. For instance, polysaccharides are broken down into their monomers by enzymes (glycogen phosphorylase removes glucose residues from glycogen, a polysaccharide). Disaccharides like lactose or sucrose are cleaved into their two component monosaccharides. Glycolysis (anaerobic) Glucose is mainly metabolized by a very important ten-step pathway called glycolysis, the net result of which is to break down one molecule of glucose into two molecules of pyruvate. This also produces a net two molecules of ATP, the energy currency of cells, along with two reducing equivalents in the form of NAD+ (nicotinamide adenine dinucleotide, oxidized form) converted to NADH (nicotinamide adenine dinucleotide, reduced form). This does not require oxygen; if no oxygen is available (or the cell cannot use oxygen), the NAD+ is regenerated by converting the pyruvate to lactate (lactic acid) (e.g. in humans) or to ethanol plus carbon dioxide (e.g. in yeast). Other monosaccharides like galactose and fructose can be converted into intermediates of the glycolytic pathway. Aerobic In aerobic cells with sufficient oxygen, as in most human cells, the pyruvate is further metabolized. It is irreversibly converted to acetyl-CoA, giving off one carbon atom as the waste product carbon dioxide and generating another reducing equivalent as NADH. The two acetyl-CoA molecules (from one molecule of glucose) then enter the citric acid cycle, producing two molecules of ATP, six more NADH molecules and two reduced (ubi)quinones (via FADH2 as enzyme-bound cofactor), and releasing the remaining carbon atoms as carbon dioxide. The produced NADH and quinol molecules then feed into the enzyme complexes of the respiratory chain, an electron transport system transferring the electrons ultimately to oxygen and conserving the released energy in the form of a proton gradient over a membrane (the inner mitochondrial membrane in eukaryotes). Thus, oxygen is reduced to water and the original electron acceptors NAD+ and quinone are regenerated. This is why humans breathe in oxygen and breathe out carbon dioxide. The energy released from transferring the electrons from high-energy states in NADH and quinol is conserved first as a proton gradient and then converted to ATP via ATP synthase. 
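The tally that follows in the next paragraph can be laid out as simple bookkeeping. The per-carrier yields used below (3 ATP per NADH, 2 ATP per quinol) are inferred from the article's own figures rather than universal constants; modern textbooks usually quote roughly 30-32 ATP per glucose. A minimal sketch in Python:

    # ATP bookkeeping per molecule of glucose, using the stage-by-stage
    # counts given in the text (assumed yields: 3 ATP per NADH fed to
    # the respiratory chain, 2 ATP per quinol/FADH2).
    substrate_level = {"glycolysis": 2, "citric acid cycle": 2}
    oxidative = {"8 NADH": 8 * 3,             # 24 ATP
                 "2 quinols (FADH2)": 2 * 2}  # 4 ATP
    total = sum(substrate_level.values()) + sum(oxidative.values())
    print(total)  # 32 ATP conserved per degraded glucose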
Oxidative phosphorylation generates an additional 28 molecules of ATP (24 from the 8 NADH + 4 from the 2 quinols), for a total of 32 molecules of ATP conserved per degraded glucose (two from glycolysis + two from the citrate cycle). It is clear that using oxygen to completely oxidize glucose provides an organism with far more energy than any oxygen-independent metabolic feature, and this is thought to be the reason why complex life appeared only after Earth's atmosphere accumulated large amounts of oxygen. Gluconeogenesis In vertebrates, vigorously contracting skeletal muscles (during weightlifting or sprinting, for example) do not receive enough oxygen to meet the energy demand, and so they shift to anaerobic metabolism, converting glucose to lactate. Gluconeogenesis is the formation of glucose from non-carbohydrate sources, such as fats and proteins; it becomes important chiefly when the glycogen supplies in the liver are exhausted. The pathway is essentially a reversal of glycolysis, from pyruvate back to glucose, and can draw on many sources, such as amino acids, glycerol, and intermediates of the Krebs cycle. Large-scale protein and fat catabolism usually occurs when an organism suffers from starvation or certain endocrine disorders. The liver regenerates the glucose using this process of gluconeogenesis. It is not quite the opposite of glycolysis, and actually requires three times the amount of energy gained from glycolysis (six molecules of ATP are used, compared to the two gained in glycolysis). Analogous to the above reactions, the glucose produced can then undergo glycolysis in tissues that need energy, be stored as glycogen (or starch in plants), or be converted to other monosaccharides or joined into di- or oligosaccharides. The combined pathway of glycolysis during exercise, lactate's crossing via the bloodstream to the liver, subsequent gluconeogenesis and release of glucose into the bloodstream is called the Cori cycle. Relationship to other "molecular-scale" biological sciences Researchers in biochemistry use specific techniques native to biochemistry, but increasingly combine these with techniques and ideas developed in the fields of genetics, molecular biology, and biophysics. There is no sharply defined line between these disciplines. Biochemistry studies the chemistry required for the biological activity of molecules; molecular biology studies their biological activity; genetics studies their heredity, which happens to be carried by their genome. This is shown in the following schematic that depicts one possible view of the relationships between the fields: Biochemistry is the study of the chemical substances and vital processes occurring in live organisms. Biochemists focus heavily on the role, function, and structure of biomolecules. The study of the chemistry behind biological processes and the synthesis of biologically active molecules are applications of biochemistry. Biochemistry studies life at the atomic and molecular level. Genetics is the study of the effect of genetic differences in organisms. This can often be inferred by the absence of a normal component (e.g. one gene). A classic approach is the study of "mutants": organisms that lack one or more functional components with respect to the so-called "wild type" or normal phenotype. Genetic interactions (epistasis) can often confound simple interpretations of such "knockout" studies. Molecular biology is the study of the molecular underpinnings of biological phenomena, focusing on molecular synthesis, modification, mechanisms and interactions. 
The central dogma of molecular biology, where genetic material is transcribed into RNA and then translated into protein, despite being oversimplified, still provides a good starting point for understanding the field. This concept has been revised in light of emerging novel roles for RNA. Chemical biology seeks to develop new tools based on small molecules that allow minimal perturbation of biological systems while providing detailed information about their function. Further, chemical biology employs biological systems to create non-natural hybrids between biomolecules and synthetic devices (for example emptied viral capsids that can deliver gene therapy or drug molecules). See also Lists Important publications in biochemistry (chemistry) List of biochemistry topics List of biochemists List of biomolecules See also Astrobiology Biochemistry (journal) Biological Chemistry (journal) Biophysics Chemical ecology Computational biomodeling Dedicated bio-based chemical EC number Hypothetical types of biochemistry International Union of Biochemistry and Molecular Biology Metabolome Metabolomics Molecular biology Molecular medicine Plant biochemistry Proteolysis Small molecule Structural biology TCA cycle
Biochemistry
[ "Chemistry", "Biology" ]
6,497
[ "Biochemistry", "Biotechnology", "nan", "Molecular biology" ]
3,967
https://en.wikipedia.org/wiki/Bandwidth%20%28signal%20processing%29
Bandwidth is the difference between the upper and lower frequencies in a continuous band of frequencies. It is typically measured in units of hertz (symbol Hz). It may refer more specifically to two subcategories: Passband bandwidth is the difference between the upper and lower cutoff frequencies of, for example, a band-pass filter, a communication channel, or a signal spectrum. Baseband bandwidth is equal to the upper cutoff frequency of a low-pass filter or baseband signal, which includes a zero frequency. Bandwidth in hertz is a central concept in many fields, including electronics, information theory, digital communications, radio communications, signal processing, and spectroscopy and is one of the determinants of the capacity of a given communication channel. A key characteristic of bandwidth is that any band of a given width can carry the same amount of information, regardless of where that band is located in the frequency spectrum. For example, a 3 kHz band can carry a telephone conversation whether that band is at baseband (as in a POTS telephone line) or modulated to some higher frequency. However, wide bandwidths are easier to obtain and process at higher frequencies because the fractional bandwidth is smaller. Overview Bandwidth is a key concept in many telecommunications applications. In radio communications, for example, bandwidth is the frequency range occupied by a modulated carrier signal. An FM radio receiver's tuner spans a limited range of frequencies. A government agency (such as the Federal Communications Commission in the United States) may apportion the regionally available bandwidth to broadcast license holders so that their signals do not mutually interfere. In this context, bandwidth is also known as channel spacing. For other applications, there are other definitions. One definition of bandwidth, for a system, could be the range of frequencies over which the system produces a specified level of performance. A less strict and more practically useful definition will refer to the frequencies beyond which performance is degraded. In the case of frequency response, degradation could, for example, mean more than 3 dB below the maximum value or it could mean below a certain absolute value. As with any definition of the width of a function, many definitions are suitable for different purposes. In the context of, for example, the sampling theorem and Nyquist sampling rate, bandwidth typically refers to baseband bandwidth. In the context of Nyquist symbol rate or Shannon-Hartley channel capacity for communication systems it refers to passband bandwidth. The Rayleigh bandwidth of a simple radar pulse is defined as the inverse of its duration. For example, a one-microsecond pulse has a Rayleigh bandwidth of one megahertz. The essential bandwidth is defined as the portion of a signal spectrum in the frequency domain which contains most of the energy of the signal. x dB bandwidth In some contexts, the signal bandwidth in hertz refers to the frequency range in which the signal's spectral density (in W/Hz or V2/Hz) is nonzero or above a small threshold value. The threshold value is often defined relative to the maximum value, and is most commonly the half-power point (the 3 dB point), that is, the point where the spectral density is half its maximum value (or where the spectral amplitude is 70.7% of its maximum). This figure, with a lower threshold value, can be used in calculations of the lowest sampling rate that will satisfy the sampling theorem. The bandwidth is also used to denote system bandwidth, for example in filter or communication channel systems. 
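As a concrete illustration of the half-power convention, the sketch below locates the 3 dB point of a first-order RC low-pass filter numerically; the 1 kHz cutoff is an assumed value, and the magnitude response used is the standard one for that filter, not something defined in this article.

    import numpy as np

    # First-order RC low-pass: |H(f)| = 1 / sqrt(1 + (f/fc)^2),
    # with an assumed cutoff fc = 1 kHz.
    fc = 1000.0
    f = np.linspace(1.0, 10_000.0, 1_000_000)
    H_db = 20 * np.log10(1.0 / np.sqrt(1.0 + (f / fc) ** 2))

    # The 3 dB bandwidth ends where the response first drops 3 dB
    # below its peak (0 dB here), i.e. where power is half its maximum.
    f_3db = f[np.argmax(H_db <= -3.0)]
    print(f"3 dB point ~ {f_3db:.0f} Hz")  # ~998 Hz, essentially fc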
To say that a system has a certain bandwidth means that the system can process signals with that range of frequencies, or that the system reduces the bandwidth of a white noise input to that bandwidth. The 3 dB bandwidth of an electronic filter or communication channel is the part of the system's frequency response that lies within 3 dB of the response at its peak, which, in the passband filter case, is typically at or near its center frequency, and in the low-pass filter is at or near its cutoff frequency. If the maximum gain is 0 dB, the 3 dB bandwidth is the frequency range where attenuation is less than 3 dB. 3 dB attenuation is also where power is half its maximum. This same half-power gain convention is also used in spectral width, and more generally for the extent of functions as full width at half maximum (FWHM). In electronic filter design, a filter specification may require that within the filter passband, the gain is nominally 0 dB with a small variation, for example within the ±1 dB interval. In the stopband(s), the required attenuation in decibels is above a certain level, for example >100 dB. In a transition band the gain is not specified. In this case, the filter bandwidth corresponds to the passband width, which in this example is the 1 dB-bandwidth. If the filter shows amplitude ripple within the passband, the x dB point refers to the point where the gain is x dB below the nominal passband gain rather than x dB below the maximum gain. In signal processing and control theory the bandwidth is the frequency at which the closed-loop system gain drops 3 dB below peak. In communication systems, in calculations of the Shannon–Hartley channel capacity, bandwidth refers to the 3 dB-bandwidth. In calculations of the maximum symbol rate, the Nyquist sampling rate, and maximum bit rate according to Hartley's law, the bandwidth refers to the frequency range within which the gain is non-zero. The fact that in equivalent baseband models of communication systems the signal spectrum consists of both negative and positive frequencies can lead to confusion about bandwidth, since bandwidths are sometimes quoted only for the positive half, and one will occasionally see expressions such as B = 2W, where B is the total bandwidth (i.e. the maximum passband bandwidth of the carrier-modulated RF signal and the minimum passband bandwidth of the physical passband channel), and W is the positive bandwidth (the baseband bandwidth of the equivalent channel model). For instance, the baseband model of the signal would require a low-pass filter with cutoff frequency of at least W to stay intact, and the physical passband channel would require a passband filter of at least B to stay intact. Relative bandwidth The absolute bandwidth is not always the most appropriate or useful measure of bandwidth. For instance, in the field of antennas, it is easier to construct an antenna meeting a specified absolute bandwidth at a higher frequency than at a lower frequency. For this reason, bandwidth is often quoted relative to the frequency of operation, which gives a better indication of the structure and sophistication needed for the circuit or device under consideration. There are two different measures of relative bandwidth in common use: fractional bandwidth (B_F) and ratio bandwidth (B_R). In the following, the absolute bandwidth is defined as B = Δf = f_H − f_L, where f_H and f_L are the upper and lower frequency limits respectively of the band in question. 
Fractional bandwidth Fractional bandwidth is defined as the absolute bandwidth divided by the center frequency (f_C): B_F = Δf / f_C. The center frequency is usually defined as the arithmetic mean of the upper and lower frequencies, f_C = (f_H + f_L) / 2, so that B_F = 2 (f_H − f_L) / (f_H + f_L). However, the center frequency is sometimes defined as the geometric mean of the upper and lower frequencies, f_C = √(f_H f_L), giving B_F = (f_H − f_L) / √(f_H f_L). While the geometric mean is more rarely used than the arithmetic mean (and the latter can be assumed if not stated explicitly), the former is considered more mathematically rigorous. It more properly reflects the logarithmic relationship of fractional bandwidth with increasing frequency. For narrowband applications, there is only marginal difference between the two definitions. The geometric mean version is inconsequentially larger. For wideband applications they diverge substantially, with the arithmetic mean version approaching 2 in the limit and the geometric mean version approaching infinity. Fractional bandwidth is sometimes expressed as a percentage of the center frequency (percent bandwidth, %B): %B = 100 Δf / f_C. Ratio bandwidth Ratio bandwidth is defined as the ratio of the upper and lower limits of the band: B_R = f_H / f_L. Ratio bandwidth may be notated as B_R:1. The relationship between ratio bandwidth and fractional bandwidth is given by B_F = 2 (B_R − 1) / (B_R + 1) and B_R = (2 + B_F) / (2 − B_F). Percent bandwidth is a less meaningful measure in wideband applications. A percent bandwidth of 100% corresponds to a ratio bandwidth of 3:1. All higher ratios up to infinity are compressed into the range 100–200%. Ratio bandwidth is often expressed in octaves (i.e., as a frequency level) for wideband applications. An octave is a frequency ratio of 2:1, leading to this expression for the number of octaves: log2(B_R). Noise equivalent bandwidth The noise equivalent bandwidth (or equivalent noise bandwidth (enbw)) of a system with frequency response H(f) is the bandwidth of an ideal filter with rectangular frequency response, centered on the system's central frequency, that produces the same average outgoing power when both systems are excited with a white noise source. The value of the noise equivalent bandwidth depends on the ideal filter reference gain used. Typically, this gain equals |H(f)|² at its center frequency, but it can also equal the peak value of |H(f)|². The noise equivalent bandwidth can be calculated in the frequency domain using H(f) or in the time domain by exploiting Parseval's theorem with the system impulse response h(t). If H(f) is a lowpass system with zero central frequency and the filter reference gain is referred to this frequency, then: B_N = ∫₀^∞ |H(f)|² df / |H(0)|². The same expression can be applied to bandpass systems by substituting the equivalent baseband frequency response for H(f). The noise equivalent bandwidth is widely used to simplify the analysis of telecommunication systems in the presence of noise. Photonics In photonics, the term bandwidth carries a variety of meanings: the bandwidth of the output of some light source, e.g., an ASE source or a laser; the bandwidth of ultrashort optical pulses can be particularly large the width of the frequency range that can be transmitted by some element, e.g. an optical fiber the gain bandwidth of an optical amplifier the width of the range of some other phenomenon, e.g., a reflection, the phase matching of a nonlinear process, or some resonance the maximum modulation frequency (or range of modulation frequencies) of an optical modulator the range of frequencies in which some measurement apparatus (e.g., a power meter) can operate the data rate (e.g., in Gbit/s) achieved in an optical communication system; see bandwidth (computing). 
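Returning to the quantitative definitions above, the sketch below evaluates fractional, ratio, and noise equivalent bandwidth numerically. The 1-3 GHz band and the first-order low-pass response are assumed examples; for that filter the analytic noise equivalent bandwidth is (π/2) fc, which the numerical integral reproduces.

    import numpy as np

    # Relative bandwidth for an assumed passband fL = 1 GHz to fH = 3 GHz.
    fL, fH = 1e9, 3e9
    B = fH - fL                      # absolute bandwidth
    print(B / ((fH + fL) / 2))       # fractional BW (arithmetic): 1.0, i.e. 100%
    print(B / np.sqrt(fH * fL))      # fractional BW (geometric): ~1.155
    print(fH / fL)                   # ratio bandwidth: 3.0 (a 3:1 band)
    print(np.log2(fH / fL))          # ~1.58 octaves

    # Noise equivalent bandwidth of a first-order low-pass with
    # |H(f)|^2 = 1 / (1 + (f/fc)^2), referenced to the gain at f = 0.
    fc = 1e3
    f = np.linspace(0.0, 1e6, 4_000_001)
    H2 = 1.0 / (1.0 + (f / fc) ** 2)
    enbw = np.sum((H2[1:] + H2[:-1]) / 2 * np.diff(f)) / H2[0]
    print(enbw, np.pi / 2 * fc)      # ~1569.8 Hz vs analytic ~1570.8 Hz

Note that the assumed band also reproduces the correspondence stated above: a fractional bandwidth of 100% is exactly a 3:1 ratio bandwidth.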
In photonics, a related concept is the spectral linewidth of the radiation emitted by excited atoms. See also Bandwidth extension Broadband Noise bandwidth Rise time Spectral efficiency Spectral width
Bandwidth (signal processing)
[ "Physics", "Technology", "Engineering" ]
2,117
[ "Physical phenomena", "Telecommunications engineering", "Computer engineering", "Spectrum (physical sciences)", "Signal processing", "Waves" ]
4,101
https://en.wikipedia.org/wiki/Brouwer%20fixed-point%20theorem
Brouwer's fixed-point theorem is a fixed-point theorem in topology, named after L. E. J. (Bertus) Brouwer. It states that for any continuous function f mapping a nonempty compact convex set to itself, there is a point x0 such that f(x0) = x0. The simplest forms of Brouwer's theorem are for continuous functions from a closed interval in the real numbers to itself or from a closed disk to itself. A more general form than the latter is for continuous functions from a nonempty convex compact subset of Euclidean space to itself. Among hundreds of fixed-point theorems, Brouwer's is particularly well known, due in part to its use across numerous fields of mathematics. In its original field, this result is one of the key theorems characterizing the topology of Euclidean spaces, along with the Jordan curve theorem, the hairy ball theorem, the invariance of dimension and the Borsuk–Ulam theorem. This gives it a place among the fundamental theorems of topology. The theorem is also used for proving deep results about differential equations and is covered in most introductory courses on differential geometry. It appears in unlikely fields such as game theory. In economics, Brouwer's fixed-point theorem and its extension, the Kakutani fixed-point theorem, play a central role in the proof of existence of general equilibrium in market economies as developed in the 1950s by economics Nobel prize winners Kenneth Arrow and Gérard Debreu. The theorem was first studied in view of work on differential equations by the French mathematicians around Henri Poincaré and Charles Émile Picard. Proving results such as the Poincaré–Bendixson theorem requires the use of topological methods. This work at the end of the 19th century opened the way to several successive versions of the theorem. The case of differentiable mappings of the n-dimensional closed ball was first proved in 1910 by Jacques Hadamard, and the general case for continuous mappings was proved by Brouwer in 1911. Statement The theorem has several formulations, depending on the context in which it is used and its degree of generalization. The simplest is sometimes given as follows: In the plane Every continuous function from a closed disk to itself has at least one fixed point. This can be generalized to an arbitrary finite dimension: In Euclidean space Every continuous function from a closed ball of a Euclidean space into itself has a fixed point. A slightly more general version is as follows: Convex compact set Every continuous function from a nonempty convex compact subset K of a Euclidean space to K itself has a fixed point. An even more general form is better known under a different name: Schauder fixed point theorem Every continuous function from a nonempty convex compact subset K of a Banach space to K itself has a fixed point. Importance of the pre-conditions The theorem holds only for functions that are endomorphisms (functions that have the same set as the domain and codomain) and for nonempty sets that are compact (thus, in particular, bounded and closed) and convex (or homeomorphic to convex). The following examples show why the pre-conditions are important. The function f as an endomorphism Consider the function f(x) = x + 1 with domain [−1,1]. The range of the function is [0,2]. Thus, f is not an endomorphism. Boundedness Consider the function f(x) = x + 1, which is a continuous function from the real line to itself. As it shifts every point to the right, it cannot have a fixed point. The real line is convex and closed, but not bounded. 
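Before the remaining pre-conditions, the one-dimensional case can be made concrete: for a continuous self-map of a closed bounded interval, a fixed point can always be located by bisecting g(x) = f(x) − x, the intermediate value theorem argument given later in this article. The cosine map below is an arbitrary assumed example; contrast f(x) = x + 1 above, for which g is identically 1 and never crosses zero.

    import math

    def fixed_point(f, a, b, tol=1e-12):
        """Bisection on g(x) = f(x) - x for a continuous f: [a, b] -> [a, b].
        g(a) >= 0 and g(b) <= 0, so g crosses zero somewhere in [a, b]."""
        g = lambda x: f(x) - x
        lo, hi = a, b
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if g(mid) >= 0:
                lo = mid      # keep the sign change to the right
            else:
                hi = mid
        return (lo + hi) / 2

    # cos maps [0, 1] into [cos 1, 1], a subset of [0, 1]
    x = fixed_point(math.cos, 0.0, 1.0)
    print(x, math.cos(x))  # both ~0.739085 (the "Dottie number")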
Closedness Consider the function f(x) = (x + 1) / 2, which is a continuous function from the open interval (−1,1) to itself. The only solution of f(x) = x is x = 1; since x = 1 is not part of the interval, f has no fixed point. The space (−1,1) is convex and bounded, but not closed. On the other hand, f does have a fixed point on the closed interval [−1,1], namely f(1) = 1. (The domain of f is (−1,1) but the range is (0,1), which are not the same. One of the conditions stated earlier is that the domain and range be the same, not that one be a subset of the other; so, strictly speaking, the reason f fails here is not the lack of closedness.) Convexity Convexity is not strictly necessary for Brouwer's fixed-point theorem. Because the properties involved (continuity, being a fixed point) are invariant under homeomorphisms, Brouwer's fixed-point theorem is equivalent to forms in which the domain is required to be a closed unit ball Dn. For the same reason it holds for every set that is homeomorphic to a closed ball (and therefore also closed, bounded, connected, without holes, etc.). The following example shows that Brouwer's fixed-point theorem does not work for domains with holes. Consider the function f(x) = −x, which is a continuous function from the unit circle to itself. Since −x ≠ x holds for any point of the unit circle, f has no fixed point. The analogous example works for the n-dimensional sphere (or any symmetric domain that does not contain the origin). The unit circle is closed and bounded, but it has a hole (and so it is not convex). The function f does have a fixed point for the unit disc, since it takes the origin to itself. A formal generalization of Brouwer's fixed-point theorem for "hole-free" domains can be derived from the Lefschetz fixed-point theorem. Notes The continuous function in this theorem is not required to be bijective or surjective. Illustrations The theorem has several "real world" illustrations. Here are some examples. Take two sheets of graph paper of equal size with coordinate systems on them, lay one flat on the table and crumple up (without ripping or tearing) the other one and place it, in any fashion, on top of the first so that the crumpled paper does not reach outside the flat one. There will then be at least one point of the crumpled sheet that lies directly above its corresponding point (i.e. the point with the same coordinates) of the flat sheet. This is a consequence of the n = 2 case of Brouwer's theorem applied to the continuous map that assigns to the coordinates of every point of the crumpled sheet the coordinates of the point of the flat sheet immediately beneath it. Take an ordinary map of a country, and suppose that that map is laid out on a table inside that country. There will always be a "You are Here" point on the map which represents that same point in the country. In three dimensions a consequence of the Brouwer fixed-point theorem is that, no matter how much you stir a delicious cocktail in a glass (or think of a milkshake), when the liquid has come to rest, some point in the liquid will end up in exactly the same place in the glass as before you took any action, assuming that the final position of each point is a continuous function of its original position, that the liquid after stirring is contained within the space originally taken up by it, and that the glass (and stirred surface shape) maintain a convex volume. 
Ordering a cocktail shaken, not stirred defeats the convexity condition ("shaking" being defined as a dynamic series of non-convex inertial containment states in the vacant headspace under a lid). In that case, the theorem would not apply, and thus all points of the liquid disposition are potentially displaced from the original state. Intuitive approach Explanations attributed to Brouwer The theorem is supposed to have originated from Brouwer's observation of a cup of gourmet coffee. If one stirs to dissolve a lump of sugar, it appears there is always a point without motion. He drew the conclusion that at any moment, there is a point on the surface that is not moving. The fixed point is not necessarily the point that seems to be motionless, since the centre of the turbulence moves a little bit. The result is not intuitive, since the original fixed point may become mobile when another fixed point appears. Brouwer is said to have added: "I can formulate this splendid result differently: I take a horizontal sheet, and another identical one which I crumple, flatten and place on the other. Then a point of the crumpled sheet is in the same place as on the other sheet." Brouwer "flattens" his sheet as with a flat iron, without removing the folds and wrinkles. Unlike the coffee cup example, the crumpled paper example also demonstrates that more than one fixed point may exist. This distinguishes Brouwer's result from other fixed-point theorems, such as Stefan Banach's, that guarantee uniqueness. One-dimensional case In one dimension, the result is intuitive and easy to prove. The continuous function f is defined on a closed interval [a, b] and takes values in the same interval. Saying that this function has a fixed point amounts to saying that its graph intersects that of the function defined on the same interval [a, b] which maps x to x. Intuitively, any continuous line from the left edge of the square to the right edge must necessarily intersect the diagonal. To prove this, consider the function g which maps x to f(x) − x. It is ≥ 0 at a and ≤ 0 at b. By the intermediate value theorem, g has a zero in [a, b]; this zero is a fixed point. Brouwer is said to have expressed this as follows: "Instead of examining a surface, we will prove the theorem about a piece of string. Let us begin with the string in an unfolded state, then refold it. Let us flatten the refolded string. Again a point of the string has not changed its position with respect to its original position on the unfolded string." History The Brouwer fixed point theorem was one of the early achievements of algebraic topology, and is the basis of more general fixed point theorems which are important in functional analysis. The case n = 3 was first proved by Piers Bohl in 1904 (published in Journal für die reine und angewandte Mathematik). It was later proved by L. E. J. Brouwer in 1909. Jacques Hadamard proved the general case in 1910, and Brouwer found a different proof in the same year. Since these early proofs were all non-constructive indirect proofs, they ran contrary to Brouwer's intuitionist ideals. Although the existence of a fixed point is not constructive in the sense of constructivism in mathematics, methods to approximate fixed points guaranteed by Brouwer's theorem are now known. Before discovery At the end of the 19th century, the old problem of the stability of the solar system returned to the focus of the mathematical community. Its solution required new methods. 
As noted by Henri Poincaré, who worked on the three-body problem, there is no hope of finding an exact solution: "Nothing is more proper to give us an idea of the hardness of the three-body problem, and generally of all problems of Dynamics where there is no uniform integral and the Bohlin series diverge." He also noted that the search for an approximate solution is no more efficient: "the more we seek to obtain precise approximations, the more the result will diverge towards an increasing imprecision". He studied a question analogous to that of the surface movement in a cup of coffee. What can we say, in general, about the trajectories on a surface animated by a constant flow? Poincaré discovered that the answer can be found in what we now call the topological properties in the area containing the trajectory. If this area is compact, i.e. both closed and bounded, then the trajectory either becomes stationary, or it approaches a limit cycle. Poincaré went further; if the area is of the same kind as a disk, as is the case for the cup of coffee, there must necessarily be a fixed point. This fixed point is invariant under all functions which associate to each point of the original surface its position after a short time interval t. If the area is a circular band, or if it is not closed, then this is not necessarily the case. To understand differential equations better, a new branch of mathematics was born. Poincaré called it analysis situs. The French Encyclopædia Universalis defines it as the branch which "treats the properties of an object that are invariant if it is deformed in any continuous way, without tearing". In 1886, Poincaré proved a result that is equivalent to Brouwer's fixed-point theorem, although the connection with the subject of this article was not yet apparent. A little later, he developed one of the fundamental tools for better understanding the analysis situs, now known as the fundamental group or sometimes the Poincaré group. This method can be used for a very compact proof of the theorem under discussion. Poincaré's method was analogous to that of Émile Picard, a contemporary mathematician who generalized the Cauchy–Lipschitz theorem. Picard's approach is based on a result that would later be formalised by another fixed-point theorem, named after Banach. Instead of the topological properties of the domain, this theorem uses the fact that the function in question is a contraction. First proofs At the dawn of the 20th century, the interest in analysis situs did not go unnoticed. However, the necessity of a theorem equivalent to the one discussed in this article was not yet evident. Piers Bohl, a Latvian mathematician, applied topological methods to the study of differential equations. In 1904 he proved the three-dimensional case of our theorem, but his publication was not noticed. It was Brouwer, finally, who gave the theorem its first patent of nobility. His goals were different from those of Poincaré. This mathematician was inspired by the foundations of mathematics, especially mathematical logic and topology. His initial interest lay in an attempt to solve Hilbert's fifth problem. In 1909, during a voyage to Paris, he met Henri Poincaré, Jacques Hadamard, and Émile Borel. The ensuing discussions convinced Brouwer of the importance of a better understanding of Euclidean spaces, and were the origin of a fruitful exchange of letters with Hadamard. For the next four years, he concentrated on the proof of certain great theorems on this question. 
In 1912 he proved the hairy ball theorem for the two-dimensional sphere, as well as the fact that every continuous map from the two-dimensional ball to itself has a fixed point. These two results in themselves were not really new. As Hadamard observed, Poincaré had shown a theorem equivalent to the hairy ball theorem. The revolutionary aspect of Brouwer's approach was his systematic use of recently developed tools such as homotopy, the underlying concept of the Poincaré group. In the following year, Hadamard generalised the theorem under discussion to an arbitrary finite dimension, but he employed different methods. Hans Freudenthal comments on the respective roles as follows: "Compared to Brouwer's revolutionary methods, those of Hadamard were very traditional, but Hadamard's participation in the birth of Brouwer's ideas resembles that of a midwife more than that of a mere spectator." Brouwer's approach bore fruit, and in 1910 he also found a proof that was valid for any finite dimension, as well as other key theorems such as the invariance of dimension. In the context of this work, Brouwer also generalized the Jordan curve theorem to arbitrary dimension and established the properties connected with the degree of a continuous mapping. This branch of mathematics, originally envisioned by Poincaré and developed by Brouwer, changed its name. In the 1930s, analysis situs became algebraic topology. Reception The theorem proved its worth in more than one way. During the 20th century numerous fixed-point theorems were developed, and even a branch of mathematics called fixed-point theory. Brouwer's theorem is probably the most important. It is also among the foundational theorems on the topology of topological manifolds and is often used to prove other important results such as the Jordan curve theorem. Besides the fixed-point theorems for more or less contracting functions, there are many that have emerged directly or indirectly from the result under discussion. A continuous map from a closed ball of Euclidean space to its boundary cannot be the identity on the boundary. Similarly, the Borsuk–Ulam theorem says that a continuous map from the n-dimensional sphere to Rn has a pair of antipodal points that are mapped to the same point. In the finite-dimensional case, the Lefschetz fixed-point theorem provided from 1926 a method for counting fixed points. In 1930, Brouwer's fixed-point theorem was generalized to Banach spaces. This generalization is known as Schauder's fixed-point theorem, a result generalized further by S. Kakutani to set-valued functions. One also meets the theorem and its variants outside topology. It can be used to prove the Hartman-Grobman theorem, which describes the qualitative behaviour of certain differential equations near certain equilibria. Similarly, Brouwer's theorem is used for the proof of the central limit theorem. The theorem can also be found in existence proofs for the solutions of certain partial differential equations. Other areas are also touched. In game theory, John Nash used the theorem to prove that in the game of Hex there is a winning strategy for white. In economics, P. Bich explains that certain generalizations of the theorem show that its use is helpful for certain classical problems in game theory and generally for equilibria (Hotelling's law), financial equilibria and incomplete markets. Brouwer's celebrity is not exclusively due to his topological work. 
The proofs of his great topological theorems are not constructive, and Brouwer's dissatisfaction with this is partly what led him to articulate the idea of constructivity. He became the originator and zealous defender of a way of formalising mathematics that is known as intuitionism, which at the time made a stand against set theory. Brouwer disavowed his original proof of the fixed-point theorem. Proof outlines A proof using degree Brouwer's original 1911 proof relied on the notion of the degree of a continuous mapping, stemming from ideas in differential topology. Several modern accounts of the proof can be found in the literature. Let B denote the closed unit ball in Rn centered at the origin. Suppose for simplicity that f : B → B is continuously differentiable. A regular value of f is a point p ∈ B such that the Jacobian of f is non-singular at every point of the preimage of p. In particular, by the inverse function theorem, every point of the preimage of p lies in the interior of B. The degree of f at a regular value p is defined as the sum of the signs of the Jacobian determinant of f over the preimages of p under f: deg_p(f) = Σ_{x ∈ f^(−1)(p)} sign det(Df(x)). The degree is, roughly speaking, the number of "sheets" of the preimage of a small open set around p, with sheets counted oppositely if they are oppositely oriented. This is thus a generalization of winding number to higher dimensions. The degree satisfies the property of homotopy invariance: let f0 and f1 be two continuously differentiable functions, and let f_t(x) = (1 − t) f0(x) + t f1(x) for 0 ≤ t ≤ 1. Suppose that the point p is a regular value of f_t for all t. Then deg_p f0 = deg_p f1. If there is no fixed point of f on the boundary of B, then the function g(x) = x − f(x) is well-defined, and g_t(x) = x − t f(x) defines a homotopy from the identity function to it. The identity function has degree one at every point. In particular, the identity function has degree one at the origin, so g also has degree one at the origin. As a consequence, the preimage g^(−1)(0) is not empty. The elements of g^(−1)(0) are precisely the fixed points of the original function f. This requires some work to make fully general. The definition of degree must be extended to singular values of f, and then to continuous functions. The more modern advent of homology theory simplifies the construction of the degree, and so has become a standard proof in the literature. A proof using the hairy ball theorem The hairy ball theorem states that on the unit sphere S in an odd-dimensional Euclidean space, there is no nowhere-vanishing continuous tangent vector field w on S. (The tangency condition means that w(x) · x = 0 for every unit vector x.) Sometimes the theorem is expressed by the statement that "there is always a place on the globe with no wind". An elementary proof of the hairy ball theorem proceeds as follows. Suppose first that w is continuously differentiable. By scaling, it can be assumed that w is a continuously differentiable unit tangent vector on S. It can be extended radially to a small spherical shell A of S. For ε sufficiently small, a routine computation shows that the mapping f_ε(x) = x + ε w(x) is a contraction mapping on A and that the volume of its image is a polynomial in ε. On the other hand, as a contraction mapping, f_ε must restrict to a homeomorphism of S onto (1 + ε²)^(1/2) S and of A onto (1 + ε²)^(1/2) A. This gives a contradiction, because, if the dimension n of the Euclidean space is odd, (1 + ε²)^(n/2) is not a polynomial. If w is only a continuous unit tangent vector on S, by the Weierstrass approximation theorem, it can be uniformly approximated by a polynomial map u of S into Euclidean space. The orthogonal projection onto the tangent space is given by P(x) = u(x) − (u(x) · x) x. 
A proof using the hairy ball theorem

The hairy ball theorem states that on the unit sphere $S$ in an odd-dimensional Euclidean space, there is no nowhere-vanishing continuous tangent vector field $w$ on $S$. (The tangency condition means that $w(x) \cdot x = 0$ for every unit vector $x$.) Sometimes the theorem is expressed by the statement that "there is always a place on the globe with no wind". An elementary proof of the hairy ball theorem can be found in the literature.

In fact, suppose first that $w$ is continuously differentiable. By scaling, it can be assumed that $w$ is a continuously differentiable unit tangent vector on $S$. It can be extended radially to a small spherical shell $A$ of $S$. For $\varepsilon$ sufficiently small, a routine computation shows that the mapping $f_\varepsilon(x) = x + \varepsilon\, w(x)$ is a contraction mapping on $A$ and that the volume of its image is a polynomial in $\varepsilon$. On the other hand, as a contraction mapping, $f_\varepsilon$ must restrict to a homeomorphism of $S$ onto $(1 + \varepsilon^2)^{1/2} S$ and of $A$ onto $(1 + \varepsilon^2)^{1/2} A$. This gives a contradiction, because, if the dimension $n$ of the Euclidean space is odd, $(1 + \varepsilon^2)^{n/2}$ is not a polynomial.

If $w$ is only a continuous unit tangent vector on $S$, by the Weierstrass approximation theorem it can be uniformly approximated by a polynomial map $u$ of $A$ into Euclidean space. The orthogonal projection onto the tangent space is given by $v(x) = u(x) - (u(x) \cdot x)\, x$. Thus $v$ is polynomial and, for a sufficiently good approximation, nowhere vanishing on $S$; by construction $v / \|v\|$ is a smooth unit tangent vector field on $S$, a contradiction.

The continuous version of the hairy ball theorem can now be used to prove the Brouwer fixed point theorem. First suppose that $n$ is even. If there were a fixed-point-free continuous self-mapping $f$ of the closed unit ball $B$ of the $n$-dimensional Euclidean space $V$, set

$$w(x) = x - f(x).$$

Since $f$ has no fixed points, it follows that, for $x$ in the interior of $B$, the vector $w(x)$ is non-zero; and for $x$ in $S$, the scalar product $x \cdot w(x) = 1 - x \cdot f(x)$ is strictly positive. From the original $n$-dimensional Euclidean space $V$, construct a new auxiliary $(n+1)$-dimensional space $W = V \times \mathbb{R}$, with coordinates $y = (x, t)$. Set

$$X(x, t) = \left( -t\, w(x),\; x \cdot w(x) \right).$$

By construction $X$ is a continuous vector field on the unit sphere of $W$, satisfying the tangency condition $y \cdot X(y) = 0$. Moreover, $X(y)$ is nowhere vanishing (because, if $x$ has norm 1, then $x \cdot w(x)$ is non-zero; while if $x$ has norm strictly less than 1, then $t$ and $w(x)$ are both non-zero). This contradicts the hairy ball theorem, since $W$ is odd-dimensional, and proves the fixed point theorem when $n$ is even. For $n$ odd, one can apply the fixed point theorem to the closed unit ball $B$ in $n + 1$ dimensions and the mapping $F(x, y) = (f(x), 0)$. The advantage of this proof is that it uses only elementary techniques; more general results like the Borsuk–Ulam theorem require tools from algebraic topology.

A proof using homology or cohomology

The proof uses the observation that the boundary of the n-disk Dn is Sn−1, the (n − 1)-sphere. Suppose, for contradiction, that a continuous function f : Dn → Dn has no fixed point. This means that, for every point x in Dn, the points x and f(x) are distinct. Because they are distinct, for every point x in Dn, we can construct a unique ray from f(x) to x and follow the ray until it intersects the boundary Sn−1. By calling this intersection point F(x), we define a function F : Dn → Sn−1 sending each point in the disk to its corresponding intersection point on the boundary. As a special case, whenever x itself is on the boundary, the intersection point F(x) must be x. Consequently, F is a special type of continuous function known as a retraction: every point of the codomain (in this case Sn−1) is a fixed point of F.

Intuitively it seems unlikely that there could be a retraction of Dn onto Sn−1, and in the case n = 1 the impossibility is basic, because S0 (i.e., the endpoints of the closed interval D1) is not even connected. The case n = 2 is less obvious, but can be proven by using basic arguments involving the fundamental groups of the respective spaces: the retraction would induce a surjective group homomorphism from the fundamental group of D2 to that of S1, but the latter group is isomorphic to Z while the first group is trivial, so this is impossible. The case n = 2 can also be proven by contradiction based on a theorem about non-vanishing vector fields.

For n > 2, however, proving the impossibility of the retraction is more difficult. One way is to make use of homology groups: the homology Hn−1(Dn) is trivial, while Hn−1(Sn−1) is infinite cyclic. This shows that the retraction is impossible, because again the retraction would induce an injective group homomorphism from the latter to the former group.

The impossibility of a retraction can also be shown using the de Rham cohomology of open subsets of Euclidean space En. For n ≥ 2, the de Rham cohomology of U = En − {0} is one-dimensional in degrees 0 and n − 1, and vanishes otherwise.
If a retraction existed, then U would have to be contractible and its de Rham cohomology in degree n − 1 would have to vanish, a contradiction.

A proof using Stokes' theorem

As in the proof of Brouwer's fixed-point theorem for continuous maps using homology, the problem is reduced to proving that there is no continuous retraction $F$ from the ball $B$ onto its boundary $\partial B$. In that case it can be assumed that $F$ is smooth, since it can be approximated using the Weierstrass approximation theorem or by convolving with non-negative smooth bump functions of sufficiently small support and integral one (i.e. mollifying). If $\omega$ is a volume form on the boundary, then by Stokes' theorem

$$0 < \int_{\partial B} \omega = \int_{\partial B} F^*(\omega) = \int_B d F^*(\omega) = \int_B F^*(d\omega) = \int_B F^*(0) = 0,$$

giving a contradiction. (Here $F^* \omega = \omega$ on $\partial B$ because $F$ restricts to the identity there, and $d\omega = 0$ because $\omega$ is a form of top degree on $\partial B$.) More generally, this shows that there is no smooth retraction from any non-empty smooth oriented compact manifold onto its boundary. The proof using Stokes' theorem is closely related to the proof using homology, because the form $\omega$ generates the de Rham cohomology group $H^{n-1}(\partial B)$, which is isomorphic to the homology group $H_{n-1}(\partial B)$ by de Rham's theorem.

A combinatorial proof

The Brouwer fixed-point theorem can be proved using Sperner's lemma. We now give an outline of the proof for the special case in which $f$ is a function from the standard $n$-simplex $\Delta^n$ to itself, where

$$\Delta^n = \left\{ P \in \mathbb{R}^{n+1} \;\middle|\; \sum_{i=0}^{n} P_i = 1 \text{ and } P_i \ge 0 \text{ for all } i \right\}.$$

For every point $P \in \Delta^n$, also $f(P) \in \Delta^n$. Hence the sums of their coordinates are equal:

$$\sum_{i=0}^{n} P_i = 1 = \sum_{i=0}^{n} f(P)_i.$$

Hence, by the pigeonhole principle, for every $P$ there must be an index $j$ such that the $j$th coordinate of $P$ is greater than or equal to the $j$th coordinate of its image under $f$: $P_j \ge f(P)_j$. Moreover, if $P$ lies on a $k$-dimensional sub-face of $\Delta^n$, then by the same argument the index $j$ can be selected from among the coordinates which are not zero on this sub-face.

We now use this fact to construct a Sperner coloring: for every triangulation of $\Delta^n$, the color of every vertex $P$ is an index $j$ such that $f(P)_j \le P_j$, chosen among the non-zero coordinates of $P$. By construction, this is a Sperner coloring. Hence, by Sperner's lemma, there is an $n$-dimensional simplex of the triangulation whose vertices are colored with the entire set of $n + 1$ available colors. Because $f$ is continuous, this simplex can be made arbitrarily small by choosing an arbitrarily fine triangulation, and a compactness argument then yields a point $P$ which satisfies the labeling condition in all coordinates: $f(P)_j \le P_j$ for all $j$. Because the sums of the coordinates of $P$ and $f(P)$ must be equal, all these inequalities must actually be equalities. But this means that $f(P) = P$; that is, $P$ is a fixed point of $f$.
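The combinatorial proof is effectively an algorithm. The sketch below (an illustration, not from the source text) runs it on the standard 2-simplex: vertices of a fine triangular grid are Sperner-colored with an index $j$ such that $f(P)_j \le P_j$ and $P_j > 0$, and a brute-force scan finds a fully-labeled small triangle, whose barycenter approximates a fixed point. The test map, a cyclic permutation of coordinates with fixed point (1/3, 1/3, 1/3), is an arbitrary choice.

```python
import numpy as np

def sperner_fixed_point(f, k=200):
    """Approximate a fixed point of f on the standard 2-simplex."""
    pt = lambda a, b: np.array([a, b, k - a - b]) / k   # grid vertex -> simplex

    def label(P):
        fP = f(P)
        # An admissible Sperner color always exists (pigeonhole argument).
        for j in range(3):
            if P[j] > 0 and fP[j] <= P[j] + 1e-12:
                return j
        raise AssertionError("f does not map the simplex to itself")

    lab = {(a, b): label(pt(a, b))
           for a in range(k + 1) for b in range(k + 1 - a)}

    # Scan the small "up" and "down" triangles of the grid for one carrying
    # all three colors; Sperner's lemma guarantees one exists.
    for a in range(k):
        for b in range(k - a):
            tris = [[(a, b), (a + 1, b), (a, b + 1)]]
            if a + b < k - 1:
                tris.append([(a + 1, b), (a, b + 1), (a + 1, b + 1)])
            for tri in tris:
                if {lab[v] for v in tri} == {0, 1, 2}:
                    return sum(pt(*v) for v in tri) / 3   # barycenter

# Cyclic map of the simplex; its unique fixed point is (1/3, 1/3, 1/3).
f = lambda P: np.array([P[1], P[2], P[0]])
print(sperner_fixed_point(f))   # close to [0.3333 0.3333 0.3333]
```

Refining the grid (larger `k`) shrinks the fully-labeled triangle and improves the approximation, mirroring the limit argument in the proof.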
A proof by Hirsch

There is also a quick proof, by Morris Hirsch, based on the impossibility of a differentiable retraction. The indirect proof starts by noting that the map f can be approximated by a smooth map retaining the property of not fixing a point; this can be done by using the Weierstrass approximation theorem or by convolving with smooth bump functions. One then defines a retraction as above, which must now be differentiable. Such a retraction must have a regular value, by Sard's theorem, which is also regular for its restriction to the boundary (which is just the identity). Thus the inverse image of that value would be a 1-manifold with boundary. The boundary would have to contain at least two end points, both of which would have to lie on the boundary of the original ball, which is impossible in a retraction.

R. Bruce Kellogg, Tien-Yien Li, and James A. Yorke turned Hirsch's proof into a computable proof by observing that the retraction is in fact defined everywhere except at the fixed points. For almost any point q on the boundary (assuming it is not a fixed point), the 1-manifold with boundary mentioned above does exist, and the only possibility is that it leads from q to a fixed point. It is an easy numerical task to follow such a path from q to the fixed point, so the method is essentially computable. Later authors gave a conceptually similar path-following version of the homotopy proof which extends to a wide variety of related problems.

A proof using oriented area

A variation of the preceding proof does not employ Sard's theorem, and goes as follows. If $r : B \to \partial B$ is a smooth retraction, one considers the smooth deformation $g^t(x) := t\, r(x) + (1 - t)\, x$ and the smooth function

$$\varphi(t) := \int_B \det Dg^t(x) \, dx.$$

Differentiating under the sign of the integral, it is not difficult to check that $\varphi'(t) = 0$ for all $t$, so $\varphi$ is a constant function, which is a contradiction because $\varphi(0)$ is the $n$-dimensional volume of the ball, while $\varphi(1)$ is zero. The geometric idea is that $\varphi(t)$ is the oriented area of $g^t(B)$ (that is, the Lebesgue measure of the image of the ball via $g^t$, taking into account multiplicity and orientation), and should remain constant (as is very clear in the one-dimensional case). On the other hand, as the parameter $t$ passes from 0 to 1, the map $g^t$ transforms continuously from the identity map of the ball to the retraction $r$, which is a contradiction, since the oriented area of the identity coincides with the volume of the ball, while the oriented area of $r$ is necessarily 0, as its image is the boundary of the ball, a set of null measure.

A proof using the game Hex

A quite different proof given by David Gale is based on the game of Hex. The basic theorem regarding Hex, first proven by John Nash, is that no game of Hex can end in a draw; the first player always has a winning strategy (although this theorem is nonconstructive, and explicit strategies have not been fully developed for board sizes of 10 × 10 or greater). This turns out to be equivalent to the Brouwer fixed-point theorem for dimension 2. By considering n-dimensional versions of Hex, one can prove in general that Brouwer's theorem is equivalent to the determinacy theorem for Hex.

A proof using the Lefschetz fixed-point theorem

The Lefschetz fixed-point theorem says that if a continuous map f from a finite simplicial complex B to itself has only isolated fixed points, then the number of fixed points counted with multiplicities (which may be negative) is equal to the Lefschetz number

$$\Lambda_f = \sum_k (-1)^k \operatorname{tr}\left( f_* \mid H_k(B) \right),$$

and in particular, if the Lefschetz number is nonzero, then f must have a fixed point. If B is a ball (or, more generally, is contractible), then the Lefschetz number is one, because the only non-zero simplicial homology group is $H_0(B)$ and f acts as the identity on this group; so f has a fixed point.

A proof in a weak logical system

In reverse mathematics, Brouwer's theorem can be proved in the system WKL0, and conversely, over the base system RCA0, Brouwer's theorem for a square implies the weak Kőnig's lemma, so this gives a precise description of the strength of Brouwer's theorem.

Generalizations

The Brouwer fixed-point theorem forms the starting point of a number of more general fixed-point theorems. The straightforward generalization to infinite dimensions, i.e. using the unit ball of an arbitrary Hilbert space instead of Euclidean space, is not true. The main problem here is that the unit balls of infinite-dimensional Hilbert spaces are not compact. For example, in the Hilbert space ℓ2 of square-summable real (or complex) sequences, consider the map f : ℓ2 → ℓ2 which sends a sequence (xn) from the closed unit ball of ℓ2 to the sequence (yn) defined by

$$y_1 = \sqrt{1 - \|x\|_2^2} \quad \text{and} \quad y_n = x_{n-1} \text{ for } n \ge 2.$$

It is not difficult to check that this map is continuous, has its image in the unit sphere of ℓ2, but does not have a fixed point.
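A quick numeric illustration of this counterexample (an informal check, not from the source text, with the sequence truncated to finitely many coordinates): the map shifts the coordinates right and inserts $\sqrt{1 - \|x\|^2}$ in front, so every image lies on the unit sphere, and the iterates never approach a fixed point.

```python
import numpy as np

def f(x):
    """Shift-and-fill map on a truncation of the closed unit ball of l2."""
    first = np.sqrt(max(0.0, 1.0 - x @ x))   # max() guards float rounding
    return np.concatenate(([first], x[:-1]))

x = np.zeros(50)                              # start at the center of the ball
for k in range(5):
    x = f(x)
    print(k, np.linalg.norm(x), np.linalg.norm(x - f(x)))
# The norm of every iterate is 1 (the image lies on the unit sphere), while
# ||x - f(x)|| stays bounded away from 0 along the orbit: no approximate
# fixed point appears, in contrast with the finite-dimensional theorem.
```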
The generalizations of the Brouwer fixed-point theorem to infinite-dimensional spaces therefore all include a compactness assumption of some sort, and often an assumption of convexity as well. See fixed-point theorems in infinite-dimensional spaces for a discussion of these theorems.

There is also a finite-dimensional generalization to a larger class of spaces: if $X$ is a product of finitely many chainable continua, then every continuous function $f : X \to X$ has a fixed point, where a chainable continuum is a (usually but in this case not necessarily metric) compact Hausdorff space of which every open cover has a finite open refinement $\{U_1, \ldots, U_m\}$ such that $U_i \cap U_j \ne \emptyset$ if and only if $|i - j| \le 1$. Examples of chainable continua include compact connected linearly ordered spaces and, in particular, closed intervals of real numbers.

The Kakutani fixed point theorem generalizes the Brouwer fixed-point theorem in a different direction: it stays in Rn, but considers upper hemi-continuous set-valued functions (functions that assign to each point of the set a subset of the set). It also requires compactness and convexity of the set.

The Lefschetz fixed-point theorem applies to (almost) arbitrary compact topological spaces, and gives a condition in terms of singular homology that guarantees the existence of fixed points; this condition is trivially satisfied for any map in the case of Dn.

Equivalent results

See also

Banach fixed-point theorem
Fixed-point computation
Infinite compositions of analytic functions
Nash equilibrium
Poincaré–Miranda theorem – equivalent to the Brouwer fixed-point theorem
Topological combinatorics

Notes

References

(see pp. 72–73 for Hirsch's proof utilizing the non-existence of a differentiable retraction)
Leoni, Giovanni (2017). A First Course in Sobolev Spaces: Second Edition. Graduate Studies in Mathematics 181. American Mathematical Society. pp. 734.

External links

Brouwer's Fixed Point Theorem for Triangles at cut-the-knot
Brouwer theorem, from PlanetMath, with attached proof
Reconstructing Brouwer at MathPages
Brouwer Fixed Point Theorem at Math Images

Fixed-point theorems
Theory of continuous functions
Theorems in topology
Theorems in convex geometry
Brouwer fixed-point theorem
[ "Mathematics" ]
7,396
[ "Theorems in mathematical analysis", "Mathematical theorems", "Theory of continuous functions", "Fixed-point theorems", "Theorems in topology", "Theorems in convex geometry", "Topology", "Theorems in geometry", "Mathematical problems" ]
4,107
https://en.wikipedia.org/wiki/Boltzmann%20distribution
In statistical mechanics and mathematics, a Boltzmann distribution (also called Gibbs distribution) is a probability distribution or probability measure that gives the probability that a system will be in a certain state as a function of that state's energy and the temperature of the system. The distribution is expressed in the form

$$p_i \propto \exp\left(-\frac{\varepsilon_i}{kT}\right)$$

where $p_i$ is the probability of the system being in state $i$, $\exp$ is the exponential function, $\varepsilon_i$ is the energy of that state, and the constant $kT$ of the distribution is the product of the Boltzmann constant $k$ and the thermodynamic temperature $T$. The symbol $\propto$ denotes proportionality (see below for the proportionality constant).

The term system here has a wide meaning; it can range from a collection of a sufficient number of atoms, or a single atom, to a macroscopic system such as a natural gas storage tank. Therefore, the Boltzmann distribution can be used to solve a wide variety of problems. The distribution shows that states with lower energy will always have a higher probability of being occupied.

The ratio of probabilities of two states is known as the Boltzmann factor and characteristically depends only on the states' energy difference:

$$\frac{p_i}{p_j} = \exp\left(\frac{\varepsilon_j - \varepsilon_i}{kT}\right).$$

The Boltzmann distribution is named after Ludwig Boltzmann, who first formulated it in 1868 during his studies of the statistical mechanics of gases in thermal equilibrium. Boltzmann's statistical work is borne out in his paper "On the Relationship between the Second Fundamental Theorem of the Mechanical Theory of Heat and Probability Calculations Regarding the Conditions for Thermal Equilibrium". The distribution was later investigated extensively, in its modern generic form, by Josiah Willard Gibbs in 1902.

The Boltzmann distribution should not be confused with the Maxwell–Boltzmann distribution or Maxwell–Boltzmann statistics. The Boltzmann distribution gives the probability that a system will be in a certain state as a function of that state's energy, while the Maxwell–Boltzmann distributions give the probabilities of particle speeds or energies in ideal gases. The distribution of energies in a one-dimensional gas, however, does follow the Boltzmann distribution.

The distribution

The Boltzmann distribution is a probability distribution that gives the probability of a certain state as a function of that state's energy and the temperature of the system to which the distribution is applied. It is given as

$$p_i = \frac{1}{Q} \exp\left(-\frac{\varepsilon_i}{kT}\right) = \frac{\exp\left(-\varepsilon_i / kT\right)}{\displaystyle\sum_{j=1}^{M} \exp\left(-\varepsilon_j / kT\right)}$$

where: $p_i$ is the probability of state $i$; $\varepsilon_i$ is the energy of state $i$; $k$ is the Boltzmann constant; $T$ is the absolute temperature of the system; $M$ is the number of all states accessible to the system of interest; and $Q$ (denoted by some authors by $Z$) is the normalization denominator, which is the canonical partition function

$$Q = \sum_{j=1}^{M} \exp\left(-\frac{\varepsilon_j}{kT}\right).$$

It results from the constraint that the probabilities of all accessible states must add up to 1.

Using Lagrange multipliers, one can prove that the Boltzmann distribution is the distribution that maximizes the entropy

$$S(p_1, \ldots, p_M) = -\sum_{i=1}^{M} p_i \log p_i$$

subject to the normalization constraint $\sum p_i = 1$ and the constraint that $\sum p_i \varepsilon_i$ equals a particular mean energy value, except for two special cases. (These special cases occur when the mean value is either the minimum or the maximum of the energies $\varepsilon_i$. In these cases, the entropy-maximizing distribution is a limit of Boltzmann distributions where $T$ approaches zero from above or below, respectively.)

The partition function can be calculated if we know the energies of the states accessible to the system of interest. For atoms, partition function values can be found in the NIST Atomic Spectra Database.
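A minimal numerical sketch of the formulas above (the three-level system and its energies are illustrative values, not from the text): it computes the partition function and the state probabilities, checks the Boltzmann factor, and shows the result coinciding with a softmax of the scaled negative energies, a connection noted later in the article.

```python
import numpy as np

k_B = 1.380649e-23          # Boltzmann constant, J/K

def boltzmann_probabilities(energies_J, T):
    """p_i = exp(-E_i / kT) / Q for a discrete set of states."""
    beta = 1.0 / (k_B * T)
    # Subtracting the lowest energy avoids underflow; it cancels in the ratio.
    w = np.exp(-beta * (energies_J - energies_J.min()))
    return w / w.sum()       # w.sum() plays the role of the denominator Q

# Hypothetical three-level system at 300 K.
E = np.array([0.0, 1e-21, 2e-21])
p = boltzmann_probabilities(E, T=300.0)
print(p, p.sum())            # probabilities summing to 1

# Boltzmann factor: p_i / p_j depends only on the energy difference.
print(p[0] / p[1], np.exp((E[1] - E[0]) / (k_B * 300.0)))

# Same numbers as a softmax of -E/(kT).
z = -E / (k_B * 300.0)
print(np.exp(z - z.max()) / np.exp(z - z.max()).sum())
```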
The distribution shows that states with lower energy will always have a higher probability of being occupied than states with higher energy. It can also give us the quantitative relationship between the probabilities of two states being occupied. The ratio of probabilities for states $i$ and $j$ is given as

$$\frac{p_i}{p_j} = \exp\left(\frac{\varepsilon_j - \varepsilon_i}{kT}\right)$$

where $p_i$ is the probability of state $i$, $p_j$ the probability of state $j$, and $\varepsilon_i$ and $\varepsilon_j$ are the energies of states $i$ and $j$, respectively. The corresponding ratio of populations of energy levels must also take their degeneracies into account.

The Boltzmann distribution is often used to describe the distribution of particles, such as atoms or molecules, over bound states accessible to them. If we have a system consisting of many particles, the probability of a particle being in state $i$ is practically the probability that, if we pick a random particle from that system and check what state it is in, we will find it is in state $i$. This probability is equal to the number of particles in state $i$ divided by the total number of particles in the system, that is, the fraction of particles that occupy state $i$:

$$p_i = \frac{N_i}{N}$$

where $N_i$ is the number of particles in state $i$ and $N$ is the total number of particles in the system. We may use the Boltzmann distribution to find this probability, which is, as we have seen, equal to the fraction of particles that are in state $i$. So the equation that gives the fraction of particles in state $i$ as a function of the energy of that state is

$$\frac{N_i}{N} = \frac{\exp\left(-\varepsilon_i / kT\right)}{\displaystyle\sum_{j=1}^{M} \exp\left(-\varepsilon_j / kT\right)}.$$

This equation is of great importance to spectroscopy. In spectroscopy we observe a spectral line of atoms or molecules undergoing transitions from one state to another. In order for this to be possible, there must be some particles in the first state to undergo the transition. We may find that this condition is fulfilled by finding the fraction of particles in the first state. If it is negligible, the transition is very likely not observed at the temperature for which the calculation was done. In general, a larger fraction of molecules in the first state means a higher number of transitions to the second state, giving a stronger spectral line. However, there are other factors that influence the intensity of a spectral line, such as whether it is caused by an allowed or a forbidden transition.

The softmax function commonly used in machine learning is related to the Boltzmann distribution:

$$(p_1, \ldots, p_M) = \operatorname{softmax}\left(-\frac{\varepsilon_1}{kT}, \ldots, -\frac{\varepsilon_M}{kT}\right).$$

Generalized Boltzmann distribution

A distribution of Boltzmann form whose exponent contains, in addition to the energy, products of conjugate thermodynamic variables (such as chemical potential and particle number, or pressure and volume) is called a generalized Boltzmann distribution by some authors. The Boltzmann distribution is a special case of the generalized Boltzmann distribution. The generalized Boltzmann distribution is used in statistical mechanics to describe the canonical ensemble, grand canonical ensemble and isothermal–isobaric ensemble. It is usually derived from the principle of maximum entropy, but there are other derivations.

The generalized Boltzmann distribution has the following properties:
It is the only distribution for which the entropy as defined by the Gibbs entropy formula matches with the entropy as defined in classical thermodynamics.
It is the only distribution that is mathematically consistent with the fundamental thermodynamic relation where state functions are described by ensemble averages.

In statistical mechanics

The Boltzmann distribution appears in statistical mechanics when considering closed systems of fixed composition that are in thermal equilibrium (equilibrium with respect to energy exchange). The most general case is the probability distribution for the canonical ensemble.
Some special cases (derivable from the canonical ensemble) show the Boltzmann distribution in different aspects:

Canonical ensemble (general case). The canonical ensemble gives the probabilities of the various possible states of a closed system of fixed volume, in thermal equilibrium with a heat bath. The canonical ensemble has a state probability distribution with the Boltzmann form.

Statistical frequencies of subsystems' states (in a non-interacting collection). When the system of interest is a collection of many non-interacting copies of a smaller subsystem, it is sometimes useful to find the statistical frequency of a given subsystem state among the collection. The canonical ensemble has the property of separability when applied to such a collection: as long as the non-interacting subsystems have fixed composition, each subsystem's state is independent of the others and is also characterized by a canonical ensemble. As a result, the expected statistical frequency distribution of subsystem states has the Boltzmann form.

Maxwell–Boltzmann statistics of classical gases (systems of non-interacting particles). In particle systems, many particles share the same space and regularly change places with each other; the single-particle state space they occupy is a shared space. Maxwell–Boltzmann statistics give the expected number of particles found in a given single-particle state in a classical gas of non-interacting particles at equilibrium. This expected number distribution has the Boltzmann form.

Although these cases have strong similarities, it is helpful to distinguish them, as they generalize in different ways when the crucial assumptions are changed:

When a system is in thermodynamic equilibrium with respect to both energy exchange and particle exchange, the requirement of fixed composition is relaxed and a grand canonical ensemble is obtained rather than a canonical ensemble. On the other hand, if both composition and energy are fixed, then a microcanonical ensemble applies instead.

If the subsystems within a collection do interact with each other, then the expected frequencies of subsystem states no longer follow a Boltzmann distribution, and may not even have an analytical solution. The canonical ensemble can, however, still be applied to the collective states of the entire system considered as a whole, provided the entire system is in thermal equilibrium.

With quantum gases of non-interacting particles in equilibrium, the number of particles found in a given single-particle state does not follow Maxwell–Boltzmann statistics, and there is no simple closed-form expression for quantum gases in the canonical ensemble. In the grand canonical ensemble, the state-filling statistics of quantum gases are described by Fermi–Dirac statistics or Bose–Einstein statistics, depending on whether the particles are fermions or bosons, respectively.

In mathematics

In more general mathematical settings, the Boltzmann distribution is also known as the Gibbs measure. In statistics and machine learning, it is called a log-linear model. In deep learning, the Boltzmann distribution is used in the sampling distribution of stochastic neural networks such as the Boltzmann machine, the restricted Boltzmann machine, energy-based models and the deep Boltzmann machine; the Boltzmann machine is considered to be one of the unsupervised learning models.
In the design of Boltzmann machines in deep learning, as the number of nodes increases, the difficulty of implementation in real-time applications becomes critical, so a different type of architecture, the restricted Boltzmann machine, was introduced.

In economics

The Boltzmann distribution can be introduced to allocate permits in emissions trading. The new allocation method using the Boltzmann distribution can describe the most probable, natural, and unbiased distribution of emissions permits among multiple countries.

The Boltzmann distribution has the same form as the multinomial logit model. As a discrete choice model, this is very well known in economics since Daniel McFadden made the connection to random utility maximization.

See also

Bose–Einstein statistics
Fermi–Dirac statistics
Negative temperature
Softmax function

References

Statistical mechanics
Distribution
Boltzmann distribution
[ "Physics" ]
2,146
[ "Statistical mechanics" ]
4,115
https://en.wikipedia.org/wiki/Boiling%20point
The boiling point of a substance is the temperature at which the vapor pressure of a liquid equals the pressure surrounding the liquid and the liquid changes into a vapor. The boiling point of a liquid varies depending upon the surrounding environmental pressure. A liquid in a partial vacuum, i.e., under a lower pressure, has a lower boiling point than when that liquid is at atmospheric pressure. Because of this, water boils at 100 °C (or, with scientific precision, 99.97 °C) under standard pressure at sea level, but at 93.4 °C at 1,905 metres (6,250 ft) of altitude. For a given pressure, different liquids will boil at different temperatures.

The normal boiling point (also called the atmospheric boiling point or the atmospheric pressure boiling point) of a liquid is the special case in which the vapor pressure of the liquid equals the defined atmospheric pressure at sea level, one atmosphere. At that temperature, the vapor pressure of the liquid becomes sufficient to overcome atmospheric pressure and allow bubbles of vapor to form inside the bulk of the liquid. The standard boiling point has been defined by IUPAC since 1982 as the temperature at which boiling occurs under a pressure of one bar.

The heat of vaporization is the energy required to transform a given quantity (a mol, kg, pound, etc.) of a substance from a liquid into a gas at a given pressure (often atmospheric pressure).

Liquids may change to a vapor at temperatures below their boiling points through the process of evaporation. Evaporation is a surface phenomenon in which molecules located near the liquid's edge, not contained by enough liquid pressure on that side, escape into the surroundings as vapor. On the other hand, boiling is a process in which molecules anywhere in the liquid escape, resulting in the formation of vapor bubbles within the liquid.

Saturation temperature and pressure

A saturated liquid contains as much thermal energy as it can without boiling (and conversely a saturated vapor contains as little thermal energy as it can without condensing). Saturation temperature means boiling point. The saturation temperature is the temperature for a corresponding saturation pressure at which a liquid boils into its vapor phase. The liquid can be said to be saturated with thermal energy: any addition of thermal energy results in a phase transition.

If the pressure in a system remains constant (isobaric), a vapor at saturation temperature will begin to condense into its liquid phase as thermal energy (heat) is removed. Similarly, a liquid at saturation temperature and pressure will boil into its vapor phase as additional thermal energy is applied.

The boiling point corresponds to the temperature at which the vapor pressure of the liquid equals the surrounding environmental pressure. Thus, the boiling point is dependent on the pressure. Boiling points may be published with respect to the NIST, USA standard pressure of 101.325 kPa (1 atm), or the IUPAC standard pressure of 100.000 kPa (1 bar). At higher elevations, where the atmospheric pressure is much lower, the boiling point is also lower. The boiling point increases with increased pressure up to the critical point, where the gas and liquid properties become identical; the boiling point cannot be increased beyond the critical point. Likewise, the boiling point decreases with decreasing pressure until the triple point is reached; the boiling point cannot be reduced below the triple point.

If the heat of vaporization and the vapor pressure of a liquid at a certain temperature are known, the boiling point can be calculated by using the Clausius–Clapeyron equation:

$$T_B = \left( \frac{1}{T_0} - \frac{R \ln \frac{P}{P_0}}{\Delta H_{\mathrm{vap}}} \right)^{-1}$$

where: $T_B$ is the boiling point at the pressure of interest; $R$ is the ideal gas constant; $P$ is the vapor pressure of the liquid at the pressure of interest; $P_0$ is some pressure where the corresponding boiling temperature $T_0$ is known (usually data are available at 1 atm or 100 kPa (1 bar)); $\Delta H_{\mathrm{vap}}$ is the heat of vaporization of the liquid; $T_0$ is the boiling temperature at $P_0$; and $\ln$ is the natural logarithm.
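As a numerical illustration of the rearranged Clausius–Clapeyron relation above (the pressure and heat-of-vaporization values are assumed for the example, not taken from the text), one can estimate water's boiling point at strongly reduced pressure, such as near the summit of Mount Everest:

```python
import math

R = 8.314            # ideal gas constant, J/(mol K)

def boiling_point(P, P0, T0, dH_vap):
    """Clausius-Clapeyron estimate of the boiling point T_B at pressure P,
    given a known boiling point T0 at pressure P0 and a heat of vaporization
    dH_vap assumed constant over the temperature range."""
    return 1.0 / (1.0 / T0 - R * math.log(P / P0) / dH_vap)

# Water: T0 = 373.15 K at P0 = 101.325 kPa; dH_vap ~ 40660 J/mol (assumed).
T = boiling_point(P=34.0, P0=101.325, T0=373.15, dH_vap=40660.0)
print(T - 273.15)    # roughly 71 C at ~34 kPa, the pressure typical of
                     # the summit of Mount Everest
```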
Saturation pressure is the pressure for a corresponding saturation temperature at which a liquid boils into its vapor phase. Saturation pressure and saturation temperature have a direct relationship: as saturation pressure is increased, so is saturation temperature. If the temperature in a system remains constant (an isothermal system), vapor at saturation pressure and temperature will begin to condense into its liquid phase as the system pressure is increased. Similarly, a liquid at saturation pressure and temperature will tend to flash into its vapor phase as the system pressure is decreased.

There are two conventions regarding the standard boiling point of water. The normal boiling point is commonly given as 100 °C (actually 99.97 °C, following the thermodynamic definition of the Celsius scale based on the kelvin) at a pressure of 1 atm (101.325 kPa). The IUPAC-recommended standard boiling point of water at a standard pressure of 100 kPa (1 bar) is 99.61 °C. For comparison, on top of Mount Everest, at about 8,848 m (29,029 ft) elevation, the pressure is about 34 kPa and the boiling point of water is about 71 °C. The Celsius temperature scale was defined until 1954 by two points: 0 °C being defined by the water freezing point and 100 °C being defined by the water boiling point at standard atmospheric pressure.

Relation between the normal boiling point and the vapor pressure of liquids

The higher the vapor pressure of a liquid at a given temperature, the lower the normal boiling point (i.e., the boiling point at atmospheric pressure) of the liquid. A chart of vapor pressure versus temperature for a variety of liquids confirms this: the liquids with the highest vapor pressures have the lowest normal boiling points. For example, at any given temperature, methyl chloride has the highest vapor pressure of the liquids typically charted this way. It also has the lowest normal boiling point (−24.2 °C), which is where its vapor pressure curve intersects the horizontal pressure line of one atmosphere (atm) of absolute vapor pressure. The critical point of a liquid is the highest temperature (and pressure) at which it will actually boil. (See also: Vapour pressure of water.)

Boiling point of chemical elements

The element with the lowest boiling point is helium. The boiling points of both rhenium and tungsten exceed 5000 K at standard pressure; because it is difficult to measure extreme temperatures precisely without bias, both have been cited in the literature as having the higher boiling point.

Boiling point as a reference property of a pure compound

As can be seen from a plot of the logarithm of the vapor pressure versus the temperature for any given pure chemical compound, its normal boiling point can serve as an indication of that compound's overall volatility. A given pure compound has only one normal boiling point, if any, and a compound's normal boiling point and melting point can serve as characteristic physical properties for that compound, listed in reference books.
The higher a compound's normal boiling point, the less volatile that compound is overall; conversely, the lower a compound's normal boiling point, the more volatile that compound is overall. Some compounds decompose at higher temperatures before reaching their normal boiling point, or sometimes even their melting point. For a stable compound, the boiling point ranges from its triple point to its critical point, depending on the external pressure. Beyond its triple point, a compound's normal boiling point, if any, is higher than its melting point. Beyond the critical point, a compound's liquid and vapor phases merge into one phase, which may be called a superheated gas. At any given temperature, if a compound's normal boiling point is lower, then that compound will generally exist as a gas at atmospheric external pressure. If the compound's normal boiling point is higher, then that compound can exist as a liquid or solid at that given temperature at atmospheric external pressure, and will exist in equilibrium with its vapor (if volatile) if its vapors are contained. If a compound's vapors are not contained, then some volatile compounds can eventually evaporate away in spite of their higher boiling points.

In general, compounds with ionic bonds have high normal boiling points, if they do not decompose before reaching such high temperatures. Many metals have high boiling points, but not all. Very generally, with other factors being equal, in compounds with covalently bonded molecules, as the size of the molecule (or molecular mass) increases, the normal boiling point increases. When the molecular size becomes that of a macromolecule, polymer, or otherwise very large, the compound often decomposes at high temperature before the boiling point is reached. Another factor that affects the normal boiling point of a compound is the polarity of its molecules: as the polarity of a compound's molecules increases, its normal boiling point increases, other factors being equal. Closely related is the ability of a molecule to form hydrogen bonds (in the liquid state), which makes it harder for molecules to leave the liquid state and thus increases the normal boiling point of the compound. Simple carboxylic acids dimerize by forming hydrogen bonds between molecules. A minor factor affecting boiling points is the shape of a molecule: making the shape of a molecule more compact tends to lower the normal boiling point slightly compared to an equivalent molecule with more surface area.

Most volatile compounds (anywhere near ambient temperatures) go through an intermediate liquid phase while warming up from a solid phase to eventually transform to a vapor phase. In contrast to boiling, sublimation is a physical transformation in which a solid turns directly into vapor, which happens in a few select cases, such as with carbon dioxide at atmospheric pressure. For such compounds, the sublimation point is the temperature at which a solid turning directly into vapor has a vapor pressure equal to the external pressure.

Impurities and mixtures

In the preceding sections, boiling points of pure compounds were covered. Vapor pressures and boiling points of substances can be affected by the presence of dissolved impurities (solutes) or other miscible compounds, the degree of effect depending on the concentration of the impurities or other compounds.
The presence of non-volatile impurities, such as salts or compounds of a volatility far lower than the main component compound, decreases the main component's mole fraction and the solution's volatility, and thus raises the normal boiling point in proportion to the concentration of the solutes. This effect is called boiling point elevation. As a common example, salt water boils at a higher temperature than pure water.

In other mixtures of miscible compounds (components), there may be two or more components of varying volatility, each having its own pure-component boiling point at any given pressure. The presence of other volatile components in a mixture affects the vapor pressures, and thus the boiling points and dew points, of all the components in the mixture. The dew point is the temperature at which a vapor condenses into a liquid. Furthermore, at any given temperature, the composition of the vapor is, in most such cases, different from the composition of the liquid. In order to illustrate these effects between the volatile components in a mixture, a boiling point diagram is commonly used. Distillation is a process of boiling and [usually] condensation which takes advantage of these differences in composition between liquid and vapor phases.

Boiling point of water with elevation

The boiling point of water falls as elevation rises, across the range of human habitation [from the Dead Sea, at about 430 m below sea level, to La Rinconada, Peru, at about 5,100 m] and across the additional range of uninhabited surface elevation [up to Mount Everest, at 8,849 m].

See also

Boiling points of the elements (data page)
Boiling-point elevation
Critical point (thermodynamics)
Ebulliometer, a device to accurately measure the boiling point of liquids
Hagedorn temperature
Joback method (estimation of normal boiling points from molecular structure)
List of gases including boiling points
Melting point
Subcooling
Superheating
Trouton's constant, relating latent heat to boiling point
Triple point

References

External links

Gases
Meteorological quantities
Temperature
Threshold temperatures
Boiling point
[ "Physics", "Chemistry", "Mathematics" ]
2,446
[ "Scalar physical quantities", "Matter", "Temperature", "Physical phenomena", "Physical quantities", "Phase transitions", "Thermodynamic properties", "SI base quantities", "Intensive quantities", "Phases of matter", "Threshold temperatures", "Meteorological quantities", "Quantity", "Thermod...
4,403
https://en.wikipedia.org/wiki/BCS%20theory
In physics, the Bardeen–Cooper–Schrieffer (BCS) theory (named after John Bardeen, Leon Cooper, and John Robert Schrieffer) is the first microscopic theory of superconductivity since Heike Kamerlingh Onnes's 1911 discovery. The theory describes superconductivity as a microscopic effect caused by a condensation of Cooper pairs. The theory is also used in nuclear physics to describe the pairing interaction between nucleons in an atomic nucleus. It was proposed by Bardeen, Cooper, and Schrieffer in 1957; they received the Nobel Prize in Physics for this theory in 1972.

History

Rapid progress in the understanding of superconductivity gained momentum in the mid-1950s. It began with the 1948 paper "On the Problem of the Molecular Theory of Superconductivity", where Fritz London proposed that the phenomenological London equations may be consequences of the coherence of a quantum state. In 1953, Brian Pippard, motivated by penetration experiments, proposed that this would modify the London equations via a new scale parameter called the coherence length. John Bardeen then argued in the 1955 paper "Theory of the Meissner Effect in Superconductors" that such a modification naturally occurs in a theory with an energy gap. The key ingredient was Leon Cooper's calculation of the bound states of electrons subject to an attractive force in his 1956 paper "Bound Electron Pairs in a Degenerate Fermi Gas". In 1957 Bardeen and Cooper assembled these ingredients and constructed such a theory, the BCS theory, with Robert Schrieffer. The theory was first published in April 1957 in the letter "Microscopic theory of superconductivity". The demonstration that the phase transition is second order, that it reproduces the Meissner effect, and the calculations of specific heats and penetration depths appeared in the December 1957 article "Theory of superconductivity". They received the Nobel Prize in Physics in 1972 for this theory.

In 1986, high-temperature superconductivity was discovered in La-Ba-Cu-O, at temperatures up to 30 K. Subsequent experiments determined more materials with transition temperatures up to about 130 K, considerably above the previous limit of about 30 K. It is very well known experimentally that the transition temperature strongly depends on pressure. In general, it is believed that BCS theory alone cannot explain this phenomenon and that other effects are in play. These effects are still not fully understood; it is possible that they even control superconductivity at low temperatures for some materials.

Overview

At sufficiently low temperatures, electrons near the Fermi surface become unstable against the formation of Cooper pairs. Cooper showed that such binding will occur in the presence of an attractive potential, no matter how weak. In conventional superconductors, the attraction is generally attributed to an electron–lattice interaction. The BCS theory, however, requires only that the potential be attractive, regardless of its origin. In the BCS framework, superconductivity is a macroscopic effect which results from the condensation of Cooper pairs. These have some bosonic properties, and bosons, at sufficiently low temperature, can form a large Bose–Einstein condensate. Superconductivity was simultaneously explained by Nikolay Bogolyubov, by means of the Bogoliubov transformations.
In many superconductors, the attractive interaction between electrons (necessary for pairing) is brought about indirectly by the interaction between the electrons and the vibrating crystal lattice (the phonons). Roughly speaking, the picture is the following: an electron moving through a conductor will attract nearby positive charges in the lattice. This deformation of the lattice causes another electron, with opposite spin, to move into the region of higher positive charge density. The two electrons then become correlated. Because there are many such electron pairs in a superconductor, these pairs overlap very strongly and form a highly collective condensate. In this "condensed" state, the breaking of one pair changes the energy of the entire condensate, not just of a single electron or a single pair. Thus, the energy required to break any single pair is related to the energy required to break all of the pairs (or more than just two electrons). Because the pairing increases this energy barrier, kicks from oscillating atoms in the conductor (which are small at sufficiently low temperatures) are not enough to affect the condensate as a whole, or any individual "member pair" within the condensate. Thus the electrons stay paired together and resist all kicks, and the electron flow as a whole (the current through the superconductor) will not experience resistance. The collective behavior of the condensate is thus a crucial ingredient of superconductivity.

Details

BCS theory starts from the assumption that there is some attraction between electrons which can overcome the Coulomb repulsion. In most materials (in low-temperature superconductors), this attraction is brought about indirectly by the coupling of electrons to the crystal lattice (as explained above). However, the results of BCS theory do not depend on the origin of the attractive interaction. For instance, Cooper pairs have been observed in ultracold gases of fermions where a homogeneous magnetic field has been tuned to their Feshbach resonance. The original results of BCS (discussed below) described an s-wave superconducting state, which is the rule among low-temperature superconductors but is not realized in many unconventional superconductors, such as the d-wave high-temperature superconductors. Extensions of BCS theory exist to describe these other cases, although they are insufficient to completely describe the observed features of high-temperature superconductivity.

BCS is able to give an approximation of the quantum-mechanical many-body state of the system of (attractively interacting) electrons inside the metal. This state is now known as the BCS state. In the normal state of a metal, electrons move independently, whereas in the BCS state they are bound into Cooper pairs by the attractive interaction. The BCS formalism is based on the reduced potential for the electrons' attraction. Within this potential, a variational ansatz for the wave function is proposed. This ansatz was later shown to be exact in the dense limit of pairs. Note that the continuous crossover between the dilute and dense regimes of attracting pairs of fermions is still an open problem, which now attracts a lot of attention within the field of ultracold gases.
Underlying evidence

The HyperPhysics website pages at Georgia State University summarize some key background to BCS theory as follows:

Evidence of a band gap at the Fermi level (described as "a key piece in the puzzle"). The existence of a critical temperature and critical magnetic field implied a band gap and suggested a phase transition, but single electrons are forbidden from condensing to the same energy level by the Pauli exclusion principle. The site comments that "a drastic change in conductivity demanded a drastic change in electron behavior". Conceivably, pairs of electrons might perhaps act like bosons instead, which are bound by different condensate rules and do not have the same limitation.

Isotope effect on the critical temperature, suggesting lattice interactions. The Debye frequency of phonons in a lattice is proportional to the inverse of the square root of the mass of the lattice ions. It was shown that the superconducting transition temperature of mercury indeed showed the same dependence, by substituting the most abundant natural mercury isotope, 202Hg, with a different isotope, 198Hg.

An exponential rise in heat capacity near the critical temperature for some superconductors. An exponential increase in heat capacity near the critical temperature also suggests an energy band gap for the superconducting material. As superconducting vanadium is warmed toward its critical temperature, its heat capacity increases greatly over a very few degrees; this suggests an energy gap being bridged by thermal energy.

The lessening of the measured energy gap towards the critical temperature. This suggests a situation in which some kind of binding energy exists but is gradually weakened as the temperature increases toward the critical temperature. A binding energy suggests two or more particles or other entities that are bound together in the superconducting state. This helped to support the idea of bound particles, specifically electron pairs, and together with the above helped to paint a general picture of paired electrons and their lattice interactions.
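A small numerical sketch of the isotope effect just described (the mercury transition temperature of roughly 4.2 K is an illustrative value, not taken from the text): since the Debye frequency scales as the inverse square root of the ionic mass and, in BCS, the transition temperature is proportional to the Debye energy, substituting 202Hg by 198Hg should raise Tc by the factor sqrt(202/198).

```python
import math

# Isotope effect: T_c proportional to M**(-1/2), because the Debye frequency
# of the lattice scales as the inverse square root of the ionic mass and BCS
# predicts T_c proportional to the Debye energy.
def tc_for_isotope(tc_ref, m_ref, m_new, alpha=0.5):
    """Predict T_c for a different isotope mass; alpha = 1/2 in simple BCS."""
    return tc_ref * (m_ref / m_new) ** alpha

tc_202 = 4.2                                   # K, illustrative value
print(tc_for_isotope(tc_202, 202.0, 198.0))    # ~4.24 K: lighter isotope,
                                               # higher transition temperature
```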
Implications

BCS derived several important theoretical predictions that are independent of the details of the interaction, since the quantitative predictions mentioned below hold for any sufficiently weak attraction between the electrons, and this last condition is fulfilled for many low-temperature superconductors (the so-called weak-coupling case). These have been confirmed in numerous experiments:

The electrons are bound into Cooper pairs, and these pairs are correlated due to the Pauli exclusion principle for the electrons from which they are constructed. Therefore, in order to break a pair, one has to change the energies of all other pairs. This means there is an energy gap for single-particle excitation, unlike in the normal metal (where the state of an electron can be changed by adding an arbitrarily small amount of energy). This energy gap is highest at low temperatures but vanishes at the transition temperature, when superconductivity ceases to exist. The BCS theory gives an expression that shows how the gap grows with the strength of the attractive interaction and the (normal-phase) single-particle density of states at the Fermi level. Furthermore, it describes how the density of states is changed on entering the superconducting state, where there are no electronic states any more at the Fermi level. The energy gap is most directly observed in tunneling experiments and in reflection of microwaves from superconductors.

BCS theory predicts the dependence of the value of the energy gap Δ at temperature T on the critical temperature Tc. The ratio between the value of the energy gap at zero temperature and the value of the superconducting transition temperature (expressed in energy units) takes the universal value

$$\frac{\Delta(T = 0)}{k_B T_c} = \frac{\pi}{e^\gamma} \approx 1.764,$$

independent of material, where $\gamma$ is the Euler–Mascheroni constant. Near the critical temperature the relation asymptotes to

$$\Delta(T \to T_c) \approx 3.06 \, k_B T_c \sqrt{1 - T / T_c},$$

which is of the form suggested the previous year by M. J. Buckingham, based on the fact that the superconducting phase transition is second order, that the superconducting phase has a mass gap, and on Blevins, Gordy and Fairbank's experimental results of the previous year on the absorption of millimeter waves by superconducting tin.

Due to the energy gap, the specific heat of the superconductor is suppressed strongly (exponentially) at low temperatures, there being no thermal excitations left. However, before reaching the transition temperature, the specific heat of the superconductor becomes even higher than that of the normal conductor (measured immediately above the transition), and the ratio of these two values is found to be universally given by 2.5.

BCS theory correctly predicts the Meissner effect, i.e. the expulsion of a magnetic field from the superconductor and the variation of the penetration depth (the extent of the screening currents flowing below the metal's surface) with temperature. It also describes the variation of the critical magnetic field (above which the superconductor can no longer expel the field but becomes normal conducting) with temperature. BCS theory relates the value of the critical field at zero temperature to the value of the transition temperature and the density of states at the Fermi level.

In its simplest form, BCS gives the superconducting transition temperature Tc in terms of the electron–phonon coupling potential V and the Debye cutoff energy ED:

$$k_B T_c = 1.134 \, E_D \, e^{-1 / N(0) V},$$

where N(0) is the electronic density of states at the Fermi level. For more details, see Cooper pairs.

The BCS theory reproduces the isotope effect, which is the experimental observation that for a given superconducting material, the critical temperature is inversely proportional to the square root of the mass of the isotope used in the material. The isotope effect was reported by two groups on 24 March 1950, who discovered it independently working with different mercury isotopes, although a few days before publication they learned of each other's results at the ONR conference in Atlanta. The two groups are Emanuel Maxwell, and C. A. Reynolds, B. Serin, W. H. Wright, and L. B. Nesbitt. The choice of isotope ordinarily has little effect on the electrical properties of a material, but does affect the frequency of lattice vibrations. This effect suggests that superconductivity is related to vibrations of the lattice. This is incorporated into BCS theory, where lattice vibrations yield the binding energy of electrons in a Cooper pair.

The Little–Parks experiment provided one of the first indications of the importance of the Cooper-pairing principle.
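The weak-coupling predictions above can be checked numerically. The sketch below uses illustrative parameters (units with kB = 1, Debye energy ED = 1, coupling N(0)V = 0.2; none of these values are from the text) together with the zero-temperature closed form Δ(0) = ED / sinh(1/N(0)V), which follows from the BCS gap equation but is not written out in the text, and recovers the universal ratio Δ(0)/kBTc ≈ 1.764 from the Tc formula quoted above.

```python
import math

# Weak-coupling BCS check (units: k_B = 1, Debye energy E_D = 1).
E_D = 1.0
lam = 0.2                  # N(0) V, assumed dimensionless coupling

# Zero-temperature gap from the BCS gap equation: Delta(0) = E_D / sinh(1/lam).
delta0 = E_D / math.sinh(1.0 / lam)

# Transition temperature from the formula quoted in the text:
# k_B T_c = 1.134 E_D exp(-1 / N(0)V).
tc = 1.134 * E_D * math.exp(-1.0 / lam)

print(delta0, tc, delta0 / tc)   # ratio ~1.764, independent of E_D and lam
```

For weaker couplings the ratio approaches π/e^γ ≈ 1.764 even more closely, since sinh(1/λ) tends to e^(1/λ)/2, which is the material-independence claimed above.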
See also

Magnesium diboride, considered a BCS superconductor
Quasiparticle
Little–Parks effect, one of the first indications of the importance of the Cooper pairing principle

References

Primary sources

Further reading

John Robert Schrieffer, Theory of Superconductivity (1964)
Michael Tinkham, Introduction to Superconductivity
Pierre-Gilles de Gennes, Superconductivity of Metals and Alloys
Schmidt, Vadim Vasil'evich. The Physics of Superconductors: Introduction to Fundamentals and Applications. Springer Science & Business Media, 2013.

External links

Hyperphysics page on BCS
Dance analogy of BCS theory as explained by Bob Schrieffer (audio recording)
Mean-Field Theory: Hartree–Fock and BCS in E. Pavarini, E. Koch, J. van den Brink, and G. Sawatzky: Quantum Materials: Experiments and Theory, Jülich 2016

Superconductivity
BCS theory
[ "Physics", "Materials_science", "Engineering" ]
2,866
[ "Physical quantities", "Superconductivity", "Materials science", "Condensed matter physics", "Electrical resistance and conductance" ]