Dataset schema (column: type, observed range):

id: int64, 39 to 79M
url: string, lengths 32 to 168
text: string, lengths 7 to 145k
source: string, lengths 2 to 105
categories: list, lengths 1 to 6
token_count: int64, 3 to 32.2k
subcategories: list, lengths 0 to 27
14,359
https://en.wikipedia.org/wiki/Huygens%E2%80%93Fresnel%20principle
The Huygens–Fresnel principle (named after Dutch physicist Christiaan Huygens and French physicist Augustin-Jean Fresnel) states that every point on a wavefront is itself the source of spherical wavelets, and the secondary wavelets emanating from different points mutually interfere. The sum of these spherical wavelets forms a new wavefront. As such, the Huygens–Fresnel principle is a method of analysis applied to problems of luminous wave propagation both in the far-field limit and in near-field diffraction, as well as reflection.

History

In 1678, Huygens proposed that every point reached by a luminous disturbance becomes a source of a spherical wave. The sum of these secondary waves determines the form of the wave at any subsequent time; the overall procedure is referred to as Huygens' construction. He assumed that the secondary waves travelled only in the "forward" direction; the theory does not explain why this should be the case. He was able to provide a qualitative explanation of linear and spherical wave propagation, and to derive the laws of reflection and refraction using this principle, but could not explain the deviations from rectilinear propagation that occur when light encounters edges, apertures and screens, commonly known as diffraction effects. In 1818, Fresnel showed that Huygens's principle, together with his own principle of interference, could explain both the rectilinear propagation of light and also diffraction effects. To obtain agreement with experimental results, he had to include additional arbitrary assumptions about the phase and amplitude of the secondary waves, and also an obliquity factor. These assumptions have no obvious physical foundation, but led to predictions that agreed with many experimental observations, including the Poisson spot. Poisson was a member of the French Academy, which reviewed Fresnel's work. He used Fresnel's theory to predict that a bright spot ought to appear in the center of the shadow of a small disc, and deduced from this that the theory was incorrect. However, Arago, another member of the committee, performed the experiment and showed that the prediction was correct. This success was important evidence in favor of the wave theory of light over the then-predominant corpuscular theory. In 1882, Gustav Kirchhoff analyzed Fresnel's theory in a rigorous mathematical formulation, as an approximate form of an integral theorem. Very few rigorous solutions to diffraction problems are known, however, and most problems in optics are adequately treated using the Huygens–Fresnel principle. In 1939, Edward Copson extended Huygens' original principle to consider the polarization of light, which requires a vector potential, in contrast to the scalar potential of a simple ocean wave or sound wave. In antenna theory and engineering, the reformulation of the Huygens–Fresnel principle for radiating current sources is known as the surface equivalence principle. Issues in Huygens–Fresnel theory continue to be of interest. In 1991, David A. B. Miller suggested that treating the source as a dipole (not the monopole assumed by Huygens) cancels waves propagating in the reverse direction, making Huygens' construction quantitatively correct. In 2021, Forrest L. Anderson showed that treating the wavelets as Dirac delta functions, then summing them and differentiating the summation, is sufficient to cancel reverse-propagating waves.
Examples

Refraction

The apparent change in direction of a light ray as it enters a sheet of glass at an angle can be understood by the Huygens construction. Each point on the surface of the glass gives a secondary wavelet. These wavelets propagate at a slower velocity in the glass, making less forward progress than their counterparts in air. When the wavelets are summed, the resulting wavefront propagates at an angle to the direction of the wavefront in air. In an inhomogeneous medium with a variable index of refraction, different parts of the wavefront propagate at different speeds. Consequently, the wavefront bends around in the direction of higher index.

Diffraction

Huygens' principle as a microscopic model

The Huygens–Fresnel principle provides a reasonable basis for understanding and predicting the classical wave propagation of light. However, there are limitations to the principle, namely the same approximations made in deriving Kirchhoff's diffraction formula and the near-field approximations due to Fresnel. These amount to the requirement that the wavelength of light be much smaller than the dimensions of any optical components encountered. Kirchhoff's diffraction formula provides a rigorous mathematical foundation for diffraction, based on the wave equation. The arbitrary assumptions made by Fresnel to arrive at the Huygens–Fresnel equation emerge automatically from the mathematics in this derivation. A simple example of the operation of the principle can be seen when an open doorway connects two rooms and a sound is produced in a remote corner of one of them. A person in the other room will hear the sound as if it originated at the doorway. As far as the second room is concerned, the vibrating air in the doorway is the source of the sound.

Mathematical expression of the principle

Consider the case of a point source located at a point P0, vibrating at a frequency f. The disturbance may be described by a complex variable U0 known as the complex amplitude. It produces a spherical wave with wavelength λ and wavenumber k = 2π/λ. Within a constant of proportionality, the complex amplitude of the primary wave at the point Q located at a distance r0 from P0 is

U(r0) = U0 e^(ikr0) / r0.

Note that the magnitude decreases in inverse proportion to the distance traveled, and the phase changes as k times the distance traveled. Using Huygens's theory and the principle of superposition of waves, the complex amplitude at a further point P is found by summing the contribution from each point on the sphere of radius r0. In order to get agreement with experimental results, Fresnel found that the individual contributions from the secondary waves on the sphere had to be multiplied by a constant, −i/λ, and by an additional inclination factor, K(χ). The first assumption means that the secondary waves oscillate a quarter of a cycle out of phase with respect to the primary wave and that the magnitude of the secondary waves is in a ratio of 1:λ to the primary wave. He also assumed that K(χ) had a maximum value when χ = 0, and was equal to zero when χ = π/2, where χ is the angle between the normal of the primary wavefront and the normal of the secondary wavefront. The complex amplitude at P, due to the contribution of secondary waves, is then given by

U(P) = −(i/λ) U(r0) ∫∫_S (e^(iks) / s) K(χ) dS,

where S describes the surface of the sphere, and s is the distance between Q and P. Fresnel used a zone construction method to find approximate values of K for the different zones, which enabled him to make predictions that were in agreement with experimental results.
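The wavelet sum above lends itself to direct numerical evaluation. The sketch below sums the spherical wavelets e^(iks)/s emitted across a slit to reproduce a single-slit diffraction pattern; the wavelength, slit width, and screen distance are illustrative choices, and the obliquity factor K(χ) is omitted for brevity.

```python
import numpy as np

# Minimal numerical sketch of the Huygens-Fresnel sum: each point of a slit
# aperture emits a spherical wavelet exp(i*k*s)/s, and the intensity on a
# screen is the squared magnitude of the summed contributions.
wavelength = 500e-9                 # 500 nm light (illustrative)
k = 2 * np.pi / wavelength          # wavenumber k = 2*pi/lambda
slit_width = 50e-6                  # 50 micrometre slit (illustrative)
screen_distance = 0.5               # metres from slit to screen

aperture_y = np.linspace(-slit_width / 2, slit_width / 2, 2000)
screen_y = np.linspace(-0.02, 0.02, 500)

# Distance s from each aperture point Q to each screen point P.
s = np.sqrt(screen_distance**2 + (screen_y[:, None] - aperture_y[None, :])**2)
amplitude = np.sum(np.exp(1j * k * s) / s, axis=1)   # sum of secondary wavelets
intensity = np.abs(amplitude)**2
intensity /= intensity.max()

# Central maximum flanked by the familiar weak side lobes.
print(intensity[::50].round(3))
```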
The integral theorem of Kirchhoff includes the basic idea of the Huygens–Fresnel principle. Kirchhoff showed that in many cases, the theorem can be approximated to a simpler form that is equivalent to Fresnel's formulation. For an aperture illumination consisting of a single expanding spherical wave, if the radius of curvature of the wave is sufficiently large, Kirchhoff gave the following expression for K(χ):

K(χ) = −(i/2λ)(1 + cos χ)

K has a maximum value at χ = 0, as in the Huygens–Fresnel principle; however, K is not equal to zero at χ = π/2, but at χ = π. The above derivation of K(χ) assumed that the diffracting aperture is illuminated by a single spherical wave with a sufficiently large radius of curvature. However, the principle holds for more general illuminations. An arbitrary illumination can be decomposed into a collection of point sources, and the linearity of the wave equation can be invoked to apply the principle to each point source individually. K(χ) can then be expressed in a more general form; in this case, K satisfies the conditions stated above (maximum value at χ = 0 and zero at χ = π/2).
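A quick numerical check of the limiting values claimed for the Kirchhoff obliquity factor quoted above (a sketch; the wavelength is an arbitrary illustrative value):

```python
import numpy as np

# Sketch: evaluate K(chi) = -i/(2*lambda) * (1 + cos(chi)) at the angles
# discussed in the text.
wavelength = 500e-9  # illustrative value

def K(chi):
    return -1j / (2 * wavelength) * (1 + np.cos(chi))

for chi in (0.0, np.pi / 2, np.pi):
    print(f"chi = {chi:.3f} rad -> |K| = {abs(K(chi)):.3e}")
# |K| is maximal at chi = 0 (where K = -i/lambda, Fresnel's constant),
# half-maximal rather than zero at chi = pi/2, and zero at chi = pi.
```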
Generalized Huygens' principle

Many books and references, e.g. (Greiner, 2002) and (Enders, 2009), refer to the Generalized Huygens' Principle using the definition in (Feynman, 1948). As Feynman defines it, the generalized principle reflects the linearity of quantum mechanics and the fact that the quantum mechanics equations are first order in time. Only in this case does the superposition principle fully apply, i.e. the wave function at a point P can be expanded as a superposition of waves on a border surface enclosing P. Wave functions can be interpreted in the usual quantum mechanical sense as probability densities where the formalism of Green's functions and propagators applies. What is noteworthy is that this generalized principle is applicable for "matter waves" and not for light waves any more. The phase factor is now clarified as given by the action, and there is no longer any confusion as to why the phases of the wavelets differ from that of the original wave and are modified by the additional Fresnel parameters. As per Greiner, the generalized principle can be expressed for the wave function ψ in the form

ψ(x′, t′) = ∫ G(x′, t′; x, t) ψ(x, t) d³x,

where G is the usual Green's function that propagates the wave function ψ in time. This description resembles and generalizes Fresnel's initial formula of the classical model.

Feynman's path integral and the modern photon wave function

Huygens' theory served as a fundamental explanation of the wave nature of light interference and was further developed by Fresnel and Young, but it did not fully resolve all observations, such as the low-intensity double-slit experiment first performed by G. I. Taylor in 1909. It was not until the quantum theory discussions of the early and mid-1900s, particularly those at the 1927 Brussels Solvay Conference, that a deeper explanation emerged: there Louis de Broglie proposed his de Broglie hypothesis that the photon is guided by a wave function. The wave function presents a much different explanation of the observed light and dark bands in a double slit experiment. In this conception, the photon follows a path which is a probabilistic choice of one of many possible paths in the electromagnetic field. These probable paths form the pattern: in dark areas, no photons are landing, and in bright areas, many photons are landing. The set of possible photon paths is consistent with Richard Feynman's path integral theory: the paths are determined by the surroundings (the photon's originating point, the slit, and the screen) and found by tracking and summing phases. The wave function is a solution to this geometry. The wave function approach was further supported by additional double-slit experiments in Italy and Japan in the 1970s and 1980s with electrons.

Quantum field theory

Huygens' principle can be seen as a consequence of the homogeneity of space: space is uniform in all locations. Any disturbance created in a sufficiently small region of homogeneous space (or in a homogeneous medium) propagates from that region in all geodesic directions. The waves produced by this disturbance, in turn, create disturbances in other regions, and so on. The superposition of all the waves results in the observed pattern of wave propagation. Homogeneity of space is fundamental to quantum field theory (QFT), where the wave function of any object propagates along all available unobstructed paths. When integrated along all possible paths, with a phase factor proportional to the action, the interference of the wave functions correctly predicts observable phenomena. Every point on the wavefront acts as the source of secondary wavelets that spread out in the light cone with the same speed as the wave. The new wavefront is found by constructing the surface tangent to the secondary wavelets.

In other spatial dimensions

In 1900, Jacques Hadamard observed that Huygens' principle fails when the number of spatial dimensions is even. From this, he developed a set of conjectures that remain an active topic of research. In particular, it has been discovered that Huygens' principle holds on a large class of homogeneous spaces derived from Coxeter groups (so, for example, the Weyl groups of simple Lie algebras). The traditional statement of Huygens' principle for the D'Alembertian gives rise to the KdV hierarchy; analogously, the Dirac operator gives rise to the AKNS hierarchy.

See also

Fraunhofer diffraction
Kirchhoff's diffraction formula
Green's function
Green's theorem
Green's identities
Near-field diffraction pattern
Double-slit experiment
Knife-edge effect
Fermat's principle
Fourier optics
Surface equivalence principle
Wave field synthesis
Kirchhoff integral theorem

References

Further reading

Stratton, Julius Adams: Electromagnetic Theory, McGraw-Hill, 1941. (Reissued by Wiley – IEEE Press.)
B.B. Baker and E.T. Copson, The Mathematical Theory of Huygens' Principle, Oxford, 1939, 1950; AMS Chelsea, 1987.

Categories: Wave mechanics, Diffraction, Christiaan Huygens
Huygens–Fresnel principle
[ "Physics", "Chemistry", "Materials_science" ]
2,738
[ "Physical phenomena", "Spectrum (physical sciences)", "Classical mechanics", "Waves", "Wave mechanics", "Diffraction", "Crystallography", "Spectroscopy" ]
14,380
https://en.wikipedia.org/wiki/Helium-3
Helium-3 (3He; see also helion) is a light, stable isotope of helium with two protons and one neutron. (In contrast, the most common isotope, helium-4, has two protons and two neutrons.) Helium-3 and protium (ordinary hydrogen) are the only stable nuclides with more protons than neutrons. It was discovered in 1939. Helium-3 occurs as a primordial nuclide, escaping from Earth's crust into its atmosphere and into outer space over millions of years. It is also thought to be a natural nucleogenic and cosmogenic nuclide, one produced when lithium is bombarded by natural neutrons, which can be released by spontaneous fission and by nuclear reactions with cosmic rays. Some of the helium-3 found in the terrestrial atmosphere is a remnant of atmospheric and underwater nuclear weapons testing. Nuclear fusion using helium-3 has long been viewed as a desirable future energy source. The fusion of two of its atoms would be aneutronic and would not release the dangerous radiation of traditional fusion, though it would require much higher temperatures. The process may unavoidably create other reactions that themselves would cause the surrounding material to become radioactive. Helium-3 is thought to be more abundant on the Moon than on Earth, having been deposited in the upper layer of regolith by the solar wind over billions of years, though it is still lower in abundance than in the Solar System's gas giants.

History

The existence of helium-3 was first proposed in 1934 by the Australian nuclear physicist Mark Oliphant while he was working at the University of Cambridge Cavendish Laboratory. Oliphant had performed experiments in which fast deuterons collided with deuteron targets (incidentally, the first demonstration of nuclear fusion). Isolation of helium-3 was first accomplished by Luis Alvarez and Robert Cornog in 1939. Helium-3 was thought to be a radioactive isotope until it was also found in samples of natural helium, which is mostly helium-4, taken both from the terrestrial atmosphere and from natural gas wells.

Physical properties

Due to its low atomic mass of 3.016 u, helium-3 has some physical properties different from those of helium-4, which has a mass of 4.0026 u. On account of the weak, induced dipole–dipole interaction between the helium atoms, their microscopic physical properties are mainly determined by their zero-point energy. The microscopic properties of helium-3 cause it to have a higher zero-point energy than helium-4, which implies that helium-3 can overcome dipole–dipole interactions with less thermal energy than helium-4 can. The quantum mechanical effects on helium-3 and helium-4 are significantly different because with two protons, two neutrons, and two electrons, helium-4 has an overall spin of zero, making it a boson, whereas with one fewer neutron, helium-3 has an overall spin of one half, making it a fermion. Pure helium-3 gas boils at 3.19 K compared with helium-4 at 4.23 K, and its critical point is also lower, at 3.35 K compared with helium-4 at 5.2 K. Helium-3 has less than half the density of helium-4 at its boiling point: 59 g/L compared to 125 g/L for helium-4 at a pressure of one atmosphere. Its latent heat of vaporization is also considerably lower, at 0.026 kJ/mol compared with the 0.0829 kJ/mol of helium-4.

Superfluidity

An important property of helium-3, which distinguishes it from the more common helium-4, is that its nucleus is a fermion, since it contains an odd number of spin-1/2 particles. Helium-4 nuclei are bosons, containing an even number of spin-1/2 particles.
This is a direct result of the addition rules for quantized angular momentum. At low temperatures (about 2.17 K), helium-4 undergoes a phase transition: a fraction of it enters a superfluid phase that can be roughly understood as a type of Bose–Einstein condensate. Such a mechanism is not available for helium-3 atoms, which are fermions. Many speculated that helium-3 could also become a superfluid at much lower temperatures, if the atoms formed into pairs analogous to Cooper pairs in the BCS theory of superconductivity. Each Cooper pair, having integer spin, can be thought of as a boson. During the 1970s, David Lee, Douglas Osheroff and Robert Coleman Richardson discovered two phase transitions along the melting curve, which were soon realized to be the two superfluid phases of helium-3. The transition to a superfluid occurs at 2.491 millikelvins on the melting curve. They were awarded the 1996 Nobel Prize in Physics for their discovery. Alexei Abrikosov, Vitaly Ginzburg, and Tony Leggett won the 2003 Nobel Prize in Physics for their work on refining understanding of the superfluid phase of helium-3. In a zero magnetic field, there are two distinct superfluid phases of 3He, the A-phase and the B-phase. The B-phase is the low-temperature, low-pressure phase, which has an isotropic energy gap. The A-phase is the higher-temperature, higher-pressure phase, which is further stabilized by a magnetic field and has two point nodes in its gap. The presence of two phases is a clear indication that 3He is an unconventional superfluid (superconductor), since the presence of two phases requires an additional symmetry, other than gauge symmetry, to be broken. In fact, it is a p-wave superfluid, with spin one, S=1, and angular momentum one, L=1. The ground state corresponds to total angular momentum zero, J=S+L=0 (vector addition). Excited states are possible with non-zero total angular momentum, J>0, which are excited pair collective modes. These collective modes have been studied with much greater precision than in any other unconventional pairing system, because of the extreme purity of superfluid 3He: all 4He phase-separates entirely, and all other materials solidify and sink to the bottom of the liquid, making the A- and B-phases of 3He the purest condensed matter state possible.

Natural abundance

Terrestrial abundance

3He is a primordial substance in the Earth's mantle, thought to have become entrapped in the Earth during planetary formation. The ratio of 3He to 4He within the Earth's crust and mantle is less than that of estimates of solar disk composition as obtained from meteorite and lunar samples, with terrestrial materials generally containing lower 3He/4He ratios due to production of 4He from radioactive decay. 3He has a cosmological ratio of 300 atoms per million atoms of 4He (at. ppm), leading to the assumption that the original ratio of these primordial gases in the mantle was around 200–300 ppm when Earth was formed. Over Earth's history, alpha-particle decay of uranium, thorium and other radioactive isotopes has generated significant amounts of 4He, such that only around 7% of the helium now in the mantle is primordial helium, lowering the total 3He/4He ratio to around 20 ppm. Ratios of 3He/4He in excess of atmospheric are indicative of a contribution of 3He from the mantle. Crustal sources are dominated by the 4He produced by radioactive decay. The ratio of helium-3 to helium-4 in natural Earth-bound sources varies greatly.
Samples of the lithium ore spodumene from Edison Mine, South Dakota were found to contain 12 parts of helium-3 to a million parts of helium-4. Samples from other mines showed 2 parts per million. Helium is also present as up to 7% of some natural gas sources, and large sources have over 0.5% (above 0.2% makes it viable to extract). The fraction of 3He in helium separated from natural gas in the U.S. was found to range from 70 to 242 parts per billion. Hence the US 2002 stockpile of 1 billion normal m³ would have contained about of helium-3. According to American physicist Richard Garwin, about or almost of 3He is available annually for separation from the US natural gas stream. If the process of separating out the 3He could employ as feedstock the liquefied helium typically used to transport and store bulk quantities, estimates for the incremental energy cost range from NTP, excluding the cost of infrastructure and equipment. Algeria's annual gas production is assumed to contain 100 million normal cubic metres, and this would contain between of helium-3 (about ) assuming a similar 3He fraction. 3He is also present in the Earth's atmosphere. The natural abundance of 3He in naturally occurring helium gas is 1.38 ppm (1.38 parts per million). The partial pressure of helium in the Earth's atmosphere is about , and thus helium accounts for 5.2 parts per million of the total pressure (101325 Pa) in the Earth's atmosphere, and 3He thus accounts for 7.2 parts per trillion of the atmosphere. Since the atmosphere of the Earth has a mass of about , the mass of 3He in the Earth's atmosphere is the product of these numbers, or about of 3He. (In fact the effective figure is ten times smaller, since the above ppm are ppmv and not ppmw. One must multiply by 3 (the molecular mass of helium-3) and divide by 29 (the mean molecular mass of the atmosphere), resulting in of helium-3 in the Earth's atmosphere.)
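The ppmv-to-mass conversion described in the parenthetical above can be made concrete with a short calculation. This is a sketch: the atmospheric mass figure is a commonly cited literature value supplied here as an assumption, since the text's own number is elided.

```python
# Sketch of the atmospheric helium-3 mass estimate described above.
# From the text: helium-3 is ~7.2 parts per trillion of the atmosphere by
# volume. The total atmospheric mass below is an assumption (a commonly
# cited literature value), as the text's own figure is elided.
ATMOSPHERE_MASS_KG = 5.15e18          # assumption: standard literature value
HE3_FRACTION_BY_VOLUME = 7.2e-12      # from the text (ppmv-style fraction)

# Volume fractions track mole fractions, so convert to a mass fraction by
# weighting with molar masses: 3 g/mol for helium-3 versus the atmosphere's
# mean of ~29 g/mol, exactly the multiply-by-3, divide-by-29 step above.
mass_fraction = HE3_FRACTION_BY_VOLUME * 3.0 / 29.0
he3_mass_kg = ATMOSPHERE_MASS_KG * mass_fraction
print(f"~{he3_mass_kg:.1e} kg of helium-3 in the atmosphere")  # ~3.8e6 kg
```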
3He is produced on Earth from three sources: lithium spallation, cosmic rays, and beta decay of tritium (3H). The contribution from cosmic rays is negligible within all except the oldest regolith materials, and lithium spallation reactions are a lesser contributor than the production of 4He by alpha particle emissions. The total amount of helium-3 in the mantle may be in the range of . Most mantle is not directly accessible. Some helium-3 leaks up through deep-sourced hotspot volcanoes such as those of the Hawaiian Islands, but only per year is emitted to the atmosphere. Mid-ocean ridges emit another . Around subduction zones, various sources produce helium-3 in natural gas deposits which possibly contain a thousand tonnes of helium-3 (although there may be 25 thousand tonnes if all ancient subduction zones have such deposits). Wittenberg estimated that United States crustal natural gas sources may have only half a tonne total. Wittenberg cited Anderson's estimate of another in interplanetary dust particles on the ocean floors. In the 1994 study, extracting helium-3 from these sources was found to consume more energy than fusion would release.

Lunar surface

See Extraterrestrial mining or Lunar resources.

Solar nebula (primordial) abundance

One early estimate of the primordial ratio of 3He to 4He in the solar nebula comes from the measurement of their ratio in the atmosphere of Jupiter, made by the mass spectrometer of the Galileo atmospheric entry probe. This ratio is about 1:10,000, or 100 parts of 3He per million parts of 4He. This is roughly the same ratio of the isotopes as in lunar regolith, which contains 28 ppm helium-4 and 2.8 ppb helium-3 (which is at the lower end of actual sample measurements, which vary from about 1.4 to 15 ppb). Terrestrial ratios of the isotopes are lower by a factor of 100, mainly due to enrichment of helium-4 stocks in the mantle by billions of years of alpha decay from uranium and thorium, as well as their decay products and extinct radionuclides.

Human production

Tritium decay

Virtually all helium-3 used in industry today is produced from the radioactive decay of tritium, given its very low natural abundance and its very high cost. Production, sales and distribution of helium-3 in the United States are managed by the US Department of Energy (DOE) Isotope Program. While tritium has several different experimentally determined values of its half-life, NIST lists (). It decays into helium-3 by beta decay as in this nuclear equation:

3H → 3He + e− + ν̄e

Among the total released energy of , the part taken by the electron's kinetic energy varies, with an average of , while the remaining energy is carried off by the nearly undetectable electron antineutrino. Beta particles from tritium can penetrate only about of air, and they are incapable of passing through the dead outermost layer of human skin. The unusually low energy released in the tritium beta decay makes the decay (along with that of rhenium-187) appropriate for absolute neutrino mass measurements in the laboratory (the most recent experiment being KATRIN). The low energy of tritium's radiation makes it difficult to detect tritium-labeled compounds except by using liquid scintillation counting. Tritium is a radioactive isotope of hydrogen and is typically produced by bombarding lithium-6 with neutrons in a nuclear reactor. The lithium nucleus absorbs a neutron and splits into helium-4 and tritium. Tritium decays into helium-3 with a half-life of , so helium-3 can be produced by simply storing the tritium until it undergoes radioactive decay. As tritium forms a stable compound with oxygen (tritiated water) while helium-3 does not, the storage and collection process could continuously collect the helium-3 that outgasses from the stored material. Tritium is a critical component of nuclear weapons, and historically it was produced and stockpiled primarily for this application. The decay of tritium into helium-3 reduces the explosive power of the fusion warhead, so periodically the accumulated helium-3 must be removed from warhead reservoirs and from tritium in storage. Helium-3 removed during this process is marketed for other applications. For decades this has been, and remains, the principal source of the world's helium-3. Since the signing of the START I Treaty in 1991, the number of nuclear warheads that are kept ready for use has decreased. This has reduced the quantity of helium-3 available from this source. Helium-3 stockpiles have been further diminished by increased demand, primarily for use in neutron radiation detectors and medical diagnostic procedures. US industrial demand for helium-3 reached a peak of (approximately ) per year in 2008. Price at auction, historically about , reached as high as . Since then, demand for helium-3 has declined to about per year due to the high cost and efforts by the DOE to recycle it and find substitutes.
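Since helium-3 accumulates as stored tritium decays, the yield of the storage-and-collection process described above follows from the exponential decay law. A minimal sketch, assuming the commonly cited tritium half-life of about 12.32 years (the text's own figure is elided):

```python
import math

# Sketch: helium-3 accumulated from a stored tritium inventory.
# Assumption: tritium half-life of ~12.32 years (commonly cited value).
HALF_LIFE_YEARS = 12.32
DECAY_CONST = math.log(2) / HALF_LIFE_YEARS

def he3_yield(tritium_kg: float, years: float) -> float:
    """Mass of helium-3 produced after `years` of storage.

    Each decayed tritium atom yields one helium-3 atom, and the two
    nuclides have essentially the same mass (~3.016 u), so the decayed
    mass fraction carries over directly.
    """
    decayed_fraction = 1.0 - math.exp(-DECAY_CONST * years)
    return tritium_kg * decayed_fraction

# Example: 10 kg of tritium stored for 5 years.
print(f"{he3_yield(10.0, 5.0):.2f} kg of helium-3")  # ~2.45 kg
```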
Assuming a density of , at $100/l helium-3 would be about a thirtieth as expensive as tritium (roughly vs roughly ), while at $2000/l helium-3 would be about half as expensive as tritium ( vs ). The DOE recognized the developing shortage of both tritium and helium-3, and began producing tritium by lithium irradiation at the Tennessee Valley Authority's Watts Bar Nuclear Generating Station in 2010. In this process tritium-producing burnable absorber rods (TPBARs) containing lithium in a ceramic form are inserted into the reactor in place of the normal boron control rods. Periodically the TPBARs are replaced and the tritium extracted. Currently only two commercial nuclear reactors (Watts Bar Nuclear Plant Units 1 and 2) are being used for tritium production, but the process could, if necessary, be vastly scaled up to meet any conceivable demand simply by utilizing more of the nation's power reactors. Substantial quantities of tritium and helium-3 could also be extracted from the heavy water moderator in CANDU nuclear reactors. India and Canada, the two countries with the largest heavy water reactor fleets, are both known to extract tritium from moderator/coolant heavy water, but those amounts are not nearly enough to satisfy global demand of either tritium or helium-3. As tritium is also produced inadvertently in various processes in light water reactors (see the article on tritium for details), extraction from those sources could be another source of helium-3. If the annual discharge of tritium (per 2018 figures) at the La Hague reprocessing facility is taken as a basis, the amounts discharged ( at La Hague) are not nearly enough to satisfy demand, even if 100% recovery is achieved.

Uses

Helium-3 spin echo

Helium-3 can be used to perform spin echo experiments of surface dynamics, which are underway at the Surface Physics Group at the Cavendish Laboratory in Cambridge and in the Chemistry Department at Swansea University.

Neutron detection

Helium-3 is an important isotope in instrumentation for neutron detection. It has a high absorption cross section for thermal neutron beams and is used as a converter gas in neutron detectors. The neutron is converted through the nuclear reaction n + 3He → 3H + 1H + 0.764 MeV into charged particles: tritium ions (T, 3H) and hydrogen ions, or protons (p, 1H), which are then detected by creating a charge cloud in the stopping gas of a proportional counter or a Geiger–Müller tube. Furthermore, the absorption process is strongly spin-dependent, which allows a spin-polarized helium-3 volume to transmit neutrons with one spin component while absorbing the other. This effect is employed in neutron polarization analysis, a technique which probes for magnetic properties of matter. The United States Department of Homeland Security had hoped to deploy detectors to spot smuggled plutonium in shipping containers by their neutron emissions, but the worldwide shortage of helium-3 following the drawdown in nuclear weapons production since the Cold War has to some extent prevented this. As of 2012, DHS determined the commercial supply of boron-10 would support converting its neutron detection infrastructure to that technology.
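The 0.764 MeV released in the n + 3He → 3H + 1H detection reaction is shared between the two charged products, and conservation of momentum fixes the split. A short sketch, using approximate mass numbers as an assumption:

```python
# Sketch: energy sharing in the neutron-capture reaction n + 3He -> 3H + p.
# For a (near-)stationary initial state the products fly apart back-to-back
# with equal and opposite momenta, so kinetic energy E = p^2 / (2m) divides
# in inverse proportion to mass.
Q_MEV = 0.764          # reaction energy, from the text
M_TRITON = 3.016       # approximate masses in u; assumption for clarity
M_PROTON = 1.008

# Equal |p| implies E_p / E_t = m_t / m_p.
e_proton = Q_MEV * M_TRITON / (M_TRITON + M_PROTON)
e_triton = Q_MEV * M_PROTON / (M_TRITON + M_PROTON)
print(f"proton: {e_proton:.3f} MeV, triton: {e_triton:.3f} MeV")
# -> roughly 0.57 MeV (proton) and 0.19 MeV (triton): the ionizing pair
#    that the proportional counter actually registers.
```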
Cryogenics

A helium-3 refrigerator uses helium-3 to achieve temperatures of 0.2 to 0.3 kelvin. A dilution refrigerator uses a mixture of helium-3 and helium-4 to reach cryogenic temperatures as low as a few thousandths of a kelvin.

Medical imaging

Helium-3 nuclei have an intrinsic nuclear spin of , and a relatively high magnetogyric ratio. Helium-3 can be hyperpolarized using non-equilibrium means such as spin-exchange optical pumping. During this process, circularly polarized infrared laser light, tuned to the appropriate wavelength, is used to excite electrons in an alkali metal, such as caesium or rubidium, inside a sealed glass vessel. The angular momentum is transferred from the alkali metal electrons to the noble gas nuclei through collisions. In essence, this process effectively aligns the nuclear spins with the magnetic field in order to enhance the NMR signal. The hyperpolarized gas may then be stored at pressures of 10 atm for up to 100 hours. Following inhalation, gas mixtures containing the hyperpolarized helium-3 gas can be imaged with an MRI scanner to produce anatomical and functional images of lung ventilation. This technique is also able to produce images of the airway tree, locate unventilated defects, measure the alveolar oxygen partial pressure, and measure the ventilation/perfusion ratio. This technique may be critical for the diagnosis and treatment management of chronic respiratory diseases such as chronic obstructive pulmonary disease (COPD), emphysema, cystic fibrosis, and asthma.

Radio energy absorber for tokamak plasma experiments

Both MIT's Alcator C-Mod tokamak and the Joint European Torus (JET) have experimented with adding a little helium-3 to a H–D plasma to increase the absorption of radio-frequency (RF) energy to heat the hydrogen and deuterium ions, a "three-ion" effect.

Nuclear fuel

3He can be produced by the low temperature fusion reaction → + γ + 4.98 MeV. If the fusion temperature is below that for the helium nuclei to fuse, the reaction produces a high energy alpha particle which quickly acquires an electron, producing a stable light helium ion which can be utilized directly as a source of electricity without producing dangerous neutrons. 3He can be used in fusion reactions by either of the reactions D + 3He → 4He + p + 18.3 MeV, or 3He + 3He → 4He + 2p + 12.86 MeV. The conventional deuterium + tritium ("D–T") fusion process produces energetic neutrons which render reactor components radioactive with activation products. The appeal of helium-3 fusion stems from the aneutronic nature of its reaction products. Helium-3 itself is non-radioactive. The lone high-energy by-product, the proton, can be contained by means of electric and magnetic fields. The energy of this proton (created in the fusion process) will interact with the containing electromagnetic field, resulting in direct net electricity generation. Because of the higher Coulomb barrier, the temperatures required for fusion are much higher than those of conventional D–T fusion. Moreover, since both reactants need to be mixed together to fuse, reactions between nuclei of the same reactant will occur, and the D–D reaction (D + D → 3He + n) does produce a neutron. Reaction rates vary with temperature, but the D–3He reaction rate is never greater than 3.56 times the D–D reaction rate. Therefore, fusion using D–3He fuel at the right temperature and a D-lean fuel mixture can produce a much lower neutron flux than D–T fusion, but it is not clean, negating some of its main attraction. The second possibility, fusing 3He with itself (3He + 3He), requires even higher temperatures (since now both reactants have a +2 charge), and thus is even more difficult than the D–3He reaction. It offers a theoretical reaction that produces no neutrons; the charged protons produced can be contained in electric and magnetic fields, allowing direct generation of electricity.
Helium-3 fusion is feasible, as demonstrated in the laboratory, and has immense advantages, but commercial viability is many years in the future. The amounts of helium-3 needed as a replacement for conventional fuels are substantial by comparison to amounts currently available. The total amount of energy produced in the reaction is 18.4 MeV, which corresponds to some 493 megawatt-hours (4.93×10⁸ W·h) per three grams (one mole) of 3He. If the total amount of energy could be converted to electrical power with 100% efficiency (a physical impossibility), it would correspond to about 30 minutes of output of a gigawatt electrical plant per mole of 3He. Thus, a year's production (at 6 grams for each operation hour) would require 52.5 kilograms of helium-3. The amount of fuel needed for large-scale applications can also be put in terms of total consumption: electricity consumption by 107 million U.S. households in 2001 totaled 1,140 billion kW·h (1.14×10¹⁵ W·h). Again assuming 100% conversion efficiency, 6.7 tonnes per year of helium-3 would be required for that segment of the energy demand of the United States, or 15 to 20 tonnes per year given a more realistic end-to-end conversion efficiency.
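The headline figures in the preceding paragraph follow from unit conversions alone and are easy to verify (a sketch; Avogadro's number and the MeV-to-joule factor are standard constants):

```python
# Sketch: verify the energy bookkeeping for the 18.4 MeV per reaction
# (one 3He nucleus consumed) quoted in the text.
AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13

energy_per_mole_j = 18.4 * MEV_TO_J * AVOGADRO
energy_per_mole_mwh = energy_per_mole_j / 3.6e9    # 1 MWh = 3.6e9 J
print(f"{energy_per_mole_mwh:.0f} MWh per mole (3 g) of 3He")   # ~493 MWh

# A 1 GW(e) plant at 100% conversion would burn one mole in about:
hours_per_mole = energy_per_mole_mwh / 1000.0
print(f"~{hours_per_mole * 60:.0f} minutes of 1 GW output per mole")  # ~30 min

# US household demand quoted in the text: 1.14e15 Wh/year.
moles_per_year = 1.14e15 / (energy_per_mole_mwh * 1e6)
print(f"~{moles_per_year * 3e-6:.1f} t of 3He/year at 100% efficiency")
# ~6.9 t, close to the 6.7 t quoted (the small gap is rounding).
```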
A second-generation approach to controlled fusion power involves combining helium-3 and deuterium, D + 3He. This reaction produces an alpha particle and a high-energy proton. The most important potential advantage of this fusion reaction for power production, as well as other applications, lies in its compatibility with the use of electrostatic fields to control fuel ions and the fusion protons. High-speed protons, as positively charged particles, can have their kinetic energy converted directly into electricity, through use of solid-state conversion materials as well as other techniques. Potential conversion efficiencies of 70% may be possible, as there is no need to convert proton energy to heat in order to drive a turbine-powered electrical generator.

He-3 power plants

There have been many claims about the capabilities of helium-3 power plants. According to proponents, fusion power plants operating on deuterium and helium-3 would offer lower capital and operating costs than their competitors due to less technical complexity, higher conversion efficiency, smaller size, the absence of radioactive fuel, no air or water pollution, and only low-level radioactive waste disposal requirements. Recent estimates suggest that about $6 billion in investment capital will be required to develop and construct the first helium-3 fusion power plant. Financial break-even at today's wholesale electricity prices (5 US cents per kilowatt-hour) would occur after five 1-gigawatt plants were on line, replacing old conventional plants or meeting new demand. The reality is not so clear-cut. The most advanced fusion programs in the world are inertial confinement fusion (such as the National Ignition Facility) and magnetic confinement fusion (such as ITER and Wendelstein 7-X). In the case of the former, there is no solid roadmap to power generation. In the case of the latter, commercial power generation is not expected until around 2050. In both cases, the type of fusion discussed is the simplest: D–T fusion. The reason for this is the very low Coulomb barrier for this reaction; for D + 3He, the barrier is much higher, and it is even higher for 3He–3He. The immense cost of reactors like ITER and the National Ignition Facility is largely due to their immense size, yet to scale up to higher plasma temperatures would require reactors far larger still. The 14.7 MeV proton and 3.6 MeV alpha particle from D–3He fusion, plus the higher conversion efficiency, mean that more electricity is obtained per kilogram than with D–T fusion (17.6 MeV), but not that much more. As a further downside, the rates of reaction for helium-3 fusion reactions are not particularly high, requiring a reactor that is larger still, or more reactors, to produce the same amount of electricity.
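The claim that D–3He yields more electricity per kilogram than D–T, "but not that much more", can be checked by normalizing each reaction's energy release to the mass of its fuel. A sketch with approximate fuel masses as assumptions:

```python
# Sketch: fusion energy per kilogram of fuel mix, D-T versus D-3He.
U_KG = 1.6605e-27    # kg per atomic mass unit
MEV_TO_J = 1.602e-13

reactions = {
    # name: (MeV released per reaction, total fuel mass in u; masses are
    # approximate values assumed for clarity)
    "D-T   (17.6 MeV)": (17.6, 2.014 + 3.016),
    "D-3He (18.3 MeV)": (18.3, 2.014 + 3.016),
}
for name, (q_mev, mass_u) in reactions.items():
    j_per_kg = q_mev * MEV_TO_J / (mass_u * U_KG)
    print(f"{name}: {j_per_kg:.2e} J/kg")
# Both come out near 3.4e14 J/kg: the raw energy per kilogram is nearly
# identical, so D-3He's advantage rests mainly on the higher
# direct-conversion efficiency, as the text says.
```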
In 2022, Helion Energy claimed that their 7th fusion prototype (Polaris; fully funded and under construction as of September 2022) will demonstrate "net electricity from fusion", and will demonstrate "helium-3 production through deuterium–deuterium fusion" by means of a "patented high-efficiency closed-fuel cycle".

Alternatives to He-3

To attempt to work around the problem of massively large power plants that may not even be economical with D–T fusion, let alone the far more challenging D–3He fusion, a number of other reactors have been proposed, such as the Fusor, Polywell, and Focus fusion. Many of these concepts, however, have fundamental problems with achieving a net energy gain, and they generally attempt to achieve fusion in thermal disequilibrium, something that could potentially prove impossible; consequently, these long-shot programs tend to have trouble garnering funding despite their low budgets. Unlike the "big" and "hot" fusion systems, however, if such systems worked, they could scale to the higher-barrier aneutronic fuels, and so their proponents tend to promote p-B fusion, which requires no exotic fuel such as helium-3.

Extraterrestrial

Moon

Materials on the Moon's surface contain helium-3 at concentrations between 1.4 and 15 ppb in sunlit areas, and may contain concentrations as much as 50 ppb in permanently shadowed regions. A number of people, starting with Gerald Kulcinski in 1986, have proposed to explore the Moon, mine lunar regolith and use the helium-3 for fusion. Because of the low concentrations of helium-3, any mining equipment would need to process extremely large amounts of regolith (over 150 tonnes of regolith to obtain one gram of helium-3). The primary objective of the Indian Space Research Organisation's first lunar probe, Chandrayaan-1, launched on October 22, 2008, was reported in some sources to be mapping the Moon's surface for helium-3-containing minerals. No such objective is mentioned in the project's official list of goals, though many of its scientific payloads have helium-3-related applications. Cosmochemist and geochemist Ouyang Ziyuan from the Chinese Academy of Sciences, who is now in charge of the Chinese Lunar Exploration Program, has already stated on many occasions that one of the main goals of the program would be the mining of helium-3, from which operation "each year, three space shuttle missions could bring enough fuel for all human beings across the world". In January 2006, the Russian space company RKK Energiya announced that it considers lunar helium-3 a potential economic resource to be mined by 2020, if funding can be found. Not all writers feel the extraction of lunar helium-3 is feasible, or even that there will be a demand for it for fusion. Dwayne Day, writing in The Space Review in 2015, characterises helium-3 extraction from the Moon for use in fusion as magical thinking about an unproven technology, and questions the feasibility of lunar extraction as compared to production on Earth.

Gas giants

Mining gas giants for helium-3 has also been proposed. The British Interplanetary Society's hypothetical Project Daedalus interstellar probe design was to be fueled by helium-3 mined from the atmosphere of Jupiter, for example.

See also

List of elements facing shortage

Notes and references

Bibliography

External links

The Nobel Prize in Physics 2003, presentation speech
Moon for Sale: A BBC Horizon documentary on the possibility of lunar mining of Helium-3

Categories: Helium-3, Nuclear fusion fuels, Superfluidity, MRI contrast agents
Helium-3
[ "Physics", "Chemistry", "Materials_science" ]
6,122
[ "Physical phenomena", "Phase transitions", "Applied and interdisciplinary physics", "Nuclear magnetic resonance", "Phases of matter", "Cryogenics", "Isotopes", "Superfluidity", "Exotic matter", "Condensed matter physics", "Isotopes of helium", "Nuclear physics", "Matter", "Fluid dynamics" ...
14,381
https://en.wikipedia.org/wiki/Hamiltonian%20%28quantum%20mechanics%29
In quantum mechanics, the Hamiltonian of a system is an operator corresponding to the total energy of that system, including both kinetic energy and potential energy. Its spectrum, the system's energy spectrum or its set of energy eigenvalues, is the set of possible outcomes obtainable from a measurement of the system's total energy. Due to its close relation to the energy spectrum and time-evolution of a system, it is of fundamental importance in most formulations of quantum theory. The Hamiltonian is named after William Rowan Hamilton, who developed a revolutionary reformulation of Newtonian mechanics, known as Hamiltonian mechanics, which was historically important to the development of quantum physics. Similar to vector notation, it is typically denoted by Ĥ, where the hat indicates that it is an operator. It can also be written as H or Ȟ.

Introduction

The Hamiltonian of a system represents the total energy of the system; that is, the sum of the kinetic and potential energies of all particles associated with the system. The Hamiltonian takes different forms and can be simplified in some cases by taking into account the concrete characteristics of the system under analysis, such as single or several particles in the system, interaction between particles, kind of potential energy, and time-varying or time-independent potential.

Schrödinger Hamiltonian

One particle

By analogy with classical mechanics, the Hamiltonian is commonly expressed as the sum of operators corresponding to the kinetic and potential energies of a system in the form

Ĥ = T̂ + V̂,

where V̂ = V(r, t) is the potential energy operator and T̂ = p̂·p̂/2m = −(ħ²/2m)∇² is the kinetic energy operator, in which m is the mass of the particle, the dot denotes the dot product of vectors, and p̂ = −iħ∇ is the momentum operator, where ∇ is the del operator. The dot product of ∇ with itself is the Laplacian ∇². In three dimensions using Cartesian coordinates, the Laplace operator is

∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z².

Although this is not the technical definition of the Hamiltonian in classical mechanics, it is the form it most commonly takes. Combining these yields the form used in the Schrödinger equation:

Ĥ = −(ħ²/2m)∇² + V(r, t),

which allows one to apply the Hamiltonian to systems described by a wave function Ψ(r, t). This is the approach commonly taken in introductory treatments of quantum mechanics, using the formalism of Schrödinger's wave mechanics. One can also make substitutions to certain variables to fit specific cases, such as some involving electromagnetic fields.
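The operator form above can be made concrete by discretizing Ĥ = −(ħ²/2m)∇² + V on a grid and diagonalizing the resulting matrix. A minimal sketch in natural units (ħ = m = 1, harmonic potential; the grid size and box length are illustrative choices) that recovers the standard oscillator spectrum E_n = (n + 1/2)ω:

```python
import numpy as np

# Sketch: finite-difference Hamiltonian H = -(1/2) d^2/dx^2 + V(x) in units
# where hbar = m = 1, for the harmonic potential V(x) = x^2 / 2 (omega = 1).
n, box_length = 1000, 20.0
x = np.linspace(-box_length / 2, box_length / 2, n)
dx = x[1] - x[0]

# Second-derivative operator from the standard 3-point stencil.
laplacian = (np.diag(np.full(n - 1, 1.0), -1)
             - 2.0 * np.eye(n)
             + np.diag(np.full(n - 1, 1.0), 1)) / dx**2
H = -0.5 * laplacian + np.diag(0.5 * x**2)

eigenvalues = np.linalg.eigvalsh(H)
print(eigenvalues[:4].round(4))   # ~[0.5, 1.5, 2.5, 3.5] = (n + 1/2) * omega
```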
Expectation value

It can be shown that the expectation value of the Hamiltonian, which gives the energy expectation value, will always be greater than or equal to the minimum potential of the system. Consider computing the expectation value of the kinetic energy: integrating by parts (with the boundary term vanishing for a normalizable wave function),

⟨T̂⟩ = −(ħ²/2m) ∫ Ψ* ∇²Ψ d³r = (ħ²/2m) ∫ |∇Ψ|² d³r ≥ 0.

Hence the expectation value of kinetic energy is always non-negative. This result can be used to calculate the expectation value of the total energy, which is given for a normalized wave function as

⟨Ĥ⟩ = ⟨T̂⟩ + ∫ V(r)|Ψ|² d³r ≥ min V(r),

which completes the proof. Similarly, the condition can be generalized to any higher dimensions using the divergence theorem.

Many particles

The formalism can be extended to N particles:

Ĥ = Σₙ T̂ₙ + V̂,

where V̂ = V(r1, …, rN, t) is the potential energy function, now a function of the spatial configuration of the system and time (a particular set of spatial positions at some instant of time defines a configuration), and T̂ₙ = p̂ₙ·p̂ₙ/2mₙ is the kinetic energy operator of particle n, ∇ₙ is the gradient for particle n, and ∇ₙ² is the Laplacian for particle n. Combining these yields the Schrödinger Hamiltonian for the N-particle case:

Ĥ = −Σₙ (ħ²/2mₙ)∇ₙ² + V(r1, …, rN, t).

However, complications can arise in the many-body problem. Since the potential energy depends on the spatial arrangement of the particles, the kinetic energy will also depend on the spatial configuration to conserve energy. The motion due to any one particle will vary due to the motion of all the other particles in the system. For this reason cross terms for kinetic energy may appear in the Hamiltonian: a mix of the gradients for two particles, of the form

−(ħ²/2M) ∇ᵢ·∇ⱼ,

where M denotes the mass of the collection of particles resulting in this extra kinetic energy. Terms of this form are known as mass polarization terms, and appear in the Hamiltonian of many-electron atoms (see below). For interacting particles, i.e. particles which interact mutually and constitute a many-body situation, the potential energy function is not simply a sum of the separate potentials (and certainly not a product, as this is dimensionally incorrect). The potential energy function can only be written as above: a function of all the spatial positions of each particle. For non-interacting particles, i.e. particles which do not interact mutually and move independently, the potential of the system is the sum of the separate potential energies for each particle, that is

V = Σᵢ Vᵢ(rᵢ, t).

The general form of the Hamiltonian in this case is:

Ĥ = −Σᵢ (ħ²/2mᵢ)∇ᵢ² + Σᵢ Vᵢ,

where the sum is taken over all particles and their corresponding potentials; the result is that the Hamiltonian of the system is the sum of the separate Hamiltonians for each particle. This is an idealized situation; in practice the particles are almost always influenced by some potential, and there are many-body interactions. One illustrative example of a two-body interaction where this form would not apply is for electrostatic potentials due to charged particles, because they interact with each other by Coulomb interaction (electrostatic force), as shown below.

Schrödinger equation

The Hamiltonian generates the time evolution of quantum states. If |ψ(t)⟩ is the state of the system at time t, then

Ĥ|ψ(t)⟩ = iħ (∂/∂t)|ψ(t)⟩.

This equation is the Schrödinger equation. It takes the same form as the Hamilton–Jacobi equation, which is one of the reasons Ĥ is also called the Hamiltonian. Given the state at some initial time (t = 0), we can solve it to obtain the state at any subsequent time. In particular, if Ĥ is independent of time, then

|ψ(t)⟩ = exp(−iĤt/ħ)|ψ(0)⟩.

The exponential operator on the right hand side of the Schrödinger equation is usually defined by the corresponding power series in Ĥ. One might notice that taking polynomials or power series of unbounded operators that are not defined everywhere may not make mathematical sense. Rigorously, to take functions of unbounded operators, a functional calculus is required. In the case of the exponential function, the continuous, or just the holomorphic, functional calculus suffices. We note again, however, that for common calculations the physicists' formulation is quite sufficient. By the *-homomorphism property of the functional calculus, the operator

U = exp(−iĤt/ħ)

is a unitary operator. It is the time evolution operator or propagator of a closed quantum system. If the Hamiltonian is time-independent, the operators {U(t)} form a one-parameter unitary group (more than a semigroup); this gives rise to the physical principle of detailed balance.
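The unitarity of the propagator can be verified numerically for any Hermitian matrix standing in for Ĥ. A sketch using SciPy's matrix exponential (the 2×2 Hamiltonian is an arbitrary illustrative choice, ħ = 1):

```python
import numpy as np
from scipy.linalg import expm

# Sketch: U(t) = exp(-i H t / hbar) is unitary for Hermitian H (hbar = 1).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])           # arbitrary Hermitian 2x2 Hamiltonian

t = 0.7
U = expm(-1j * H * t)

print(np.allclose(U.conj().T @ U, np.eye(2)))   # True: U is unitary
psi0 = np.array([1.0, 0.0], dtype=complex)
psi_t = U @ psi0
print(np.linalg.norm(psi_t))                    # 1.0: the norm is conserved
```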
Dirac formalism

However, in the more general formalism of Dirac, the Hamiltonian is typically implemented as an operator on a Hilbert space in the following way: the eigenkets of Ĥ, denoted |a⟩, provide an orthonormal basis for the Hilbert space. The spectrum of allowed energy levels of the system is given by the set of eigenvalues, denoted {Eₐ}, solving the equation

Ĥ|a⟩ = Eₐ|a⟩.

Since Ĥ is a Hermitian operator, the energy is always a real number. From a mathematically rigorous point of view, care must be taken with the above assumptions. Operators on infinite-dimensional Hilbert spaces need not have eigenvalues (the set of eigenvalues does not necessarily coincide with the spectrum of an operator). However, all routine quantum mechanical calculations can be done using the physical formulation.

Expressions for the Hamiltonian

Following are expressions for the Hamiltonian in a number of situations. Typical ways to classify the expressions are the number of particles, the number of dimensions, and the nature of the potential energy function, importantly its space and time dependence. Masses are denoted by m, and charges by q.

Free particle

The particle is not bound by any potential energy, so the potential is zero and this Hamiltonian is the simplest. For one dimension:

Ĥ = −(ħ²/2m) ∂²/∂x²,

and in higher dimensions:

Ĥ = −(ħ²/2m) ∇².

Constant-potential well

For a particle in a region of constant potential V = V0 (no dependence on space or time), in one dimension, the Hamiltonian is:

Ĥ = −(ħ²/2m) ∂²/∂x² + V0,

in three dimensions

Ĥ = −(ħ²/2m) ∇² + V0.

This applies to the elementary "particle in a box" problem, and step potentials.

Simple harmonic oscillator

For a simple harmonic oscillator in one dimension, the potential varies with position (but not time), according to:

V = (k/2) x² = (mω²/2) x²,

where the angular frequency ω, effective spring constant k, and mass m of the oscillator satisfy:

ω² = k/m,

so the Hamiltonian is:

Ĥ = −(ħ²/2m) ∂²/∂x² + (mω²/2) x².

For three dimensions, this becomes

Ĥ = −(ħ²/2m) ∇² + (mω²/2) r²,

where the three-dimensional position vector using Cartesian coordinates is r = (x, y, z), and its magnitude is r = |r| = √(x² + y² + z²). Writing the Hamiltonian out in full shows it is simply the sum of the one-dimensional Hamiltonians in each direction:

Ĥ = −(ħ²/2m)(∂²/∂x² + ∂²/∂y² + ∂²/∂z²) + (mω²/2)(x² + y² + z²).

Rigid rotor

For a rigid rotor, i.e., a system of particles which can rotate freely about any axes, not bound in any potential (such as free molecules with negligible vibrational degrees of freedom, say due to double or triple chemical bonds), the Hamiltonian is:

Ĥ = −(ħ²/2Iₓₓ) Ĵₓ² − (ħ²/2Iᵧᵧ) Ĵᵧ² − (ħ²/2I_zz) Ĵ_z²,

where Iₓₓ, Iᵧᵧ, and I_zz are the moment of inertia components (technically the diagonal elements of the moment of inertia tensor), and Ĵₓ, Ĵᵧ and Ĵ_z are the total angular momentum operators (components) about the x, y, and z axes respectively.

Electrostatic (Coulomb) potential

The Coulomb potential energy for two point charges q1 and q2 (i.e., those that have no spatial extent independently), in three dimensions, is (in SI units, rather than the Gaussian units which are frequently used in electromagnetism):

V = q1 q2 / (4πε0 |r|).

However, this is only the potential for one point charge due to another. If there are many charged particles, each charge has a potential energy due to every other point charge (except itself). For N charges, the potential energy of charge qⱼ due to all other charges is (see also Electrostatic potential energy stored in a configuration of discrete point charges):

Vⱼ = (1/2) Σ_{i≠j} qⱼ φ(rᵢ),

where φ(rᵢ) is the electrostatic potential of charge qⱼ at rᵢ.
The total potential of the system is then the sum over j:

V = (1/2) Σⱼ Σ_{i≠j} qᵢ qⱼ / (4πε0 |rᵢ − rⱼ|),

so the Hamiltonian is:

Ĥ = −Σⱼ (ħ²/2mⱼ) ∇ⱼ² + (1/2) Σⱼ Σ_{i≠j} qᵢ qⱼ / (4πε0 |rᵢ − rⱼ|).

Electric dipole in an electric field

For an electric dipole moment d constituting charges of magnitude q, in a uniform, electrostatic (time-independent) field E, positioned in one place, the potential is:

V = −d̂·E,

where the dipole moment itself is the operator d̂ = q r̂. Since the particle is stationary, there is no translational kinetic energy of the dipole, so the Hamiltonian of the dipole is just the potential energy:

Ĥ = −d̂·E.

Magnetic dipole in a magnetic field

For a magnetic dipole moment μ in a uniform, magnetostatic (time-independent) field B, positioned in one place, the potential is:

V = −μ̂·B.

Since the particle is stationary, there is no translational kinetic energy of the dipole, so the Hamiltonian of the dipole is just the potential energy:

Ĥ = −μ̂·B.

For a spin-1/2 particle, the corresponding spin magnetic moment is:

μ̂_S = (g_s e / 2m) Ŝ,

where g_s is the "spin g-factor" (not to be confused with the gyromagnetic ratio), e is the electron charge, and Ŝ is the spin operator vector, whose components are the Pauli matrices, hence

Ĥ = −(g_s e / 2m) Ŝ·B.

Charged particle in an electromagnetic field

For a particle with mass m and charge q in an electromagnetic field, described by the scalar potential φ and vector potential A, there are two parts to the Hamiltonian to substitute for. The canonical momentum operator p̂, which includes a contribution from the A field and fulfils the canonical commutation relation, must be quantized, and π̂ = p̂ − qA is the kinetic momentum. The quantization prescription reads

p̂ = −iħ∇,

so the corresponding kinetic energy operator is

T̂ = (1/2m)(p̂ − qA)²,

and the potential energy, which is due to the φ field, is given by

V = qφ.

Casting all of these into the Hamiltonian gives

Ĥ = (1/2m)(−iħ∇ − qA)² + qφ.

Energy eigenket degeneracy, symmetry, and conservation laws

In many systems, two or more energy eigenstates have the same energy. A simple example of this is a free particle, whose energy eigenstates have wavefunctions that are propagating plane waves. The energy of each of these plane waves is inversely proportional to the square of its wavelength. A wave propagating in the x direction is a different state from one propagating in the y direction, but if they have the same wavelength, then their energies will be the same. When this happens, the states are said to be degenerate. It turns out that degeneracy occurs whenever a nontrivial unitary operator U commutes with the Hamiltonian. To see this, suppose that |a⟩ is an energy eigenket. Then U|a⟩ is an energy eigenket with the same eigenvalue, since

Ĥ U|a⟩ = U Ĥ|a⟩ = U Eₐ|a⟩ = Eₐ (U|a⟩).

Since U is nontrivial, at least one pair of |a⟩ and U|a⟩ must represent distinct states. Therefore, Ĥ has at least one pair of degenerate energy eigenkets. In the case of the free particle, the unitary operator which produces the symmetry is the rotation operator, which rotates the wavefunctions by some angle while otherwise preserving their shape. The existence of a symmetry operator implies the existence of a conserved observable. Let G be the Hermitian generator of U:

U = I + iεG + O(ε²).

It is straightforward to show that if U commutes with Ĥ, then so does G:

[Ĥ, G] = 0.

Therefore,

(∂/∂t)⟨ψ(t)|G|ψ(t)⟩ = (1/iħ)⟨ψ(t)|[G, Ĥ]|ψ(t)⟩ = 0.

In obtaining this result, we have used the Schrödinger equation, as well as its dual,

⟨ψ(t)|Ĥ = −iħ (∂/∂t)⟨ψ(t)|.

Thus, the expected value of the observable G is conserved for any state of the system. In the case of the free particle, the conserved quantity is the angular momentum.
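The degeneracy argument above is easy to see numerically: build a Hamiltonian that commutes with a nontrivial unitary operator and its spectrum contains repeated eigenvalues. A sketch with a toy 4-site ring Hamiltonian (an arbitrary illustrative choice) whose cyclic-shift symmetry forces a degenerate pair:

```python
import numpy as np

# Sketch: a 4-site ring Hamiltonian (nearest-neighbour hopping) commutes
# with the cyclic-shift unitary U, and its spectrum is degenerate.
H = np.zeros((4, 4))
for i in range(4):
    H[i, (i + 1) % 4] = H[(i + 1) % 4, i] = -1.0   # hopping terms

U = np.roll(np.eye(4), 1, axis=0)                  # cyclic shift (unitary)

print(np.allclose(H @ U, U @ H))       # True: [H, U] = 0
print(np.linalg.eigvalsh(H).round(6))  # [-2, 0, 0, 2]: a degenerate pair
```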
Hamilton's equations

Hamilton's equations in classical Hamiltonian mechanics have a direct analogy in quantum mechanics. Suppose we have a set of basis states {|n⟩}, which need not necessarily be eigenstates of the energy. For simplicity, we assume that they are discrete, and that they are orthonormal, i.e.,

⟨n|n′⟩ = δ_{nn′}.

Note that these basis states are assumed to be independent of time. We will assume that the Hamiltonian is also independent of time. The instantaneous state of the system at time t, |ψ(t)⟩, can be expanded in terms of these basis states:

|ψ(t)⟩ = Σₙ aₙ(t)|n⟩, where aₙ(t) = ⟨n|ψ(t)⟩.

The coefficients aₙ(t) are complex variables. We can treat them as coordinates which specify the state of the system, like the position and momentum coordinates which specify a classical system. Like classical coordinates, they are generally not constant in time, and their time dependence gives rise to the time dependence of the system as a whole. The expectation value of the Hamiltonian of this state, which is also the mean energy, is

⟨H(t)⟩ = ⟨ψ(t)|Ĥ|ψ(t)⟩ = Σ_{nn′} aₙ′* aₙ ⟨n′|Ĥ|n⟩,

where the last step was obtained by expanding |ψ(t)⟩ in terms of the basis states. Each aₙ(t) actually corresponds to two independent degrees of freedom, since the variable has a real part and an imaginary part. We now perform the following trick: instead of using the real and imaginary parts as the independent variables, we use aₙ(t) and its complex conjugate aₙ*(t). With this choice of independent variables, we can calculate the partial derivative

∂⟨H⟩/∂aₙ′* = Σₙ aₙ ⟨n′|Ĥ|n⟩.

By applying Schrödinger's equation and using the orthonormality of the basis states, this further reduces to

∂⟨H⟩/∂aₙ′* = iħ (daₙ′/dt).

Similarly, one can show that

∂⟨H⟩/∂aₙ = −iħ (daₙ*/dt).

If we define "conjugate momentum" variables πₙ by

πₙ(t) = iħ aₙ*(t),

then the above equations become

∂⟨H⟩/∂πₙ = daₙ/dt,   ∂⟨H⟩/∂aₙ = −dπₙ/dt,

which is precisely the form of Hamilton's equations, with the aₙ as the generalized coordinates, the πₙ as the conjugate momenta, and ⟨H⟩ taking the place of the classical Hamiltonian.

See also

Hamiltonian mechanics
Two-state quantum system
Operator (physics)
Bra–ket notation
Quantum state
Linear algebra
Conservation of energy
Potential theory
Many-body problem
Electrostatics
Electric field
Magnetic field
Lieb–Thirring inequality

References

External links

Categories: Hamiltonian mechanics, Operator theory, Quantum mechanics, Quantum chemistry, Theoretical chemistry, Computational chemistry, William Rowan Hamilton
Hamiltonian (quantum mechanics)
[ "Physics", "Chemistry", "Mathematics" ]
3,064
[ "Quantum chemistry", "Dynamical systems", "Theoretical physics", "Classical mechanics", "Quantum mechanics", "Hamiltonian mechanics", "Quantum operators", "Computational chemistry", "Theoretical chemistry", " molecular", "nan", "Atomic", " and optical physics" ]
14,554
https://en.wikipedia.org/wiki/Imaginary%20number
An imaginary number is the product of a real number and the imaginary unit $i$, which is defined by its property $i^2 = -1$. The square of an imaginary number $bi$ is $-b^2$. For example, $5i$ is an imaginary number, and its square is $-25$. The number zero is considered to be both real and imaginary. Originally coined in the 17th century by René Descartes as a derogatory term and regarded as fictitious or useless, the concept gained wide acceptance following the work of Leonhard Euler (in the 18th century) and Augustin-Louis Cauchy and Carl Friedrich Gauss (in the early 19th century). An imaginary number $bi$ can be added to a real number $a$ to form a complex number of the form $a + bi$, where the real numbers $a$ and $b$ are called, respectively, the real part and the imaginary part of the complex number. History Although the Greek mathematician and engineer Heron of Alexandria is noted as the first to present a calculation involving the square root of a negative number, it was Rafael Bombelli who first set down the rules for multiplication of complex numbers in 1572. The concept had appeared in print earlier, such as in work by Gerolamo Cardano. At the time, imaginary numbers and negative numbers were poorly understood and were regarded by some as fictitious or useless, much as zero once was. Many other mathematicians were slow to adopt the use of imaginary numbers, including René Descartes, who wrote about them in his La Géométrie in which he coined the term imaginary and meant it to be derogatory. The use of imaginary numbers was not widely accepted until the work of Leonhard Euler (1707–1783) and Carl Friedrich Gauss (1777–1855). The geometric significance of complex numbers as points in a plane was first described by Caspar Wessel (1745–1818). In 1843, William Rowan Hamilton extended the idea of an axis of imaginary numbers in the plane to a four-dimensional space of quaternion imaginaries in which three of the dimensions are analogous to the imaginary numbers in the complex field. Geometric interpretation Geometrically, imaginary numbers are found on the vertical axis of the complex number plane, which allows them to be presented perpendicular to the real axis. One way of viewing imaginary numbers is to consider a standard number line positively increasing in magnitude to the right and negatively increasing in magnitude to the left. At 0 on this $x$-axis, a $y$-axis can be drawn with "positive" direction going up; "positive" imaginary numbers then increase in magnitude upwards, and "negative" imaginary numbers increase in magnitude downwards. This vertical axis is often called the "imaginary axis" and is denoted $i\mathbb{R}$, $\mathbb{I}$, or $\Im$. In this representation, multiplication by $i$ corresponds to a counterclockwise rotation of 90 degrees about the origin, which is a quarter of a circle. Multiplication by $-i$ corresponds to a clockwise rotation of 90 degrees about the origin. Similarly, multiplying by a purely imaginary number $bi$, with $b$ a real number, both causes a counterclockwise rotation about the origin by 90 degrees and scales the answer by a factor of $b$. When $b < 0$, this can instead be described as a clockwise rotation by 90 degrees and a scaling by $|b|$. Square roots of negative numbers Care must be used when working with imaginary numbers that are expressed as the principal values of the square roots of negative numbers. For example, if $x$ and $y$ are both positive real numbers, the following chain of equalities appears reasonable at first glance: $\sqrt{-x}\cdot\sqrt{-y} = \sqrt{(-x)(-y)} = \sqrt{xy}.$ But the result is clearly nonsense: the left-hand side equals $i\sqrt{x}\cdot i\sqrt{y} = -\sqrt{xy}$, not $+\sqrt{xy}$. The step where the square root was broken apart was illegitimate. (See Mathematical fallacy.)
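Both points above, multiplication by $i$ as a 90-degree rotation and the failure of the product rule for square roots of negatives, can be illustrated with Python's built-in complex arithmetic; the specific numbers used are arbitrary examples.

```python
import cmath

# Multiplication by i rotates a complex number 90 degrees counterclockwise.
z = 3 + 4j
for k in range(4):
    w = z * (1j ** k)
    print(k, w, cmath.phase(w))   # the phase grows by pi/2 at each step (mod 2*pi)

# The square-root fallacy: sqrt(-x)*sqrt(-y) is NOT sqrt(x*y) for x, y > 0.
x, y = 4.0, 9.0
lhs = cmath.sqrt(-x) * cmath.sqrt(-y)   # (2j)*(3j) = -6
rhs = cmath.sqrt(x * y)                 # +6
print(lhs, rhs)                         # -6 and +6: the naive identity fails
```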
See also −1 Dual number Split-complex number External links How can one show that imaginary numbers really do exist? – an article that discusses the existence of imaginary numbers. 5 Numbers, programme 4 – BBC Radio 4 programme Why Use Imaginary Numbers? – Basic Explanation and Uses of Imaginary Numbers
Imaginary number
[ "Mathematics" ]
774
[ "Complex numbers", "Mathematical objects", "Numbers" ]
14,563
https://en.wikipedia.org/wiki/Integer
An integer is the number zero (0), a positive natural number (1, 2, 3, . . .), or the negation of a positive natural number (−1, −2, −3, . . .). The negations or additive inverses of the positive natural numbers are referred to as negative integers. The set of all integers is often denoted by the boldface Z or blackboard bold ℤ. The set of natural numbers ℕ is a subset of ℤ, which in turn is a subset of the set of all rational numbers ℚ, itself a subset of the real numbers ℝ. Like the set of natural numbers, the set of integers is countably infinite. An integer may be regarded as a real number that can be written without a fractional component. For example, 21, 4, 0, and −2048 are integers, while 9.75, 5½, 5/4, and √2 are not. The integers form the smallest group and the smallest ring containing the natural numbers. In algebraic number theory, the integers are sometimes qualified as rational integers to distinguish them from the more general algebraic integers. In fact, (rational) integers are algebraic integers that are also rational numbers. History The word integer comes from the Latin integer meaning "whole" or (literally) "untouched", from in ("not") plus tangere ("to touch"). "Entire" derives from the same origin via the French word entier, which means both entire and integer. Historically the term was used for a number that was a multiple of 1, or to the whole part of a mixed number. Only positive integers were considered, making the term synonymous with the natural numbers. The definition of integer expanded over time to include negative numbers as their usefulness was recognized. For example, Leonhard Euler in his 1765 Elements of Algebra defined integers to include both positive and negative numbers. The phrase the set of the integers was not used before the end of the 19th century, when Georg Cantor introduced the concept of infinite sets and set theory. The use of the letter Z to denote the set of integers comes from the German word Zahlen ("numbers") and has been attributed to David Hilbert. The earliest known use of the notation in a textbook occurs in Algèbre written by the collective Nicolas Bourbaki, dating to 1947. The notation was not adopted immediately. For example, another textbook used the letter J, and a 1960 paper used Z to denote the non-negative integers. But by 1961, Z was generally used by modern algebra texts to denote the positive and negative integers. The symbol ℤ is often annotated to denote various sets, with varying usage amongst different authors: ℤ+ or ℤ>0 for the positive integers, ℤ≥0 for non-negative integers, and ℤ≠0 for non-zero integers. Some authors use ℤ* for non-zero integers, while others use it for non-negative integers, or for {–1, 1} (the group of units of ℤ). Additionally, ℤp is used to denote either the set of integers modulo p (i.e., the set of congruence classes of integers), or the set of p-adic integers. The whole numbers were synonymous with the integers up until the early 1950s. In the late 1950s, as part of the New Math movement, American elementary school teachers began teaching that whole numbers referred to the natural numbers, excluding negative numbers, while integer included the negative numbers. The whole numbers remain ambiguous to the present day. Algebraic properties Like the natural numbers, ℤ is closed under the operations of addition and multiplication, that is, the sum and product of any two integers is an integer.
However, with the inclusion of the negative natural numbers (and importantly, 0), ℤ, unlike the natural numbers, is also closed under subtraction. The integers form a ring which is the most basic one, in the following sense: for any ring, there is a unique ring homomorphism from the integers into this ring. This universal property, namely to be an initial object in the category of rings, characterizes the ring ℤ. ℤ is not closed under division, since the quotient of two integers (e.g., 1 divided by 2) need not be an integer. Although the natural numbers are closed under exponentiation, the integers are not (since the result can be a fraction when the exponent is negative). The following table lists some of the basic properties of addition and multiplication for any integers a, b, and c:

Closure: a + b is an integer; a × b is an integer.
Associativity: a + (b + c) = (a + b) + c; a × (b × c) = (a × b) × c.
Commutativity: a + b = b + a; a × b = b × a.
Existence of an identity element: a + 0 = a; a × 1 = a.
Existence of inverse elements: a + (−a) = 0; only 1 and −1 have multiplicative inverses in ℤ.
Distributivity: a × (b + c) = (a × b) + (a × c) and (a + b) × c = (a × c) + (b × c).
No zero divisors: if a × b = 0, then a = 0 or b = 0.

The first five properties listed above for addition say that ℤ, under addition, is an abelian group. It is also a cyclic group, since every non-zero integer can be written as a finite sum 1 + 1 + … + 1 or (−1) + (−1) + … + (−1). In fact, ℤ under addition is the only infinite cyclic group—in the sense that any infinite cyclic group is isomorphic to ℤ. The first four properties listed above for multiplication say that ℤ under multiplication is a commutative monoid. However, not every integer has a multiplicative inverse (as is the case of the number 2), which means that ℤ under multiplication is not a group. All the rules from the above property table (except for the last), when taken together, say that ℤ together with addition and multiplication is a commutative ring with unity. It is the prototype of all objects of such algebraic structure. Only those equalities of expressions are true in ℤ for all values of variables, which are true in any unital commutative ring. Certain non-zero integers map to zero in certain rings. The lack of zero divisors in the integers (last property in the table) means that the commutative ring ℤ is an integral domain. The lack of multiplicative inverses, which is equivalent to the fact that ℤ is not closed under division, means that ℤ is not a field. The smallest field containing the integers as a subring is the field of rational numbers. The process of constructing the rationals from the integers can be mimicked to form the field of fractions of any integral domain. And back, starting from an algebraic number field (an extension of rational numbers), its ring of integers can be extracted, which includes ℤ as its subring. Although ordinary division is not defined on ℤ, the division "with remainder" is defined on them. It is called Euclidean division, and possesses the following important property: given two integers a and b with b ≠ 0, there exist unique integers q and r such that a = q × b + r and 0 ≤ r < |b|, where |b| denotes the absolute value of b. The integer q is called the quotient and r is called the remainder of the division of a by b. The Euclidean algorithm for computing greatest common divisors works by a sequence of Euclidean divisions, as illustrated in the sketch below. The above says that ℤ is a Euclidean domain. This implies that ℤ is a principal ideal domain, and any positive integer can be written as a product of primes in an essentially unique way. This is the fundamental theorem of arithmetic.
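As a concrete illustration of Euclidean division and the Euclidean algorithm described above, here is a small Python sketch; the helper names are ours, and Python's divmod is used for the positive-divisor case.

```python
# Euclidean division: for integers a and b with b != 0, there are unique
# q and r with a = q*b + r and 0 <= r < |b|.
def euclidean_division(a: int, b: int) -> tuple[int, int]:
    q, r = divmod(a, abs(b))       # Python's divmod already gives 0 <= r < |b|
    if b < 0:
        q = -q
    assert a == q * b + r and 0 <= r < abs(b)
    return q, r

# The Euclidean algorithm computes gcds by repeated Euclidean division.
def gcd(a: int, b: int) -> int:
    a, b = abs(a), abs(b)
    while b:
        _, r = euclidean_division(a, b)
        a, b = b, r
    return a

print(euclidean_division(-7, 3))   # (-3, 2): -7 = (-3)*3 + 2
print(gcd(252, 105))               # 21
```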
Order-theoretic properties ℤ is a totally ordered set without upper or lower bound. The ordering of ℤ is given by: … < −2 < −1 < 0 < 1 < 2 < … An integer is positive if it is greater than zero, and negative if it is less than zero. Zero is defined as neither negative nor positive. The ordering of integers is compatible with the algebraic operations in the following way: If a < b and c < d, then a + c < b + d. If a < b and 0 < c, then ac < bc. Thus it follows that ℤ together with the above ordering is an ordered ring. The integers are the only nontrivial totally ordered abelian group whose positive elements are well-ordered. This is equivalent to the statement that any Noetherian valuation ring is either a field—or a discrete valuation ring. Construction Traditional development In elementary school teaching, integers are often intuitively defined as the union of the (positive) natural numbers, zero, and the negations of the natural numbers. This can be formalized as follows. First construct the set of natural numbers according to the Peano axioms, call this ℕ. Then construct a set ℕ⁻ which is disjoint from ℕ and in one-to-one correspondence with ℕ via a function ψ. For example, take ℕ⁻ to be the ordered pairs (1, n) with the mapping ψ(n) = (1, n). Finally let 0 be some object not in ℕ or ℕ⁻, for example the ordered pair (0, 0). Then the integers are defined to be the union ℕ ∪ ℕ⁻ ∪ {0}. The traditional arithmetic operations can then be defined on the integers in a piecewise fashion, for each of positive numbers, negative numbers, and zero. For example, negation is defined as follows: −x = ψ(x) if x is in ℕ, −x = ψ⁻¹(x) if x is in ℕ⁻, and −0 = 0. The traditional style of definition leads to many different cases (each arithmetic operation needs to be defined on each combination of types of integer) and makes it tedious to prove that integers obey the various laws of arithmetic. Equivalence classes of ordered pairs In modern set-theoretic mathematics, a more abstract construction allowing one to define arithmetical operations without any case distinction is often used instead. The integers can thus be formally constructed as the equivalence classes of ordered pairs of natural numbers (a, b). The intuition is that (a, b) stands for the result of subtracting b from a. To confirm our expectation that 1 − 2 and 4 − 5 denote the same number, we define an equivalence relation ~ on these pairs with the following rule: (a, b) ~ (c, d) precisely when a + d = b + c. Addition and multiplication of integers can be defined in terms of the equivalent operations on the natural numbers; by using [(a, b)] to denote the equivalence class having (a, b) as a member, one has: [(a, b)] + [(c, d)] = [(a + c, b + d)]. [(a, b)] × [(c, d)] = [(ac + bd, ad + bc)]. The negation (or additive inverse) of an integer is obtained by reversing the order of the pair: −[(a, b)] = [(b, a)]. Hence subtraction can be defined as the addition of the additive inverse: [(a, b)] − [(c, d)] = [(a, b)] + [(d, c)]. The standard ordering on the integers is given by: [(a, b)] < [(c, d)] if and only if a + d < b + c. It is easily verified that these definitions are independent of the choice of representatives of the equivalence classes. Every equivalence class has a unique member that is of the form (n, 0) or (0, n) (or both at once). The natural number n is identified with the class [(n, 0)] (i.e., the natural numbers are embedded into the integers by the map sending n to [(n, 0)]), and the class [(0, n)] is denoted −n (this covers all remaining classes, and gives the class [(0, 0)] a second time since −0 = 0). Thus, [(a, b)] is denoted by a − b if a ≥ b, and by −(b − a) otherwise. If the natural numbers are identified with the corresponding integers (using the embedding mentioned above), this convention creates no ambiguity. This notation recovers the familiar representation of the integers as {…, −2, −1, 0, 1, 2, …}. Some examples are: 0 = [(0, 0)] = [(1, 1)] = [(2, 2)], 1 = [(1, 0)] = [(2, 1)] = [(3, 2)], −1 = [(0, 1)] = [(1, 2)] = [(2, 3)], 2 = [(2, 0)] = [(3, 1)], and −2 = [(0, 2)] = [(1, 3)].
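The ordered-pair construction sketched above translates directly into code. The following Python sketch is illustrative only; the canon helper, which picks the (n, 0) or (0, n) representative of each class, is our own naming rather than standard terminology.

```python
# A sketch of the ordered-pair construction: (a, b) represents a - b.
# Canonical form reduces a pair to (n, 0) or (0, n), its class representative.
def canon(p: tuple[int, int]) -> tuple[int, int]:
    a, b = p
    m = min(a, b)
    return (a - m, b - m)

def equivalent(p, q):                 # (a,b) ~ (c,d) iff a + d = b + c
    return p[0] + q[1] == p[1] + q[0]

def add(p, q):                        # [(a,b)] + [(c,d)] = [(a+c, b+d)]
    return canon((p[0] + q[0], p[1] + q[1]))

def mul(p, q):                        # [(a,b)] * [(c,d)] = [(ac+bd, ad+bc)]
    a, b = p
    c, d = q
    return canon((a * c + b * d, a * d + b * c))

def neg(p):                           # -[(a,b)] = [(b,a)]
    return canon((p[1], p[0]))

minus_two = canon((3, 5))             # represents 3 - 5 = -2, i.e. (0, 2)
three = canon((3, 0))
print(add(minus_two, three))          # (1, 0), i.e. 1
print(mul(minus_two, three))          # (0, 6), i.e. -6
print(equivalent((1, 2), (4, 5)))     # True: both represent -1
```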
Other approaches In theoretical computer science, other approaches for the construction of integers are used by automated theorem provers and term rewrite engines. Integers are represented as algebraic terms built using a few basic operations (e.g., zero, succ, pred) and using natural numbers, which are assumed to be already constructed (using the Peano approach). There exist at least ten such constructions of signed integers. These constructions differ in several ways: the number of basic operations used for the construction; the number (usually between 0 and 2) and the types of arguments accepted by these operations; the presence or absence of natural numbers as arguments of some of these operations; and the fact that these operations are free constructors or not, i.e., that the same integer can be represented using only one or many algebraic terms. The technique for the construction of integers presented in the previous section corresponds to the particular case where there is a single basic operation pair that takes as arguments two natural numbers and returns an integer (equal to their difference). This operation is not free, since the integer 0 can be written pair(0,0), or pair(1,1), or pair(2,2), etc. This technique of construction is used by the proof assistant Isabelle; however, many other tools use alternative construction techniques, notably those based upon free constructors, which are simpler and can be implemented more efficiently in computers. Computer science An integer is often a primitive data type in computer languages. However, integer data types can only represent a subset of all integers, since practical computers are of finite capacity. Also, in the common two's complement representation, the inherent definition of sign distinguishes between "negative" and "non-negative" rather than "negative, positive, and 0". (It is, however, certainly possible for a computer to determine whether an integer value is truly positive.) Fixed length integer approximation data types (or subsets) are denoted int or Integer in several programming languages (such as Algol68, C, Java, Delphi, etc.). Variable-length representations of integers, such as bignums, can store any integer that fits in the computer's memory. Other integer data types are implemented with a fixed size, usually a number of bits which is a power of 2 (4, 8, 16, etc.) or a memorable number of decimal digits (e.g., 9 or 10). Cardinality The set of integers is countably infinite, meaning it is possible to pair each integer with a unique natural number. An example of such a pairing is 0 ↔ 0, 1 ↔ 1, −1 ↔ 2, 2 ↔ 3, −2 ↔ 4, 3 ↔ 5, −3 ↔ 6, …; a pairing of this kind is sketched in code below. More technically, the cardinality of ℤ is said to equal ℵ0 (aleph-null). The pairing between elements of ℤ and ℕ is called a bijection. See also Canonical factorization of a positive integer Complex integer Hyperinteger Integer complexity Integer lattice Integer part Integer sequence Integer-valued function Mathematical symbols Parity (mathematics) Profinite integer External links The Positive Integers – divisor tables and numeral representation tools On-Line Encyclopedia of Integer Sequences (OEIS)
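The pairing mentioned under Cardinality can be written out explicitly. A minimal sketch, using one common zig-zag bijection (other choices of pairing work equally well):

```python
# A bijection between the natural numbers and the integers:
# 0, 1, 2, 3, 4, ... <-> 0, 1, -1, 2, -2, ...
def nat_to_int(n: int) -> int:
    return (n + 1) // 2 if n % 2 else -(n // 2)

def int_to_nat(z: int) -> int:
    return 2 * z - 1 if z > 0 else -2 * z

print([nat_to_int(n) for n in range(7)])   # [0, 1, -1, 2, -2, 3, -3]
print(all(int_to_nat(nat_to_int(n)) == n for n in range(1000)))  # True: a bijection
```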
Integer
[ "Mathematics" ]
2,697
[ "Elementary number theory", "Mathematical objects", "Ring theory", "Elementary mathematics", "Fields of abstract algebra", "Algebraic number theory", "Integers", "Numbers", "Number theory" ]
14,624
https://en.wikipedia.org/wiki/Inorganic%20chemistry
Inorganic chemistry deals with the synthesis and behavior of inorganic and organometallic compounds. This field covers chemical compounds that are not carbon-based; carbon-based compounds are the subject of organic chemistry. The distinction between the two disciplines is far from absolute, as there is much overlap in the subdiscipline of organometallic chemistry. It has applications in every aspect of the chemical industry, including catalysis, materials science, pigments, surfactants, coatings, medications, fuels, and agriculture. Occurrence Many inorganic compounds are found in nature as minerals. Soil may contain iron sulfide as pyrite or calcium sulfate as gypsum. Inorganic compounds also serve a variety of roles as biomolecules: as electrolytes (sodium chloride), in energy storage (ATP) or in construction (the polyphosphate backbone in DNA). Bonding Inorganic compounds exhibit a range of bonding properties. Some are ionic compounds, consisting of very simple cations and anions joined by ionic bonding. Examples of salts (which are ionic compounds) are magnesium chloride MgCl2, which consists of magnesium cations Mg2+ and chloride anions Cl−; or sodium hydroxide NaOH, which consists of sodium cations Na+ and hydroxide anions OH−. Some inorganic compounds are highly covalent, such as sulfur dioxide and iron pentacarbonyl. Many inorganic compounds feature polar covalent bonding, which is a form of bonding intermediate between covalent and ionic bonding. This description applies to many oxides, carbonates, and halides. Many inorganic compounds are characterized by high melting points. Some salts (e.g., NaCl) are very soluble in water. When one reactant contains hydrogen atoms, a reaction can take place by exchanging protons in acid-base chemistry. In a more general definition, any chemical species capable of binding to electron pairs is called a Lewis acid; conversely any molecule that tends to donate an electron pair is referred to as a Lewis base. As a refinement of acid-base interactions, the HSAB theory takes into account polarizability and size of ions. Subdivisions of inorganic chemistry Subdivisions of inorganic chemistry are numerous, but include: organometallic chemistry, compounds with metal-carbon bonds. This area touches on organic synthesis, which employs many organometallic catalysts and reagents. cluster chemistry, compounds with several metals bound together with metal–metal bonds or bridging ligands. bioinorganic chemistry, biomolecules that contain metals. This area touches on medicinal chemistry. materials chemistry and solid state chemistry, extended (i.e. polymeric) solids exhibiting properties not seen for simple molecules. Many practical themes are associated with these areas, including ceramics. Industrial inorganic chemistry Inorganic chemistry is a highly practical area of science. Traditionally, the scale of a nation's economy could be evaluated by its productivity of sulfuric acid. An important man-made inorganic compound is ammonium nitrate, used for fertilization. The ammonia is produced through the Haber process. Nitric acid is prepared from the ammonia by oxidation. Another large-scale inorganic material is portland cement. Inorganic compounds are used as catalysts such as vanadium(V) oxide for the oxidation of sulfur dioxide and titanium(III) chloride for the polymerization of alkenes. Many inorganic compounds are used as reagents in organic chemistry such as lithium aluminium hydride.
Descriptive inorganic chemistry Descriptive inorganic chemistry focuses on the classification of compounds based on their properties. The classification is based partly on the position in the periodic table of the heaviest element (the element with the highest atomic weight) in the compound, and partly on grouping compounds by their structural similarities. Coordination compounds Classical coordination compounds feature metals bound to "lone pairs" of electrons residing on the main group atoms of ligands such as H2O, NH3, Cl−, and CN−. In modern coordination compounds almost all organic and inorganic compounds can be used as ligands. The "metal" usually is a metal from the groups 3–13, as well as the trans-lanthanides and trans-actinides, but from a certain perspective, all chemical compounds can be described as coordination complexes. The stereochemistry of coordination complexes can be quite rich, as hinted at by Werner's separation of two enantiomers of [Co((OH)2Co(NH3)4)3]6+, an early demonstration that chirality is not inherent to organic compounds. A topical theme within this specialization is supramolecular coordination chemistry. Examples: [Co(EDTA)]−, [Co(NH3)6]3+, TiCl4(THF)2. Coordination compounds show a rich diversity of structures, varying from tetrahedral for titanium (e.g., TiCl4) to square planar for some nickel complexes to octahedral for coordination complexes of cobalt. A range of transition metals can be found in biologically important compounds, such as iron in hemoglobin. Examples: iron pentacarbonyl, titanium tetrachloride, cisplatin Main group compounds These species feature elements from groups I, II, III, IV, V, VI, VII, 0 (excluding hydrogen) of the periodic table. Due to their often similar reactivity, the elements in group 3 (Sc, Y, and La) and group 12 (Zn, Cd, and Hg) are also generally included, and the lanthanides and actinides are sometimes included as well. Main group compounds have been known since the beginnings of chemistry, e.g., elemental sulfur and the distillable white phosphorus. Experiments on oxygen, O2, by Lavoisier and Priestley not only identified an important diatomic gas, but opened the way for describing compounds and reactions according to stoichiometric ratios. The discovery of a practical synthesis of ammonia using iron catalysts by Carl Bosch and Fritz Haber in the early 1900s deeply impacted mankind, demonstrating the significance of inorganic chemical synthesis. Typical main group compounds are SiO2, SnCl4, and N2O. Many main group compounds can also be classed as "organometallic", as they contain organic groups (e.g., B(CH3)3). Main group compounds also occur in nature, e.g., phosphate in DNA, and therefore may be classed as bioinorganic. Conversely, organic compounds lacking (many) hydrogen ligands can be classed as "inorganic", such as the fullerenes, buckytubes and binary carbon oxides. Examples: tetrasulfur tetranitride S4N4, diborane B2H6, silicones, buckminsterfullerene C60. Noble gas compounds include several derivatives of xenon and krypton. Examples: xenon hexafluoride XeF6, xenon trioxide XeO3, and krypton difluoride KrF2 Organometallic compounds Usually, organometallic compounds are considered to contain the M-C-H group. The metal (M) in these species can either be a main group element or a transition metal. Operationally, the definition of an organometallic compound is more relaxed to include also highly lipophilic complexes such as metal carbonyls and even metal alkoxides.
Organometallic compounds are mainly considered a special category because organic ligands are often sensitive to hydrolysis or oxidation, necessitating that organometallic chemistry employ more specialized preparative methods than were traditional in Werner-type complexes. Synthetic methodology, especially the ability to manipulate complexes in solvents of low coordinating power, enabled the exploration of very weakly coordinating ligands such as hydrocarbons, H2, and N2. Because the ligands are petrochemicals in some sense, the area of organometallic chemistry has greatly benefited from its relevance to industry. Examples: Cyclopentadienyliron dicarbonyl dimer [(C5H5)Fe(CO)2]2, ferrocene Fe(C5H5)2, molybdenum hexacarbonyl Mo(CO)6, triethylborane Et3B, Tris(dibenzylideneacetone)dipalladium(0) Pd2(dba)3 Cluster compounds Clusters can be found in all classes of chemical compounds. According to the commonly accepted definition, a cluster consists minimally of a triangular set of atoms that are directly bonded to each other. But metal–metal bonded dimetallic complexes are highly relevant to the area. Clusters occur in "pure" inorganic systems, organometallic chemistry, main group chemistry, and bioinorganic chemistry. The distinction between very large clusters and bulk solids is increasingly blurred. This interface is the chemical basis of nanoscience or nanotechnology and specifically arises from the study of quantum size effects in cadmium selenide clusters. Thus, large clusters can be described as an array of bound atoms intermediate in character between a molecule and a solid. Examples: Fe3(CO)12, B10H14, [Mo6Cl14]2−, 4Fe-4S Bioinorganic compounds By definition, these compounds occur in nature, but the subfield includes anthropogenic species, such as pollutants (e.g., methylmercury) and drugs (e.g., cisplatin). The field, which incorporates many aspects of biochemistry, includes many kinds of compounds, e.g., the phosphates in DNA, and also metal complexes containing ligands that range from biological macromolecules, commonly peptides, to ill-defined species such as humic acid, and to water (e.g., coordinated to gadolinium complexes employed for MRI). Traditionally bioinorganic chemistry focuses on electron- and energy-transfer in proteins relevant to respiration. Medicinal inorganic chemistry includes the study of both non-essential and essential elements with applications to diagnosis and therapies. Examples: hemoglobin, methylmercury, carboxypeptidase Solid state compounds This important area focuses on structure, bonding, and the physical properties of materials. In practice, solid state inorganic chemistry uses techniques such as crystallography to gain an understanding of the properties that result from collective interactions between the subunits of the solid. Included in solid state chemistry are metals and their alloys or intermetallic derivatives. Related fields are condensed matter physics, mineralogy, and materials science. Examples: silicon chips, zeolites, YBa2Cu3O7 Spectroscopy and magnetism In contrast to most organic compounds, many inorganic compounds are magnetic and/or colored. These properties provide information on the bonding and structure. The magnetism of inorganic compounds can be complex. For example, most copper(II) compounds are paramagnetic but CuII2(OAc)4(H2O)2 is almost diamagnetic below room temperature. The explanation is due to magnetic coupling between pairs of Cu(II) sites in the acetate.
Qualitative theories Inorganic chemistry has greatly benefited from qualitative theories. Such theories are easier to learn as they require little background in quantum theory. Within main group compounds, VSEPR theory powerfully predicts, or at least rationalizes, the structures of main group compounds, such as an explanation for why NH3 is pyramidal whereas ClF3 is T-shaped. For the transition metals, crystal field theory allows one to understand the magnetism of many simple complexes, such as why [FeIII(CN)6]3− has only one unpaired electron, whereas [FeIII(H2O)6]3+ has five. A particularly powerful qualitative approach to assessing the structure and reactivity begins with classifying molecules according to electron counting, focusing on the numbers of valence electrons, usually at the central atom in a molecule. Molecular symmetry group theory A central construct in chemistry is molecular symmetry, as embodied in group theory. Inorganic compounds display particularly diverse symmetries, so it is logical that group theory is intimately associated with inorganic chemistry. Group theory provides the language to describe the shapes of molecules according to their point group symmetry. Group theory also enables factoring and simplification of theoretical calculations. Spectroscopic features are analyzed and described with respect to the symmetry properties of, inter alia, the vibrational or electronic states. Knowledge of the symmetry properties of the ground and excited states allows one to predict the numbers and intensities of absorptions in vibrational and electronic spectra. A classic application of group theory is the prediction of the number of C–O vibrations in substituted metal carbonyl complexes. The most common applications of symmetry to spectroscopy involve vibrational and electronic spectra. Group theory highlights commonalities and differences in the bonding of otherwise disparate species. For example, the metal-based orbitals transform identically for WF6 and W(CO)6, but the energies and populations of these orbitals differ significantly. A similar relationship exists between CO2 and molecular beryllium difluoride. Thermodynamics and inorganic chemistry An alternative quantitative approach to inorganic chemistry focuses on energies of reactions. This approach is highly traditional and empirical, but it is also useful. Broad concepts that are couched in thermodynamic terms include redox potential, acidity, and phase changes. A classic concept in inorganic thermodynamics is the Born–Haber cycle, which is used for assessing the energies of elementary processes such as electron affinity, some of which cannot be observed directly. Mechanistic inorganic chemistry An important aspect of inorganic chemistry focuses on reaction pathways, i.e. reaction mechanisms. Main group elements and lanthanides The mechanisms of main group compounds of groups 13–18 are usually discussed in the context of organic chemistry (organic compounds are main group compounds, after all). Elements heavier than C, N, O, and F often form compounds with more electrons than predicted by the octet rule, as explained in the article on hypervalent molecules. The mechanisms of their reactions differ from those of organic compounds for this reason. Elements lighter than carbon (B, Be, Li) as well as Al and Mg often form electron-deficient structures that are electronically akin to carbocations. Such electron-deficient species tend to react via associative pathways.
The chemistry of the lanthanides mirrors many aspects of chemistry seen for aluminium. Transition metal complexes Transition metal and main group compounds often react differently. The important role of d-orbitals in bonding strongly influences the pathways and rates of ligand substitution and dissociation. These themes are covered in articles on coordination chemistry and ligand. Both associative and dissociative pathways are observed. An overarching aspect of mechanistic transition metal chemistry is the kinetic lability of the complex, illustrated by the exchange of free and bound water in the prototypical complexes [M(H2O)6]n+: [M(H2O)6]n+ + 6 H2O* → [M(H2O*)6]n+ + 6 H2O where H2O* denotes isotopically enriched water, e.g., H217O. The rates of water exchange vary by 20 orders of magnitude across the periodic table, with lanthanide complexes exchanging fastest and Ir(III) species being the slowest. Redox reactions Redox reactions are prevalent for the transition elements. Two classes of redox reaction are considered: atom-transfer reactions, such as oxidative addition/reductive elimination, and electron-transfer. A fundamental redox reaction is "self-exchange", which involves the degenerate reaction between an oxidant and a reductant. For example, permanganate and its one-electron reduced relative manganate exchange one electron: [MnO4]− + [Mn*O4]2− → [MnO4]2− + [Mn*O4]− Reactions at ligands Coordinated ligands display reactivity distinct from the free ligands. For example, the acidity of the ammonia ligands in [Co(NH3)6]3+ is elevated relative to NH3 itself. Alkenes bound to metal cations are reactive toward nucleophiles whereas alkenes normally are not. The large and industrially important area of catalysis hinges on the ability of metals to modify the reactivity of organic ligands. Homogeneous catalysis occurs in solution and heterogeneous catalysis occurs when gaseous or dissolved substrates interact with surfaces of solids. Traditionally homogeneous catalysis is considered part of organometallic chemistry and heterogeneous catalysis is discussed in the context of surface science, a subfield of solid state chemistry. But the basic inorganic chemical principles are the same. Transition metals, almost uniquely, react with small molecules such as CO, H2, O2, and C2H4. The industrial significance of these feedstocks drives the active area of catalysis. Ligands can also undergo ligand transfer reactions such as transmetalation. Characterization of inorganic compounds Because of the diverse range of elements and the correspondingly diverse properties of the resulting derivatives, inorganic chemistry is closely associated with many methods of analysis. Older methods tended to examine bulk properties such as the electrical conductivity of solutions, melting points, solubility, and acidity. With the advent of quantum theory and the corresponding expansion of electronic apparatus, new tools have been introduced to probe the electronic properties of inorganic molecules and solids. Often these measurements provide insights relevant to theoretical models. Commonly encountered techniques are: X-ray crystallography: This technique allows for the 3D determination of molecular structures.
Various forms of spectroscopy: Ultraviolet-visible spectroscopy: Historically, this has been an important tool, since many inorganic compounds are strongly colored. NMR spectroscopy: Besides 1H and 13C many other NMR-active nuclei (e.g., 11B, 19F, 31P, and 195Pt) can give important information on compound properties and structure. The NMR of paramagnetic species can provide important structural information. Proton (1H) NMR is also important because the light hydrogen nucleus is not easily detected by X-ray crystallography. Infrared spectroscopy: Mostly for absorptions from carbonyl ligands Electron nuclear double resonance (ENDOR) spectroscopy Mössbauer spectroscopy Electron-spin resonance: ESR (or EPR) allows for the measurement of the environment of paramagnetic metal centres. Electrochemistry: Cyclic voltammetry and related techniques probe the redox characteristics of compounds. Synthetic inorganic chemistry Although some inorganic species can be obtained in pure form from nature, most are synthesized in chemical plants and in the laboratory. Inorganic synthetic methods can be classified roughly according to the volatility or solubility of the component reactants. Soluble inorganic compounds are prepared using methods of organic synthesis. For metal-containing compounds that are reactive toward air, Schlenk line and glove box techniques are followed. Volatile compounds and gases are manipulated in "vacuum manifolds" consisting of glass piping interconnected through valves, the entirety of which can be evacuated to 0.001 mm Hg or less. Compounds are condensed using liquid nitrogen (b.p. 77 K) or other cryogens. Solids are typically prepared using tube furnaces, the reactants and products being sealed in containers, often made of fused silica (amorphous SiO2) but sometimes more specialized materials such as welded Ta tubes or Pt "boats". Products and reactants are transported between temperature zones to drive reactions. See also Important publications in inorganic chemistry
Inorganic chemistry
[ "Chemistry" ]
4,079
[ "nan" ]
14,734
https://en.wikipedia.org/wiki/Iron
Iron is a chemical element; it has the symbol Fe (from Latin ferrum) and atomic number 26. It is a metal that belongs to the first transition series and group 8 of the periodic table. It is, by mass, the most common element on Earth, forming much of Earth's outer and inner core. It is the fourth most abundant element in the Earth's crust, being mainly deposited by meteorites in its metallic state. Extracting usable metal from iron ores requires kilns or furnaces capable of reaching 1,500 °C, about 500 °C higher than that required to smelt copper. Humans started to master that process in Eurasia during the 2nd millennium BC and the use of iron tools and weapons began to displace copper alloys – in some regions, only around 1200 BC. That event is considered the transition from the Bronze Age to the Iron Age. In the modern world, iron alloys, such as steel, stainless steel, cast iron and special steels, are by far the most common industrial metals, due to their mechanical properties and low cost. The iron and steel industry is thus very important economically, and iron is the cheapest metal, with a price of a few dollars per kilogram or pound. Pristine and smooth pure iron surfaces are a mirror-like silvery-gray. Iron reacts readily with oxygen and water to produce brown-to-black hydrated iron oxides, commonly known as rust. Unlike the oxides of some other metals that form passivating layers, rust occupies more volume than the metal and thus flakes off, exposing more fresh surfaces for corrosion. Chemically, the most common oxidation states of iron are iron(II) and iron(III). Iron shares many properties of other transition metals, including the other group 8 elements, ruthenium and osmium. Iron forms compounds in a wide range of oxidation states, −4 to +7. Iron also forms many coordination complexes; some of them, such as ferrocene, ferrioxalate, and Prussian blue, have substantial industrial, medical, or research applications. The body of an adult human contains about 4 grams (0.005% body weight) of iron, mostly in hemoglobin and myoglobin. These two proteins play essential roles in oxygen transport by blood and oxygen storage in muscles. To maintain the necessary levels, human iron metabolism requires a minimum amount of iron in the diet. Iron is also the metal at the active site of many important redox enzymes dealing with cellular respiration and oxidation and reduction in plants and animals. Characteristics Allotropes At least four allotropes of iron (differing atom arrangements in the solid) are known, conventionally denoted α, γ, δ, and ε. The first three forms are observed at ordinary pressures. As molten iron cools past its freezing point of 1538 °C, it crystallizes into its δ allotrope, which has a body-centered cubic (bcc) crystal structure. As it cools further to 1394 °C, it changes to its γ-iron allotrope, a face-centered cubic (fcc) crystal structure, or austenite. At 912 °C and below, the crystal structure again becomes the bcc α-iron allotrope. The physical properties of iron at very high pressures and temperatures have also been studied extensively, because of their relevance to theories about the cores of the Earth and other planets. Above approximately 10 GPa and temperatures of a few hundred kelvin or less, α-iron changes into another hexagonal close-packed (hcp) structure, which is also known as ε-iron. The higher-temperature γ-phase also changes into ε-iron, but does so at higher pressure.
Some controversial experimental evidence exists for a stable β phase at pressures above 50 GPa and temperatures of at least 1500 K. It is supposed to have an orthorhombic or a double hcp structure. (Confusingly, the term "β-iron" is sometimes also used to refer to α-iron above its Curie point, when it changes from being ferromagnetic to paramagnetic, even though its crystal structure has not changed.) The Earth's inner core is generally presumed to consist of an iron-nickel alloy with ε (or β) structure. Melting and boiling points The melting and boiling points of iron, along with its enthalpy of atomization, are lower than those of the earlier 3d elements from scandium to chromium, showing the lessened contribution of the 3d electrons to metallic bonding as they are attracted more and more into the inert core by the nucleus; however, they are higher than the values for the previous element manganese because that element has a half-filled 3d sub-shell and consequently its d-electrons are not easily delocalized. This same trend appears for ruthenium but not osmium. The melting point of iron is experimentally well defined for pressures less than 50 GPa. For greater pressures, published data (as of 2007) still varies by tens of gigapascals and over a thousand kelvin. Magnetic properties Below its Curie point of 770 °C, α-iron changes from paramagnetic to ferromagnetic: the spins of the two unpaired electrons in each atom generally align with the spins of its neighbors, creating an overall magnetic field. This happens because the orbitals of those two electrons (dz2 and dx2 − y2) do not point toward neighboring atoms in the lattice, and therefore are not involved in metallic bonding. In the absence of an external source of magnetic field, the atoms get spontaneously partitioned into magnetic domains, about 10 micrometers across, such that the atoms in each domain have parallel spins, but some domains have other orientations. Thus a macroscopic piece of iron will have a nearly zero overall magnetic field. Application of an external magnetic field causes the domains that are magnetized in the same general direction to grow at the expense of adjacent ones that point in other directions, reinforcing the external field. This effect is exploited in devices that need to channel magnetic fields to fulfill design function, such as electrical transformers, magnetic recording heads, and electric motors. Impurities, lattice defects, or grain and particle boundaries can "pin" the domains in the new positions, so that the effect persists even after the external field is removed – thus turning the iron object into a (permanent) magnet. Similar behavior is exhibited by some iron compounds, such as the ferrites including the mineral magnetite, a crystalline form of the mixed iron(II,III) oxide (although the atomic-scale mechanism, ferrimagnetism, is somewhat different). Pieces of magnetite with natural permanent magnetization (lodestones) provided the earliest compasses for navigation. Particles of magnetite were extensively used in magnetic recording media such as core memories, magnetic tapes, floppies, and disks, until they were replaced by cobalt-based materials. Isotopes Iron has four stable isotopes: 54Fe (5.845% of natural iron), 56Fe (91.754%), 57Fe (2.119%) and 58Fe (0.282%). Twenty-four artificial isotopes have also been created. Of these stable isotopes, only 57Fe has a nuclear spin (−1/2).
The nuclide 54Fe theoretically can undergo double electron capture to 54Cr, but the process has never been observed and only a lower limit on the half-life of 4.4×1020 years has been established. 60Fe is an extinct radionuclide of long half-life (2.6 million years). It is not found on Earth, but its ultimate decay product is its granddaughter, the stable nuclide 60Ni. Much of the past work on isotopic composition of iron has focused on the nucleosynthesis of 60Fe through studies of meteorites and ore formation. In the last decade, advances in mass spectrometry have allowed the detection and quantification of minute, naturally occurring variations in the ratios of the stable isotopes of iron. Much of this work is driven by the Earth and planetary science communities, although applications to biological and industrial systems are emerging. In phases of the meteorites Semarkona and Chervony Kut, a correlation between the concentration of 60Ni, the granddaughter of 60Fe, and the abundance of the stable iron isotopes provided evidence for the existence of 60Fe at the time of formation of the Solar System. Possibly the energy released by the decay of 60Fe, along with that released by 26Al, contributed to the remelting and differentiation of asteroids after their formation 4.6 billion years ago. The abundance of 60Ni present in extraterrestrial material may bring further insight into the origin and early history of the Solar System. The most abundant iron isotope 56Fe is of particular interest to nuclear scientists because it represents the most common endpoint of nucleosynthesis. Since 56Ni (14 alpha particles) is easily produced from lighter nuclei in the alpha process in nuclear reactions in supernovae (see silicon burning process), it is the endpoint of fusion chains inside extremely massive stars. Although adding more alpha particles is possible, the sequence nonetheless effectively ends at 56Ni because conditions in stellar interiors cause the competition between photodisintegration and the alpha process to favor photodisintegration around 56Ni. This 56Ni, which has a half-life of about 6 days, is created in quantity in these stars, but soon decays by two successive positron emissions within supernova decay products in the supernova remnant gas cloud, first to radioactive 56Co, and then to stable 56Fe. As such, iron is the most abundant element in the core of red giants, and is the most abundant metal in iron meteorites and in the dense metal cores of planets such as Earth. It is also very common in the universe, relative to other stable metals of approximately the same atomic weight. Iron is the sixth most abundant element in the universe, and the most common refractory element. Although a further tiny energy gain could be extracted by synthesizing 62Ni, which has a marginally higher binding energy than 56Fe, conditions in stars are unsuitable for this process. Element production in supernovas greatly favors iron over nickel, and in any case, 56Fe still has a lower mass per nucleon than 62Ni due to its higher fraction of lighter protons. Hence, elements heavier than iron require a supernova for their formation, involving rapid neutron capture by starting 56Fe nuclei. In the far future of the universe, assuming that proton decay does not occur, cold fusion occurring via quantum tunnelling would cause the light nuclei in ordinary matter to fuse into 56Fe nuclei.
Fission and alpha-particle emission would then make heavy nuclei decay into iron, converting all stellar-mass objects to cold spheres of pure iron. Origin and occurrence in nature Cosmogenesis Iron's abundance in rocky planets like Earth is due to its abundant production during the runaway fusion and explosion of type Ia supernovae, which scatters the iron into space. Metallic iron Metallic or native iron is rarely found on the surface of the Earth because it tends to oxidize. However, both the Earth's inner and outer core, which together account for 35% of the mass of the whole Earth, are believed to consist largely of an iron alloy, possibly with nickel. Electric currents in the liquid outer core are believed to be the origin of the Earth's magnetic field. The other terrestrial planets (Mercury, Venus, and Mars) as well as the Moon are believed to have a metallic core consisting mostly of iron. The M-type asteroids are also believed to be partly or mostly made of metallic iron alloy. The rare iron meteorites are the main form of natural metallic iron on the Earth's surface. Items made of cold-worked meteoritic iron have been found in various archaeological sites dating from a time when iron smelting had not yet been developed; and the Inuit in Greenland have been reported to use iron from the Cape York meteorite for tools and hunting weapons. About 1 in 20 meteorites consist of the unique iron-nickel minerals taenite (35–80% iron) and kamacite (90–95% iron). Native iron is also rarely found in basalts that have formed from magmas that have come into contact with carbon-rich sedimentary rocks, which have reduced the oxygen fugacity sufficiently for iron to crystallize. This is known as telluric iron and is described from a few localities, such as Disko Island in West Greenland, Yakutia in Russia and Bühl in Germany. Mantle minerals Ferropericlase (Mg,Fe)O, a solid solution of periclase (MgO) and wüstite (FeO), makes up about 20% of the volume of the lower mantle of the Earth, which makes it the second most abundant mineral phase in that region after silicate perovskite (Mg,Fe)SiO3; it also is the major host for iron in the lower mantle. At the bottom of the transition zone of the mantle, the reaction γ-(Mg,Fe)2[SiO4] ↔ (Mg,Fe)[SiO3] + (Mg,Fe)O transforms γ-olivine into a mixture of silicate perovskite and ferropericlase and vice versa. In the literature, this mineral phase of the lower mantle is also often called magnesiowüstite. Silicate perovskite may form up to 93% of the lower mantle, and the magnesium iron form, (Mg,Fe)SiO3, is considered to be the most abundant mineral in the Earth, making up 38% of its volume. Earth's crust While iron is the most abundant element on Earth, most of this iron is concentrated in the inner and outer cores. The fraction of iron that is in Earth's crust only amounts to about 5% of the overall mass of the crust and is thus only the fourth most abundant element in that layer (after oxygen, silicon, and aluminium). Most of the iron in the crust is combined with various other elements to form many iron minerals. An important class is the iron oxide minerals such as hematite (Fe2O3), magnetite (Fe3O4), and siderite (FeCO3), which are the major ores of iron. Many igneous rocks also contain the sulfide minerals pyrrhotite and pentlandite. During weathering, iron tends to leach from sulfide deposits as the sulfate and from silicate deposits as the bicarbonate. Both of these are oxidized in aqueous solution and precipitate in even mildly elevated pH as iron(III) oxide.
Large deposits of iron are banded iron formations, a type of rock consisting of repeated thin layers of iron oxides alternating with bands of iron-poor shale and chert. The banded iron formations were laid down in the time between 3,700 million years ago and 1,800 million years ago. Materials containing finely ground iron(III) oxides or oxide-hydroxides, such as ochre, have been used as yellow, red, and brown pigments since pre-historical times. They contribute as well to the color of various rocks and clays, including entire geological formations like the Painted Hills in Oregon and the Buntsandstein ("colored sandstone", British Bunter). Through Eisensandstein (a Jurassic 'iron sandstone', e.g. from Donzdorf in Germany) and Bath stone in the UK, iron compounds are responsible for the yellowish color of many historical buildings and sculptures. The proverbial red color of the surface of Mars is derived from an iron oxide-rich regolith. Significant amounts of iron occur in the iron sulfide mineral pyrite (FeS2), but it is difficult to extract iron from it and it is therefore not exploited. In fact, iron is so common that production generally focuses only on ores with very high quantities of it. According to the International Resource Panel's Metal Stocks in Society report, the global stock of iron in use in society is 2,200 kg per capita. More-developed countries differ in this respect from less-developed countries (7,000–14,000 vs 2,000 kg per capita). Oceans Ocean science demonstrated the role of iron in the ancient seas in both marine biota and climate. Chemistry and compounds Iron shows the characteristic chemical properties of the transition metals, namely the ability to form variable oxidation states differing by steps of one and a very large coordination and organometallic chemistry: indeed, it was the discovery of an iron compound, ferrocene, that revolutionized the latter field in the 1950s. Iron is sometimes considered as a prototype for the entire block of transition metals, due to its abundance and the immense role it has played in the technological progress of humanity. Its 26 electrons are arranged in the configuration [Ar]3d64s2, of which the 3d and 4s electrons are relatively close in energy, and thus a number of electrons can be ionized. Iron forms compounds mainly in the oxidation states +2 (iron(II), "ferrous") and +3 (iron(III), "ferric"). Iron also occurs in higher oxidation states, e.g., the purple potassium ferrate (K2FeO4), which contains iron in its +6 oxidation state. The anion [FeO4]– with iron in its +7 oxidation state, along with an iron(V)-peroxo isomer, has been detected by infrared spectroscopy at 4 K after cocondensation of laser-ablated Fe atoms with a mixture of O2/Ar. Iron(IV) is a common intermediate in many biochemical oxidation reactions. Numerous organoiron compounds contain formal oxidation states of +1, 0, −1, or even −2. The oxidation states and other bonding properties are often assessed using the technique of Mössbauer spectroscopy. Many mixed valence compounds contain both iron(II) and iron(III) centers, such as magnetite and Prussian blue (). The latter is used as the traditional "blue" in blueprints. Iron is the first of the transition metals that cannot reach its group oxidation state of +8, although its heavier congeners ruthenium and osmium can, with ruthenium having more difficulty than osmium.
Ruthenium exhibits an aqueous cationic chemistry in its low oxidation states similar to that of iron, but osmium does not, favoring high oxidation states in which it forms anionic complexes. In the second half of the 3d transition series, vertical similarities down the groups compete with the horizontal similarities of iron with its neighbors cobalt and nickel in the periodic table, which are also ferromagnetic at room temperature and share similar chemistry. As such, iron, cobalt, and nickel are sometimes grouped together as the iron triad. Unlike many other metals, iron does not form amalgams with mercury. As a result, mercury is traded in standardized 76 pound flasks (34 kg) made of iron. Iron is by far the most reactive element in its group; it is pyrophoric when finely divided and dissolves easily in dilute acids, giving Fe2+. However, it does not react with concentrated nitric acid and other oxidizing acids due to the formation of an impervious oxide layer, which can nevertheless react with hydrochloric acid. High-purity iron, called electrolytic iron, is considered to be resistant to rust, due to its oxide layer. Binary compounds Oxides and sulfides Iron forms various oxide and hydroxide compounds; the most common are iron(II,III) oxide (Fe3O4), and iron(III) oxide (Fe2O3). Iron(II) oxide also exists, though it is unstable at room temperature. Despite their names, they are actually all non-stoichiometric compounds whose compositions may vary. These oxides are the principal ores for the production of iron (see bloomery and blast furnace). They are also used in the production of ferrites, useful magnetic storage media in computers, and pigments. The best known sulfide is iron pyrite (FeS2), also known as fool's gold owing to its golden luster. It is not an iron(IV) compound, but is actually an iron(II) polysulfide containing Fe2+ and S22− ions in a distorted sodium chloride structure. Halides The binary ferrous and ferric halides are well-known. The ferrous halides typically arise from treating iron metal with the corresponding hydrohalic acid to give the corresponding hydrated salts. Fe + 2 HX → FeX2 + H2 (X = F, Cl, Br, I) Iron reacts with fluorine, chlorine, and bromine to give the corresponding ferric halides, ferric chloride being the most common. 2 Fe + 3 X2 → 2 FeX3 (X = F, Cl, Br) Ferric iodide is an exception, being thermodynamically unstable due to the oxidizing power of Fe3+ and the high reducing power of I−: 2 I− + 2 Fe3+ → I2 + 2 Fe2+ (E0 = +0.23 V) Ferric iodide, a black solid, is not stable in ordinary conditions, but can be prepared through the reaction of iron pentacarbonyl with iodine and carbon monoxide in the presence of hexane and light at the temperature of −20 °C, with oxygen and water excluded. Complexes of ferric iodide with some soft bases are known to be stable compounds.
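Given the standard potential quoted above for the reduction of ferric ion by iodide (E0 = +0.23 V), the corresponding standard free-energy change follows from ΔG° = −nFE°. A small sketch of that arithmetic; the two-electron count refers to the reaction as written.

```python
# Standard free energy from the quoted potential for
# 2 I- + 2 Fe3+ -> I2 + 2 Fe2+   (E0 = +0.23 V, n = 2 electrons)
F = 96485.0          # Faraday constant, C/mol
n = 2                # electrons transferred in the reaction as written
E0 = 0.23            # V, from the text

dG = -n * F * E0     # J/mol
print(f"dG0 = {dG / 1000:.1f} kJ/mol")   # about -44.4 kJ/mol: spontaneous,
                                         # consistent with Fe3+ oxidizing iodide
```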
Solution chemistry The standard reduction potentials in acidic aqueous solution for some common iron ions are given below: {| |- | [Fe(H2O)6]2+ + 2 e− || ⇌ Fe || E0 = −0.447 V |- | [Fe(H2O)6]3+ + e− || ⇌ [Fe(H2O)6]2+ || E0 = +0.77 V |- | FeO42− + 8 H3O+ + 3 e− || ⇌ [Fe(H2O)6]3+ + 6 H2O || E0 = +2.20 V |} The red-purple tetrahedral ferrate(VI) anion is such a strong oxidizing agent that it oxidizes ammonia to nitrogen (N2) and water to oxygen: 4 FeO42− + 34 H2O → 4 [Fe(H2O)6]3+ + 20 OH− + 3 O2 The pale-violet hexaquo complex [Fe(H2O)6]3+ is an acid such that above pH 0 it is fully hydrolyzed: {| |- | [Fe(H2O)6]3+ || ⇌ [Fe(H2O)5(OH)]2+ + H+ || K = 10−3.05 mol dm−3 |- | [Fe(H2O)5(OH)]2+ || ⇌ [Fe(H2O)4(OH)2]+ + H+ || K = 10−3.26 mol dm−3 |- | 2 [Fe(H2O)5(OH)]2+ || ⇌ [Fe2(H2O)8(OH)2]4+ + 2 H2O || K = 10−2.91 mol dm−3 |} As pH rises above 0 the above yellow hydrolyzed species form and as it rises above 2–3, reddish-brown hydrous iron(III) oxide precipitates out of solution. Although Fe3+ has a d5 configuration, its absorption spectrum is not like that of Mn2+ with its weak, spin-forbidden d–d bands, because Fe3+ has higher positive charge and is more polarizing, lowering the energy of its ligand-to-metal charge transfer absorptions. Thus, all the above complexes are rather strongly colored, with the single exception of the hexaquo ion – and even that has a spectrum dominated by charge transfer in the near ultraviolet region. On the other hand, the pale green iron(II) hexaquo ion does not undergo appreciable hydrolysis. Carbon dioxide is not evolved when carbonate anions are added, which instead results in white iron(II) carbonate being precipitated out. In excess carbon dioxide this forms the slightly soluble bicarbonate, which occurs commonly in groundwater, but it oxidises quickly in air to form iron(III) oxide that accounts for the brown deposits present in a sizeable number of streams. Coordination compounds Due to its electronic structure, iron has a very large coordination and organometallic chemistry. Many coordination compounds of iron are known. A typical six-coordinate anion is hexachloroferrate(III), [FeCl6]3−, found in the mixed salt tetrakis(methylammonium) hexachloroferrate(III) chloride. Complexes with multiple bidentate ligands have geometric isomers. For example, the trans-chlorohydridobis(bis-1,2-(diphenylphosphino)ethane)iron(II) complex is used as a starting material for compounds with the Fe(dppe)2 moiety. The ferrioxalate ion with three oxalate ligands displays helical chirality with its two non-superposable geometries labelled Λ (lambda) for the left-handed screw axis and Δ (delta) for the right-handed screw axis, in line with IUPAC conventions. Potassium ferrioxalate is used in chemical actinometry and along with its sodium salt undergoes photoreduction applied in old-style photographic processes. The dihydrate of iron(II) oxalate has a polymeric structure with co-planar oxalate ions bridging between iron centres, with the water of crystallisation forming the caps of each octahedron. Iron(III) complexes are quite similar to those of chromium(III) with the exception of iron(III)'s preference for O-donor instead of N-donor ligands. N-donor complexes of iron(III) tend to be rather more unstable than their iron(II) counterparts and often dissociate in water. Many Fe–O complexes show intense colors and are used as tests for phenols or enols.
For example, in the ferric chloride test, used to determine the presence of phenols, iron(III) chloride reacts with a phenol to form a deep violet complex: 3 ArOH + FeCl3 → Fe(OAr)3 + 3 HCl (Ar = aryl) Among the halide and pseudohalide complexes, fluoro complexes of iron(III) are the most stable, with the colorless [FeF5(H2O)]2− being the most stable in aqueous solution. Chloro complexes are less stable and favor tetrahedral coordination as in [FeCl4]−; [FeBr4]− and [FeI4]− are reduced easily to iron(II). Thiocyanate is a common test for the presence of iron(III) as it forms the blood-red [Fe(SCN)(H2O)5]2+. Like manganese(II), most iron(III) complexes are high-spin, the exceptions being those with ligands that are high in the spectrochemical series such as cyanide. An example of a low-spin iron(III) complex is [Fe(CN)6]3−. Iron shows a great variety of electronic spin states, including every possible spin quantum number value for a d-block element from 0 (diamagnetic) to 5/2 (5 unpaired electrons), the spin quantum number always being half the number of unpaired electrons. Complexes with zero to two unpaired electrons are considered low-spin and those with four or five are considered high-spin. Iron(II) complexes are less stable than iron(III) complexes but the preference for O-donor ligands is less marked, so that, for example, [Fe(NH3)6]2+ is known while [Fe(NH3)6]3+ is not. They have a tendency to be oxidized to iron(III) but this can be moderated by low pH and the specific ligands used. Organometallic compounds Organoiron chemistry is the study of organometallic compounds of iron, where carbon atoms are covalently bound to the metal atom. They are many and varied, including cyanide complexes, carbonyl complexes, sandwich and half-sandwich compounds. Prussian blue or "ferric ferrocyanide", Fe4[Fe(CN)6]3, is an old and well-known iron-cyanide complex, extensively used as pigment and in several other applications. Its formation can be used as a simple wet chemistry test to distinguish between aqueous solutions of Fe2+ and Fe3+ as they react (respectively) with potassium ferricyanide and potassium ferrocyanide to form Prussian blue. Another old example of an organoiron compound is iron pentacarbonyl, Fe(CO)5, in which a neutral iron atom is bound to the carbon atoms of five carbon monoxide molecules. The compound can be used to make carbonyl iron powder, a highly reactive form of metallic iron. Thermolysis of iron pentacarbonyl gives triiron dodecacarbonyl, Fe3(CO)12, a complex with a cluster of three iron atoms at its core. Collman's reagent, disodium tetracarbonylferrate, is a useful reagent for organic chemistry; it contains iron in the −2 oxidation state. Cyclopentadienyliron dicarbonyl dimer contains iron in the rare +1 oxidation state. A landmark in this field was the discovery in 1951 of the remarkably stable sandwich compound ferrocene, Fe(C5H5)2, by Pauson and Kealy and independently by Miller and colleagues, whose surprising molecular structure was determined only a year later by Wilkinson and Woodward and, independently, by Fischer. Ferrocene is still one of the most important tools and models in this class. Iron-centered organometallic species are used as catalysts. The Knölker complex, for example, is a transfer hydrogenation catalyst for ketones. Industrial uses The iron compounds produced on the largest scale in industry are iron(II) sulfate (FeSO4·7H2O) and iron(III) chloride (FeCl3). The former is one of the most readily available sources of iron(II), but is less stable to aerial oxidation than Mohr's salt ((NH4)2Fe(SO4)2·6H2O).
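The spin states described above map onto measurable magnetic moments through the spin-only approximation, μso = √(n(n+2)) Bohr magnetons for n unpaired electrons. A minimal sketch; the formula is a standard approximation that neglects orbital contributions, which can be significant for some iron complexes:

```python
# Spin-only magnetic moments for the iron spin states discussed above.
# mu_so = sqrt(n(n+2)) Bohr magnetons, n = number of unpaired electrons.
from math import sqrt

for n in range(6):   # 0 (diamagnetic, low-spin) through 5 (high-spin Fe3+)
    print(f"n = {n} (S = {n/2}): mu_so = {sqrt(n * (n + 2)):.2f} mu_B")
```

A measured moment near 5.9 μB thus points to high-spin iron(III), while a value near zero indicates a low-spin d6 complex such as ferrocene.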
Iron(II) compounds tend to be oxidized to iron(III) compounds in the air. History Development of iron metallurgy Iron is one of the elements undoubtedly known to the ancient world. It has been worked, or wrought, for millennia. However, iron artefacts of great age are much rarer than objects made of gold or silver due to the ease with which iron corrodes. The technology developed slowly, and even after the discovery of smelting it took many centuries for iron to replace bronze as the metal of choice for tools and weapons. Meteoritic iron Beads made from meteoric iron in 3500 BC or earlier were found in Gerzeh, Egypt by G. A. Wainwright. The beads contain 7.5% nickel, which is a signature of meteoric origin since iron found in the Earth's crust generally has only minuscule nickel impurities. Meteoric iron was highly regarded due to its origin in the heavens and was often used to forge weapons and tools. For example, a dagger made of meteoric iron was found in the tomb of Tutankhamun, containing similar proportions of iron, cobalt, and nickel to a meteorite discovered in the area, deposited by an ancient meteor shower. Items that were likely made of iron by Egyptians date from 3000 to 2500 BC. Meteoritic iron is comparatively soft and ductile and easily cold-forged, but may become brittle when heated because of the nickel content. Wrought iron The first iron production started in the Middle Bronze Age, but it took several centuries before iron displaced bronze. Samples of smelted iron from Asmar, Mesopotamia and Tall Chagar Bazaar in northern Syria were made sometime between 3000 and 2700 BC. The Hittites established an empire in north-central Anatolia around 1600 BC. They appear to have been the first to understand the production of iron from its ores, and regarded it highly in their society. The Hittites began to smelt iron between 1500 and 1200 BC, and the practice spread to the rest of the Near East after their empire fell in 1180 BC. The subsequent period is called the Iron Age. Artifacts of smelted iron are found in India dating from 1800 to 1200 BC, and in the Levant from about 1500 BC (suggesting smelting in Anatolia or the Caucasus). Alleged references to iron in the Indian Vedas (compare the history of metallurgy in South Asia) have been used both to claim a very early usage of iron in India and to date the texts accordingly. The Rigvedic term ayas (metal) refers to copper, while iron, called śyāma ayas, literally "black copper", is first mentioned in the post-Rigvedic Atharvaveda. Some archaeological evidence suggests iron was smelted in Zimbabwe and southeast Africa as early as the eighth century BC. Iron working was introduced to Greece in the late 11th century BC, from which it spread quickly throughout Europe. The spread of ironworking in Central and Western Europe is associated with Celtic expansion. According to Pliny the Elder, iron use was common in the Roman era. In what is now China, iron appears approximately 700–500 BC. Iron smelting may have been introduced into China through Central Asia. The earliest evidence of the use of a blast furnace in China dates to the 1st century AD, and cupola furnaces were used as early as the Warring States period (403–221 BC). Usage of the blast and cupola furnace remained widespread during the Tang and Song dynasties. During the Industrial Revolution in Britain, Henry Cort began refining iron from pig iron to wrought iron (or bar iron) using innovative production systems.
In 1783 he patented the puddling process for refining pig iron. It was later improved by others, including Joseph Hall. Cast iron Cast iron was first produced in China during the 5th century BC, but was scarcely used in Europe until the medieval period. The earliest cast iron artifacts were discovered by archaeologists in what is now modern Luhe County, Jiangsu in China. Cast iron was used in ancient China for warfare, agriculture, and architecture. During the medieval period, means were found in Europe of producing wrought iron from cast iron (in this context known as pig iron) using finery forges. For all these processes, charcoal was required as fuel. Medieval blast furnaces were only a few metres tall and made of fireproof brick; forced air was usually provided by hand-operated bellows. Modern blast furnaces have grown much bigger, with hearths fourteen meters in diameter that allow them to produce thousands of tons of iron each day, but essentially operate in much the same way as they did during medieval times. In 1709, Abraham Darby I established a coke-fired blast furnace to produce cast iron, replacing charcoal with coke while continuing to use blast furnaces. The ensuing availability of inexpensive iron was one of the factors leading to the Industrial Revolution. Toward the end of the 18th century, cast iron began to replace wrought iron for certain purposes, because it was cheaper. Carbon content in iron was not implicated as the reason for the differences in properties of wrought iron, cast iron, and steel until the 18th century. Since iron was becoming cheaper and more plentiful, it also became a major structural material following the building of the innovative first iron bridge in 1778. This bridge still stands today as a monument to the role iron played in the Industrial Revolution. Following this, iron was used in rails, boats, ships, aqueducts, and buildings, as well as in iron cylinders in steam engines. Railways have been central to the formation of modernity and ideas of progress, and various languages refer to railways as "iron road" (e.g. French chemin de fer, German Eisenbahn, Turkish demiryolu, Russian железная дорога, the Chinese, Japanese, and Korean 鐵道, and Vietnamese đường sắt). Steel Steel (with smaller carbon content than pig iron but more than wrought iron) was first produced in antiquity by using a bloomery. Blacksmiths in Luristan in western Persia were making good steel by 1000 BC. Improved versions, Wootz steel in India and Damascus steel, were developed around 300 BC and AD 500 respectively. These methods were specialized, and so steel did not become a major commodity until the 1850s. New methods of producing it by carburizing bars of iron in the cementation process were devised in the 17th century. In the Industrial Revolution, new methods of producing bar iron without charcoal were devised and these were later applied to produce steel. In the late 1850s, Henry Bessemer invented a new steelmaking process, involving blowing air through molten pig iron, to produce mild steel. This made steel much more economical, thereby leading to wrought iron no longer being produced in large quantities. Foundations of modern chemistry In 1774, Antoine Lavoisier used the reaction of steam with metallic iron inside an incandescent iron tube to produce hydrogen in his experiments leading to the demonstration of the conservation of mass, which was instrumental in changing chemistry from a qualitative science to a quantitative one. Symbolic role Iron plays a certain role in mythology and has found various uses as a metaphor and in folklore.
The Greek poet Hesiod's Works and Days (lines 109–201) lists different ages of man named after metals like gold, silver, bronze and iron to account for successive ages of humanity. The Iron Age was closely associated with Rome; in Ovid's Metamorphoses it likewise appears as the last and harshest of the ages of man. An example of the importance of iron's symbolic role may be found in the German Campaign of 1813. Frederick William III then commissioned the first Iron Cross as a military decoration. Berlin iron jewellery reached its peak production between 1813 and 1815, when the Prussian royal family urged citizens to donate gold and silver jewellery for military funding. The inscription Ich gab Gold für Eisen (I gave gold for iron) was used as well in later war efforts. Laboratory routes For a few limited purposes where it is needed, pure iron is produced in the laboratory in small quantities by reducing the pure oxide or hydroxide with hydrogen, or by forming iron pentacarbonyl and heating it to 250 °C so that it decomposes to form pure iron powder. Another method is electrolysis of ferrous chloride onto an iron cathode. Main industrial route Nowadays, the industrial production of iron or steel consists of two main stages. In the first stage, iron ore is reduced with coke in a blast furnace, and the molten metal is separated from gross impurities such as silicate minerals. This stage yields an alloy – pig iron – that contains relatively large amounts of carbon. In the second stage, the amount of carbon in the pig iron is lowered by oxidation to yield wrought iron, steel, or cast iron. Other metals can be added at this stage to form alloy steels. Blast furnace processing The blast furnace is loaded with iron ores, usually hematite (Fe2O3) or magnetite (Fe3O4), along with coke (coal that has been separately baked to remove volatile components) and flux (limestone or dolomite). "Blasts" of air pre-heated to 900 °C (sometimes with oxygen enrichment) are blown through the mixture, in sufficient amount to turn the carbon into carbon monoxide: 2 C + O2 → 2 CO This reaction raises the temperature to about 2000 °C. The carbon monoxide reduces the iron ore to metallic iron: Fe2O3 + 3 CO → 2 Fe + 3 CO2 Some iron in the high-temperature lower region of the furnace reacts directly with the coke: 2 Fe2O3 + 3 C → 4 Fe + 3 CO2 The flux removes siliceous minerals in the ore, which would otherwise clog the furnace: the heat of the furnace decomposes the carbonates to calcium oxide (CaCO3 → CaO + CO2), which reacts with any excess silica to form a slag composed of calcium silicate (CaO + SiO2 → CaSiO3) or other products. At the furnace's temperature, the metal and the slag are both molten. They collect at the bottom as two immiscible liquid layers (with the slag on top), that are then easily separated. The slag can be used as a material in road construction or to improve mineral-poor soils for agriculture. Steelmaking thus remains one of the largest industrial contributors of CO2 emissions in the world. Steelmaking The pig iron produced by the blast furnace process contains up to 4–5% carbon (by mass), with small amounts of other impurities like sulfur, magnesium, phosphorus, and manganese. This high level of carbon makes it relatively weak and brittle. Reducing the amount of carbon to 0.002–2.1% produces steel, which may be up to 1000 times harder than pure iron. A great variety of steel articles can then be made by cold working, hot rolling, forging, machining, etc. Removing the impurities from pig iron, but leaving 2–4% carbon, results in cast iron, which is cast by foundries into articles such as stoves, pipes, radiators, lamp-posts, and rails.
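To give a sense of scale for the reduction step, here is a minimal stoichiometric sketch of Fe2O3 + 3 CO → 2 Fe + 3 CO2. It is an idealization that ignores direct reduction by coke, gangue, and process losses; the molar masses are standard values:

```python
# Idealized mass balance for Fe2O3 + 3 CO -> 2 Fe + 3 CO2.
M_Fe, M_O, M_C = 55.845, 15.999, 12.011   # g/mol
M_Fe2O3 = 2 * M_Fe + 3 * M_O              # ~159.7 g/mol, hematite
M_CO = M_C + M_O                          # ~28.0 g/mol

iron_out = 1000.0                          # kg of Fe wanted
mol_Fe = iron_out * 1000 / M_Fe            # mol of Fe
ore = mol_Fe / 2 * M_Fe2O3 / 1000          # kg of pure hematite needed
co = mol_Fe * 3 / 2 * M_CO / 1000          # kg of CO consumed
print(f"{ore:.0f} kg Fe2O3 and {co:.0f} kg CO per tonne of iron")
```

Roughly 1.4 tonnes of pure hematite are consumed per tonne of iron; real ore requirements are higher because ores are never pure Fe2O3.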
Steel products often undergo various heat treatments after they are forged to shape. Annealing consists of heating them to 700–800 °C for several hours and then cooling them gradually. It makes the steel softer and more workable. Direct iron reduction Owing to environmental concerns, alternative methods of processing iron have been developed. "Direct iron reduction" reduces iron ore to a ferrous lump called "sponge" iron or "direct" iron that is suitable for steelmaking. Two main reactions comprise the direct reduction process: Natural gas is partially oxidized (with heat and a catalyst): 2 CH4 + O2 → 2 CO + 4 H2 Iron ore is then treated with these gases in a furnace, producing solid sponge iron: Fe2O3 + CO + 2 H2 → 2 Fe + CO2 + 2 H2O Silica is removed by adding a limestone flux as described above. Thermite process Ignition of a mixture of aluminium powder and iron oxide yields metallic iron via the thermite reaction: 2 Al + Fe2O3 → 2 Fe + Al2O3 Alternatively, pig iron may be made into steel (with up to about 2% carbon) or wrought iron (commercially pure iron). Various processes have been used for this, including finery forges, puddling furnaces, Bessemer converters, open hearth furnaces, basic oxygen furnaces, and electric arc furnaces. In all cases, the objective is to oxidize some or all of the carbon, together with other impurities. On the other hand, other metals may be added to make alloy steels. Molten oxide electrolysis Molten oxide electrolysis (MOE) uses electrolysis of molten iron oxide to yield metallic iron. It has been studied in laboratory-scale experiments and is proposed as a method for industrial iron production with no direct emissions of carbon dioxide. It uses a liquid iron cathode, an anode formed from an alloy of chromium, aluminium and iron, and an electrolyte consisting of a mixture of molten metal oxides into which iron ore is dissolved. The current keeps the electrolyte molten and reduces the iron oxide. Oxygen gas is produced in addition to liquid iron. The only carbon dioxide emissions come from any fossil fuel-generated electricity used to heat and reduce the metal. Applications As structural material Iron is the most widely used of all the metals, accounting for over 90% of worldwide metal production. Its low cost and high strength often make it the material of choice to withstand stress or transmit forces, as in the construction of machinery and machine tools, rails, automobiles, ship hulls, concrete reinforcing bars, and the load-carrying framework of buildings. Since pure iron is quite soft, it is most commonly combined with alloying elements to make steel. Mechanical properties The mechanical properties of iron and its alloys are extremely relevant to their structural applications. Those properties can be evaluated in various ways, including the Brinell test, the Rockwell test and the Vickers hardness test. The properties of pure iron are often used to calibrate measurements or to compare tests. However, the mechanical properties of iron are significantly affected by the sample's purity: pure, single crystals of iron are actually softer than aluminium, and the purest industrially produced iron (99.99%) has a hardness of 20–30 Brinell. Pure iron (99.9%–99.999%), known as electrolytic iron, is produced industrially by electrolytic refining. An increase in the carbon content will cause a significant increase in the hardness and tensile strength of iron. A maximum hardness of 65 HRC is achieved with a 0.6% carbon content, although the alloy has low tensile strength.
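Returning briefly to the thermite process above: its violence follows directly from thermochemistry. A back-of-the-envelope sketch using commonly tabulated standard enthalpies of formation; the −1675.7 and −823.7 kJ/mol figures are textbook approximations supplied here as assumptions, not values from the text:

```python
# Reaction enthalpy of the thermite reaction 2 Al + Fe2O3 -> Al2O3 + 2 Fe,
# from approximate standard enthalpies of formation (kJ/mol).
dHf = {"Al2O3": -1675.7, "Fe2O3": -823.7, "Al": 0.0, "Fe": 0.0}

dH = (dHf["Al2O3"] + 2 * dHf["Fe"]) - (2 * dHf["Al"] + dHf["Fe2O3"])
print(f"dH = {dH:.0f} kJ per mole of Fe2O3")   # about -852 kJ, strongly exothermic
```

Releasing on the order of 850 kJ per mole of oxide, with no gaseous products to carry the heat away, is what drives the mixture to temperatures high enough to yield molten iron for rail welding.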
Because of the softness of iron, it is much easier to work with than its heavier congeners ruthenium and osmium. Types of steels and alloys α-Iron is a fairly soft metal that can dissolve only a small concentration of carbon (no more than 0.021% by mass at 910 °C). Austenite (γ-iron) is similarly soft and metallic but can dissolve considerably more carbon (as much as 2.04% by mass at 1146 °C). This form of iron is used in the type of stainless steel used for making cutlery, and hospital and food-service equipment. Commercially available iron is classified based on purity and the abundance of additives. Pig iron has 3.5–4.5% carbon and contains varying amounts of contaminants such as sulfur, silicon and phosphorus. Pig iron is not a saleable product, but rather an intermediate step in the production of cast iron and steel. The reduction of contaminants in pig iron that negatively affect material properties, such as sulfur and phosphorus, yields cast iron containing 2–4% carbon, 1–6% silicon, and small amounts of manganese. Pig iron has a melting point in the range of 1420–1470 K, which is lower than either of its two main components, and makes it the first product to be melted when carbon and iron are heated together. Its mechanical properties vary greatly and depend on the form the carbon takes in the alloy. "White" cast irons contain their carbon in the form of cementite, or iron carbide (Fe3C). This hard, brittle compound dominates the mechanical properties of white cast irons, rendering them hard but unresistant to shock. The broken surface of a white cast iron is full of fine facets of the broken iron carbide, a very pale, silvery, shiny material, hence the appellation. Cooling a mixture of iron with 0.8% carbon slowly below 723 °C to room temperature results in separate, alternating layers of cementite and α-iron, which is soft and malleable and is called pearlite for its appearance. Rapid cooling, on the other hand, does not allow time for this separation and creates hard and brittle martensite. The steel can then be tempered by reheating to a temperature in between, changing the proportions of pearlite and martensite. The end product below 0.8% carbon content is a pearlite–α-Fe mixture, and that above 0.8% carbon content is a pearlite–cementite mixture. In gray iron the carbon exists as separate, fine flakes of graphite, which also render the material brittle because the sharp-edged flakes of graphite produce stress concentration sites within the material. A newer variant of gray iron, referred to as ductile iron, is specially treated with trace amounts of magnesium to alter the shape of graphite to spheroids, or nodules, reducing the stress concentrations and vastly increasing the toughness and strength of the material. Wrought iron contains less than 0.25% carbon but large amounts of slag that give it a fibrous characteristic. Wrought iron is more corrosion-resistant than steel. It has been almost completely replaced by mild steel, which corrodes more readily than wrought iron, but is cheaper and more widely available. Carbon steel contains 2.0% carbon or less, with small amounts of manganese, sulfur, phosphorus, and silicon. Alloy steels contain varying amounts of carbon as well as other metals, such as chromium, vanadium, molybdenum, nickel, tungsten, etc. Their alloy content raises their cost, and so they are usually only employed for specialist uses. One common alloy steel, though, is stainless steel.
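The carbon ranges quoted above and in the steelmaking section lend themselves to a compact illustration. This is a rough sketch of the ranges only; real classification also depends on silicon, slag content, and heat treatment:

```python
# Rough classification of ferrous alloys by carbon content, using the
# percentage ranges quoted in this section. Illustrative, not a
# metallurgical test: categories overlap in practice.
def classify(pct_c: float) -> str:
    if pct_c < 0.002:
        return "commercially pure iron"
    if pct_c <= 2.1:
        return "steel (wrought iron if slag-bearing, < 0.25% C)"
    if pct_c <= 4.5:
        return "cast iron / pig iron range"
    return "outside normal ferrous-alloy range"

for c in (0.001, 0.08, 0.8, 3.0, 4.2):
    print(f"{c}% C -> {classify(c)}")
```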
Recent developments in ferrous metallurgy have produced a growing range of microalloyed steels, also termed 'HSLA' or high-strength, low alloy steels, containing tiny additions to produce high strengths and often spectacular toughness at minimal cost. Alloys with high purity elemental makeups (such as alloys of electrolytic iron) have specifically enhanced properties such as ductility, tensile strength, toughness, fatigue strength, heat resistance, and corrosion resistance. Apart from traditional applications, iron is also used for protection from ionizing radiation. Although it is lighter than another traditional protection material, lead, it is much stronger mechanically. The main disadvantage of iron and steel is that pure iron, and most of its alloys, suffer badly from rust if not protected in some way, a cost amounting to over 1% of the world's economy. Painting, galvanization, passivation, plastic coating and bluing are all used to protect iron from rust by excluding water and oxygen or by cathodic protection. The mechanism of the rusting of iron is as follows: Cathode: 3 O2 + 6 H2O + 12 e− → 12 OH− Anode: 4 Fe → 4 Fe2+ + 8 e−; 4 Fe2+ → 4 Fe3+ + 4 e− Overall: 4 Fe + 3 O2 + 6 H2O → 4 Fe3+ + 12 OH− → 4 Fe(OH)3 or 4 FeO(OH) + 4 H2O The electrolyte is usually iron(II) sulfate in urban areas (formed when atmospheric sulfur dioxide attacks iron), and salt particles in the atmosphere in seaside areas. Catalysts and reagents Because Fe is inexpensive and nontoxic, much effort has been devoted to the development of Fe-based catalysts and reagents. However, iron is less common as a catalyst in commercial processes than more expensive metals. In biology, Fe-containing enzymes are pervasive. Iron catalysts are traditionally used in the Haber–Bosch process for the production of ammonia and the Fischer–Tropsch process for conversion of carbon monoxide to hydrocarbons for fuels and lubricants. Powdered iron in an acidic medium is used in the Béchamp reduction, the conversion of nitrobenzene to aniline. Iron compounds Iron(III) oxide mixed with aluminium powder can be ignited to create a thermite reaction, used in welding large iron parts (like rails) and purifying ores. Iron(III) oxide and oxyhydroxide are used as reddish and ocher pigments. Iron(III) chloride finds use in water purification and sewage treatment, in the dyeing of cloth, as a coloring agent in paints, as an additive in animal feed, and as an etchant for copper in the manufacture of printed circuit boards. It can also be dissolved in alcohol to form tincture of iron, which is used as a medicine to stop bleeding in canaries. Iron(II) sulfate is used as a precursor to other iron compounds. It is also used to reduce chromate in cement. It is used to fortify foods and treat iron deficiency anemia. Iron(III) sulfate is used in settling minute sewage particles in tank water. Iron(II) chloride is used as a reducing flocculating agent, in the formation of iron complexes and magnetic iron oxides, and as a reducing agent in organic synthesis. Sodium nitroprusside is a drug used as a vasodilator. It is on the World Health Organization's List of Essential Medicines. Biological and pathological role Iron is required for life. The iron–sulfur clusters are pervasive and include nitrogenase, the enzymes responsible for biological nitrogen fixation. Iron-containing proteins participate in transport, storage and use of oxygen. Iron proteins are involved in electron transfer.
Examples of iron-containing proteins in higher organisms include hemoglobin, cytochrome (see high-valent iron), and catalase. The average adult human contains about 0.005% body weight of iron, or about four grams, of which three quarters is in hemoglobin—a level that remains constant despite only about one milligram of iron being absorbed each day, because the human body recycles its hemoglobin for the iron content. Microbial growth may be assisted by oxidation of iron(II) or by reduction of iron(III). Biochemistry Iron acquisition poses a problem for aerobic organisms because ferric iron is poorly soluble near neutral pH. Thus, these organisms have developed means to absorb iron as complexes, sometimes taking up ferrous iron before oxidising it back to ferric iron. In particular, bacteria have evolved very high-affinity sequestering agents called siderophores. After uptake in human cells, iron storage is precisely regulated. A major component of this regulation is the protein transferrin, which binds iron ions absorbed from the duodenum and carries them in the blood to cells. Transferrin contains Fe3+ in the middle of a distorted octahedron, bonded to one nitrogen, three oxygens and a chelating carbonate anion that traps the Fe3+ ion: it has such a high stability constant that it is very effective at taking up Fe3+ ions even from the most stable complexes. At the bone marrow, transferrin is reduced from Fe3+ to Fe2+ and stored as ferritin to be incorporated into hemoglobin. The most commonly known and studied bioinorganic iron compounds (biological iron molecules) are the heme proteins: examples are hemoglobin, myoglobin, and cytochrome P450. These compounds participate in transporting gases, building enzymes, and transferring electrons. Metalloproteins are a group of proteins with metal ion cofactors. Some examples of iron metalloproteins are ferritin and rubredoxin. Many enzymes vital to life contain iron, such as catalase, lipoxygenases, and IRE-BP. Hemoglobin is an oxygen carrier that occurs in red blood cells and gives them their color, transporting oxygen in the arteries from the lungs to the muscles, where it is transferred to myoglobin, which stores it until it is needed for the metabolic oxidation of glucose, generating energy. Here the hemoglobin binds to carbon dioxide, produced when glucose is oxidized, which is transported through the veins by hemoglobin (predominantly as bicarbonate anions) back to the lungs where it is exhaled. In hemoglobin, the iron is in one of four heme groups and has six possible coordination sites; four are occupied by nitrogen atoms in a porphyrin ring, the fifth by an imidazole nitrogen in a histidine residue of one of the protein chains attached to the heme group, and the sixth is reserved for the oxygen molecule it can reversibly bind to. When hemoglobin is not attached to oxygen (and is then called deoxyhemoglobin), the Fe2+ ion at the center of the heme group (in the hydrophobic protein interior) is in a high-spin configuration. It is thus too large to fit inside the porphyrin ring, which bends instead into a dome with the Fe2+ ion about 55 picometers above it. In this configuration, the sixth coordination site reserved for the oxygen is blocked by another histidine residue. When deoxyhemoglobin picks up an oxygen molecule, this histidine residue moves away and returns once the oxygen is securely attached to form a hydrogen bond with it.
This results in the Fe2+ ion switching to a low-spin configuration, resulting in a 20% decrease in ionic radius so that now it can fit into the porphyrin ring, which becomes planar. Additionally, this hydrogen bonding results in the tilting of the oxygen molecule, resulting in a Fe–O–O bond angle of around 120° that avoids the formation of Fe–O–Fe or Fe–O2–Fe bridges that would lead to electron transfer, the oxidation of Fe2+ to Fe3+, and the destruction of hemoglobin. This results in a movement of all the protein chains that leads to the other subunits of hemoglobin changing shape to a form with larger oxygen affinity. Thus, when deoxyhemoglobin takes up oxygen, its affinity for more oxygen increases, and vice versa. Myoglobin, on the other hand, contains only one heme group and hence this cooperative effect cannot occur. Thus, while hemoglobin is almost saturated with oxygen in the high partial pressures of oxygen found in the lungs, its affinity for oxygen is much lower than that of myoglobin, which oxygenates even at low partial pressures of oxygen found in muscle tissue. As described by the Bohr effect (named after Christian Bohr, the father of Niels Bohr), the oxygen affinity of hemoglobin diminishes in the presence of carbon dioxide. Carbon monoxide and phosphorus trifluoride are poisonous to humans because they bind to hemoglobin similarly to oxygen, but with much more strength, so that oxygen can no longer be transported throughout the body. Hemoglobin bound to carbon monoxide is known as carboxyhemoglobin. This effect also plays a minor role in the toxicity of cyanide, but there the major effect is by far its interference with the proper functioning of the electron transport protein cytochrome a. The cytochrome proteins also involve heme groups and are involved in the metabolic oxidation of glucose by oxygen. The sixth coordination site is then occupied by either another imidazole nitrogen or a methionine sulfur, so that these proteins are largely inert to oxygen—with the exception of cytochrome a, which bonds directly to oxygen and thus is very easily poisoned by cyanide. Here, the electron transfer takes place as the iron remains in low spin but changes between the +2 and +3 oxidation states. Since the reduction potential of each step is slightly greater than the previous one, the energy is released step-by-step and can thus be stored in adenosine triphosphate. Cytochrome a is slightly distinct, as it occurs at the mitochondrial membrane, binds directly to oxygen, and transports protons as well as electrons, as follows: 4 Cyt c2+ + O2 + 8 H+ (inside) → 4 Cyt c3+ + 2 H2O + 4 H+ (outside) Although the heme proteins are the most important class of iron-containing proteins, the iron–sulfur proteins are also very important, being involved in electron transfer, which is possible since iron can exist stably in either the +2 or +3 oxidation states. These have one, two, four, or eight iron atoms that are each approximately tetrahedrally coordinated to four sulfur atoms; because of this tetrahedral coordination, they always have high-spin iron. The simplest of such compounds is rubredoxin, which has only one iron atom coordinated to four sulfur atoms from cysteine residues in the surrounding peptide chains. Another important class of iron–sulfur proteins is the ferredoxins, which have multiple iron atoms. Transferrin does not belong to either of these classes.
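Stepping back to the whole-body numbers given earlier (about four grams of iron, three quarters of it in hemoglobin), a rough estimate of the body's total hemoglobin follows. The molar mass of hemoglobin and its four iron atoms per molecule are standard textbook values assumed here, not figures from the text:

```python
# Rough estimate of total body hemoglobin implied by its iron content.
total_fe_g = 4.0            # total body iron, from the text
fe_in_hb_g = 0.75 * total_fe_g
M_Fe = 55.845               # g/mol
M_Hb = 64500.0              # g/mol for hemoglobin, assumed textbook value
fe_per_hb = 4               # one Fe per heme, four hemes per molecule

mol_hb = (fe_in_hb_g / M_Fe) / fe_per_hb
print(f"~{mol_hb * M_Hb:.0f} g of hemoglobin")   # on the order of 870 g
```

The result is broadly consistent with the hemoglobin carried in roughly five litres of adult blood.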
The ability of sea mussels to maintain their grip on rocks in the ocean is facilitated by their use of organometallic iron-based bonds in their protein-rich cuticles. Based on synthetic replicas, the presence of iron in these structures increased elastic modulus 770 times, tensile strength 58 times, and toughness 92 times. The amount of stress required to permanently damage them increased 76 times. Nutrition Diet Iron is pervasive, but particularly rich sources of dietary iron include red meat, oysters, beans, poultry, fish, leaf vegetables, watercress, tofu, and blackstrap molasses. Bread and breakfast cereals are sometimes specifically fortified with iron. Iron provided by dietary supplements is often found as iron(II) fumarate, although iron(II) sulfate is cheaper and is absorbed equally well. Elemental iron, or reduced iron, despite being absorbed at only one-third to two-thirds the efficiency (relative to iron sulfate), is often added to foods such as breakfast cereals or enriched wheat flour. Iron is most available to the body when chelated to amino acids and is also available for use as a common iron supplement. Glycine, the least expensive amino acid, is most often used to produce iron glycinate supplements. Dietary recommendations The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for iron in 2001. The current EAR for iron for women ages 14–18 is 7.9 mg/day, 8.1 mg/day for ages 19–50 and 5.0 mg/day thereafter (post-menopause). For men, the EAR is 6.0 mg/day for ages 19 and up. The RDA is 15.0 mg/day for women ages 15–18, 18.0 mg/day for ages 19–50 and 8.0 mg/day thereafter. For men, it is 8.0 mg/day for ages 19 and up. RDAs are higher than EARs so as to identify amounts that will cover people with higher-than-average requirements. The RDA for pregnancy is 27 mg/day and, for lactation, 9 mg/day. For children ages 1–3 years it is 7 mg/day, 10 mg/day for ages 4–8 and 8 mg/day for ages 9–13. As for safety, the IOM also sets Tolerable Upper Intake Levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of iron, the UL is set at 45 mg/day. Collectively the EARs, RDAs and ULs are referred to as Dietary Reference Intakes. The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For women, the PRI is 13 mg/day for ages 15–17 years, 16 mg/day for women ages 18 and up who are premenopausal and 11 mg/day postmenopausal. For pregnancy and lactation, it is 16 mg/day. For men the PRI is 11 mg/day for ages 15 and older. For children ages 1 to 14, the PRI increases from 7 to 11 mg/day. The PRIs are higher than the U.S. RDAs, with the exception of pregnancy. The EFSA reviewed the same safety question but did not establish a UL. Infants may require iron supplements if they are bottle-fed cow's milk. Frequent blood donors are at risk of low iron levels and are often advised to supplement their iron intake. For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For iron labeling purposes, 100% of the Daily Value was 18 mg, and it remains 18 mg. A table of the old and new adult daily values is provided at Reference Daily Intake. Deficiency Iron deficiency is the most common nutritional deficiency in the world.
When loss of iron is not adequately compensated by sufficient dietary iron intake, a state of latent iron deficiency occurs, which over time leads to iron-deficiency anemia if left untreated, a condition characterised by an insufficient number of red blood cells and an insufficient amount of hemoglobin. Children, pre-menopausal women (women of child-bearing age), and people with poor diet are most susceptible to the disease. Most cases of iron-deficiency anemia are mild, but if not treated it can cause problems like a fast or irregular heartbeat, complications during pregnancy, and delayed growth in infants and children. The brain is resistant to acute iron deficiency due to the slow transport of iron through the blood–brain barrier. Acute fluctuations in iron status (marked by serum ferritin levels) do not reflect brain iron status, but prolonged nutritional iron deficiency is suspected to reduce brain iron concentrations over time. In the brain, iron plays a role in oxygen transport, myelin synthesis, mitochondrial respiration, and as a cofactor for neurotransmitter synthesis and metabolism. Animal models of nutritional iron deficiency report biomolecular changes resembling those seen in Parkinson's and Huntington's disease. However, age-related accumulation of iron in the brain has also been linked to the development of Parkinson's. Excess Iron uptake is tightly regulated by the human body, which has no regulated physiological means of excreting iron. Only small amounts of iron are lost daily due to mucosal and skin epithelial cell sloughing, so control of iron levels is primarily accomplished by regulating uptake. Regulation of iron uptake is impaired in some people as a result of a genetic defect that maps to the HLA-H gene region on chromosome 6 and leads to abnormally low levels of hepcidin, a key regulator of the entry of iron into the circulatory system in mammals. In these people, excessive iron intake can result in iron overload disorders, known medically as hemochromatosis. Many people have an undiagnosed genetic susceptibility to iron overload, and are not aware of a family history of the problem. For this reason, people should not take iron supplements unless they suffer from iron deficiency and have consulted a doctor. Hemochromatosis is estimated to be the cause of 0.3–0.8% of all metabolic diseases of Caucasians. Overdoses of ingested iron can cause excessive levels of free iron in the blood. High blood levels of free ferrous iron react with peroxides to produce highly reactive free radicals that can damage DNA, proteins, lipids, and other cellular components. Iron toxicity occurs when the cell contains free iron, which generally occurs when iron levels exceed the availability of transferrin to bind the iron. Damage to the cells of the gastrointestinal tract can also prevent them from regulating iron absorption, leading to further increases in blood levels. Iron typically damages cells in the heart, liver and elsewhere, causing adverse effects that include coma, metabolic acidosis, shock, liver failure, coagulopathy, long-term organ damage, and even death. Humans experience iron toxicity when the iron exceeds 20 milligrams for every kilogram of body mass; 60 milligrams per kilogram is considered a lethal dose. Overconsumption of iron, often the result of children eating large quantities of ferrous sulfate tablets intended for adult consumption, is one of the most common toxicological causes of death in children under six.
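The thresholds above translate directly into the small tablet counts that make pediatric overdoses so dangerous. A sketch assuming a common formulation (about 65 mg of elemental iron per 325 mg ferrous sulfate tablet — an assumption for illustration, not a figure from the text):

```python
# How few adult iron tablets reach the toxic and lethal thresholds.
toxic_mg_per_kg = 20       # from the text
lethal_mg_per_kg = 60      # from the text
fe_per_tablet_mg = 65      # elemental iron per tablet, assumed formulation

for mass_kg in (10, 15, 70):
    toxic = mass_kg * toxic_mg_per_kg / fe_per_tablet_mg
    lethal = mass_kg * lethal_mg_per_kg / fe_per_tablet_mg
    print(f"{mass_kg} kg body mass: toxicity from ~{toxic:.1f} tablets, "
          f"lethal dose ~{lethal:.1f} tablets")
```

For a 10 kg toddler, roughly three tablets already reach the toxic threshold, which is why adult supplements are a leading cause of childhood poisoning deaths.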
The Dietary Reference Intake (DRI) sets the Tolerable Upper Intake Level (UL) for adults at 45 mg/day. For children under fourteen years old the UL is 40 mg/day. The medical management of iron toxicity is complicated, and can include use of a specific chelating agent called deferoxamine to bind and expel excess iron from the body. ADHD Some research has suggested that low thalamic iron levels may play a role in the pathophysiology of ADHD. Some researchers have found that iron supplementation can be effective, especially in the inattentive subtype of the disorder. Some researchers in the 2000s suggested a link between low levels of iron in the blood and ADHD, but a 2012 study found no such correlation. Cancer The role of iron in cancer defense can be described as a "double-edged sword" because of its pervasive presence in non-pathological processes. People having chemotherapy may develop iron deficiency and anemia, for which intravenous iron therapy is used to restore iron levels. Iron overload, which may occur from high consumption of red meat, may initiate tumor growth and increase susceptibility to cancer onset, particularly for colorectal cancer. Marine systems Iron plays an essential role in marine systems and can act as a limiting nutrient for planktonic activity. Because of this, a large decrease in iron may lead to a decrease in growth rates of phytoplanktonic organisms such as diatoms. Iron can also be oxidized by marine microbes under conditions that are high in iron and low in oxygen. Iron can enter marine systems through adjoining rivers and directly from the atmosphere. Once iron enters the ocean, it can be distributed throughout the water column through ocean mixing and through recycling on the cellular level. In the Arctic, sea ice plays a major role in the storage and distribution of iron in the ocean, depleting oceanic iron as it freezes in the winter and releasing it back into the water when thawing occurs in the summer. The iron cycle can shift iron between aqueous and particulate forms, altering the availability of iron to primary producers. Increased light and warmth increase the amount of iron that is in forms usable by primary producers. See also Economically important iron deposits include: Carajás Mine in the state of Pará, Brazil, is thought to be the largest iron deposit in the world. El Mutún in Bolivia, where 10% of the world's accessible iron ore is located. Hamersley Basin is the largest iron ore deposit in Australia. Kiirunavaara in Sweden, where one of the world's largest deposits of iron ore is located. The Mesabi Iron Range is the chief iron ore mining district in the United States. Iron and steel industry Iron cycle Iron nanoparticle Iron–platinum nanoparticle Iron fertilization – proposed fertilization of oceans to stimulate phytoplankton growth Iron-oxidizing bacteria List of countries by iron production Pelletising – process of creation of iron ore pellets Rustproof iron Steel References Bibliography Further reading H.R. Schubert, History of the British Iron and Steel Industry ... to 1775 AD (Routledge, London, 1957) R.F. Tylecote, History of Metallurgy (Institute of Materials, London 1992). R.F. Tylecote, "Iron in the Industrial Revolution" in J. Day and R.F. Tylecote, The Industrial Revolution in Metals (Institute of Materials 1991), 200–60. External links It's Elemental – Iron Iron at The Periodic Table of Videos (University of Nottingham) Metallurgy for the non-Metallurgist Iron by J. B.
Calvert Building materials Chemical elements with body-centered cubic structure Chemical elements Cubic minerals Dietary minerals Ferromagnetic materials Minerals in space group 225 Minerals in space group 229 Native element minerals Pyrotechnic fuels Transition metals
14,749
https://en.wikipedia.org/wiki/Indium
Indium is a chemical element; it has symbol In and atomic number 49. It is a silvery-white post-transition metal and one of the softest elements. Chemically, indium is similar to gallium and thallium, and its properties are largely intermediate between the two. It was discovered in 1863 by Ferdinand Reich and Hieronymous Theodor Richter by spectroscopic methods and named for the indigo blue line in its spectrum. Indium is a technology-critical element used primarily in the production of flat-panel displays as indium tin oxide (ITO), a transparent and conductive coating applied to glass. Indium is also used in the semiconductor industry, in low-melting-point metal alloys such as solders and soft-metal high-vacuum seals. It is produced exclusively as a by-product during the processing of the ores of other metals, chiefly from sphalerite and other zinc sulfide ores. Indium has no biological role and its compounds are toxic when inhaled or injected into the bloodstream, although they are poorly absorbed following ingestion. Etymology The name comes from the Latin word indicum, meaning violet or indigo. The word indicum means "Indian", as the naturally based dye indigo was originally exported to Europe from India. Properties Physical Indium is a shiny silvery-white, highly ductile post-transition metal with a bright luster. It is so soft (Mohs hardness 1.2) that it can be cut with a knife and leaves a visible line like a pencil when rubbed on paper. It is a member of group 13 on the periodic table and its properties are mostly intermediate between its vertical neighbors gallium and thallium. As with tin, a high-pitched cry is heard when indium is bent – a crackling sound due to crystal twinning. Like gallium, indium is able to wet glass. Like both, indium has a low melting point, 156.60 °C (313.88 °F); higher than its lighter homologue gallium, but lower than its heavier homologue thallium, and lower than tin. The boiling point is 2072 °C (3762 °F), higher than that of thallium, but lower than that of gallium, conversely to the general trend of melting points, but similarly to the trends down the other post-transition metal groups because of the weakness of the metallic bonding with few electrons delocalized. The density of indium, 7.31 g/cm3, is also greater than that of gallium, but lower than that of thallium. Below the critical temperature, 3.41 K, indium becomes a superconductor. Indium crystallizes in the body-centered tetragonal crystal system in the space group I4/mmm (lattice parameters: a = 325 pm, c = 495 pm): this is a slightly distorted face-centered cubic structure, where each indium atom has four neighbours at 324 pm distance and eight neighbours slightly further away (336 pm). Indium has greater solubility in liquid mercury than any other metal (more than 50 mass percent of indium at 0 °C). Indium displays a ductile viscoplastic response, found to be size-independent in tension and compression. However, it does have a size effect in bending and indentation, associated with a length-scale of order 50–100 μm, significantly large when compared with other metals. Chemical Indium has 49 electrons, with an electronic configuration of [Kr]4d105s25p1. In compounds, indium most commonly donates the three outermost electrons to become indium(III), In3+. In some cases, the pair of 5s electrons is not donated, resulting in indium(I), In+. The stabilization of the monovalent state is attributed to the inert pair effect, in which relativistic effects stabilize the 5s orbital, an effect observed in heavier elements.
Thallium (indium's heavier homolog) shows an even stronger effect, so that oxidation to thallium(I) is more probable than to thallium(III), whereas gallium (indium's lighter homolog) commonly shows only the +3 oxidation state. Thus, although thallium(III) is a moderately strong oxidizing agent, indium(III) is not, and many indium(I) compounds are powerful reducing agents. While the energy required to include the s-electrons in chemical bonding is lowest for indium among the group 13 metals, bond energies decrease down the group, so that by indium, the energy released in forming two additional bonds and attaining the +3 state is not always enough to outweigh the energy needed to involve the 5s-electrons. Indium(I) oxide and hydroxide are more basic and indium(III) oxide and hydroxide are more acidic. A number of standard electrode potentials, depending on the reaction under study, are reported for indium, reflecting the decreased stability of the +3 oxidation state: {| |- | In2+ + e−|| ⇌ In+ || E0 = −0.40 V |- | In3+ + e−|| ⇌ In2+ || E0 = −0.49 V |- | In3+ + 2 e−|| ⇌ In+ || E0 = −0.443 V |- | In3+ + 3 e−|| ⇌ In || E0 = −0.3382 V |- | In+ + e−|| ⇌ In || E0 = −0.14 V |} Indium metal does not react with water, but it is oxidized by stronger oxidizing agents such as halogens to give indium(III) compounds. It does not form a boride, silicide, or carbide, and the hydride InH3 has at best a transitory existence in ethereal solutions at low temperatures, being unstable enough to spontaneously polymerize without coordination. Indium is rather basic in aqueous solution, showing only slight amphoteric characteristics, and unlike its lighter homologs aluminium and gallium, it is insoluble in aqueous alkaline solutions. Isotopes Indium has 39 known isotopes, ranging in mass number from 97 to 135. Only two isotopes occur naturally as primordial nuclides: indium-113, the only stable isotope, and indium-115, which has a half-life of 4.41 × 1014 years, four orders of magnitude greater than the age of the Universe and nearly 30,000 times greater than the half-life of thorium-232. The half-life of 115In is very long because the beta decay to 115Sn is spin-forbidden. Indium-115 makes up 95.7% of all indium. Indium is one of three known elements (the others being tellurium and rhenium) of which the stable isotope is less abundant in nature than the long-lived primordial radioisotopes. The most stable artificial isotope is indium-111, with a half-life of approximately 2.8 days. All other isotopes have half-lives shorter than 5 hours. Indium also has 47 meta states, among which indium-114m1 (half-life about 49.51 days) is the most stable, more stable than the ground state of any indium isotope other than the primordial ones. All decay by isomeric transition. The indium isotopes lighter than 113In predominantly decay through electron capture or positron emission to form cadmium isotopes, while the indium isotopes heavier than 113In predominantly decay through beta-minus decay to form tin isotopes. Compounds Indium(III) Indium(III) oxide, In2O3, forms when indium metal is burned in air or when the hydroxide or nitrate is heated. In2O3 adopts a structure like alumina and is amphoteric, that is, able to react with both acids and bases. It reacts with water to give indium(III) hydroxide, which is also amphoteric, reacting with alkalis to produce indates(III) and with acids to produce indium(III) salts: In(OH)3 + 3 HCl → InCl3 + 3 H2O The analogous sesquichalcogenides with sulfur, selenium, and tellurium are also known.
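The potentials tabulated above also explain why indium(I) is unstable in water: combining two of the listed half-reactions shows that In+ disproportionates spontaneously. A minimal sketch of that calculation, using only values from the table:

```python
# Disproportionation 3 In+ -> 2 In + In3+, built from the table above:
#   cathode (reduction): In+ + e-  -> In        E = -0.14 V
#   anode   (oxidation): In+       -> In3+ + 2e- (reverse of -0.443 V)
F = 96485.0
E_cell = (-0.14) - (-0.443)    # E(cathode) - E(anode) = +0.303 V
n = 2                          # electrons per In3+ formed

dG = -n * F * E_cell           # J per mole of In3+ formed
print(f"E_cell = {E_cell:.3f} V, dG = {dG/1000:.1f} kJ/mol")  # about -58 kJ/mol
```

The positive cell potential (negative ΔG°) means aqueous In+ converts to the metal and In3+, consistent with the rarity of indium(I) compounds noted below.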
Indium forms the expected trihalides. Chlorination, bromination, and iodination of In produce colorless InCl3, InBr3, and yellow InI3. The compounds are Lewis acids, somewhat akin to the better known aluminium trihalides. Again like the related aluminium compound, InF3 is polymeric. Direct reaction of indium with the pnictogens produces the gray or semimetallic III–V semiconductors. Many of them slowly decompose in moist air, necessitating careful storage of semiconductor compounds to prevent contact with the atmosphere. Indium nitride is readily attacked by acids and alkalis. Indium(I) Indium(I) compounds are not common. The chloride, bromide, and iodide are deeply colored, unlike the parent trihalides from which they are prepared. The fluoride is known only as an unstable gas. Indium(I) oxide, a black powder, is produced when indium(III) oxide decomposes upon heating to 700 °C. Other oxidation states Less frequently, indium forms compounds in oxidation state +2 and even fractional oxidation states. Usually such materials feature In–In bonding, most notably in the halides In2X4 and [In2X6]2−, and various subchalcogenides such as In4Se3. Several other compounds are known to combine indium(I) and indium(III), such as InI6(InIIICl6)Cl3, InI5(InIIIBr4)2(InIIIBr6), and InIInIIIBr4. Organoindium compounds Organoindium compounds feature In–C bonds. Most are In(III) derivatives, but cyclopentadienylindium(I) is an exception. It was the first known organoindium(I) compound, and is polymeric, consisting of zigzag chains of alternating indium atoms and cyclopentadienyl complexes. Perhaps the best-known organoindium compound is trimethylindium, In(CH3)3, used to prepare certain semiconducting materials. History In 1863, German chemists Ferdinand Reich and Hieronymus Theodor Richter were testing ores from the mines around Freiberg, Saxony. They dissolved the minerals pyrite, arsenopyrite, galena and sphalerite in hydrochloric acid and distilled raw zinc chloride. Reich, who was color-blind, employed Richter as an assistant for detecting the colored spectral lines. Knowing that ores from that region sometimes contain thallium, they searched for the green thallium emission spectrum lines. Instead, they found a bright blue line. Because that blue line did not match any known element, they hypothesized a new element was present in the minerals. They named the element indium, from the indigo color seen in its spectrum, after the Latin indicum, meaning 'of India'. Richter went on to isolate the metal in 1864. An ingot of the metal was presented at the 1867 World Fair. Reich and Richter later fell out when the latter claimed to be the sole discoverer. Occurrence Indium is created by the long-lasting (up to thousands of years) s-process (slow neutron capture) in low-to-medium-mass stars (which range in mass between 0.6 and 10 solar masses). When a silver-109 atom captures a neutron, it transmutes into silver-110, which then undergoes beta decay to become cadmium-110. Capturing further neutrons, it becomes cadmium-115, which decays to indium-115 by another beta decay. This explains why the radioactive isotope is more abundant than the stable one. The stable indium isotope, indium-113, is one of the p-nuclei, the origin of which is not fully understood; although indium-113 is known to be made directly in the s- and r-processes (rapid neutron capture), and also as the daughter of very long-lived cadmium-113, which has a half-life of about eight quadrillion years, this cannot account for all indium-113.
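Given the half-life quoted in the isotopes section, the survival of primordial indium-115 is easy to quantify. A sketch; the 4.6-billion-year age of the Solar System is a standard value assumed here, not a figure from the text:

```python
# Fraction of primordial In-115 surviving since the Solar System formed,
# using N/N0 = 2**(-t/T) with the half-life quoted above.
T = 4.41e14      # half-life of In-115, years (from the text)
t = 4.6e9        # years since Solar System formation (assumed)

surviving = 2 ** (-t / T)
print(f"surviving fraction = {surviving:.6f}")   # about 0.999993
```

Essentially none of the radioisotope has decayed, which is why the "radioactive" isotope can remain far more abundant than the stable one.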
Indium is the 68th most abundant element in Earth's crust at approximately 50 ppb. This is similar to the crustal abundance of silver, bismuth and mercury. It very rarely forms its own minerals, or occurs in elemental form. Fewer than 10 indium minerals, such as roquesite (CuInS2), are known, and none occur at sufficient concentrations for economic extraction. Instead, indium is usually a trace constituent of more common ore minerals, such as sphalerite and chalcopyrite. From these, it can be extracted as a by-product during smelting. While the enrichment of indium in these deposits is high relative to its crustal abundance, it is insufficient, at current prices, to support extraction of indium as the main product. Different estimates exist of the amounts of indium contained within the ores of other metals. However, these amounts are not extractable without mining of the host materials (see Production and availability). Thus, the availability of indium is fundamentally determined by the rate at which these ores are extracted, and not their absolute amount. This aspect is often overlooked in the current debate, e.g. by the Graedel group at Yale in their criticality assessments, which explains the paradoxically low depletion times some studies cite. Production and availability Indium is produced exclusively as a by-product during the processing of the ores of other metals. Its main source materials are sulfidic zinc ores, where it is mostly hosted by sphalerite. Minor amounts are also extracted from sulfidic copper ores. During the roast-leach-electrowinning process of zinc smelting, indium accumulates in the iron-rich residues. From these, it can be extracted in different ways. It may also be recovered directly from the process solutions. Further purification is done by electrolysis. The exact process varies with the mode of operation of the smelter. Its by-product status means that indium production is constrained by the amount of sulfidic zinc (and copper) ores extracted each year. Therefore, its availability needs to be discussed in terms of supply potential. The supply potential of a by-product is defined as that amount which is economically extractable from its host materials per year under current market conditions (i.e. technology and price). Reserves and resources are not relevant for by-products, since they cannot be extracted independently from the main products. Recent estimates put the supply potential of indium at a minimum of 1,300 t/yr from sulfidic zinc ores and 20 t/yr from sulfidic copper ores. These figures are significantly greater than current production (655 t in 2016). Thus, major future increases in the by-product production of indium will be possible without significant increases in production costs or price. The average indium price in 2016 was US$240/kg, down from US$705/kg in 2014. China is a leading producer of indium (290 tonnes in 2016), followed by South Korea (195 t), Japan (70 t) and Canada (65 t). The Teck Resources refinery in Trail, British Columbia, is a large single-source indium producer, with an output of 32.5 tonnes in 2005, 41.8 tonnes in 2004 and 36.1 tonnes in 2003. The primary consumption of indium worldwide is LCD production. Demand rose rapidly from the late 1990s to 2010 with the popularity of LCD computer monitors and television sets, which now account for 50% of indium consumption. Increased manufacturing efficiency and recycling (especially in Japan) maintain a balance between demand and supply.
According to the UNEP, indium's end-of-life recycling rate is less than 1%. Applications Industrial uses In 1924, indium was found to stabilize non-ferrous metals, and that became the first significant use for the element. The first large-scale application for indium was coating bearings in high-performance aircraft engines during World War II, to protect against damage and corrosion; this is no longer a major use of the element. New uses were found in fusible alloys, solders, and electronics. In the 1950s, tiny beads of indium were used for the emitters and collectors of PNP alloy-junction transistors. In the middle and late 1980s, the development of indium phosphide semiconductors and indium tin oxide thin films for liquid-crystal displays (LCD) aroused much interest. By 1992, the thin-film application had become the largest end use. Indium(III) oxide and indium tin oxide (ITO) are used as a transparent conductive coating on glass substrates in electroluminescent panels. Indium tin oxide is used as a light filter in low-pressure sodium-vapor lamps. The infrared radiation is reflected back into the lamp, which increases the temperature within the tube and improves the performance of the lamp. Indium has many semiconductor-related applications. Some indium compounds, such as indium antimonide and indium phosphide, are semiconductors with useful properties: one precursor is usually trimethylindium (TMI), which is also used as the semiconductor dopant in II–VI compound semiconductors. InAs and InSb are used for low-temperature transistors and InP for high-temperature transistors. The compound semiconductors InGaN and InGaP are used in light-emitting diodes (LEDs) and laser diodes. Indium is used in photovoltaics as the semiconductor copper indium gallium selenide (CIGS), also called CIGS solar cells, a type of second-generation thin-film solar cell. Indium is used in PNP bipolar junction transistors with germanium: when soldered at low temperature, indium does not stress the germanium. Indium wire is used as a vacuum seal and a thermal conductor in cryogenics and ultra-high-vacuum applications, in such manufacturing applications as gaskets that deform to fill gaps. Owing to its great plasticity and adhesion to metals, indium sheets are sometimes used for cold-soldering in microwave circuits and waveguide joints, where direct soldering is complicated. Indium is an ingredient in the gallium–indium–tin alloy galinstan, which is liquid at room temperature and replaces mercury in some thermometers. Other alloys of indium with bismuth, cadmium, lead, and tin, which have higher but still low melting points (between 50 and 100 °C), are used in fire sprinkler systems and heat regulators. Indium is one of many substitutes for mercury in alkaline batteries to prevent the zinc from corroding and releasing hydrogen gas. Indium is added to some dental amalgam alloys to decrease the surface tension of the mercury and allow for less mercury and easier amalgamation. Indium's high neutron-capture cross-section for thermal neutrons makes it suitable for use in control rods for nuclear reactors, typically in an alloy of 80% silver, 15% indium, and 5% cadmium. In nuclear engineering, the (n,n') reactions of 113In and 115In are used to determine magnitudes of neutron fluxes.
In 2009, Professor Mas Subramanian and former graduate student Andrew Smith at Oregon State University discovered that indium can be combined with yttrium and manganese to form an intensely blue, non-toxic, inert, fade-resistant pigment, YInMn blue, the first new inorganic blue pigment discovered in 200 years. Medical applications Radioactive indium-111 (in very small amounts) is used in nuclear medicine tests, as a radiotracer to follow the movement of labeled proteins and white blood cells to diagnose different types of infection. Indium compounds are mostly not absorbed upon ingestion and are only moderately absorbed on inhalation; they tend to be stored temporarily in the muscles, skin, and bones before being excreted, and the biological half-life of indium is about two weeks in humans. It is also tagged to somatostatin analogues like octreotide to find somatostatin receptors in neuroendocrine tumors. Biological role and precautions Indium has no metabolic role in any organism. In a similar way to aluminium salts, indium(III) ions can be toxic to the kidney when given by injection. Indium tin oxide and indium phosphide harm the pulmonary and immune systems, predominantly through ionic indium, though hydrated indium oxide is more than forty times as toxic when injected, measured by the quantity of indium introduced. People can be exposed to indium in the workplace by inhalation, ingestion, skin contact, and eye contact. Indium lung is a lung disease characterized by pulmonary alveolar proteinosis and pulmonary fibrosis, first described by Japanese researchers in 2003. Since then, 10 cases have been described, though more than 100 indium workers had documented respiratory abnormalities. The National Institute for Occupational Safety and Health has set a recommended exposure limit (REL) of 0.1 mg/m3 over an eight-hour workday.
Indium
[ "Physics" ]
4,588
[ "Chemical elements", "Atoms", "Matter" ]
14,750
https://en.wikipedia.org/wiki/Iodine
Iodine is a chemical element; it has symbol I and atomic number 53. The heaviest of the stable halogens, it exists at standard conditions as a semi-lustrous, non-metallic solid that melts to form a deep violet liquid at 114 °C, and boils to a violet gas at 184 °C. The element was discovered by the French chemist Bernard Courtois in 1811 and was named two years later by Joseph Louis Gay-Lussac, after the Ancient Greek ἰώδης (iōdēs), meaning 'violet'. Iodine occurs in many oxidation states, including iodide (I−), iodate (IO3−), and the various periodate anions. As the heaviest essential mineral nutrient, iodine is required for the synthesis of thyroid hormones. Iodine deficiency affects about two billion people and is the leading preventable cause of intellectual disabilities. The dominant producers of iodine today are Chile and Japan. Due to its high atomic number and ease of attachment to organic compounds, it has also found favour as a non-toxic radiocontrast material. Because of the specificity of its uptake by the human body, radioactive isotopes of iodine can also be used to treat thyroid cancer. Iodine is also used as a catalyst in the industrial production of acetic acid and some polymers. It is on the World Health Organization's List of Essential Medicines. History In 1811, iodine was discovered by French chemist Bernard Courtois, who was born to a family of manufacturers of saltpetre (an essential component of gunpowder). At the time of the Napoleonic Wars, saltpetre was in great demand in France. Saltpetre produced from French nitre beds required sodium carbonate, which could be isolated from seaweed collected on the coasts of Normandy and Brittany. To isolate the sodium carbonate, seaweed was burned and the ash washed with water. The remaining waste was destroyed by adding sulfuric acid. Courtois once added excessive sulfuric acid and a cloud of violet vapour rose. He noted that the vapour crystallised on cold surfaces, making dark black crystals. Courtois suspected that this material was a new element but lacked funding to pursue it further. Courtois gave samples to his friends, Charles Bernard Desormes (1777–1838) and Nicolas Clément (1779–1841), to continue research. He also gave some of the substance to chemist Joseph Louis Gay-Lussac (1778–1850), and to physicist André-Marie Ampère (1775–1836). On 29 November 1813, Desormes and Clément made Courtois' discovery public by describing the substance to a meeting of the Imperial Institute of France. On 6 December 1813, Gay-Lussac announced that the new substance was either an element or a compound of oxygen. Gay-Lussac suggested the name "iode" (anglicised as "iodine"), from the Ancient Greek ἰώδης ('violet'), because of the colour of iodine vapour. Ampère had given some of his sample to British chemist Humphry Davy (1778–1829), who experimented on the substance and noted its similarity to chlorine. Davy sent a letter dated 10 December to the Royal Society of London stating that he had identified a new element called iodine. Arguments erupted between Davy and Gay-Lussac over who identified iodine first, but both scientists acknowledged that Courtois was the first to isolate the element. In 1873, the French medical researcher Casimir Davaine (1812–1882) discovered the antiseptic action of iodine. Antonio Grossich (1849–1926), an Istrian-born surgeon, was among the first to use sterilisation of the operative field.
In 1908, he introduced tincture of iodine as a way to rapidly sterilise the human skin in the surgical field. In early periodic tables, iodine was often given the symbol J, for Jod, its name in German; in German texts, J is still frequently used in place of I. Properties Iodine is the fourth halogen, being a member of group 17 in the periodic table, below fluorine, chlorine, and bromine; since astatine and tennessine are radioactive, iodine is the heaviest stable halogen. Iodine has an electron configuration of [Kr]4d105s25p5, with the seven electrons in the fifth and outermost shell being its valence electrons. Like the other halogens, it is one electron short of a full octet and is hence an oxidising agent, reacting with many elements in order to complete its outer shell, although in keeping with periodic trends, it is the weakest oxidising agent among the stable halogens: it has the lowest electronegativity among them, just 2.66 on the Pauling scale (compare fluorine, chlorine, and bromine at 3.98, 3.16, and 2.96 respectively; astatine continues the trend with an electronegativity of 2.2). Elemental iodine hence forms diatomic molecules with chemical formula I2, where two iodine atoms share a pair of electrons in order to each achieve a stable octet for themselves; at high temperatures, these diatomic molecules reversibly dissociate into a pair of iodine atoms. Similarly, the iodide anion, I−, is the strongest reducing agent among the stable halogens, being the most easily oxidised back to diatomic I2. (Astatine goes further, being indeed unstable as At− and readily oxidised to At0 or At+.) The halogens darken in colour as the group is descended: fluorine is a very pale yellow, chlorine is greenish-yellow, bromine is reddish-brown, and iodine is violet. Elemental iodine is slightly soluble in water, with one gram dissolving in 3450 mL at 20 °C and 1280 mL at 50 °C; potassium iodide may be added to increase solubility via formation of triiodide ions, among other polyiodides. Nonpolar solvents such as hexane and carbon tetrachloride provide a higher solubility. Polar solutions, such as aqueous solutions, are brown, reflecting the role of these solvents as Lewis bases; on the other hand, nonpolar solutions are violet, the color of iodine vapour. Charge-transfer complexes form when iodine is dissolved in polar solvents, hence changing the colour. Iodine is violet when dissolved in carbon tetrachloride and saturated hydrocarbons but deep brown in alcohols and amines, solvents that form charge-transfer adducts. The melting and boiling points of iodine are the highest among the halogens, conforming to the increasing trend down the group, since iodine has the largest electron cloud among them that is the most easily polarised, resulting in its molecules having the strongest Van der Waals interactions among the halogens. Similarly, iodine is the least volatile of the halogens, though the solid still can be observed to give off purple vapour. Due to this property iodine is commonly used to demonstrate sublimation directly from solid to gas, which gives rise to a misconception that it does not melt at atmospheric pressure. Because it has the largest atomic radius among the halogens, iodine has the lowest first ionisation energy, lowest electron affinity, lowest electronegativity and lowest reactivity of the halogens. The I–I bond in diiodine is the weakest of all the halogen–halogen bonds. As such, 1% of a sample of gaseous iodine at atmospheric pressure is dissociated into iodine atoms at 575 °C.
Temperatures greater than 750 °C are required for fluorine, chlorine, and bromine to dissociate to a similar extent. Most bonds to iodine are weaker than the analogous bonds to the lighter halogens. Gaseous iodine is composed of I2 molecules with an I–I bond length of 266.6 pm. The I–I bond is one of the longest single bonds known. It is even longer (271.5 pm) in solid orthorhombic crystalline iodine, which has the same crystal structure as chlorine and bromine. (The record is held by iodine's neighbour xenon: the Xe–Xe bond length is 308.71 pm.) As such, within the iodine molecule, significant electronic interactions occur with the two next-nearest neighbours of each atom, and these interactions give rise, in bulk iodine, to a shiny appearance and semiconducting properties. Iodine is a two-dimensional semiconductor with a band gap of 1.3 eV (125 kJ/mol): it is a semiconductor in the plane of its crystalline layers and an insulator in the perpendicular direction. Isotopes Of the forty known isotopes of iodine, only one occurs in nature, iodine-127. The others are radioactive and have half-lives too short to be primordial. As such, iodine is both monoisotopic and mononuclidic and its atomic weight is known to great precision, as it is a constant of nature. The longest-lived of the radioactive isotopes of iodine is iodine-129, which has a half-life of 15.7 million years, decaying via beta decay to stable xenon-129. Some iodine-129 was formed along with iodine-127 before the formation of the Solar System, but it has by now completely decayed away, making it an extinct radionuclide. Its former presence may be determined from an excess of its daughter xenon-129, but early attempts to use this characteristic to date the supernova source for elements in the Solar System are made difficult by alternative nuclear processes giving iodine-129 and by iodine's volatility at higher temperatures. Due to its mobility in the environment iodine-129 has been used to date very old groundwaters. Traces of iodine-129 still exist today, as it is also a cosmogenic nuclide, formed from cosmic ray spallation of atmospheric xenon: these traces make up 10−14 to 10−10 of all terrestrial iodine. It also occurs from open-air nuclear testing, and is not hazardous because of its very long half-life, the longest of all fission products. At the peak of thermonuclear testing in the 1960s and 1970s, iodine-129 still made up only about 10−7 of all terrestrial iodine. Excited states of iodine-127 and iodine-129 are often used in Mössbauer spectroscopy. The other iodine radioisotopes have much shorter half-lives, no longer than days. Some of them have medical applications involving the thyroid gland, where the iodine that enters the body is stored and concentrated. Iodine-123 has a half-life of thirteen hours and decays by electron capture to tellurium-123, emitting gamma radiation; it is used in nuclear medicine imaging, including single photon emission computed tomography (SPECT) and X-ray computed tomography (X-Ray CT) scans. Iodine-125 has a half-life of fifty-nine days, decaying by electron capture to tellurium-125 and emitting low-energy gamma radiation; the second-longest-lived iodine radioisotope, it has uses in biological assays, nuclear medicine imaging and in radiation therapy as brachytherapy to treat a number of conditions, including prostate cancer, uveal melanomas, and brain tumours. 
Finally, iodine-131, with a half-life of eight days, beta decays to an excited state of stable xenon-131 that then converts to the ground state by emitting gamma radiation. It is a common fission product and thus is present in high levels in radioactive fallout. It may then be absorbed through contaminated food, and will also accumulate in the thyroid. As it decays, it may cause damage to the thyroid. The primary risk from exposure to high levels of iodine-131 is the chance occurrence of radiogenic thyroid cancer in later life. Other risks include the possibility of non-cancerous growths and thyroiditis. The usual protection against the negative effects of iodine-131 is to saturate the thyroid gland with stable iodine-127 in the form of potassium iodide tablets, taken daily for optimal prophylaxis. However, iodine-131 may also be used for medicinal purposes in radiation therapy for this very reason, when tissue destruction is desired after iodine uptake by the tissue. Iodine-131 is also used as a radioactive tracer. Chemistry and compounds Iodine is quite reactive, but it is less so than the lighter halogens, and it is a weaker oxidant. For example, it does not halogenate carbon monoxide, nitric oxide, and sulfur dioxide, which chlorine does. Many metals react with iodine. By the same token, however, since iodine has the lowest ionisation energy among the halogens and is the most easily oxidised of them, it has a more significant cationic chemistry and its higher oxidation states are rather more stable than those of bromine and chlorine, for example in iodine heptafluoride. Charge-transfer complexes The iodine molecule, I2, dissolves in CCl4 and aliphatic hydrocarbons to give bright violet solutions. In these solvents the absorption band maximum occurs in the 520–540 nm region and is assigned to a π* to σ* transition. When I2 reacts with Lewis bases in these solvents a blue shift in the I2 peak is seen and a new peak (230–330 nm) arises that is due to the formation of adducts, which are referred to as charge-transfer complexes. Hydrogen iodide The simplest compound of iodine is hydrogen iodide, HI. It is a colourless gas that reacts with oxygen to give water and iodine. Although it is useful in iodination reactions in the laboratory, it does not have large-scale industrial uses, unlike the other hydrogen halides. Commercially, it is usually made by reacting iodine with hydrogen sulfide or hydrazine: 2 I2 + N2H4 → 4 HI + N2 At room temperature, it is a colourless gas, like all of the hydrogen halides except hydrogen fluoride, since hydrogen cannot form strong hydrogen bonds to the large and only mildly electronegative iodine atom. It melts at −50.8 °C and boils at −35.4 °C. It is an endothermic compound that can exothermically dissociate at room temperature, although the process is very slow unless a catalyst is present: the reaction between hydrogen and iodine at room temperature to give hydrogen iodide does not proceed to completion. The H–I bond dissociation energy is likewise the smallest of the hydrogen halides, at 295 kJ/mol. Aqueous hydrogen iodide is known as hydroiodic acid, which is a strong acid. Hydrogen iodide is exceptionally soluble in water: one litre of water will dissolve 425 litres of hydrogen iodide, and the saturated solution has only four water molecules per molecule of hydrogen iodide. Commercial so-called "concentrated" hydroiodic acid usually contains 48–57% HI by mass; the solution forms an azeotrope with boiling point 127 °C at 56.7 g HI per 100 g solution.
Hence hydroiodic acid cannot be concentrated past this point by evaporation of water. Unlike gaseous hydrogen iodide, hydroiodic acid has major industrial use in the manufacture of acetic acid by the Cativa process. Other binary iodine compounds With the exception of the noble gases, nearly all elements on the periodic table up to einsteinium (EsI3 is known) are known to form binary compounds with iodine. Until 1990, nitrogen triiodide was only known as an ammonia adduct. Ammonia-free NI3 was found to be isolable at −196 °C but spontaneously decomposes at 0 °C. For thermodynamic reasons related to electronegativity of the elements, neutral sulfur and selenium iodides that are stable at room temperature are also nonexistent, although S2I2 and SI2 are stable up to 183 and 9 K, respectively. As of 2022, no neutral binary selenium iodide has been unambiguously identified (at any temperature). Sulfur- and selenium-iodine polyatomic cations (e.g., [S2I4]2+[AsF6−]2 and [Se2I4]2+[Sb2F11−]2) have been prepared and characterised crystallographically. Given the large size of the iodide anion and iodine's weak oxidising power, high oxidation states are difficult to achieve in binary iodides, the maximum known being in the pentaiodides of niobium, tantalum, and protactinium. Iodides can be made by reaction of an element or its oxide, hydroxide, or carbonate with hydroiodic acid, and then dehydrated by mildly high temperatures combined with either low pressure or anhydrous hydrogen iodide gas. These methods work best when the iodide product is stable to hydrolysis. Other syntheses include high-temperature oxidative iodination of the element with iodine or hydrogen iodide, high-temperature iodination of a metal oxide or other halide by iodine, a volatile metal halide, carbon tetraiodide, or an organic iodide. For example, molybdenum(IV) oxide reacts with aluminium(III) iodide at 230 °C to give molybdenum(II) iodide. An example involving halogen exchange is given below, involving the reaction of tantalum(V) chloride with excess aluminium(III) iodide at 400 °C to give tantalum(V) iodide: 3 TaCl5 + 5 AlI3 (excess) → 3 TaI5 + 5 AlCl3 Lower iodides may be produced either through thermal decomposition or disproportionation, or by reducing the higher iodide with hydrogen or a metal, for example: TaI5 + Ta → Ta6I14 (in a thermal gradient, 630 °C → 575 °C) Most metal iodides with the metal in low oxidation states (+1 to +3) are ionic. Nonmetals tend to form covalent molecular iodides, as do metals in high oxidation states from +3 and above. Both ionic and covalent iodides are known for metals in oxidation state +3 (e.g. scandium iodide is mostly ionic, but aluminium iodide is not). Ionic iodides MIn tend to have the lowest melting and boiling points among the halides MXn of the same element, because the electrostatic forces of attraction between the cations and anions are weakest for the large iodide anion. In contrast, covalent iodides tend to instead have the highest melting and boiling points among the halides of the same element, since iodine is the most polarisable of the halogens and, having the most electrons among them, can contribute the most to van der Waals forces. Naturally, exceptions abound in intermediate iodides where one trend gives way to the other. Similarly, solubilities in water of predominantly ionic iodides (e.g. potassium and calcium) are the greatest among ionic halides of that element, while those of covalent iodides (e.g.
silver) are the lowest of that element. In particular, silver iodide is very insoluble in water and its formation is often used as a qualitative test for iodine. Iodine halides The halogens form many binary, diamagnetic interhalogen compounds with stoichiometries XY, XY3, XY5, and XY7 (where X is heavier than Y), and iodine is no exception. Iodine forms all three possible diatomic interhalogens, a trifluoride and trichloride, as well as a pentafluoride and, exceptionally among the halogens, a heptafluoride. Numerous cationic and anionic derivatives are also characterised, such as the wine-red or bright orange compounds of ICl2+ and the dark brown or purplish black compounds of I2Cl+. Apart from these, some pseudohalides are also known, such as cyanogen iodide (ICN), iodine thiocyanate (ISCN), and iodine azide (IN3). Iodine monofluoride (IF) is unstable at room temperature and disproportionates very readily and irreversibly to iodine and iodine pentafluoride, and thus cannot be obtained pure. It can be synthesised from the reaction of iodine with fluorine gas in trichlorofluoromethane at −45 °C, with iodine trifluoride in trichlorofluoromethane at −78 °C, or with silver(I) fluoride at 0 °C. Iodine monochloride (ICl) and iodine monobromide (IBr), on the other hand, are moderately stable. The former, a volatile red-brown compound, was discovered independently by Joseph Louis Gay-Lussac and Humphry Davy in 1813–1814 not long after the discoveries of chlorine and iodine, and it mimics the intermediate halogen bromine so well that Justus von Liebig was misled into mistaking bromine (which he had found) for iodine monochloride. Iodine monochloride and iodine monobromide may be prepared simply by reacting iodine with chlorine or bromine at room temperature and purified by fractional crystallisation. Both are quite reactive and attack even platinum and gold, though not boron, carbon, cadmium, lead, zirconium, niobium, molybdenum, and tungsten. Their reaction with organic compounds depends on conditions. Iodine chloride vapour tends to chlorinate phenol and salicylic acid, since when iodine chloride undergoes homolytic fission, chlorine and iodine are produced and the former is more reactive. However, iodine chloride in carbon tetrachloride solution results in iodination being the main reaction, since now heterolytic fission of the I–Cl bond occurs and I+ attacks phenol as an electrophile. However, iodine monobromide tends to brominate phenol even in carbon tetrachloride solution because it tends to dissociate into its elements in solution, and bromine is more reactive than iodine. When liquid, iodine monochloride and iodine monobromide dissociate into I2X+ and IX2− ions (X = Cl, Br); thus they are significant conductors of electricity and can be used as ionising solvents. Iodine trifluoride (IF3) is an unstable yellow solid that decomposes above −28 °C. It is thus little-known. It is difficult to produce because fluorine gas would tend to oxidise iodine all the way to the pentafluoride; reaction at low temperature with xenon difluoride is necessary. Iodine trichloride, which exists in the solid state as the planar dimer I2Cl6, is a bright yellow solid, synthesised by reacting iodine with liquid chlorine at −80 °C; caution is necessary during purification because it easily dissociates to iodine monochloride and chlorine and hence can act as a strong chlorinating agent. Liquid iodine trichloride conducts electricity, possibly indicating dissociation to ICl2+ and ICl4− ions.
Iodine pentafluoride (IF5), a colourless, volatile liquid, is the most thermodynamically stable iodine fluoride, and can be made by reacting iodine with fluorine gas at room temperature. It is a fluorinating agent, but is mild enough to store in glass apparatus. Again, slight electrical conductivity is present in the liquid state because of dissociation to IF4+ and IF6−. The pentagonal bipyramidal iodine heptafluoride (IF7) is an extremely powerful fluorinating agent, behind only chlorine trifluoride, chlorine pentafluoride, and bromine pentafluoride among the interhalogens: it reacts with almost all the elements even at low temperatures, fluorinates Pyrex glass to form iodine(VII) oxyfluoride (IOF5), and sets carbon monoxide on fire. Iodine oxides and oxoacids Iodine oxides are the most stable of all the halogen oxides, because of the strong I–O bonds resulting from the large electronegativity difference between iodine and oxygen, and they have been known for the longest time. The stable, white, hygroscopic iodine pentoxide (I2O5) has been known since its formation in 1813 by Gay-Lussac and Davy. It is most easily made by the dehydration of iodic acid (HIO3), of which it is the anhydride. It will quickly oxidise carbon monoxide completely to carbon dioxide at room temperature, and is thus a useful reagent in determining carbon monoxide concentration. It also oxidises nitrogen oxide, ethylene, and hydrogen sulfide. It reacts with sulfur trioxide and peroxydisulfuryl difluoride (S2O6F2) to form salts of the iodyl cation, [IO2]+, and is reduced by concentrated sulfuric acid to iodosyl salts involving [IO]+. It may be fluorinated by fluorine, bromine trifluoride, sulfur tetrafluoride, or chloryl fluoride, resulting in iodine pentafluoride, which also reacts with iodine pentoxide, giving iodine(V) oxyfluoride, IOF3. A few other less stable oxides are known, notably I4O9 and I2O4; their structures have not been determined, but reasonable guesses are iodine(III) iodate, I(IO3)3, and [IO]+[IO3]− respectively. More important are the four oxoacids: hypoiodous acid (HIO), iodous acid (HIO2), iodic acid (HIO3), and periodic acid (HIO4 or H5IO6). When iodine dissolves in aqueous solution, the following reactions occur: I2 + H2O ⇌ HIO + H+ + I− I2 + 2 OH− ⇌ IO− + I− + H2O Hypoiodous acid is unstable to disproportionation. The hypoiodite ions thus formed disproportionate immediately to give iodide and iodate: 3 IO− ⇌ 2 I− + IO3− Iodous acid and iodite are even less stable and exist only as a fleeting intermediate in the oxidation of iodide to iodate, if at all. Iodates are by far the most important of these compounds, which can be made by oxidising alkali metal iodides with oxygen at 600 °C and high pressure, or by oxidising iodine with chlorates. Unlike chlorates, which disproportionate very slowly to form chloride and perchlorate, iodates are stable to disproportionation in both acidic and alkaline solutions. From these, salts of most metals can be obtained. Iodic acid is most easily made by oxidation of an aqueous iodine suspension by electrolysis or fuming nitric acid. Iodate has the weakest oxidising power of the halates, but reacts the quickest. Many periodates are known, including not only the expected tetrahedral IO4−, but also square-pyramidal IO53−, octahedral orthoperiodate IO65−, [IO3(OH)3]2−, [I2O8(OH2)]4−, and I2O94−.
They are usually made by oxidising alkaline sodium iodate electrochemically (with lead(IV) oxide as the anode) or by chlorine gas: IO3− + 6 OH− → IO65− + 3 H2O + 2 e− IO3− + 6 OH− + Cl2 → IO65− + 2 Cl− + 3 H2O They are thermodynamically and kinetically powerful oxidising agents, quickly oxidising Mn2+ to MnO4−, and cleaving glycols, α-diketones, α-ketols, α-aminoalcohols, and α-diamines. Orthoperiodate especially stabilises high oxidation states among metals because of its very high negative charge of −5. Orthoperiodic acid, H5IO6, is stable, and dehydrates at 100 °C in a vacuum to metaperiodic acid, HIO4. Attempting to go further does not result in the nonexistent iodine heptoxide (I2O7), but rather iodine pentoxide and oxygen. Periodic acid may be protonated by sulfuric acid to give the I(OH)6+ cation, isoelectronic to Te(OH)6 and Sb(OH)6−, and giving salts with bisulfate and sulfate. Polyiodine compounds When iodine dissolves in strong acids, such as fuming sulfuric acid, a bright blue paramagnetic solution including I2+ cations is formed. A solid salt of the diiodine cation may be obtained by oxidising iodine with antimony pentafluoride: 2 I2 + 5 SbF5 → 2 I2Sb2F11 + SbF3 The salt I2Sb2F11 is dark blue, and the blue tantalum analogue I2Ta2F11 is also known. Whereas the I–I bond length in I2 is 267 pm, that in I2+ is only 256 pm as the missing electron in the latter has been removed from an antibonding orbital, making the bond stronger and hence shorter. In fluorosulfuric acid solution, deep-blue I2+ reversibly dimerises below −60 °C, forming red rectangular diamagnetic I42+. Other polyiodine cations are not as well-characterised, including bent dark-brown or black I3+ and centrosymmetric C2h green or black I5+, known in the AsF6− and AlCl4− salts among others. The only important polyiodide anion in aqueous solution is linear triiodide, I3−. Its formation explains why the solubility of iodine in water may be increased by the addition of potassium iodide solution: I2 + I− ⇌ I3− (Keq = ~700 at 20 °C) Many other polyiodides may be found when solutions containing iodine and iodide crystallise, such as I5−, I9−, I42−, and I82−, whose salts with large, weakly polarising cations such as Cs+ may be isolated. Organoiodine compounds Organoiodine compounds have been fundamental in the development of organic synthesis, such as in the Hofmann elimination of amines, the Williamson ether synthesis, the Wurtz coupling reaction, and in Grignard reagents. The carbon–iodine bond is a common functional group that forms part of core organic chemistry; formally, these compounds may be thought of as organic derivatives of the iodide anion. The simplest organoiodine compounds, alkyl iodides, may be synthesised by the reaction of alcohols with phosphorus triiodide; these may then be used in nucleophilic substitution reactions, or for preparing Grignard reagents. The C–I bond is the weakest of all the carbon–halogen bonds due to the minuscule difference in electronegativity between carbon (2.55) and iodine (2.66). As such, iodide is the best leaving group among the halogens, to such an extent that many organoiodine compounds turn yellow when stored over time due to decomposition into elemental iodine; as such, they are commonly used in organic synthesis, because of the easy formation and cleavage of the C–I bond. They are also significantly denser than the other organohalogen compounds thanks to the high atomic weight of iodine. A few organic oxidising agents like the iodanes contain iodine in a higher oxidation state than −1, such as 2-iodoxybenzoic acid, a common reagent for the oxidation of alcohols to aldehydes, and iodobenzene dichloride (PhICl2), used for the selective chlorination of alkenes and alkynes.
One of the more well-known uses of organoiodine compounds is the so-called iodoform test, where iodoform (CHI3) is produced by the exhaustive iodination of a methyl ketone (or another compound capable of being oxidised to a methyl ketone), as follows: RCOCH3 + 3 I2 + 4 OH− → RCOO− + CHI3 + 3 I− + 3 H2O Some drawbacks of using organoiodine compounds as compared to organochlorine or organobromine compounds are the greater expense and toxicity of the iodine derivatives, since iodine is expensive and organoiodine compounds are stronger alkylating agents. For example, iodoacetamide and iodoacetic acid denature proteins by irreversibly alkylating cysteine residues and preventing the reformation of disulfide linkages. Halogen exchange to produce iodoalkanes by the Finkelstein reaction is slightly complicated by the fact that iodide is a better leaving group than chloride or bromide. The difference is nevertheless small enough that the reaction can be driven to completion by exploiting the differential solubility of halide salts, or by using a large excess of the halide salt. In the classic Finkelstein reaction, an alkyl chloride or an alkyl bromide is converted to an alkyl iodide by treatment with a solution of sodium iodide in acetone. Sodium iodide is soluble in acetone and sodium chloride and sodium bromide are not. The reaction is driven toward products by mass action due to the precipitation of the insoluble salt. Occurrence and production Iodine is the least abundant of the stable halogens, comprising only 0.46 parts per million of Earth's crustal rocks (compare: fluorine: 544 ppm, chlorine: 126 ppm, bromine: 2.5 ppm), making it the 60th most abundant element. Iodide minerals are rare, and most deposits that are concentrated enough for economical extraction are iodate minerals instead. Examples include lautarite, Ca(IO3)2, and dietzeite, 7Ca(IO3)2·8CaCrO4. These are the minerals that occur as trace impurities in the caliche, found in Chile, whose main product is sodium nitrate. In total, they can contain at least 0.02% and at most 1% iodine by mass. Sodium iodate is extracted from the caliche and reduced to iodide by sodium bisulfite. This solution is then reacted with freshly extracted iodate, resulting in comproportionation to iodine, which may be filtered off. The caliche was the main source of iodine in the 19th century and continues to be important today, replacing kelp (which is no longer an economically viable source), but in the late 20th century brines emerged as a comparable source. The Japanese Minami Kantō gas field east of Tokyo and the American Anadarko Basin gas field in northwest Oklahoma are the two largest such sources. The brine, which comes from the depth of the source, is hotter than 60 °C. The brine is first purified and acidified using sulfuric acid, then the iodide present is oxidised to iodine with chlorine. An iodine solution is produced, but is dilute and must be concentrated. Air is blown into the solution to evaporate the iodine, which is passed into an absorbing tower, where sulfur dioxide reduces the iodine. The hydrogen iodide (HI) is reacted with chlorine to precipitate the iodine. After filtering and purification the iodine is packed. These sources ensure that Chile and Japan are the largest producers of iodine today. Alternatively, the brine may be treated with silver nitrate to precipitate out iodine as silver iodide, which is then decomposed by reaction with iron to form metallic silver and a solution of iron(II) iodide. The iodine is then liberated by displacement with chlorine.
Applications About half of all produced iodine goes into various organoiodine compounds, another 15% remains as the pure element, another 15% is used to form potassium iodide, and another 15% for other inorganic iodine compounds. Among the major uses of iodine compounds are catalysts, animal feed supplements, stabilisers, dyes, colourants and pigments, pharmaceuticals, sanitation (from tincture of iodine), and photography; minor uses include smog inhibition, cloud seeding, and various uses in analytical chemistry. X-ray imaging As an element with high electron density and atomic number, iodine efficiently absorbs X-rays. X-ray radiocontrast agents are the top application for iodine. In this application, organoiodine compounds are injected intravenously. This application is often in conjunction with advanced X-ray techniques such as angiography and CT scanning. At present, all water-soluble radiocontrast agents rely on iodine-containing compounds. Iodine absorbs X-rays with energies less than 33.3 keV due to the photoelectric effect of the innermost electrons. Biocide Use of iodine as a biocide represents a major application of the element, ranked second by weight. Elemental iodine (I2) is used as an antiseptic in medicine. A number of water-soluble compounds, from triiodide (I3−, generated in situ by adding iodide to poorly water-soluble elemental iodine) to various iodophors, slowly decompose to release I2 when applied. Optical polarising films Thin-film-transistor liquid crystal displays rely on polarisation. The liquid crystal transistor is sandwiched between two polarising films and illuminated from behind. The two films prevent light transmission unless the transistor in the middle of the sandwich rotates the light. Iodine-impregnated polymer films are used in polarising optical components with the highest transmission and degree of polarisation. Co-catalyst Another significant use of iodine is as a cocatalyst for the production of acetic acid by the Monsanto and Cativa processes. In these technologies, hydroiodic acid converts the methanol feedstock into methyl iodide, which undergoes carbonylation. Hydrolysis of the resulting acetyl iodide regenerates hydroiodic acid and gives acetic acid. The majority of acetic acid is produced by these approaches. Nutrition Salts of iodide and iodate are used extensively in human and animal nutrition. This application reflects the status of iodide as an essential element, being required for the two thyroid hormones. The production of ethylenediamine dihydroiodide, provided as a nutritional supplement for livestock, consumes a large portion of available iodine. Iodine is a component of iodised salt. A saturated solution of potassium iodide is used to treat acute thyrotoxicosis. It is also used to block uptake of iodine-131 in the thyroid gland (see isotopes section above), when this isotope is used as part of radiopharmaceuticals (such as iobenguane) that are not targeted to the thyroid or thyroid-type tissues. Others Inorganic iodides find specialised uses. Titanium, zirconium, hafnium, and thorium are purified by the Van Arkel–de Boer process, which involves the reversible formation of the tetraiodides of these elements. Silver iodide is a major ingredient in traditional photographic film. Thousands of kilograms of silver iodide are used annually for cloud seeding to induce rain. The organoiodine compound erythrosine is an important food colouring agent. Perfluoroalkyl iodides are precursors to important surfactants, such as perfluorooctanesulfonic acid.
125I is used as the radiolabel in investigating which ligands go to which plant pattern recognition receptors (PRRs). An iodine-based thermochemical cycle has been evaluated for hydrogen production using energy from nuclear power. The cycle has three steps. At about 120 °C, iodine reacts with sulfur dioxide and water to give hydrogen iodide and sulfuric acid: I2 + SO2 + 2 H2O → 2 HI + H2SO4 After a separation stage, at about 830 °C, sulfuric acid splits into sulfur dioxide, water and oxygen: 2 H2SO4 → 2 SO2 + 2 H2O + O2 Hydrogen iodide, at about 320 °C, gives hydrogen and the initial element, iodine: 2 HI → I2 + H2 The yield of the cycle (the ratio between the lower heating value of the produced hydrogen and the energy consumed for its production) is approximately 38%. At present, the cycle is not a competitive means of producing hydrogen. Spectroscopy The spectrum of the iodine molecule, I2, consists of (not exclusively) tens of thousands of sharp spectral lines in the wavelength range 500–700 nm. It is therefore a commonly used wavelength reference (secondary standard). When one of these lines is measured with a Doppler-free spectroscopic technique, the hyperfine structure of the iodine molecule reveals itself. A line is now resolved such that either 15 components (from even rotational quantum numbers, Jeven), or 21 components (from odd rotational quantum numbers, Jodd) are measurable. Caesium iodide and thallium-doped sodium iodide are used in crystal scintillators for the detection of gamma rays. The efficiency is high and energy-dispersive spectroscopy is possible, but the resolution is rather poor. Chemical analysis The iodide and iodate anions can be used for quantitative volumetric analysis, for example in iodometry. Iodine and starch form a blue complex, and this reaction is often used to test for either starch or iodine and as an indicator in iodometry. The iodine test for starch is still used to detect counterfeit banknotes printed on starch-containing paper. The iodine value is the mass of iodine in grams that is consumed by 100 grams of a chemical substance, typically fats or oils. Iodine numbers are often used to determine the amount of unsaturation in fatty acids. This unsaturation is in the form of double bonds, which react with iodine compounds. Potassium tetraiodomercurate(II), K2HgI4, is also known as Nessler's reagent. It was once used as a sensitive spot test for ammonia. Similarly, Mayer's reagent (potassium tetraiodomercurate(II) solution) is used as a precipitating reagent to test for alkaloids. Aqueous alkaline iodine solution is used in the iodoform test for methyl ketones. Biological role Iodine is an essential element for life and, at atomic number Z = 53, is the heaviest element commonly needed by living organisms. (Lanthanum and the other lanthanides, as well as tungsten with Z = 74 and uranium with Z = 92, are used by a few microorganisms.) It is required for the synthesis of the growth-regulating thyroid hormones tetraiodothyronine and triiodothyronine (T4 and T3 respectively, named after their number of iodine atoms). A deficiency of iodine leads to decreased production of T3 and T4 and a concomitant enlargement of the thyroid tissue in an attempt to obtain more iodine, causing the disease goitre. The major form of thyroid hormone in the blood is tetraiodothyronine (T4), which has a longer half-life than triiodothyronine (T3). In humans, the ratio of T4 to T3 released into the blood is between 14:1 and 20:1.
T4 is converted to the active T3 (three to four times more potent than T4) within cells by deiodinases (5'-iodinase). These are further processed by decarboxylation and deiodination to produce iodothyronamine (T1a) and thyronamine (T0a'). All three isoforms of the deiodinases are selenium-containing enzymes; thus dietary selenium is essential for triiodothyronine production. Iodine accounts for 65% of the molecular weight of T4 and 59% of T3. Fifteen to 20 mg of iodine is concentrated in thyroid tissue and hormones, but 70% of all iodine in the body is found in other tissues, including mammary glands, eyes, gastric mucosa, thymus, cerebrospinal fluid, choroid plexus, arteries, the cervix, and salivary glands. During pregnancy, the placenta is able to store and accumulate iodine. In the cells of those tissues, iodine enters directly via the sodium-iodide symporter (NIS). The action of iodine in mammal tissues is related to fetal and neonatal development, but in the other tissues it is only partially known. Dietary recommendations and intake The daily levels of intake recommended by the United States National Academy of Medicine are between 110 and 130 μg for infants up to 12 months, 90 μg for children up to eight years, 130 μg for children up to 13 years, 150 μg for adults, 220 μg for pregnant women and 290 μg for lactating women. The Tolerable Upper Intake Level (TUIL) for adults is 1,100 μg/day. This upper limit was assessed by analysing the effect of supplementation on thyroid-stimulating hormone. The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR; AI and UL are defined the same as in the United States. For women and men ages 18 and older, the PRI for iodine is set at 150 μg/day; the PRI during pregnancy and lactation is 200 μg/day. For children aged 1–17 years, the PRI increases with age from 90 to 130 μg/day. These PRIs are comparable to the U.S. RDAs with the exception of that for lactation. The thyroid gland needs 70 μg/day of iodine to synthesise the requisite daily amounts of T4 and T3. The higher recommended daily allowance levels of iodine seem necessary for optimal function of a number of body systems, including mammary glands, gastric mucosa, salivary glands, brain cells, choroid plexus, thymus, and arteries. Natural food sources of iodine include seafood, such as fish, seaweeds (such as kelp) and shellfish, and other foods, such as dairy products, eggs, meat and vegetables, so long as the animals received enough iodine and the plants were grown on iodine-rich soil. Iodised salt is fortified with potassium iodate, a salt of iodine, potassium and oxygen. As of 2000, the median intake of iodine from food in the United States was 240 to 300 μg/day for men and 190 to 210 μg/day for women. The general US population has adequate iodine nutrition, with lactating women and pregnant women having a mild risk of deficiency. In Japan, consumption was considered much higher, ranging between 5,280 and 13,800 μg/day, owing to the seaweeds wakame and kombu, eaten both directly and as umami extracts for soup stock and in potato chips. However, new studies suggest that Japan's consumption is closer to 1,000–3,000 μg/day. The adult UL in Japan was last revised to 3,000 μg/day in 2015.
After iodine fortification programs such as salt iodisation were introduced, some cases of iodine-induced hyperthyroidism have been observed (the so-called Jod-Basedow phenomenon). The condition occurs mainly in people over 40 years of age, and the risk is higher when iodine deficiency is severe and the initial rise in iodine consumption is high. Deficiency In areas where there is little iodine in the diet, typically remote inland and mountainous areas where no iodine-rich foods are eaten, iodine deficiency gives rise to hypothyroidism, symptoms of which are extreme fatigue, goitre, mental slowing, depression, weight gain, and low basal body temperatures. Iodine deficiency is the leading cause of preventable intellectual disability, a result that occurs primarily when babies or small children are rendered hypothyroidic by a lack of the element. The addition of iodine to salt has largely eliminated this problem in wealthier nations, but iodine deficiency remains a serious public health problem in poorer areas today. Iodine deficiency is also a problem in certain areas of all continents. Information processing, fine motor skills, and visual problem solving are improved by iodine repletion in iodine-deficient people. Precautions Toxicity Elemental iodine (I2) is toxic if taken orally undiluted. The lethal dose for an adult human is 30 mg/kg, which is about 2.1–2.4 grams for a human weighing 70 to 80 kg (although experiments have demonstrated that rats can survive after eating a 14,000 mg/kg dose). Excess iodine is more cytotoxic in the presence of selenium deficiency. Iodine supplementation in selenium-deficient populations is problematic for this reason. The toxicity derives from its oxidising properties, through which it denatures proteins (including enzymes). Elemental iodine is also a skin irritant. Solutions with high elemental iodine concentration, such as tincture of iodine and Lugol's solution, are capable of causing tissue damage if used in prolonged cleaning or antisepsis; similarly, liquid povidone-iodine (Betadine) trapped against the skin has resulted in chemical burns in some reported cases. Occupational exposure The U.S. Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for iodine exposure in the workplace at 0.1 ppm (1 mg/m3) during an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 0.1 ppm (1 mg/m3) during an 8-hour workday. At levels of 2 ppm, iodine is immediately dangerous to life and health. Allergic reactions Some people develop a hypersensitivity to products and foods containing iodine. Applications of tincture of iodine or Betadine can cause rashes, sometimes severe. Parenteral use of iodine-based contrast agents (see above) can cause reactions ranging from a mild rash to fatal anaphylaxis. Such reactions have led to the misconception (widely held, even among physicians) that some people are allergic to iodine itself; even allergies to iodine-rich foods have been so construed. In fact, there has never been a confirmed report of a true iodine allergy, as an allergy to iodine or iodine salts is biologically impossible. Hypersensitivity reactions to products and foods containing iodine are apparently related to their other molecular components; thus, a person who has demonstrated an allergy to one food or product containing iodine may not have an allergic reaction to another.
Patients with various food allergies (fish, shellfish, egg, milk, and so on) do not have an increased risk of hypersensitivity to iodinated contrast media. The patient's allergy history is relevant. US DEA List I status Phosphorus reduces iodine to hydroiodic acid, which is a reagent effective for reducing ephedrine and pseudoephedrine to methamphetamine. For this reason, iodine was designated by the United States Drug Enforcement Administration as a List I precursor chemical under 21 CFR 1310.02.
Iodine
[ "Physics", "Chemistry", "Materials_science" ]
11,179
[ "Chemical elements", "Redox", "Diatomic nonmetals", "Nonmetals", "Oxidizing agents", "Reactive nonmetals", "Atoms", "Matter" ]
14,773
https://en.wikipedia.org/wiki/Information%20theory
Information theory is the mathematical study of the quantification, storage, and communication of information. The field was established and formalized by Claude Shannon in the 1940s, though early contributions were made in the 1920s through the works of Harry Nyquist and Ralph Hartley. It is at the intersection of electronic engineering, mathematics, statistics, computer science, neurobiology, physics, and electrical engineering. A key measure in information theory is entropy. Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a fair coin flip (which has two equally likely outcomes) provides less information (lower entropy, less uncertainty) than identifying the outcome from a roll of a die (which has six equally likely outcomes). Some other important measures in information theory are mutual information, channel capacity, error exponents, and relative entropy. Important sub-fields of information theory include source coding, algorithmic complexity theory, algorithmic information theory and information-theoretic security. Applications of fundamental topics of information theory include source coding/data compression (e.g. for ZIP files), and channel coding/error detection and correction (e.g. for DSL). Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the compact disc, the feasibility of mobile phones and the development of the Internet and artificial intelligence. The theory has also found applications in other areas, including statistical inference, cryptography, neurobiology, perception, signal processing, linguistics, the evolution and function of molecular codes (bioinformatics), thermal physics, molecular dynamics, black holes, quantum computing, information retrieval, intelligence gathering, plagiarism detection, pattern recognition, anomaly detection, the analysis of music, art creation, imaging system design, study of outer space, the dimensionality of space, and epistemology. Overview Information theory studies the transmission, processing, extraction, and utilization of information. Abstractly, information can be thought of as the resolution of uncertainty. In the case of communication of information over a noisy channel, this abstract concept was formalized in 1948 by Claude Shannon in a paper entitled A Mathematical Theory of Communication, in which information is thought of as a set of possible messages, and the goal is to send these messages over a noisy channel, and to have the receiver reconstruct the message with low probability of error, in spite of the channel noise. Shannon's main result, the noisy-channel coding theorem, showed that, in the limit of many channel uses, the rate of information that is asymptotically achievable is equal to the channel capacity, a quantity dependent merely on the statistics of the channel over which the messages are sent. Coding theory is concerned with finding explicit methods, called codes, for increasing the efficiency and reducing the error rate of data communication over noisy channels to near the channel capacity. These codes can be roughly subdivided into data compression (source coding) and error-correction (channel coding) techniques. In the latter case, it took many years to find the methods Shannon's work proved were possible. A third class of information theory codes are cryptographic algorithms (both codes and ciphers). 
Concepts, methods and results from coding theory and information theory are widely used in cryptography and cryptanalysis, such as the unit ban. Historical background The landmark event establishing the discipline of information theory and bringing it to immediate worldwide attention was the publication of Claude E. Shannon's classic paper "A Mathematical Theory of Communication" in the Bell System Technical Journal in July and October 1948. Historian James Gleick rated the paper as the most important development of 1948, noting that the paper was "even more profound and more fundamental" than the transistor. He came to be known as the "father of information theory". Shannon outlined some of his initial ideas of information theory as early as 1939 in a letter to Vannevar Bush. Prior to this paper, limited information-theoretic ideas had been developed at Bell Labs, all implicitly assuming events of equal probability. Harry Nyquist's 1924 paper, Certain Factors Affecting Telegraph Speed, contains a theoretical section quantifying "intelligence" and the "line speed" at which it can be transmitted by a communication system, giving the relation W = K log m (recalling the Boltzmann constant), where W is the speed of transmission of intelligence, m is the number of different voltage levels to choose from at each time step, and K is a constant. Ralph Hartley's 1928 paper, Transmission of Information, uses the word information as a measurable quantity, reflecting the receiver's ability to distinguish one sequence of symbols from any other, thus quantifying information as H = log S^n = n log S, where S was the number of possible symbols, and n the number of symbols in a transmission. The unit of information was therefore the decimal digit, which since has sometimes been called the hartley in his honor as a unit or scale or measure of information. Alan Turing in 1940 used similar ideas as part of the statistical analysis of the breaking of the German Second World War Enigma ciphers. Much of the mathematics behind information theory with events of different probabilities were developed for the field of thermodynamics by Ludwig Boltzmann and J. Willard Gibbs. Connections between information-theoretic entropy and thermodynamic entropy, including the important contributions by Rolf Landauer in the 1960s, are explored in Entropy in thermodynamics and information theory. In Shannon's revolutionary and groundbreaking paper, the work for which had been substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process underlying information theory, opening with the assertion: "The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point." With it came the ideas of: the information entropy and redundancy of a source, and its relevance through the source coding theorem; the mutual information, and the channel capacity of a noisy channel, including the promise of perfect loss-free communication given by the noisy-channel coding theorem; the practical result of the Shannon–Hartley law for the channel capacity of a Gaussian channel; as well as the bit, a new way of seeing the most fundamental unit of information. Quantities of information Information theory is based on probability theory and statistics, where quantified information is usually described in terms of bits.
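Hartley's pre-Shannon measure above can be made concrete with a minimal Python sketch (the function name is illustrative, not from any standard library), computing H = n log10 S in decimal digits (hartleys):

```python
import math

def hartley_information(num_symbols: int, message_length: int) -> float:
    """Hartley's 1928 measure: H = log10(S**n) = n * log10(S) hartleys,
    for a message of n symbols drawn from an alphabet of S symbols."""
    return message_length * math.log10(num_symbols)

# A 10-symbol message over a 26-letter alphabet carries
# 10 * log10(26) ~ 14.15 hartleys (decimal digits) of information.
print(hartley_information(num_symbols=26, message_length=10))
```

Dividing the result by log10(2) converts hartleys to bits, anticipating Shannon's unit introduced below.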
Information theory often concerns itself with measures of information of the distributions associated with random variables. One of the most important measures is called entropy, which forms the building block of many other measures. Entropy allows quantification of the amount of information in a single random variable. Another useful concept is mutual information defined on two random variables, which describes the measure of information in common between those variables, which can be used to describe their correlation. The former quantity is a property of the probability distribution of a random variable and gives a limit on the rate at which data generated by independent samples with the given distribution can be reliably compressed. The latter is a property of the joint distribution of two random variables, and is the maximum rate of reliable communication across a noisy channel in the limit of long block lengths, when the channel statistics are determined by the joint distribution. The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. A common unit of information is the bit or shannon, based on the binary logarithm. Other units include the nat, which is based on the natural logarithm, and the decimal digit, which is based on the common logarithm. In what follows, an expression of the form $p \log p$ is considered by convention to be equal to zero whenever $p = 0$. This is justified because $\lim_{p \to 0^{+}} p \log p = 0$ for any logarithmic base. Entropy of an information source Based on the probability mass function of each source symbol to be communicated, the Shannon entropy $H$, in units of bits (per symbol), is given by $H = -\sum_{i} p_i \log_2 (p_i),$ where $p_i$ is the probability of occurrence of the $i$-th possible value of the source symbol. This equation gives the entropy in the units of "bits" (per symbol) because it uses a logarithm of base 2, and this base-2 measure of entropy has sometimes been called the shannon in his honor. Entropy is also commonly computed using the natural logarithm (base $e$, where $e$ is Euler's number), which produces a measurement of entropy in nats per symbol and sometimes simplifies the analysis by avoiding the need to include extra constants in the formulas. Other bases are also possible, but less commonly used. For example, a logarithm of base $2^8 = 256$ will produce a measurement in bytes per symbol, and a logarithm of base 10 will produce a measurement in decimal digits (or hartleys) per symbol. Intuitively, the entropy $H_X$ of a discrete random variable $X$ is a measure of the amount of uncertainty associated with the value of $X$ when only its distribution is known. The entropy of a source that emits a sequence of $N$ symbols that are independent and identically distributed (iid) is $N \cdot H$ bits (per message of $N$ symbols). If the source data symbols are identically distributed but not independent, the entropy of a message of length $N$ will be less than $N \cdot H$. If one transmits 1000 bits (0s and 1s), and the value of each of these bits is known to the receiver (has a specific value with certainty) ahead of transmission, it is clear that no information is transmitted. If, however, each bit is independently equally likely to be 0 or 1, 1000 shannons of information (more often called bits) have been transmitted. Between these two extremes, information can be quantified as follows. If $\mathbb{X}$ is the set of all messages that $X$ could be, and $p(x)$ is the probability of some $x \in \mathbb{X}$, then the entropy, $H$, of $X$ is defined: $H(X) = \mathbb{E}_{X}[I(x)] = -\sum_{x \in \mathbb{X}} p(x) \log p(x).$ (Here, $I(x)$ is the self-information, which is the entropy contribution of an individual message, and $\mathbb{E}_X$ is the expected value.)
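To make the definition concrete, here is a minimal Python sketch (function and variable names are our own) that computes the Shannon entropy of a discrete distribution; it reproduces the coin-versus-die comparison from the introduction: a fair coin yields 1 bit per outcome, a fair die about 2.58 bits.

```python
import math

def shannon_entropy(probs, base=2):
    """H = -sum(p * log(p)), with the 0 * log(0) = 0 convention."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

fair_coin = [0.5, 0.5]
fair_die = [1 / 6] * 6
print(shannon_entropy(fair_coin))  # 1.0 bit
print(shannon_entropy(fair_die))   # ~2.585 bits (log2 of 6)
```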
A property of entropy is that it is maximized when all the messages in the message space are equiprobable, $p(x) = 1/n$; i.e., most unpredictable, in which case $H(X) = \log n$. The special case of information entropy for a random variable with two outcomes is the binary entropy function, usually taken to the logarithmic base 2, thus having the shannon (Sh) as unit: $H_\mathrm{b}(p) = -p \log_2 p - (1 - p) \log_2 (1 - p).$ Joint entropy The joint entropy of two discrete random variables $X$ and $Y$ is merely the entropy of their pairing: $(X, Y)$. This implies that if $X$ and $Y$ are independent, then their joint entropy is the sum of their individual entropies. For example, if $(X, Y)$ represents the position of a chess piece—$X$ the row and $Y$ the column, then the joint entropy of the row of the piece and the column of the piece will be the entropy of the position of the piece: $H(X, Y) = \mathbb{E}_{X,Y}[-\log p(x, y)] = -\sum_{x, y} p(x, y) \log p(x, y).$ Despite similar notation, joint entropy should not be confused with cross-entropy. Conditional entropy (equivocation) The conditional entropy or conditional uncertainty of $X$ given random variable $Y$ (also called the equivocation of $X$ about $Y$) is the average conditional entropy over $Y$: $H(X|Y) = \mathbb{E}_Y[H(X|y)] = -\sum_{y \in Y} p(y) \sum_{x \in X} p(x|y) \log p(x|y) = -\sum_{x, y} p(x, y) \log p(x|y).$ Because entropy can be conditioned on a random variable or on that random variable being a certain value, care should be taken not to confuse these two definitions of conditional entropy, the former of which is in more common use. A basic property of this form of conditional entropy is that: $H(X|Y) = H(X, Y) - H(Y).$ Mutual information (transinformation) Mutual information measures the amount of information that can be obtained about one random variable by observing another. It is important in communication where it can be used to maximize the amount of information shared between sent and received signals. The mutual information of $X$ relative to $Y$ is given by: $I(X; Y) = \mathbb{E}_{X,Y}[SI(x, y)] = \sum_{x, y} p(x, y) \log \frac{p(x, y)}{p(x)\, p(y)},$ where $SI$ (Specific mutual Information) is the pointwise mutual information. A basic property of the mutual information is that $I(X; Y) = H(X) - H(X|Y).$ That is, knowing Y, we can save an average of $I(X; Y)$ bits in encoding X compared to not knowing Y. Mutual information is symmetric: $I(X; Y) = I(Y; X) = H(X) + H(Y) - H(X, Y).$ Mutual information can be expressed as the average Kullback–Leibler divergence (information gain) between the posterior probability distribution of X given the value of Y and the prior distribution on X: $I(X; Y) = \mathbb{E}_{p(y)}[D_{\mathrm{KL}}(p(X|Y = y) \,\|\, p(X))].$ In other words, this is a measure of how much, on the average, the probability distribution on X will change if we are given the value of Y. This is often recalculated as the divergence from the product of the marginal distributions to the actual joint distribution: $I(X; Y) = D_{\mathrm{KL}}(p(X, Y) \,\|\, p(X)\, p(Y)).$ Mutual information is closely related to the log-likelihood ratio test in the context of contingency tables and the multinomial distribution and to Pearson's χ2 test: mutual information can be considered a statistic for assessing independence between a pair of variables, and has a well-specified asymptotic distribution. Kullback–Leibler divergence (information gain) The Kullback–Leibler divergence (or information divergence, information gain, or relative entropy) is a way of comparing two distributions: a "true" probability distribution $p(X)$, and an arbitrary probability distribution $q(X)$. If we compress data in a manner that assumes $q(X)$ is the distribution underlying some data, when, in reality, $p(X)$ is the correct distribution, the Kullback–Leibler divergence is the number of average additional bits per datum necessary for compression. It is thus defined $D_{\mathrm{KL}}(p(X) \,\|\, q(X)) = \sum_{x \in X} p(x) \log \frac{p(x)}{q(x)}.$ Although it is sometimes used as a 'distance metric', KL divergence is not a true metric since it is not symmetric and does not satisfy the triangle inequality (making it a semi-quasimetric).
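The following Python sketch (our own illustration, using a small made-up joint distribution) computes the quantities defined above from a joint table and checks the identity $I(X; Y) = H(X) + H(Y) - H(X, Y)$ against the KL form $D_{\mathrm{KL}}(p(X, Y) \| p(X) p(Y))$.

```python
import math

def H(probs):
    """Shannon entropy in bits, with 0 * log(0) = 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A small joint distribution p(x, y) over X in {0, 1}, Y in {0, 1}.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

px = {x: sum(p for (xx, _), p in joint.items() if xx == x) for x in (0, 1)}
py = {y: sum(p for (_, yy), p in joint.items() if yy == y) for y in (0, 1)}

H_xy = H(joint.values())
I = H(px.values()) + H(py.values()) - H_xy   # mutual information

# Same value as the KL divergence from p(x)p(y) to p(x, y):
kl = sum(p * math.log2(p / (px[x] * py[y])) for (x, y), p in joint.items())
print(I, kl)  # both ~0.278 bits
```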
Another interpretation of the KL divergence is the "unnecessary surprise" introduced by a prior from the truth: suppose a number X is about to be drawn randomly from a discrete set with probability distribution $p(x)$. If Alice knows the true distribution $p(x)$, while Bob believes (has a prior) that the distribution is $q(x)$, then Bob will be more surprised than Alice, on average, upon seeing the value of X. The KL divergence is the (objective) expected value of Bob's (subjective) surprisal minus Alice's surprisal, measured in bits if the log is in base 2. In this way, the extent to which Bob's prior is "wrong" can be quantified in terms of how "unnecessarily surprised" it is expected to make him. Directed Information Directed information, $I(X^n \to Y^n)$, is an information theory measure that quantifies the information flow from the random process $X^n = \{X_1, X_2, \dots, X_n\}$ to the random process $Y^n = \{Y_1, Y_2, \dots, Y_n\}$. The term directed information was coined by James Massey and is defined as $I(X^n \to Y^n) = \sum_{i=1}^{n} I(X^i; Y_i | Y^{i-1})$, where $I(X^i; Y_i | Y^{i-1})$ is the conditional mutual information $I(X_1, X_2, \dots, X_i; Y_i | Y_1, Y_2, \dots, Y_{i-1})$. In contrast to mutual information, directed information is not symmetric. $I(X^n \to Y^n)$ measures the information bits that are transmitted causally from $X^n$ to $Y^n$. Directed information has many applications in problems where causality plays an important role, such as the capacity of channels with feedback, the capacity of discrete memoryless networks with feedback, gambling with causal side information, compression with causal side information, real-time control communication settings, and statistical physics. Other quantities Other important information theoretic quantities include the Rényi entropy and the Tsallis entropy (generalizations of the concept of entropy), differential entropy (a generalization of quantities of information to continuous distributions), and the conditional mutual information. Also, pragmatic information has been proposed as a measure of how much information has been used in making a decision. Coding theory Coding theory is one of the most important and direct applications of information theory. It can be subdivided into source coding theory and channel coding theory. Using a statistical description for data, information theory quantifies the number of bits needed to describe the data, which is the information entropy of the source. Data compression (source coding): There are two formulations for the compression problem: lossless data compression: the data must be reconstructed exactly; lossy data compression: allocates bits needed to reconstruct the data, within a specified fidelity level measured by a distortion function. This subset of information theory is called rate–distortion theory. Error-correcting codes (channel coding): While data compression removes as much redundancy as possible, an error-correcting code adds just the right kind of redundancy (i.e., error correction) needed to transmit the data efficiently and faithfully across a noisy channel. This division of coding theory into compression and transmission is justified by the information transmission theorems, or source–channel separation theorems, that justify the use of bits as the universal currency for information in many contexts. However, these theorems only hold in the situation where one transmitting user wishes to communicate to one receiving user. In scenarios with more than one transmitter (the multiple-access channel), more than one receiver (the broadcast channel) or intermediary "helpers" (the relay channel), or more general networks, compression followed by transmission may no longer be optimal.
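As a rough empirical illustration of the source-coding limit (a sketch of our own, not from the article; exact numbers vary by compressor and run), the snippet below compares a general-purpose compressor against the entropy bound $H_\mathrm{b}(0.1) \approx 0.469$ bits per symbol for a biased binary source.

```python
import math
import random
import zlib

p = 0.1                      # probability of a 1
n = 100_000
random.seed(0)
data = bytes(1 if random.random() < p else 0 for _ in range(n))

h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)   # entropy bound, bits/symbol
compressed = zlib.compress(data, 9)
rate = 8 * len(compressed) / n                        # achieved bits/symbol

print(f"entropy bound: {h:.3f} bits/symbol")
# Above the bound, but far below the 8 bits per stored byte of the raw encoding:
print(f"zlib achieves: {rate:.3f} bits/symbol")
```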
Source theory Any process that generates successive messages can be considered a source of information. A memoryless source is one in which each message is an independent identically distributed random variable, whereas the properties of ergodicity and stationarity impose less restrictive constraints. All such sources are stochastic. These terms are well studied in their own right outside information theory. Rate Information rate is the average entropy per symbol. For memoryless sources, this is merely the entropy of each symbol, while, in the case of a stationary stochastic process, it is: $r = \lim_{n \to \infty} H(X_n | X_{n-1}, X_{n-2}, \dots, X_1);$ that is, the conditional entropy of a symbol given all the previous symbols generated. For the more general case of a process that is not necessarily stationary, the average rate is: $r = \lim_{n \to \infty} \frac{1}{n} H(X_1, X_2, \dots, X_n);$ that is, the limit of the joint entropy per symbol. For stationary sources, these two expressions give the same result. The information rate is defined as this limit when it exists. It is common in information theory to speak of the "rate" or "entropy" of a language. This is appropriate, for example, when the source of information is English prose. The rate of a source of information is related to its redundancy and how well it can be compressed, the subject of source coding. Channel capacity Communications over a channel is the primary motivation of information theory. However, channels often fail to produce exact reconstruction of a signal; noise, periods of silence, and other forms of signal corruption often degrade quality. Consider the communications process over a discrete channel. In a simple model of the process, X represents the space of messages transmitted, and Y the space of messages received during a unit time over our channel. Let $p(y|x)$ be the conditional probability distribution function of Y given X. We will consider $p(y|x)$ to be an inherent fixed property of our communications channel (representing the nature of the noise of our channel). Then the joint distribution of X and Y is completely determined by our channel and by our choice of $f(x)$, the marginal distribution of messages we choose to send over the channel. Under these constraints, we would like to maximize the rate of information, or the signal, we can communicate over the channel. The appropriate measure for this is the mutual information, and this maximum mutual information is called the channel capacity and is given by: $C = \max_{f} I(X; Y).$ This capacity has the following property related to communicating at information rate R (where R is usually bits per symbol). For any information rate R < C and coding error ε > 0, for large enough N, there exists a code of length N and rate ≥ R and a decoding algorithm, such that the maximal probability of block error is ≤ ε; that is, it is always possible to transmit with arbitrarily small block error. In addition, for any rate R > C, it is impossible to transmit with arbitrarily small block error. Channel coding is concerned with finding such nearly optimal codes that can be used to transmit data over a noisy channel with a small coding error at a rate near the channel capacity. Capacity of particular channel models A continuous-time analog communications channel subject to Gaussian noise—see Shannon–Hartley theorem. A binary symmetric channel (BSC) with crossover probability p is a binary input, binary output channel that flips the input bit with probability p. The BSC has a capacity of $1 - H_\mathrm{b}(p)$ bits per channel use, where $H_\mathrm{b}$ is the binary entropy function to the base-2 logarithm: $H_\mathrm{b}(p) = -p \log_2 p - (1 - p) \log_2 (1 - p).$ A binary erasure channel (BEC) with erasure probability p is a binary input, ternary output channel.
The possible channel outputs are 0, 1, and a third symbol 'e' called an erasure. The erasure represents complete loss of information about an input bit. The capacity of the BEC is $1 - p$ bits per channel use. Channels with memory and directed information In practice many channels have memory. Namely, at time $i$ the channel is given by the conditional probability $P(y_i | x_i, x_{i-1}, \dots, x_1, y_{i-1}, \dots, y_1)$. It is often more convenient to use the notation $x^i = (x_i, x_{i-1}, \dots, x_1)$, so that the channel becomes $P(y_i | x^i, y^{i-1})$. In such a case the capacity is given by the mutual information rate when there is no feedback available and by the directed information rate whether or not there is feedback (if there is no feedback the directed information equals the mutual information). Fungible information Fungible information is the information for which the means of encoding is not important. Classical information theorists and computer scientists are mainly concerned with information of this sort. It is sometimes referred to as speakable information. Applications to other fields Intelligence uses and secrecy applications Information theoretic concepts apply to cryptography and cryptanalysis. Turing's information unit, the ban, was used in the Ultra project, breaking the German Enigma machine code and hastening the end of World War II in Europe. Shannon himself defined an important concept now called the unicity distance. Based on the redundancy of the plaintext, it attempts to give a minimum amount of ciphertext necessary to ensure unique decipherability. Information theory leads us to believe it is much more difficult to keep secrets than it might first appear. A brute force attack can break systems based on asymmetric key algorithms or on most commonly used methods of symmetric key algorithms (sometimes called secret key algorithms), such as block ciphers. The security of all such methods comes from the assumption that no known attack can break them in a practical amount of time. Information theoretic security refers to methods such as the one-time pad that are not vulnerable to such brute force attacks. In such cases, the positive conditional mutual information between the plaintext and ciphertext (conditioned on the key) can ensure proper transmission, while the unconditional mutual information between the plaintext and ciphertext remains zero, resulting in absolutely secure communications. In other words, an eavesdropper would not be able to improve his or her guess of the plaintext by gaining knowledge of the ciphertext but not of the key. However, as in any other cryptographic system, care must be used to correctly apply even information-theoretically secure methods; the Venona project was able to crack the one-time pads of the Soviet Union due to their improper reuse of key material. Pseudorandom number generation Pseudorandom number generators are widely available in computer language libraries and application programs. They are, almost universally, unsuited to cryptographic use as they do not evade the deterministic nature of modern computer equipment and software. A class of improved random number generators is termed cryptographically secure pseudorandom number generators, but even they require random seeds external to the software to work as intended. These can be obtained via extractors, if done carefully. The measure of sufficient randomness in extractors is min-entropy, a value related to Shannon entropy through Rényi entropy; Rényi entropy is also used in evaluating randomness in cryptographic systems. Although related, the distinctions among these measures mean that a random variable with high Shannon entropy is not necessarily satisfactory for use in an extractor and so for cryptography uses.
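The gap between these measures is easy to see numerically. The sketch below (our own illustration) compares Shannon entropy with min-entropy, $H_\infty = -\log_2 \max_x p(x)$, for a distribution that is nearly uniform except for one likely value: Shannon entropy stays high while min-entropy, the quantity relevant to extractors, is small.

```python
import math

def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def min_entropy(probs):
    # H_inf = -log2(max p): governed by the single most likely outcome.
    return -math.log2(max(probs))

# One outcome with probability 1/2, the rest spread over 2**20 outcomes.
n = 2**20
probs = [0.5] + [0.5 / n] * n

print(shannon_entropy(probs))  # ~11.0 bits: looks like plenty of randomness
print(min_entropy(probs))      # 1.0 bit: an adversary guesses right half the time
```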
Seismic exploration One early commercial application of information theory was in the field of seismic oil exploration. Work in this field made it possible to strip off and separate the unwanted noise from the desired seismic signal. Information theory and digital signal processing offer a major improvement of resolution and image clarity over previous analog methods. Semiotics Semioticians Doede Nauta and Winfried Nöth both considered Charles Sanders Peirce as having created a theory of information in his works on semiotics. Nauta defined semiotic information theory as the study of "the internal processes of coding, filtering, and information processing." Concepts from information theory such as redundancy and code control have been used by semioticians such as Umberto Eco and Ferruccio Rossi-Landi to explain ideology as a form of message transmission whereby a dominant social class emits its message by using signs that exhibit a high degree of redundancy such that only one message is decoded among a selection of competing ones. Integrated process organization of neural information Quantitative information theoretic methods have been applied in cognitive science to analyze the integrated process organization of neural information in the context of the binding problem in cognitive neuroscience. In this context, either an information-theoretical measure is defined, as in Gerald Edelman and Giulio Tononi's functional clustering model and dynamic core hypothesis (DCH) or Tononi's integrated information theory (IIT) of consciousness (on the basis of a reentrant process organization, i.e. the synchronization of neurophysiological activity between groups of neuronal populations), or the measure of the minimization of free energy is used, on the basis of statistical methods (Karl J. Friston's free energy principle (FEP), an information-theoretical measure which states that every adaptive change in a self-organized system leads to a minimization of free energy, and the Bayesian brain hypothesis). Miscellaneous applications Information theory also has applications in the search for extraterrestrial intelligence, black holes, bioinformatics, and gambling. See also Algorithmic probability Bayesian inference Communication theory Constructor theory – a generalization of information theory that includes quantum information Formal science Inductive probability Info-metrics Minimum message length Minimum description length Philosophy of information Applications Active networking Cryptanalysis Cryptography Cybernetics Entropy in thermodynamics and information theory Gambling Intelligence (information gathering) Seismic exploration History Hartley, R.V.L. History of information theory Shannon, C.E. Timeline of information theory Yockey, H.P.
Andrey Kolmogorov Theory Coding theory Detection theory Estimation theory Fisher information Information algebra Information asymmetry Information field theory Information geometry Information theory and measure theory Kolmogorov complexity List of unsolved problems in information theory Logic of information Network coding Philosophy of information Quantum information science Source coding Concepts Ban (unit) Channel capacity Communication channel Communication source Conditional entropy Covert channel Data compression Decoder Differential entropy Fungible information Information fluctuation complexity Information entropy Joint entropy Kullback–Leibler divergence Mutual information Pointwise mutual information (PMI) Receiver (information theory) Redundancy Rényi entropy Self-information Unicity distance Variety Hamming distance Perplexity References Further reading The classic work Shannon, C.E. (1948), "A Mathematical Theory of Communication", Bell System Technical Journal, 27, pp. 379–423 & 623–656, July & October, 1948. R.V.L. Hartley, "Transmission of Information", Bell System Technical Journal, July 1928. Andrey Kolmogorov (1968), "Three approaches to the quantitative definition of information", International Journal of Computer Mathematics, 2, pp. 157–168. Other journal articles J. L. Kelly Jr., "A New Interpretation of Information Rate", Bell System Technical Journal, Vol. 35, July 1956, pp. 917–26. R. Landauer, "Information is Physical", Proc. Workshop on Physics and Computation PhysComp'92 (IEEE Comp. Sci. Press, Los Alamitos, 1993), pp. 1–4. Textbooks on information theory Alajaji, F. and Chen, P.N., An Introduction to Single-User Information Theory. Singapore: Springer, 2018. Arndt, C., Information Measures: Information and its Description in Science and Engineering (Springer Series: Signals and Communication Technology), 2004. Gallager, R., Information Theory and Reliable Communication. New York: John Wiley and Sons, 1968. Goldman, S., Information Theory. New York: Prentice Hall, 1953; reprinted New York: Dover, 1968 and 2005. Csiszár, I. and Körner, J., Information Theory: Coding Theorems for Discrete Memoryless Systems. Akadémiai Kiadó, 2nd edition, 1997. MacKay, David J. C., Information Theory, Inference, and Learning Algorithms. Cambridge: Cambridge University Press, 2003. Mansuripur, M., Introduction to Information Theory. New York: Prentice Hall, 1987. McEliece, R., The Theory of Information and Coding. Cambridge, 2002. Pierce, J.R., An Introduction to Information Theory: Symbols, Signals and Noise. Dover, 2nd edition, 1980 (first published 1961). Stone, J.V., Information Theory: A Tutorial Introduction. University of Sheffield, England, 2014. Yeung, R.W., A First Course in Information Theory. Kluwer Academic/Plenum Publishers, 2002. Yeung, R.W., Information Theory and Network Coding. Springer, 2008. Other books Leon Brillouin, Science and Information Theory. Mineola, N.Y.: Dover, [1956, 1962] 2004. A. I. Khinchin, Mathematical Foundations of Information Theory. New York: Dover, 1957. H. S. Leff and A. F. Rex, Editors, Maxwell's Demon: Entropy, Information, Computing. Princeton University Press, Princeton, New Jersey, 1990. Robert K. Logan, What is Information? – Propagating Organization in the Biosphere, the Symbolosphere, the Technosphere and the Econosphere. Toronto: DEMO Publishing. Tom Siegfried, The Bit and the Pendulum. Wiley, 2000. Charles Seife, Decoding the Universe. Viking, 2006.
Jeremy Campbell, Grammatical Man. Touchstone/Simon & Schuster, 1982. Henri Theil, Economics and Information Theory. Rand McNally & Company, Chicago, 1967. Escolano, Suau, Bonev, Information Theory in Computer Vision and Pattern Recognition. Springer, 2009. Vlatko Vedral, Decoding Reality: The Universe as Quantum Information. Oxford University Press, 2010. External links Lambert F. L. (1999), "Shuffled Cards, Messy Desks, and Disorderly Dorm Rooms - Examples of Entropy Increase? Nonsense!", Journal of Chemical Education. IEEE Information Theory Society and ITSOC Monographs, Surveys, and Reviews.
In mathematics, an isomorphism is a structure-preserving mapping (a morphism) between two structures of the same type that can be reversed by an inverse mapping. Two mathematical structures are isomorphic if an isomorphism exists between them. The word is derived from Ancient Greek ἴσος (isos) 'equal' and μορφή (morphē) 'form, shape'. The interest in isomorphisms lies in the fact that two isomorphic objects have the same properties (excluding further information such as additional structure or names of objects). Thus isomorphic structures cannot be distinguished from the point of view of structure only, and may be identified. In mathematical jargon, one says that two objects are the same up to an isomorphism. An automorphism is an isomorphism from a structure to itself. An isomorphism between two structures is a canonical isomorphism (a canonical map that is an isomorphism) if there is only one isomorphism between the two structures (as is the case for solutions of a universal property), or if the isomorphism is much more natural (in some sense) than other isomorphisms. For example, for every prime number $p$, all fields with $p$ elements are canonically isomorphic, with a unique isomorphism. The isomorphism theorems provide canonical isomorphisms that are not unique. The term is mainly used for algebraic structures. In this case, mappings are called homomorphisms, and a homomorphism is an isomorphism if and only if it is bijective. In various areas of mathematics, isomorphisms have received specialized names, depending on the type of structure under consideration. For example: An isometry is an isomorphism of metric spaces. A homeomorphism is an isomorphism of topological spaces. A diffeomorphism is an isomorphism of spaces equipped with a differential structure, typically differentiable manifolds. A symplectomorphism is an isomorphism of symplectic manifolds. A permutation is an automorphism of a set. In geometry, isomorphisms and automorphisms are often called transformations, for example rigid transformations, affine transformations, projective transformations. Category theory, which can be viewed as a formalization of the concept of mapping between structures, provides a language that may be used to unify the approach to these different aspects of the basic idea. Examples Logarithm and exponential Let $\mathbb{R}^+$ be the multiplicative group of positive real numbers, and let $\mathbb{R}$ be the additive group of real numbers. The logarithm function $\log \colon \mathbb{R}^+ \to \mathbb{R}$ satisfies $\log(xy) = \log x + \log y$ for all $x, y \in \mathbb{R}^+$, so it is a group homomorphism. The exponential function $\exp \colon \mathbb{R} \to \mathbb{R}^+$ satisfies $\exp(x + y) = \exp(x)\exp(y)$ for all $x, y \in \mathbb{R}$, so it too is a homomorphism. The identities $\log(\exp x) = x$ and $\exp(\log y) = y$ show that $\log$ and $\exp$ are inverses of each other. Since $\log$ is a homomorphism that has an inverse that is also a homomorphism, $\log$ is an isomorphism of groups, i.e., $\mathbb{R}^+ \cong \mathbb{R}$ via the isomorphism $\log$. The function $\log$ is an isomorphism which translates multiplication of positive real numbers into addition of real numbers. This facility makes it possible to multiply real numbers using a ruler and a table of logarithms, or using a slide rule with a logarithmic scale. Integers modulo 6 Consider the group $(\mathbb{Z}_6, +)$, the integers from 0 to 5 with addition modulo 6. Also consider the group $(\mathbb{Z}_2 \times \mathbb{Z}_3, +)$, the ordered pairs where the x coordinates can be 0 or 1, and the y coordinates can be 0, 1, or 2, where addition in the x-coordinate is modulo 2 and addition in the y-coordinate is modulo 3. These structures are isomorphic under addition, under the following scheme: $(0,0) \mapsto 0$, $(1,1) \mapsto 1$, $(0,2) \mapsto 2$, $(1,0) \mapsto 3$, $(0,1) \mapsto 4$, $(1,2) \mapsto 5$, or in general $(a, b) \mapsto (3a + 4b) \bmod 6$. For example, $(1,1) + (1,0) = (0,1)$, which translates in the other system as $1 + 3 = 4$. Even though these two groups "look" different in that the sets contain different elements, they are indeed isomorphic: their structures are exactly the same.
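A brute-force check of this isomorphism is straightforward; the Python sketch below (our own illustration) verifies that the map $(a, b) \mapsto (3a + 4b) \bmod 6$ is a bijection that preserves addition.

```python
from itertools import product

def phi(a: int, b: int) -> int:
    """Map Z2 x Z3 -> Z6 by (a, b) |-> (3a + 4b) mod 6."""
    return (3 * a + 4 * b) % 6

pairs = list(product(range(2), range(3)))

# Bijective: the six pairs hit all six residues exactly once.
assert sorted(phi(a, b) for a, b in pairs) == list(range(6))

# Homomorphism: phi((a1,b1) + (a2,b2)) == phi(a1,b1) + phi(a2,b2) in Z6.
for (a1, b1), (a2, b2) in product(pairs, pairs):
    lhs = phi((a1 + a2) % 2, (b1 + b2) % 3)
    rhs = (phi(a1, b1) + phi(a2, b2)) % 6
    assert lhs == rhs
print("isomorphism verified")
```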
More generally, the direct product of two cyclic groups $\mathbb{Z}_m$ and $\mathbb{Z}_n$ is isomorphic to $\mathbb{Z}_{mn}$ if and only if m and n are coprime, per the Chinese remainder theorem. Relation-preserving isomorphism If one object consists of a set X with a binary relation R and the other object consists of a set Y with a binary relation S then an isomorphism from X to Y is a bijective function $f \colon X \to Y$ such that $\operatorname{S}(f(u), f(v))$ if and only if $\operatorname{R}(u, v)$. S is reflexive, irreflexive, symmetric, antisymmetric, asymmetric, transitive, total, trichotomous, a partial order, total order, well-order, strict weak order, total preorder (weak order), an equivalence relation, or a relation with any other special properties, if and only if R is. For example, if R is an ordering ≤ and S an ordering ⪯, then an isomorphism from X to Y is a bijective function $f \colon X \to Y$ such that $f(u) \preceq f(v)$ if and only if $u \leq v$. Such an isomorphism is called an order isomorphism or (less commonly) an isotone isomorphism. If $X = Y$, then this is a relation-preserving automorphism. Applications In algebra, isomorphisms are defined for all algebraic structures. Some are more specifically studied; for example: Linear isomorphisms between vector spaces; they are specified by invertible matrices. Group isomorphisms between groups; the classification of isomorphism classes of finite groups is an open problem. Ring isomorphism between rings. Field isomorphisms are the same as ring isomorphism between fields; their study, and more specifically the study of field automorphisms is an important part of Galois theory. Just as the automorphisms of an algebraic structure form a group, the isomorphisms between two algebras sharing a common structure form a heap. Letting a particular isomorphism identify the two structures turns this heap into a group. In mathematical analysis, the Laplace transform is an isomorphism mapping hard differential equations into easier algebraic equations. In graph theory, an isomorphism between two graphs G and H is a bijective map f from the vertices of G to the vertices of H that preserves the "edge structure" in the sense that there is an edge from vertex u to vertex v in G if and only if there is an edge from $f(u)$ to $f(v)$ in H. See graph isomorphism. In order theory, an isomorphism between two partially ordered sets P and Q is a bijective map $f$ from P to Q that preserves the order structure in the sense that for any elements $x$ and $y$ of P we have $x$ less than $y$ in P if and only if $f(x)$ is less than $f(y)$ in Q. As an example, the set {1,2,3,6} of whole numbers ordered by the is-a-factor-of relation is isomorphic to the set {O, A, B, AB} of blood types ordered by the can-donate-to relation (see the sketch after this section). See order isomorphism. In mathematical analysis, an isomorphism between two Hilbert spaces is a bijection preserving addition, scalar multiplication, and inner product. In early theories of logical atomism, the formal relationship between facts and true propositions was theorized by Bertrand Russell and Ludwig Wittgenstein to be isomorphic. An example of this line of thinking can be found in Russell's Introduction to Mathematical Philosophy. In cybernetics, the good regulator or Conant–Ashby theorem is stated "Every good regulator of a system must be a model of that system". Whether regulated or self-regulating, an isomorphism is required between the regulator and processing parts of the system. Category theoretic view In category theory, given a category C, an isomorphism is a morphism $f \colon a \to b$ that has an inverse morphism $g \colon b \to a$, that is, $fg = 1_b$ and $gf = 1_a$. Two categories $C$ and $D$ are isomorphic if there exist functors $F \colon C \to D$ and $G \colon D \to C$ which are mutually inverse to each other, that is, $FG = 1_D$ (the identity functor on $D$) and $GF = 1_C$ (the identity functor on $C$).
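The divisibility/blood-type example from the order-theory item above can be checked mechanically; below is a small sketch of our own, where a set of compatible pairs encodes the can-donate-to relation.

```python
from itertools import product

divides = lambda a, b: b % a == 0
donates = {  # can-donate-to relation among blood types O, A, B, AB
    ("O", "O"), ("O", "A"), ("O", "B"), ("O", "AB"),
    ("A", "A"), ("A", "AB"), ("B", "B"), ("B", "AB"), ("AB", "AB"),
}
f = {1: "O", 2: "A", 3: "B", 6: "AB"}  # the order isomorphism

# f preserves the relation in both directions:
for a, b in product(f, f):
    assert divides(a, b) == ((f[a], f[b]) in donates)
print("order isomorphism verified")
```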
Isomorphism vs. bijective morphism In a concrete category (roughly, a category whose objects are sets (perhaps with extra structure) and whose morphisms are structure-preserving functions), such as the category of topological spaces or categories of algebraic objects (like the category of groups, the category of rings, and the category of modules), an isomorphism must be bijective on the underlying sets. In algebraic categories (specifically, categories of varieties in the sense of universal algebra), an isomorphism is the same as a homomorphism which is bijective on underlying sets. However, there are concrete categories in which bijective morphisms are not necessarily isomorphisms (such as the category of topological spaces). Isomorphism class Since a composition of isomorphisms is an isomorphism, since the identity is an isomorphism and since the inverse of an isomorphism is an isomorphism, the relation that two mathematical objects are isomorphic is an equivalence relation. An equivalence class given by isomorphisms is commonly called an isomorphism class. Examples Examples of isomorphism classes are plentiful in mathematics. Two sets are isomorphic if there is a bijection between them. The isomorphism class of a finite set can be identified with the non-negative integer representing the number of elements it contains. The isomorphism class of a finite-dimensional vector space can be identified with the non-negative integer representing its dimension. The classification of finite simple groups enumerates the isomorphism classes of all finite simple groups. The classification of closed surfaces enumerates the isomorphism classes of all connected closed surfaces. Ordinals are essentially defined as isomorphism classes of well-ordered sets (though there are technical issues involved). However, there are circumstances in which the isomorphism class of an object conceals vital information about it. Given a mathematical structure, it is common that two substructures belong to the same isomorphism class. However, the way they are included in the whole structure can not be studied if they are identified. For example, in a finite-dimensional vector space, all subspaces of the same dimension are isomorphic, but must be distinguished to consider their intersection, sum, etc. The associative algebras consisting of coquaternions and 2 × 2 real matrices are isomorphic as rings. Yet they appear in different contexts for application (plane mapping and kinematics) so the isomorphism is insufficient to merge the concepts. In homotopy theory, the fundamental group of a space $X$ at a point $p$, though technically denoted $\pi_1(X, p)$ to emphasize the dependence on the base point, is often written lazily as simply $\pi_1(X)$ if $X$ is path connected. The reason for this is that the existence of a path between two points allows one to identify loops at one with loops at the other; however, unless $\pi_1(X)$ is abelian this isomorphism is non-unique. Furthermore, the classification of covering spaces makes strict reference to particular subgroups of $\pi_1(X, p)$, specifically distinguishing between isomorphic but conjugate subgroups, and therefore amalgamating the elements of an isomorphism class into a single featureless object seriously decreases the level of detail provided by the theory. Relation to equality Although there are cases where isomorphic objects can be considered equal, one must distinguish equality and isomorphism. Equality is when two objects are the same, and therefore everything that is true about one object is true about the other.
On the other hand, isomorphisms are related to some structure, and two isomorphic objects share only the properties that are related to this structure. For example, the sets $\{x \in \mathbb{Z} \mid x^2 < 2\}$ and $\{-1, 0, 1\}$ are equal; they are merely different representations—the first an intensional one (in set builder notation), and the second extensional (by explicit enumeration)—of the same subset of the integers. By contrast, the sets $\{A, B, C\}$ and $\{1, 2, 3\}$ are not equal since they do not have the same elements. They are isomorphic as sets, but there are many choices (in fact 6) of an isomorphism between them: one isomorphism is $A \mapsto 1, B \mapsto 2, C \mapsto 3$, while another is $A \mapsto 3, B \mapsto 2, C \mapsto 1$, and no one isomorphism is intrinsically better than any other. On this view and in this sense, these two sets are not equal because one cannot consider them identical: one can choose an isomorphism between them, but that is a weaker claim than identity—and valid only in the context of the chosen isomorphism. Also, integers and even numbers are isomorphic as ordered sets and abelian groups (for addition), but cannot be considered equal sets, since one is a proper subset of the other. On the other hand, when sets (or other mathematical objects) are defined only by their properties, without considering the nature of their elements, one often considers them to be equal. This is generally the case with solutions of universal properties. For example, the rational numbers are usually defined as equivalence classes of pairs of integers, although nobody thinks of a rational number as a set (equivalence class). The universal property of the rational numbers is essentially that they form a field that contains the integers and does not contain any proper subfield. It results that given two fields with these properties, there is a unique field isomorphism between them. This allows identifying these two fields, since every property of one of them can be transferred to the other through the isomorphism. For example, the real numbers that are obtained by dividing two integers (inside the real numbers) form the smallest subfield of the real numbers. There is thus a unique isomorphism from the rational numbers (defined as equivalence classes of pairs) to the quotients of two real numbers that are integers. This allows identifying these two sorts of rational numbers. See also Bisimulation Equivalence relation Heap (mathematics) Isometry Isomorphism class Isomorphism theorem Universal property Coherent isomorphism Balanced category
In classical physics and special relativity, an inertial frame of reference (also called an inertial space or a Galilean reference frame) is a frame of reference in which objects exhibit inertia: they remain at rest or in uniform motion relative to the frame until acted upon by external forces. In such a frame, the laws of nature can be observed without the need to correct for acceleration. All frames of reference with zero acceleration are in a state of constant rectilinear motion (straight-line motion) with respect to one another. In such a frame, an object with zero net force acting on it is perceived to move with a constant velocity, or, equivalently, Newton's first law of motion holds. Such frames are known as inertial. Some physicists, like Isaac Newton, originally thought that one of these frames was absolute — the one approximated by the fixed stars. However, this is not required for the definition, and it is now known that those stars are in fact moving. According to the principle of special relativity, all physical laws look the same in all inertial reference frames, and no inertial frame is privileged over another. Measurements of objects in one inertial frame can be converted to measurements in another by a simple transformation — the Galilean transformation in Newtonian physics or the Lorentz transformation (combined with a translation) in special relativity; these approximately match when the relative speed of the frames is low, but differ as it approaches the speed of light. By contrast, a non-inertial reference frame has non-zero acceleration. In such a frame, the interactions between physical objects vary depending on the acceleration of that frame with respect to an inertial frame. Viewed from the perspective of classical mechanics and special relativity, the usual physical forces caused by the interaction of objects have to be supplemented by fictitious forces caused by inertia. Viewed from the perspective of general relativity theory, the fictitious (i.e. inertial) forces are attributed to geodesic motion in spacetime. Due to Earth's rotation, its surface is not an inertial frame of reference. The Coriolis effect can deflect certain forms of motion as seen from Earth, and the centrifugal force will reduce the effective gravity at the equator. Nevertheless, for many applications the Earth is an adequate approximation of an inertial reference frame. Introduction The motion of a body can only be described relative to something else—other bodies, observers, or a set of spacetime coordinates. These are called frames of reference. According to the first postulate of special relativity, all physical laws take their simplest form in an inertial frame, and there exist multiple inertial frames interrelated by uniform translation. This simplicity manifests itself in that inertial frames have self-contained physics without the need for external causes, while physics in non-inertial frames has external causes. The principle of simplicity can be used within Newtonian physics as well as in special relativity. However, this definition of inertial frames is understood to apply in the Newtonian realm and ignores relativistic effects. In practical terms, the equivalence of inertial reference frames means that scientists within a box moving with a constant absolute velocity cannot determine this velocity by any experiment. Otherwise, the differences would set up an absolute standard reference frame.
According to this definition, supplemented with the constancy of the speed of light, inertial frames of reference transform among themselves according to the Poincaré group of symmetry transformations, of which the Lorentz transformations are a subgroup. In Newtonian mechanics, inertial frames of reference are related by the Galilean group of symmetries. Newton's inertial frame of reference Absolute space Newton posited an absolute space considered well-approximated by a frame of reference stationary relative to the fixed stars. An inertial frame was then one in uniform translation relative to absolute space. However, some "relativists", even at the time of Newton, felt that absolute space was a defect of the formulation, and should be replaced. The expression inertial frame of reference (German: Inertialsystem) was coined by Ludwig Lange in 1885, to replace Newton's definitions of "absolute space and time" with a more operational definition. The inadequacy of the notion of "absolute space" in Newtonian mechanics is spelled out by Blagojevich. The utility of operational definitions was carried much further in the special theory of relativity. Some historical background including Lange's definition is provided by DiSalle. Newtonian mechanics Classical theories that use the Galilean transformation postulate the equivalence of all inertial reference frames. The Galilean transformation transforms coordinates from one inertial reference frame to another by simple addition or subtraction of coordinates: $\mathbf{r}' = \mathbf{r} - \mathbf{r}_0 - \mathbf{v} t, \quad t' = t - t_0,$ where $\mathbf{r}_0$ and $t_0$ represent shifts in the origin of space and time, and $\mathbf{v}$ is the relative velocity of the two inertial reference frames. Under Galilean transformations, the time t2 − t1 between two events is the same for all reference frames and the distance between two simultaneous events (or, equivalently, the length of any object, |r2 − r1|) is also the same. Within the realm of Newtonian mechanics, an inertial frame of reference, or inertial reference frame, is one in which Newton's first law of motion is valid. However, the principle of special relativity generalizes the notion of an inertial frame to include all physical laws, not simply Newton's first law. Newton viewed the first law as valid in any reference frame that is in uniform motion (neither rotating nor accelerating) relative to absolute space; as a practical matter, "absolute space" was considered to be the fixed stars. In the theory of relativity the notion of absolute space or a privileged frame is abandoned, and an inertial frame in the field of classical mechanics is defined as a frame in which a free particle travels in a straight line at constant speed, or is at rest. Hence, with respect to an inertial frame, an object or body accelerates only when a physical force is applied, and (following Newton's first law of motion), in the absence of a net force, a body at rest will remain at rest and a body in motion will continue to move uniformly—that is, in a straight line and at constant speed. Newtonian inertial frames transform among each other according to the Galilean group of symmetries. If this rule is interpreted as saying that straight-line motion is an indication of zero net force, the rule does not identify inertial reference frames because straight-line motion can be observed in a variety of frames. If the rule is interpreted as defining an inertial frame, then being able to determine when zero net force is applied is crucial. The problem was summarized by Einstein. There are several approaches to this issue.
One approach is to argue that all real forces drop off with distance from their sources in a known manner, so it is only needed that a body is far enough away from all sources to ensure that no force is present. A possible issue with this approach is the historically long-lived view that the distant universe might affect matters (Mach's principle). Another approach is to identify all real sources for real forces and account for them. A possible issue with this approach is the possibility of missing something, or accounting inappropriately for their influence, perhaps, again, due to Mach's principle and an incomplete understanding of the universe. A third approach is to look at the way the forces transform when shifting reference frames. Fictitious forces, those that arise due to the acceleration of a frame, disappear in inertial frames and have complicated rules of transformation in general cases. Based on the universality of physical law and the request for frames where the laws are most simply expressed, inertial frames are distinguished by the absence of such fictitious forces. Newton enunciated a principle of relativity himself in one of his corollaries to the laws of motion. This principle differs from the special principle in two ways: first, it is restricted to mechanics, and second, it makes no mention of simplicity. It shares with the special principle the invariance of the form of the description among mutually translating reference frames. The role of fictitious forces in classifying reference frames is pursued further below. Special relativity Einstein's theory of special relativity, like Newtonian mechanics, postulates the equivalence of all inertial reference frames. However, because special relativity postulates that the speed of light in free space is invariant, the transformation between inertial frames is the Lorentz transformation, not the Galilean transformation which is used in Newtonian mechanics. The invariance of the speed of light leads to counter-intuitive phenomena, such as time dilation, length contraction, and the relativity of simultaneity. The predictions of special relativity have been extensively verified experimentally. The Lorentz transformation reduces to the Galilean transformation as the speed of light approaches infinity or as the relative velocity between frames approaches zero. Examples Simple example Consider a situation common in everyday life. Two cars travel along a road, both moving at constant velocities. See Figure 1. At some particular moment, they are separated by 200 meters. The car in front is traveling at 22 meters per second and the car behind is traveling at 30 meters per second. If we want to find out how long it will take the second car to catch up with the first, there are three obvious "frames of reference" that we could choose. First, we could observe the two cars from the side of the road. We define our "frame of reference" S as follows. We stand on the side of the road and start a stop-clock at the exact moment that the second car passes us, which happens to be when they are a distance $d = 200\ \mathrm{m}$ apart. Since neither of the cars is accelerating, we can determine their positions by the following formulas, where $x_1(t)$ is the position in meters of car one after time $t$ in seconds and $x_2(t)$ is the position of car two after time $t$: $x_1(t) = d + v_1 t = 200 + 22\,t, \quad x_2(t) = v_2 t = 30\,t.$ Notice that these formulas predict at t = 0 s the first car is 200 m down the road and the second car is right beside us, as expected. We want to find the time at which $x_1(t) = x_2(t)$.
Therefore, we set $x_1(t) = x_2(t)$ and solve for $t$, that is: $200 + 22\,t = 30\,t$, so $t = \frac{200}{8} = 25$ seconds. Alternatively, we could choose a frame of reference S′ situated in the first car. In this case, the first car is stationary and the second car is approaching from behind at a speed of $v_2 - v_1 = 8\ \mathrm{m/s}$. To catch up to the first car, it will take a time of $\frac{d}{v_2 - v_1} = \frac{200}{8}\ \mathrm{s}$, that is, 25 seconds, as before. Note how much easier the problem becomes by choosing a suitable frame of reference. The third possible frame of reference would be attached to the second car. That example resembles the case just discussed, except the second car is stationary and the first car moves backward towards it at $8\ \mathrm{m/s}$. It would have been possible to choose a rotating, accelerating frame of reference, moving in a complicated manner, but this would have served to complicate the problem unnecessarily. It is also necessary to note that one can convert measurements made in one coordinate system to another. For example, suppose that your watch is running five minutes fast compared to the local standard time. If you know that this is the case, when somebody asks you what time it is, you can deduct five minutes from the time displayed on your watch to obtain the correct time. The measurements that an observer makes about a system depend therefore on the observer's frame of reference (you might say that the bus arrived at 5 past three, when in fact it arrived at three). Additional example For a simple example involving only the orientation of two observers, consider two people standing, facing each other on either side of a north-south street. See Figure 2. A car drives past them heading south. For the person facing east, the car was moving to the right. However, for the person facing west, the car was moving to the left. This discrepancy is because the two people used two different frames of reference from which to investigate this system. For a more complex example involving observers in relative motion, consider Alfred, who is standing on the side of a road watching a car drive past him from left to right. In his frame of reference, Alfred defines the spot where he is standing as the origin, the road as the $x$-axis, and the direction in front of him as the positive $y$-axis. To him, the car moves along the $x$ axis with some velocity $v$ in the positive $x$-direction. Alfred's frame of reference is considered an inertial frame because he is not accelerating, ignoring effects such as Earth's rotation and gravity. Now consider Betsy, the person driving the car. Betsy, in choosing her frame of reference, defines her location as the origin, the direction to her right as the positive $x$-axis, and the direction in front of her as the positive $y$-axis. In this frame of reference, it is Betsy who is stationary and the world around her that is moving – for instance, as she drives past Alfred, she observes him moving with velocity $v$ in the negative $y$-direction. If she is driving north, then north is the positive $y$-direction; if she turns east, east becomes the positive $y$-direction. Finally, as an example of non-inertial observers, assume Candace is accelerating her car. As she passes by him, Alfred measures her acceleration and finds it to be $a$ in the negative $x$-direction. Assuming Candace's acceleration is constant, what acceleration does Betsy measure? If Betsy's velocity $v$ is constant, she is in an inertial frame of reference, and she will find the acceleration to be the same as Alfred in her frame of reference, $a$ in the negative $y$-direction. However, if she is accelerating at rate $A$ in the negative $y$-direction (in other words, slowing down), she will find Candace's acceleration to be $a' = a - A$ in the negative $y$-direction—a smaller value than Alfred has measured. Similarly, if she is accelerating at rate $A$ in the positive $y$-direction (speeding up), she will observe Candace's acceleration as $a' = a + A$ in the negative $y$-direction—a larger value than Alfred's measurement.
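The two-car computation is easy to check numerically; the following Python sketch (illustrative, with our own variable names) solves it in both the roadside frame S and the frame S′ of the first car and confirms the answer is frame-independent.

```python
# Roadside frame S: x1(t) = d + v1*t, x2(t) = v2*t.
d, v1, v2 = 200.0, 22.0, 30.0   # meters, m/s, m/s

# Setting x1(t) = x2(t): d + v1*t = v2*t  =>  t = d / (v2 - v1).
t_roadside = d / (v2 - v1)

# Frame S' of the first car: car 1 at rest, car 2 closes at v2 - v1.
t_car1_frame = d / (v2 - v1)

print(t_roadside, t_car1_frame)  # both 25.0 seconds
```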
Non-inertial frames Here the relation between inertial and non-inertial observational frames of reference is considered. The basic difference between these frames is the need in non-inertial frames for fictitious forces, as described below. General relativity General relativity is based upon the principle of equivalence, according to which no local experiment can distinguish the effects of a uniform gravitational field from those of uniform acceleration. This idea was introduced in Einstein's 1907 article "Principle of Relativity and Gravitation" and later developed in 1911. Support for this principle is found in the Eötvös experiment, which determines whether the ratio of inertial to gravitational mass is the same for all bodies, regardless of size or composition. To date no difference has been found to a few parts in $10^{11}$. For some discussion of the subtleties of the Eötvös experiment, such as the local mass distribution around the experimental site (including a quip about the mass of Eötvös himself), see Franklin. Einstein's general theory modifies the distinction between nominally "inertial" and "non-inertial" effects by replacing special relativity's "flat" Minkowski space with a metric that produces non-zero curvature. In general relativity, the principle of inertia is replaced with the principle of geodesic motion, whereby objects move in a way dictated by the curvature of spacetime. As a consequence of this curvature, it is not a given in general relativity that inertial objects moving at a particular rate with respect to each other will continue to do so. This phenomenon of geodesic deviation means that inertial frames of reference do not exist globally as they do in Newtonian mechanics and special relativity. However, the general theory reduces to the special theory over sufficiently small regions of spacetime, where curvature effects become less important and the earlier inertial frame arguments can come back into play. Consequently, modern special relativity is now sometimes described as only a "local theory". "Local" can encompass, for example, the entire Milky Way galaxy: The astronomer Karl Schwarzschild observed the motion of pairs of stars orbiting each other. He found that the two orbits of the stars of such a system lie in a plane, and the perihelion of the orbits of the two stars remains pointing in the same direction with respect to the Solar System. Schwarzschild pointed out that that was invariably seen: the direction of the angular momentum of all observed double star systems remains fixed with respect to the direction of the angular momentum of the Solar System. These observations allowed him to conclude that inertial frames inside the galaxy do not rotate with respect to one another, and that the space of the Milky Way is approximately Galilean or Minkowskian. Inertial frames and rotation In an inertial frame, Newton's first law, the law of inertia, is satisfied: Any free motion has a constant magnitude and direction. Newton's second law for a particle takes the form: $\mathbf{F} = m \mathbf{a},$ with F the net force (a vector), m the mass of a particle and a the acceleration of the particle (also a vector) which would be measured by an observer at rest in the frame.
The force F is the vector sum of all "real" forces on the particle, such as contact forces, electromagnetic, gravitational, and nuclear forces. In contrast, Newton's second law in a rotating frame of reference (a non-inertial frame of reference), rotating at angular rate Ω about an axis, takes the form: $\mathbf{F}' = m \mathbf{a}_B,$ which looks the same as in an inertial frame, but now the force F′ is the resultant of not only F, but also additional terms (the paragraph following this equation presents the main points without detailed mathematics): $\mathbf{F}' = \mathbf{F} - 2 m \boldsymbol{\Omega} \times \mathbf{v}_B - m \boldsymbol{\Omega} \times (\boldsymbol{\Omega} \times \mathbf{x}_B) - m \frac{d \boldsymbol{\Omega}}{dt} \times \mathbf{x}_B,$ where the angular rotation of the frame is expressed by the vector Ω pointing in the direction of the axis of rotation, and with magnitude equal to the angular rate of rotation Ω, symbol × denotes the vector cross product, vector xB locates the body and vector vB is the velocity of the body according to a rotating observer (different from the velocity seen by the inertial observer). The extra terms in the force F′ are the "fictitious" forces for this frame, whose causes are external to the system in the frame. The first extra term is the Coriolis force, the second the centrifugal force, and the third the Euler force. These terms all have these properties: they vanish when Ω = 0; that is, they are zero for an inertial frame (which, of course, does not rotate); they take on a different magnitude and direction in every rotating frame, depending upon its particular value of Ω; they are ubiquitous in the rotating frame (affect every particle, regardless of circumstance); and they have no apparent source in identifiable physical sources, in particular, matter. Also, fictitious forces do not drop off with distance (unlike, for example, nuclear forces or electrical forces). For example, the centrifugal force that appears to emanate from the axis of rotation in a rotating frame increases with distance from the axis. All observers agree on the real forces, F; only non-inertial observers need fictitious forces. The laws of physics in the inertial frame are simpler because unnecessary forces are not present. In Newton's time the fixed stars were invoked as a reference frame, supposedly at rest relative to absolute space. In reference frames that were either at rest with respect to the fixed stars or in uniform translation relative to these stars, Newton's laws of motion were supposed to hold. In contrast, in frames accelerating with respect to the fixed stars, an important case being frames rotating relative to the fixed stars, the laws of motion did not hold in their simplest form, but had to be supplemented by the addition of fictitious forces, for example, the Coriolis force and the centrifugal force. Two experiments were devised by Newton to demonstrate how these forces could be discovered, thereby revealing to an observer that they were not in an inertial frame: the example of the tension in the cord linking two spheres rotating about their center of gravity, and the example of the curvature of the surface of water in a rotating bucket. In both cases, application of Newton's second law would not work for the rotating observer without invoking centrifugal and Coriolis forces to account for their observations (tension in the case of the spheres; parabolic water surface in the case of the rotating bucket).
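A minimal numerical sketch of the three fictitious-force terms (our own illustration using NumPy, for a frame rotating at constant Ω about the z-axis, so the Euler term vanishes):

```python
import numpy as np

def fictitious_forces(m, omega, domega_dt, x_b, v_b):
    """Coriolis, centrifugal, and Euler forces in a rotating frame."""
    coriolis = -2 * m * np.cross(omega, v_b)
    centrifugal = -m * np.cross(omega, np.cross(omega, x_b))
    euler = -m * np.cross(domega_dt, x_b)
    return coriolis, centrifugal, euler

m = 1.0                                  # kg
omega = np.array([0.0, 0.0, 2.0])        # rad/s, about the z-axis
domega_dt = np.zeros(3)                  # steady rotation: no Euler force
x_b = np.array([1.0, 0.0, 0.0])          # m, position in the rotating frame
v_b = np.array([0.0, 1.0, 0.0])          # m/s, velocity seen by rotating observer

cor, cen, eul = fictitious_forces(m, omega, domega_dt, x_b, v_b)
print(cor)  # [4. 0. 0.]: Coriolis, -2m Ω x v_B
print(cen)  # [4. 0. 0.]: centrifugal, -m Ω x (Ω x x_B), points away from the axis
print(eul)  # [0. 0. 0.]
```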
Those that are outside our galaxy (such as nebulae once mistaken to be stars) participate in their own motion as well, partly due to the expansion of the universe, and partly due to peculiar velocities. For instance, the Andromeda Galaxy is on a collision course with the Milky Way at a speed of 117 km/s. The concept of inertial frames of reference is no longer tied to either the fixed stars or to absolute space. Rather, the identification of an inertial frame is based on the simplicity of the laws of physics in the frame. The laws of nature take a simpler form in inertial frames of reference because in these frames one does not have to introduce inertial forces when writing down Newton's laws of motion. In practice, using a frame of reference based upon the fixed stars as though it were an inertial frame of reference introduces little discrepancy. For example, the centrifugal acceleration of the Earth because of its rotation about the Sun is about thirty million times greater than that of the Sun about the galactic center. To illustrate further, consider the question: "Does the Universe rotate?" An answer might explain the shape of the Milky Way galaxy using the laws of physics, although other observations might be more definitive; that is, provide larger discrepancies or less measurement uncertainty, like the anisotropy of the microwave background radiation or Big Bang nucleosynthesis. The flatness of the Milky Way depends on its rate of rotation in an inertial frame of reference. If its apparent rate of rotation is attributed entirely to rotation in an inertial frame, a different "flatness" is predicted than if it is supposed that part of this rotation is actually due to rotation of the universe and should not be included in the rotation of the galaxy itself. Based upon the laws of physics, a model is set up in which one parameter is the rate of rotation of the Universe. If the laws of physics agree more accurately with observations in a model with rotation than without it, we are inclined to select the best-fit value for rotation, subject to all other pertinent experimental observations. If no value of the rotation parameter is successful and theory is not within observational error, a modification of physical law is considered; for example, dark matter is invoked to explain the galactic rotation curve. So far, observations show any rotation of the universe is very slow, no faster than once every 6×10¹³ years (10⁻¹³ rad/yr), and debate persists over whether there is any rotation. However, if rotation were found, interpretation of observations in a frame tied to the universe would have to be corrected for the fictitious forces inherent in such rotation in classical physics and special relativity, or interpreted as the curvature of spacetime and the motion of matter along the geodesics in general relativity. When quantum effects are important, there are additional conceptual complications that arise in quantum reference frames. Primed frames An accelerated frame of reference is often delineated as being the "primed" frame, and all variables that are dependent on that frame are notated with primes, e.g. x′, y′, a′. The vector from the origin of an inertial reference frame to the origin of an accelerated reference frame is commonly notated as R. Given a point of interest that exists in both frames, the vector from the inertial origin to the point is called r, and the vector from the accelerated origin to the point is called r′. 
From the geometry of the situation, r = R + r′. Taking the first and second derivatives of this relation with respect to time gives v = V + v′ and a = A + a′, where V and A are the velocity and acceleration of the accelerated system with respect to the inertial system, and v and a are the velocity and acceleration of the point of interest with respect to the inertial frame. These equations allow transformations between the two coordinate systems; for example, Newton's second law can be written as F = ma = m(A + a′). When there is accelerated motion due to a force being exerted, there is a manifestation of inertia. If an electric car designed to recharge its battery system when decelerating is switched to braking, the batteries are recharged, illustrating the physical strength of the manifestation of inertia. However, the manifestation of inertia does not prevent acceleration (or deceleration), for the manifestation of inertia occurs in response to a change in velocity due to a force. Seen from the perspective of a rotating frame of reference, the manifestation of inertia appears to exert a force (either in the centrifugal direction, or in a direction orthogonal to an object's motion, the Coriolis effect). A common sort of accelerated reference frame is a frame that is both rotating and translating (an example is a frame of reference attached to a CD which is playing while the player is carried). This arrangement leads to the equation (see Fictitious force for a derivation): a = a′ + (dΩ/dt) × r′ + 2Ω × v′ + Ω × (Ω × r′) + A, or, to solve for the acceleration in the accelerated frame, a′ = a − (dΩ/dt) × r′ − 2Ω × v′ − Ω × (Ω × r′) − A. Multiplying through by the mass m gives ma′ = F − m(dΩ/dt) × r′ − 2mΩ × v′ − mΩ × (Ω × r′) − mA, where −m(dΩ/dt) × r′ is the Euler force, −2mΩ × v′ the Coriolis force, and −mΩ × (Ω × r′) the centrifugal force. Separating non-inertial from inertial reference frames Theory Inertial and non-inertial reference frames can be distinguished by the absence or presence of fictitious forces. The presence of fictitious forces indicates that the physical laws are not the simplest laws available; in terms of the special principle of relativity, a frame where fictitious forces are present is not an inertial frame: Bodies in non-inertial reference frames are subject to so-called fictitious forces (pseudo-forces); that is, forces that result from the acceleration of the reference frame itself and not from any physical force acting on the body. Examples of fictitious forces are the centrifugal force and the Coriolis force in rotating reference frames. To apply the Newtonian definition of an inertial frame, the understanding of separation between "fictitious" forces and "real" forces must be made clear. For example, consider a stationary object in an inertial frame. Being at rest, no net force is applied. But in a frame rotating about a fixed axis, the object appears to move in a circle, and is subject to a centripetal force. How can it be decided that the rotating frame is a non-inertial frame? There are two approaches to this resolution: one approach is to look for the origin of the fictitious forces (the Coriolis force and the centrifugal force). It will be found there are no sources for these forces, no associated force carriers, no originating bodies. A second approach is to look at a variety of frames of reference. For any inertial frame, the Coriolis force and the centrifugal force disappear, so application of the principle of special relativity would identify these frames where the forces disappear as sharing the same and the simplest physical laws, and hence rule that the rotating frame is not an inertial frame. Newton examined this problem himself using rotating spheres. 
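Before turning to Newton's rotating-sphere argument, here is a short numerical sketch of the fictitious-force bookkeeping just given. The helper function and all numerical values are invented for illustration; only the formulas for the Euler, Coriolis, centrifugal, and translational terms come from the text above:

```python
import numpy as np

def fictitious_forces(m, omega, domega_dt, r_p, v_p, A):
    """Fictitious-force terms on a mass m, as seen in a frame translating
    with acceleration A and rotating with angular velocity omega (3-D vectors)."""
    euler         = -m * np.cross(domega_dt, r_p)               # -m (dΩ/dt) × r′
    coriolis      = -2.0 * m * np.cross(omega, v_p)             # -2m Ω × v′
    centrifugal   = -m * np.cross(omega, np.cross(omega, r_p))  # -m Ω × (Ω × r′)
    translational = -m * A                                      # -m A
    return euler, coriolis, centrifugal, translational

m = 1.0                                   # kg
omega     = np.array([0.0, 0.0, 2.0])     # rad/s, rotation about the z-axis
domega_dt = np.array([0.0, 0.0, 0.1])     # rad/s^2, the frame is spinning up
r_p = np.array([1.0, 0.0, 0.0])           # m, position in the rotating frame
v_p = np.array([0.0, 1.0, 0.0])           # m/s, velocity in the rotating frame
A   = np.array([0.0, 0.0, 0.0])           # no translational acceleration here

names = ("Euler", "Coriolis", "centrifugal", "translational")
for name, f in zip(names, fictitious_forces(m, omega, domega_dt, r_p, v_p, A)):
    print(f"{name:13s} {f}")   # e.g. centrifugal is +4 N along x, i.e. outward
```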
Newton pointed out that if the spheres are not rotating, the tension in the tying string is measured as zero in every frame of reference. If the spheres only appear to rotate (that is, we are watching stationary spheres from a rotating frame), the zero tension in the string is accounted for by observing that the centripetal force is supplied by the centrifugal and Coriolis forces in combination, so no tension is needed. If the spheres really are rotating, the tension observed is exactly the centripetal force required by the circular motion. Thus, measurement of the tension in the string identifies the inertial frame: it is the one where the tension in the string provides exactly the centripetal force demanded by the motion as it is observed in that frame, and not a different value. That is, the inertial frame is the one where the fictitious forces vanish. For linear acceleration, Newton expressed the idea that straight-line accelerations held in common are undetectable: bodies urged in parallel directions by equal accelerative forces continue to move among themselves just as if those forces were absent (Corollary VI to the laws of motion in the Principia). This principle generalizes the notion of an inertial frame. For example, an observer confined in a free-falling lift will assert that he himself is a valid inertial frame, even if he is accelerating under gravity, so long as he has no knowledge about anything outside the lift. So, strictly speaking, an inertial frame is a relative concept. With this in mind, inertial frames can collectively be defined as a set of frames which are stationary or moving at constant velocity with respect to each other, so that a single inertial frame is defined as an element of this set. For these ideas to apply, everything observed in the frame has to be subject to a base-line, common acceleration shared by the frame itself. That situation would apply, for example, to the elevator example, where all objects are subject to the same gravitational acceleration, and the elevator itself accelerates at the same rate. Applications Inertial navigation systems use a cluster of gyroscopes and accelerometers to determine accelerations relative to inertial space. After a gyroscope is spun up in a particular orientation in inertial space, the law of conservation of angular momentum requires that it retain that orientation as long as no external forces are applied to it. Three orthogonal gyroscopes establish an inertial reference frame, and the accelerometers measure acceleration relative to that frame. The accelerations, along with a clock, can then be used to calculate the change in position. Thus, inertial navigation is a form of dead reckoning that requires no external input, and therefore cannot be jammed by any external signal source. A gyrocompass, employed for navigation of seagoing vessels, finds true (geographic) north. It does so, not by sensing the Earth's magnetic field, but by using inertial space as its reference. The outer casing of the gyrocompass device is held in such a way that it remains aligned with the local plumb line. When the gyroscope wheel inside the gyrocompass device is spun up, the way the gyroscope wheel is suspended causes the gyroscope wheel to gradually align its spinning axis with the Earth's axis. Alignment with the Earth's axis is the only direction for which the gyroscope's spinning axis can be stationary with respect to the Earth and not be required to change direction with respect to inertial space. After being spun up, a gyrocompass can reach the direction of alignment with the Earth's axis in as little as a quarter of an hour. 
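As a toy illustration of the dead-reckoning idea behind inertial navigation, the sketch below integrates a made-up one-dimensional accelerometer log twice to estimate velocity and position; a real system would additionally rotate body-frame readings into the gyroscope-defined inertial frame and compensate for gravity, which is omitted here:

```python
# Toy 1-D dead reckoning: integrate logged accelerations (m/s^2), sampled
# every dt seconds, to estimate velocity and position, starting from rest.
dt = 1.0
accel_log = [0.5, 0.5, 0.0, 0.0, -0.5, -0.5]   # hypothetical accelerometer samples

v = x = 0.0
for a in accel_log:
    v += a * dt   # first integration: velocity
    x += v * dt   # second integration: position
    print(f"a={a:+.1f}  v={v:+.2f} m/s  x={x:+.2f} m")

# Any constant accelerometer bias integrates into a quadratically growing
# position error, which is why unaided inertial navigation drifts over time.
```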
See also Absolute rotation Diffeomorphism Galilean invariance General covariance Local reference frame Lorentz covariance Newton's first law Quantum reference frame References Further reading Edwin F. Taylor and John Archibald Wheeler, Spacetime Physics, 2nd ed. (Freeman, NY, 1992) Albert Einstein, Relativity, the special and the general theories, 15th ed. (1954) Albert Einstein, On the Electrodynamics of Moving Bodies, included in The Principle of Relativity, page 38. Dover 1923 Rotation of the Universe B Ciobanu, I Radinchi Modeling the electric and magnetic fields in a rotating universe Rom. Journ. Phys., Vol. 53, Nos. 1–2, P. 405–415, Bucharest, 2008 Yuri N. Obukhov, Thoralf Chrobok, Mike Scherfner Shear-free rotating inflation Phys. Rev. D 66, 043518 (2002) [5 pages] Yuri N. Obukhov On physical foundations and observational effects of cosmic rotation (2000) Li-Xin Li Effect of the Global Rotation of the Universe on the Formation of Galaxies General Relativity and Gravitation, 30 (1998) P Birch Is the Universe rotating? Nature 298, 451 – 454 (29 July 1982) Kurt Gödel An example of a new type of cosmological solutions of Einstein's field equations of gravitation Rev. Mod. Phys., Vol. 21, p. 447, 1949. External links Stanford Encyclopedia of Philosophy entry showing scenes as viewed from both an inertial frame and a rotating frame of reference, visualizing the Coriolis and centrifugal forces. Classical mechanics Frames of reference Theory of relativity Orbits
Inertial frame of reference
[ "Physics", "Mathematics" ]
6,794
[ "Frames of reference", "Classical mechanics", "Theory of relativity", "Mechanics", "Coordinate systems" ]
14,865
https://en.wikipedia.org/wiki/Isotropy
In physics and geometry, isotropy is uniformity in all orientations. Precise definitions depend on the subject area. Exceptions, or inequalities, are frequently indicated by the prefix an-, hence anisotropy. Anisotropy is also used to describe situations where properties vary systematically, dependent on direction. Isotropic radiation has the same intensity regardless of the direction of measurement, and an isotropic field exerts the same action regardless of how the test particle is oriented. Mathematics Within mathematics, isotropy has a few different meanings: Isotropic manifolds A manifold is isotropic if the geometry on the manifold is the same regardless of direction. A similar concept is homogeneity. Isotropic quadratic form A quadratic form q is said to be isotropic if there is a non-zero vector v such that q(v) = 0; such a v is an isotropic vector or null vector. In complex geometry, a line through the origin in the direction of an isotropic vector is an isotropic line. Isotropic coordinates Isotropic coordinates are coordinates on an isotropic chart for Lorentzian manifolds. Isotropy group An isotropy group is the group of isomorphisms from any object to itself in a groupoid. An isotropy representation is a representation of an isotropy group. Isotropic position A probability distribution over a vector space is in isotropic position if its covariance matrix is the identity. Isotropic vector field The vector field generated by a point source is said to be isotropic if, for any spherical neighborhood centered at the point source, the magnitude of the vector determined by any point on the sphere is invariant under a change in direction. For example, starlight appears to be isotropic. Physics Quantum mechanics or particle physics When a spinless particle (or even an unpolarized particle with spin) decays, the resulting decay distribution must be isotropic in the rest frame of the decaying particle, regardless of the detailed physics of the decay. This follows from rotational invariance of the Hamiltonian, which in turn is guaranteed for a spherically symmetric potential. Gases The kinetic theory of gases also exemplifies isotropy. It is assumed that the molecules move in random directions and as a consequence, there is an equal probability of a molecule moving in any direction. Thus when there are many molecules in the gas, with high probability there will be very similar numbers moving in one direction as any other, demonstrating approximate isotropy. Fluid dynamics Fluid flow is isotropic if there is no directional preference (e.g. in fully developed 3D turbulence). An example of anisotropy is in flows with a background density, as gravity works in only one direction. The apparent surface separating two differing isotropic fluids would be referred to as an isotrope. Thermal expansion A solid is said to be isotropic if the expansion of the solid is equal in all directions when thermal energy is provided to it. Electromagnetics An isotropic medium is one such that the permittivity, ε, and permeability, μ, of the medium are uniform in all directions of the medium, the simplest instance being free space. Optics Optical isotropy means having the same optical properties in all directions. The individual reflectance or transmittance of the domains is averaged for micro-heterogeneous samples if the macroscopic reflectance or transmittance is to be calculated. 
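As a quick illustration of the isotropic quadratic form defined above: the hyperbolic form q(x, y) = x² − y² is isotropic because the non-zero vector (1, 1) is a null vector, whereas a positive-definite form such as x² + y² has no real null vector and is called anisotropic. A minimal check (the function names are arbitrary):

```python
# A quadratic form q is isotropic if q(v) = 0 for some non-zero vector v.

def q_hyperbolic(x, y):
    return x * x - y * y   # has null vectors, e.g. (1, 1): isotropic

def q_definite(x, y):
    return x * x + y * y   # positive definite: anisotropic over the reals

print(q_hyperbolic(1, 1))  # 0 -> (1, 1) is an isotropic (null) vector
print(q_definite(1, 1))    # 2 -> no non-zero real vector evaluates to 0
```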
Optical isotropy can be verified simply by investigating, for example, a polycrystalline material under a polarizing microscope having the polarizers crossed: if the crystallites are larger than the resolution limit, they will be visible. Cosmology The cosmological principle, which underpins much of modern cosmology (including the Big Bang theory of the evolution of the observable universe), assumes that the universe is both isotropic and homogeneous, meaning that the universe has no preferred location (is the same everywhere) and has no preferred direction. Observations made in 2006 suggest that, on distance-scales much larger than galaxies, galaxy clusters are "Great" features, but small compared to so-called multiverse scenarios. Materials science In the study of mechanical properties of materials, "isotropic" means having identical values of a property in all directions. This definition is also used in geology and mineralogy. Glass and metals are examples of isotropic materials. Common anisotropic materials include wood (because its material properties are different parallel to and perpendicular to the grain) and layered rocks such as slate. Isotropic materials are useful since they are easier to shape, and their behavior is easier to predict. Anisotropic materials can be tailored to the forces an object is expected to experience. For example, the fibers in carbon fiber materials and rebars in reinforced concrete are oriented to withstand tension. Microfabrication In industrial processes, such as etching steps, "isotropic" means that the process proceeds at the same rate, regardless of direction. Simple chemical reaction and removal of a substrate by an acid, a solvent or a reactive gas is often very close to isotropic. Conversely, "anisotropic" means that the attack rate of the substrate is higher in a certain direction. Anisotropic etch processes, where vertical etch-rate is high but lateral etch-rate is very small, are essential processes in microfabrication of integrated circuits and MEMS devices. Antenna (radio) An isotropic antenna is an idealized "radiating element" used as a reference; an antenna that broadcasts power equally (calculated by the Poynting vector) in all directions. The gain of an arbitrary antenna is usually reported in decibels relative to an isotropic antenna, and is expressed as dBi or dB(i). Cell biology In cells (a.k.a. muscle fibers), the term "isotropic" refers to the light bands (I bands) that contribute to the striated pattern of the cells. Pharmacology While it is well established that the skin provides an ideal site for the administration of local and systemic drugs, it presents a formidable barrier to the permeation of most substances. Recently, isotropic formulations have been used extensively in dermatology for drug delivery. Computer science Imaging A volume such as a computed tomography is said to have isotropic voxel spacing when the space between any two adjacent voxels is the same along each axis x, y, z. E.g., voxel spacing is isotropic if the center of voxel (i, j, k) is 1.38 mm from that of (i+1, j, k), 1.38 mm from that of (i, j+1, k) and 1.38 mm from that of (i, j, k+1) for all indices i, j, k. Other sciences Economics and geography An isotropic region is a region that has the same properties everywhere. Such a region is a construction needed in many types of models. See also Rotational invariance Isotropic bands Isotropic coordinates Transverse isotropy Bi isotropic Symmetry References Orientation (geometry) Symmetry
Isotropy
[ "Physics", "Mathematics" ]
1,497
[ "Topology", "Space", "Geometry", "Spacetime", "Orientation (geometry)", "Symmetry" ]
14,909
https://en.wikipedia.org/wiki/Inertia
Inertia is the natural tendency of objects in motion to stay in motion and objects at rest to stay at rest, unless a force causes the velocity to change. It is one of the fundamental principles in classical physics, and is described by Isaac Newton in his first law of motion (also known as the principle of inertia). It is one of the primary manifestations of mass, one of the core quantitative properties of physical systems. In his 1687 work Philosophiæ Naturalis Principia Mathematica, Newton defined inertia as a property of matter: an innate power of resisting, by which every body, as much as in it lies, endeavours to persevere in its present state, whether of rest or of uniform motion in a right line. History and development Early understanding of inertial motion Professor John H. Lienhard points to the Mozi – a Chinese text from the Warring States period (475–221 BCE) – as having given the first description of inertia. Before the European Renaissance, the prevailing theory of motion in western philosophy was that of Aristotle (384–322 BCE). On the surface of the Earth, the inertia property of physical objects is often masked by gravity and the effects of friction and air resistance, both of which tend to decrease the speed of moving objects (commonly to the point of rest). This misled the philosopher Aristotle into believing that objects would move only as long as force was applied to them. Aristotle said that all moving objects (on Earth) eventually come to rest unless an external power (force) continued to move them. Aristotle explained the continued motion of projectiles, after being separated from their projector, as an (itself unexplained) action of the surrounding medium continuing to move the projectile. Despite its general acceptance, Aristotle's concept of motion was disputed on several occasions by notable philosophers over nearly two millennia. For example, Lucretius (following, presumably, Epicurus) stated that the "default state" of matter was motion, not stasis (stagnation). In the 6th century, John Philoponus criticized the inconsistency between Aristotle's discussion of projectiles, where the medium keeps projectiles going, and his discussion of the void, where the medium would hinder a body's motion. Philoponus proposed that motion was not maintained by the action of a surrounding medium, but by some property imparted to the object when it was set in motion. Although this was not the modern concept of inertia, for there was still the need for a power to keep a body in motion, it proved a fundamental step in that direction. This view was strongly opposed by Averroes and by many scholastic philosophers who supported Aristotle. However, this view did not go unchallenged in the Islamic world, where Philoponus had several supporters who further developed his ideas. In the 11th century, Persian polymath Ibn Sina (Avicenna) claimed that a projectile in a vacuum would not stop unless acted upon. Theory of impetus In the 14th century, Jean Buridan rejected the notion that a motion-generating property, which he named impetus, dissipated spontaneously. Buridan's position was that a moving object would be arrested by the resistance of the air and the weight of the body which would oppose its impetus. Buridan also maintained that impetus increased with speed; thus, his initial idea of impetus was similar in many ways to the modern concept of momentum. 
Despite the obvious similarities to more modern ideas of inertia, Buridan saw his theory as only a modification to Aristotle's basic philosophy, maintaining many other peripatetic views, including the belief that there was still a fundamental difference between an object in motion and an object at rest. Buridan also believed that impetus could be not only linear but also circular in nature, causing objects (such as celestial bodies) to move in a circle. Buridan's theory was followed up by his pupil Albert of Saxony (1316–1390) and the Oxford Calculators, who performed various experiments which further undermined the Aristotelian model. Their work in turn was elaborated by Nicole Oresme, who pioneered the practice of illustrating the laws of motion with graphs. Shortly before Galileo's theory of inertia, Giambattista Benedetti modified the growing theory of impetus to involve linear motion alone; Benedetti cites the motion of a rock in a sling as an example of the inherent linear motion of objects, forced into circular motion. Classical inertia According to science historian Charles Coulston Gillispie, inertia "entered science as a physical consequence of Descartes' geometrization of space-matter, combined with the immutability of God." The first physicist to completely break away from the Aristotelian model of motion was Isaac Beeckman in 1614. The term "inertia" was first introduced by Johannes Kepler in his Epitome Astronomiae Copernicanae (published in three parts from 1617 to 1621). However, the meaning of Kepler's term, which he derived from the Latin word for "idleness" or "laziness", was not quite the same as its modern interpretation. Kepler defined inertia only in terms of resistance to movement, once again based on the axiomatic assumption that rest was a natural state which did not need explanation. It was not until the later work of Galileo and Newton, which unified rest and motion in one principle, that the term "inertia" could be applied to those concepts as it is today. The principle of inertia, which originated with Aristotle for "motions in a void", states that a mundane object tends to resist a change in motion. The Aristotelian division of motion into mundane and celestial became increasingly problematic in the face of the conclusions of Nicolaus Copernicus in the 16th century, who argued that the Earth is never at rest, but is actually in constant motion around the Sun. Galileo, in his further development of the Copernican model, recognized these problems with the then-accepted nature of motion and, at least partially, as a result, included a restatement of Aristotle's description of motion in a void as a basic physical principle: A body moving on a level surface will continue in the same direction at a constant speed unless disturbed. Galileo writes that "all external impediments removed, a heavy body on a spherical surface concentric with the earth will maintain itself in that state in which it has been; if placed in a movement towards the west (for example), it will maintain itself in that movement." This notion, which is termed "circular inertia" or "horizontal circular inertia" by historians of science, is a precursor to, but is distinct from, Newton's notion of rectilinear inertia. For Galileo, a motion is "horizontal" if it does not carry the moving body towards or away from the center of the Earth, and for him, "a ship, for instance, having once received some impetus through the tranquil sea, would move continually around our globe without ever stopping." 
It is also worth noting that Galileo later (in 1632) concluded that based on this initial premise of inertia, it is impossible to tell the difference between a moving object and a stationary one without some outside reference to compare it against. This observation ultimately came to be the basis for Albert Einstein to develop the theory of special relativity. Concepts of inertia in Galileo's writings would later come to be refined, modified, and codified by Isaac Newton as the first of his laws of motion (first published in Newton's work, Philosophiæ Naturalis Principia Mathematica, in 1687): Every body perseveres in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed thereon. Despite having defined the concept in his laws of motion, Newton did not actually use the term "inertia". In fact, he originally viewed the respective phenomena as being caused by "innate forces" inherent in matter which resist any acceleration. Given this perspective, and borrowing from Kepler, Newton conceived of "inertia" as "the innate force possessed by an object which resists changes in motion", thus defining "inertia" to mean the cause of the phenomenon, rather than the phenomenon itself. However, Newton's original ideas of "innate resistive force" were ultimately problematic for a variety of reasons, and thus most physicists no longer think in these terms. As no alternate mechanism has been readily accepted, and it is now generally accepted that there may not be one that we can know, the term "inertia" has come to mean simply the phenomenon itself, rather than any inherent mechanism. Thus, ultimately, "inertia" in modern classical physics has come to be a name for the same phenomenon as described by Newton's first law of motion, and the two concepts are now considered to be equivalent. Relativity Albert Einstein's theory of special relativity, as proposed in his 1905 paper entitled "On the Electrodynamics of Moving Bodies", was built on the understanding of inertial reference frames developed by Galileo, Huygens and Newton. While this revolutionary theory did significantly change the meaning of many Newtonian concepts such as mass, energy, and distance, Einstein's concept of inertia remained at first unchanged from Newton's original meaning. However, this resulted in a limitation inherent in special relativity: the principle of relativity could only apply to inertial reference frames. To address this limitation, Einstein developed his general theory of relativity ("The Foundation of the General Theory of Relativity", 1916), which provided a theory including noninertial (accelerated) reference frames. In general relativity, the concept of inertial motion got a broader meaning. Taking into account general relativity, inertial motion is any movement of a body that is not affected by forces of electrical, magnetic, or other origin, but that is only under the influence of gravitational masses. Physically speaking, this happens to be exactly what a properly functioning three-axis accelerometer is indicating when it does not detect any proper acceleration. Etymology The term inertia comes from the Latin word iners, meaning idle or sluggish. Rotational inertia A quantity related to inertia is rotational inertia (see moment of inertia), the property that a rotating rigid body maintains its state of uniform rotational motion. Its angular momentum remains unchanged unless an external torque is applied; this is called conservation of angular momentum. 
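As a simple numerical illustration of conservation of angular momentum, consider a rigid body whose moment of inertia changes, as when a figure skater pulls in their arms; the numbers below are invented:

```python
# With no external torque, L = I * omega is constant, so reducing the
# moment of inertia I must raise the spin rate omega.

I1, omega1 = 4.0, 2.0    # initial moment of inertia (kg·m^2) and spin (rad/s)
L = I1 * omega1          # angular momentum: 8.0 kg·m^2/s, conserved

I2 = 2.0                 # arms pulled in: smaller moment of inertia
omega2 = L / I2          # new spin rate: 4.0 rad/s, i.e. faster rotation

print(L, omega2)         # 8.0 4.0
```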
Rotational inertia is often considered in relation to a rigid body. For example, a gyroscope uses the property that it resists any change in the axis of rotation. See also Flywheel energy storage devices which may also be known as an Inertia battery General relativity Vertical and horizontal Inertial navigation system Inertial response of synchronous generators in an electrical grid Kinetic energy List of moments of inertia Mach's principle Newton's laws of motion Classical mechanics Special relativity Parallel axis theorem References Further reading Butterfield, H (1957), The Origins of Modern Science, . Clement, J (1982), "Students' preconceptions in introductory mechanics", American Journal of Physics vol 50, pp 66–71 Crombie, A C (1959), Medieval and Early Modern Science, vol. 2. McCloskey, M (1983), "Intuitive physics", Scientific American, April, pp. 114–123. McCloskey, M & Carmazza, A (1980), "Curvilinear motion in the absence of external forces: naïve beliefs about the motion of objects", Science vol. 210, pp. 1139–1141. External links Why Does the Earth Spin? (YouTube) Classical mechanics Gyroscopes Mass Velocity Articles containing video clips
Inertia
[ "Physics", "Mathematics" ]
2,446
[ "Scalar physical quantities", "Physical phenomena", "Physical quantities", "Quantity", "Mass", "Classical mechanics", "Size", "Motion (physics)", "Mechanics", "Vector physical quantities", "Velocity", "Wikipedia categories named after physical quantities", "Matter" ]
14,951
https://en.wikipedia.org/wiki/Ionic%20bonding
Ionic bonding is a type of chemical bonding that involves the electrostatic attraction between oppositely charged ions, or between two atoms with sharply different electronegativities, and is the primary interaction occurring in ionic compounds. It is one of the main types of bonding, along with covalent bonding and metallic bonding. Ions are atoms (or groups of atoms) with an electrostatic charge. Atoms that gain electrons make negatively charged ions (called anions). Atoms that lose electrons make positively charged ions (called cations). This transfer of electrons is known as electrovalence in contrast to covalence. In the simplest case, the cation is a metal atom and the anion is a nonmetal atom, but these ions can be more complex, e.g. molecular ions like NH4+ or SO42−. In simpler words, an ionic bond results from the transfer of electrons from a metal to a non-metal to obtain a full valence shell for both atoms. Clean ionic bonding — in which one atom or molecule completely transfers an electron to another — cannot exist: all ionic compounds have some degree of covalent bonding or electron sharing. Thus, the term "ionic bonding" is given when the ionic character is greater than the covalent character – that is, a bond in which there is a large difference in electronegativity between the two atoms, causing the bonding to be more polar (ionic) than in covalent bonding where electrons are shared more equally. Bonds with partially ionic and partially covalent characters are called polar covalent bonds. Ionic compounds conduct electricity when molten or in solution, typically not when solid. Ionic compounds generally have a high melting point, depending on the charge of the ions they consist of. The higher the charges the stronger the cohesive forces and the higher the melting point. They also tend to be soluble in water; the stronger the cohesive forces, the lower the solubility. Overview Atoms that have an almost full or almost empty valence shell tend to be very reactive. Strongly electronegative atoms (such as halogens) often have only one or two empty electron states in their valence shell, and frequently bond with other atoms or gain electrons to form anions. Weakly electronegative atoms (such as alkali metals) have relatively few valence electrons, which can easily be lost to strongly electronegative atoms. As a result, weakly electronegative atoms tend to distort their electron cloud and form cations. Properties of ionic bonds They are considered to be among the strongest of all types of chemical bonds. This often causes ionic compounds to be very stable. Ionic bonds have high bond energy. Bond energy is the mean amount of energy required to break the bond in the gaseous state. Most ionic compounds exist in the form of a crystal structure, in which the ions occupy the corners of the crystal. Such a structure is called a crystal lattice. Ionic compounds lose their crystal lattice structure and break up into ions when dissolved in water or any other polar solvent. This process is called solvation. The presence of these free ions makes aqueous ionic compound solutions good conductors of electricity. The same occurs when the compounds are heated above their melting point in a process known as melting. Formation Ionic bonding can result from a redox reaction when atoms of an element (usually metal), whose ionization energy is low, give some of their electrons to achieve a stable electron configuration. In doing so, cations are formed. 
An atom of another element (usually nonmetal) with greater electron affinity accepts one or more electrons to attain a stable electron configuration, and after accepting electrons an atom becomes an anion. Typically, the stable electron configuration is one of the noble gases for elements in the s-block and the p-block, and particular stable electron configurations for d-block and f-block elements. The electrostatic attraction between the anions and cations leads to the formation of a solid with a crystallographic lattice in which the ions are stacked in an alternating fashion. In such a lattice, it is usually not possible to distinguish discrete molecular units, so that the compounds formed are not molecular. However, the ions themselves can be complex and form molecular ions like the acetate anion or the ammonium cation. For example, common table salt is sodium chloride. When sodium (Na) and chlorine (Cl) are combined, the sodium atoms each lose an electron, forming cations (Na+), and the chlorine atoms each gain an electron to form anions (Cl−). These ions are then attracted to each other in a 1:1 ratio to form sodium chloride (NaCl): Na + Cl → Na+ + Cl− → NaCl. However, to maintain charge neutrality, strict ratios between anions and cations are observed so that ionic compounds, in general, obey the rules of stoichiometry despite not being molecular compounds. For compounds that are transitional to the alloys and possess mixed ionic and metallic bonding, this may no longer be the case. Many sulfides, for example, form non-stoichiometric compounds. Many ionic compounds are referred to as salts as they can also be formed by the neutralization reaction of an Arrhenius base like NaOH with an Arrhenius acid like HCl: NaOH + HCl → NaCl + H2O. The salt NaCl is then said to consist of the acid rest Cl− and the base rest Na+. The removal of electrons to form the cation is endothermic, raising the system's overall energy. There may also be energy changes associated with breaking of existing bonds or the addition of more than one electron to form anions. However, the action of the anion's accepting the cation's valence electrons and the subsequent attraction of the ions to each other releases (lattice) energy and, thus, lowers the overall energy of the system. Ionic bonding will occur only if the overall energy change for the reaction is favorable. In general, the reaction is exothermic, but, e.g., the formation of mercuric oxide (HgO) is endothermic. The charge of the resulting ions is a major factor in the strength of ionic bonding, e.g. a salt C+A− is held together by electrostatic forces roughly four times weaker than C2+A2− according to Coulomb's law, where C and A represent a generic cation and anion respectively. The sizes of the ions and the particular packing of the lattice are ignored in this rather simplistic argument. Structures Ionic compounds in the solid state form lattice structures. The two principal factors in determining the form of the lattice are the relative charges of the ions and their relative sizes. Some structures are adopted by a number of compounds; for example, the structure of the rock salt sodium chloride is also adopted by many alkali halides, and binary oxides such as magnesium oxide. Pauling's rules provide guidelines for predicting and rationalizing the crystal structures of ionic crystals. Strength of the bonding For a solid crystalline ionic compound the enthalpy change in forming the solid from gaseous ions is termed the lattice energy. 
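The factor-of-four claim above follows directly from Coulomb's law, since the pair attraction scales with the product of the ionic charges. The sketch below makes this explicit for two isolated point ions; the separation is an invented NaCl-like value, and the point-charge picture ignores the lattice geometry discussed below:

```python
# Coulomb energy of two point ions: U = -k (z1 e)(z2 e) / r.
# Doubling both charges (C+A-  ->  C2+A2-) scales U by 2 * 2 = 4.

K_E = 8.9875e9         # Coulomb constant, N·m^2/C^2
E_CHARGE = 1.602e-19   # elementary charge, C

def pair_energy(z_cation, z_anion, r):
    """Electrostatic energy (J, negative = bound) of an isolated ion pair."""
    return -K_E * (z_cation * E_CHARGE) * (z_anion * E_CHARGE) / r

r = 2.82e-10                 # an NaCl-like separation, m
u11 = pair_energy(1, 1, r)   # C+A-
u22 = pair_energy(2, 2, r)   # C2+A2-
print(u22 / u11)   # 4.0: the doubly charged pair is bound four times as strongly
```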
The experimental value for the lattice energy can be determined using the Born–Haber cycle. It can also be calculated (predicted) using the Born–Landé equation as the sum of the electrostatic potential energy, calculated by summing interactions between cations and anions, and a short-range repulsive potential energy term. The electrostatic potential can be expressed in terms of the interionic separation and a constant (the Madelung constant) that takes account of the geometry of the crystal. The Born–Landé equation gives a reasonable fit to the lattice energy of, e.g., sodium chloride, where the calculated (predicted) value is −756 kJ/mol, which compares to −787 kJ/mol using the Born–Haber cycle. In aqueous solution the binding strength can be described by the Bjerrum or Fuoss equation as a function of the ion charges, and is rather independent of the nature of the ions, such as their polarizability or size. The strength of salt bridges is most often evaluated by measurements of equilibria between molecules containing cationic and anionic sites, most often in solution. Equilibrium constants in water indicate additive free energy contributions for each salt bridge. Another method for the identification of salt bridges in complicated molecules is crystallography, sometimes also NMR spectroscopy. The attractive forces defining the strength of ionic bonding can be modeled by Coulomb's law. Ionic bond strengths are typically (cited ranges vary) between 170 and 1500 kJ/mol. Polarization power effects Ions in crystal lattices of purely ionic compounds are spherical; however, if the positive ion is small and/or highly charged, it will distort the electron cloud of the negative ion, an effect summarised in Fajans' rules. This polarization of the negative ion leads to a build-up of extra charge density between the two nuclei, that is, to partial covalency. Larger negative ions are more easily polarized, but the effect is usually important only when positive ions with charges of 3+ (e.g., Al3+) are involved. However, 2+ ions (Be2+) or even 1+ (Li+) show some polarizing power because their sizes are so small (e.g., LiI is ionic but has some covalent bonding present). Note that this is not the ionic polarization effect that refers to the displacement of ions in the lattice due to the application of an electric field. Comparison with covalent bonding In ionic bonding, the atoms are bound by the attraction of oppositely charged ions, whereas, in covalent bonding, atoms are bound by sharing electrons to attain stable electron configurations. In covalent bonding, the molecular geometry around each atom is determined by valence shell electron pair repulsion (VSEPR) rules, whereas, in ionic materials, the geometry follows maximum packing rules. One could say that covalent bonding is more directional in the sense that the energy penalty for not adhering to the optimum bond angles is large, whereas ionic bonding has no such penalty. There are no shared electron pairs to repel each other, so the ions should simply be packed as efficiently as possible. This often leads to much higher coordination numbers. In NaCl, each ion has 6 bonds and all bond angles are 90°. In CsCl the coordination number is 8. By comparison, carbon typically has a maximum of four bonds. Purely ionic bonding cannot exist, as the proximity of the entities involved in the bonding allows some degree of sharing electron density between them. Therefore, all ionic bonding has some covalent character. 
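As a worked example of the Born–Landé estimate discussed above, the sketch below evaluates the equation for NaCl. The Madelung constant, interionic distance, and Born exponent are standard textbook values rather than quantities taken from this article, and small differences in the chosen constants account for the spread around the −756 kJ/mol figure quoted above:

```python
import math

# Born-Lande lattice energy:
#   E = -(N_A * M * z+ * z- * e^2) / (4 * pi * eps0 * r0) * (1 - 1/n)

N_A  = 6.02214e23    # Avogadro constant, 1/mol
EPS0 = 8.85419e-12   # vacuum permittivity, F/m
E_CH = 1.60218e-19   # elementary charge, C

def born_lande(M, z_plus, z_minus, r0, n):
    """Lattice energy in kJ/mol (negative = released on forming the solid)."""
    coulomb = -(N_A * M * z_plus * z_minus * E_CH**2) / (4 * math.pi * EPS0 * r0)
    return coulomb * (1 - 1 / n) / 1000.0

# NaCl: Madelung constant 1.7476, nearest-neighbour distance 282 pm,
# Born exponent n = 8 (average of the Ne-like and Ar-like ion values).
print(round(born_lande(1.7476, 1, 1, 2.82e-10, 8)))   # about -753 kJ/mol
```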
Bonding is therefore considered ionic where the ionic character is greater than the covalent character. The larger the difference in electronegativity between the two types of atoms involved in the bonding, the more ionic (polar) it is. Bonds with partially ionic and partially covalent character are called polar covalent bonds. For example, Na–Cl and Mg–O interactions have a few percent covalency, while Si–O bonds are usually ~50% ionic and ~50% covalent. Pauling estimated that an electronegativity difference of 1.7 (on the Pauling scale) corresponds to 50% ionic character, so that a difference greater than 1.7 corresponds to a bond which is predominantly ionic. Ionic character in covalent bonds can be directly measured for atoms having quadrupolar nuclei (2H, 14N, 81,79Br, 35,37Cl or 127I). These nuclei are generally the objects of nuclear quadrupole resonance (NQR) and nuclear magnetic resonance (NMR) studies. Interactions between the nuclear quadrupole moments Q and the electric field gradients (EFG) are characterized via the nuclear quadrupole coupling constants QCC = e²qzzQ/h, where the eqzz term corresponds to the principal component of the EFG tensor and e is the elementary charge. In turn, the electric field gradient opens the way to description of bonding modes in molecules when the QCC values are accurately determined by NMR or NQR methods. In general, when ionic bonding occurs in the solid (or liquid) state, it is not possible to talk about a single "ionic bond" between two individual atoms, because the cohesive forces that keep the lattice together are of a more collective nature. This is quite different in the case of covalent bonding, where we can often speak of a distinct bond localized between two particular atoms. However, even if ionic bonding is combined with some covalency, the result is not necessarily discrete bonds of a localized character. In such cases, the resulting bonding often requires description in terms of a band structure consisting of gigantic molecular orbitals spanning the entire crystal. Thus, the bonding in the solid often retains its collective rather than localized nature. When the difference in electronegativity is decreased, the bonding may then lead to a semiconductor, a semimetal or eventually a metallic conductor with metallic bonding. See also Coulomb's law Salt bridge (protein and supramolecular) Ionic potential Linear combination of atomic orbitals Hybridization Chemical polarity Ioliomics Electron configuration Aufbau principle Quantum numbers Azimuthal quantum number Principal quantum number Magnetic quantum number Spin quantum number References External links Ionic bonding tutorial Video on ionic bonding Chemical bonding Ions Supramolecular chemistry
Ionic bonding
[ "Physics", "Chemistry", "Materials_science" ]
2,774
[ "Matter", "Condensed matter physics", "nan", "Nanotechnology", "Chemical bonding", "Ions", "Supramolecular chemistry" ]
14,958
https://en.wikipedia.org/wiki/Immune%20system
The immune system is a network of biological systems that protects an organism from diseases. It detects and responds to a wide variety of pathogens, from viruses to bacteria, as well as cancer cells, parasitic worms, and also objects such as wood splinters, distinguishing them from the organism's own healthy tissue. Many species have two major subsystems of the immune system. The innate immune system provides a preconfigured response to broad groups of situations and stimuli. The adaptive immune system provides a tailored response to each stimulus by learning to recognize molecules it has previously encountered. Both use molecules and cells to perform their functions. Nearly all organisms have some kind of immune system. Bacteria have a rudimentary immune system in the form of enzymes that protect against viral infections. Other basic immune mechanisms evolved in ancient plants and animals and remain in their modern descendants. These mechanisms include phagocytosis, antimicrobial peptides called defensins, and the complement system. Jawed vertebrates, including humans, have even more sophisticated defense mechanisms, including the ability to adapt to recognize pathogens more efficiently. Adaptive (or acquired) immunity creates an immunological memory leading to an enhanced response to subsequent encounters with that same pathogen. This process of acquired immunity is the basis of vaccination. Dysfunction of the immune system can cause autoimmune diseases, inflammatory diseases and cancer. Immunodeficiency occurs when the immune system is less active than normal, resulting in recurring and life-threatening infections. In humans, immunodeficiency can be the result of a genetic disease such as severe combined immunodeficiency, acquired conditions such as HIV/AIDS, or the use of immunosuppressive medication. Autoimmunity results from a hyperactive immune system attacking normal tissues as if they were foreign organisms. Common autoimmune diseases include Hashimoto's thyroiditis, rheumatoid arthritis, diabetes mellitus type 1, and systemic lupus erythematosus. Immunology covers the study of all aspects of the immune system. Layered defense The immune system protects its host from infection with layered defenses of increasing specificity. Physical barriers prevent pathogens such as bacteria and viruses from entering the organism. If a pathogen breaches these barriers, the innate immune system provides an immediate, but non-specific response. Innate immune systems are found in all animals. If pathogens successfully evade the innate response, vertebrates possess a second layer of protection, the adaptive immune system, which is activated by the innate response. Here, the immune system adapts its response during an infection to improve its recognition of the pathogen. This improved response is then retained after the pathogen has been eliminated, in the form of an immunological memory, and allows the adaptive immune system to mount faster and stronger attacks each time this pathogen is encountered. Both innate and adaptive immunity depend on the ability of the immune system to distinguish between self and non-self molecules. In immunology, self molecules are components of an organism's body that can be distinguished from foreign substances by the immune system. Conversely, non-self molecules are those recognized as foreign molecules. 
One class of non-self molecules, antigens (originally named for being antibody generators), is defined as substances that bind to specific immune receptors and elicit an immune response. Surface barriers Several barriers protect organisms from infection, including mechanical, chemical, and biological barriers. The waxy cuticle of most leaves, the exoskeleton of insects, the shells and membranes of externally deposited eggs, and skin are examples of mechanical barriers that are the first line of defense against infection. Organisms cannot be completely sealed from their environments, so systems act to protect body openings such as the lungs, intestines, and the genitourinary tract. In the lungs, coughing and sneezing mechanically eject pathogens and other irritants from the respiratory tract. The flushing action of tears and urine also mechanically expels pathogens, while mucus secreted by the respiratory and gastrointestinal tract serves to trap and entangle microorganisms. Chemical barriers also protect against infection. The skin and respiratory tract secrete antimicrobial peptides such as the β-defensins. Enzymes such as lysozyme and phospholipase A2 in saliva, tears, and breast milk are also antibacterials. Vaginal secretions serve as a chemical barrier following menarche, when they become slightly acidic, while semen contains defensins and zinc to kill pathogens. In the stomach, gastric acid serves as a chemical defense against ingested pathogens. Within the genitourinary and gastrointestinal tracts, commensal flora serve as biological barriers by competing with pathogenic bacteria for food and space and, in some cases, changing the conditions in their environment, such as pH or available iron. As a result, the probability that pathogens will reach sufficient numbers to cause illness is reduced. Innate immune system Microorganisms or toxins that successfully enter an organism encounter the cells and mechanisms of the innate immune system. The innate response is usually triggered when microbes are identified by pattern recognition receptors, which recognize components that are conserved among broad groups of microorganisms, or when damaged, injured or stressed cells send out alarm signals, many of which are recognized by the same receptors as those that recognize pathogens. Innate immune defenses are non-specific, meaning these systems respond to pathogens in a generic way. This system does not confer long-lasting immunity against a pathogen. The innate immune system is the dominant system of host defense in most organisms, and the only one in plants. Immune sensing Cells in the innate immune system use pattern recognition receptors to recognize molecular structures that are produced by pathogens. They are proteins expressed mainly by cells of the innate immune system, such as dendritic cells, macrophages, monocytes, neutrophils, and epithelial cells, to identify two classes of molecules: pathogen-associated molecular patterns (PAMPs), which are associated with microbial pathogens, and damage-associated molecular patterns (DAMPs), which are associated with components of the host's cells that are released during cell damage or cell death. Recognition of extracellular or endosomal PAMPs is mediated by transmembrane proteins known as toll-like receptors (TLRs). TLRs share a typical structural motif, the leucine-rich repeats (LRRs), which give them a curved shape. 
Toll-like receptors were first discovered in Drosophila and trigger the synthesis and secretion of cytokines and activation of other host defense programs that are necessary for both innate and adaptive immune responses. Ten toll-like receptors have been described in humans. Cells in the innate immune system also have pattern recognition receptors inside the cell, which detect infection or cell damage. Three major classes of these "cytosolic" receptors are NOD–like receptors, RIG (retinoic acid-inducible gene)-like receptors, and cytosolic DNA sensors. Innate immune cells Some leukocytes (white blood cells) act like independent, single-celled organisms and are the second arm of the innate immune system. The innate leukocytes include the "professional" phagocytes (macrophages, neutrophils, and dendritic cells). These cells identify and eliminate pathogens, either by attacking larger pathogens through contact or by engulfing and then killing microorganisms. The other cells involved in the innate response include innate lymphoid cells, mast cells, eosinophils, basophils, and natural killer cells. Phagocytosis is an important feature of cellular innate immunity performed by cells called phagocytes that engulf pathogens or particles. Phagocytes generally patrol the body searching for pathogens, but can be called to specific locations by cytokines. Once a pathogen has been engulfed by a phagocyte, it becomes trapped in an intracellular vesicle called a phagosome, which subsequently fuses with another vesicle called a lysosome to form a phagolysosome. The pathogen is killed by the activity of digestive enzymes or following a respiratory burst that releases free radicals into the phagolysosome. Phagocytosis evolved as a means of acquiring nutrients, but this role was extended in phagocytes to include engulfment of pathogens as a defense mechanism. Phagocytosis probably represents the oldest form of host defense, as phagocytes have been identified in both vertebrate and invertebrate animals. Neutrophils and macrophages are phagocytes that travel throughout the body in pursuit of invading pathogens. Neutrophils are normally found in the bloodstream and are the most abundant type of phagocyte, representing 50% to 60% of total circulating leukocytes. During the acute phase of inflammation, neutrophils migrate toward the site of inflammation in a process called chemotaxis and are usually the first cells to arrive at the scene of infection. Macrophages are versatile cells that reside within tissues and produce an array of chemicals including enzymes, complement proteins, and cytokines. They can also act as scavengers that rid the body of worn-out cells and other debris and as antigen-presenting cells (APCs) that activate the adaptive immune system. Dendritic cells are phagocytes in tissues that are in contact with the external environment; therefore, they are located mainly in the skin, nose, lungs, stomach, and intestines. They are named for their resemblance to neuronal dendrites, as both have many spine-like projections. Dendritic cells serve as a link between the bodily tissues and the innate and adaptive immune systems, as they present antigens to T cells, one of the key cell types of the adaptive immune system. Granulocytes are leukocytes that have granules in their cytoplasm. In this category are neutrophils, mast cells, basophils, and eosinophils. Mast cells reside in connective tissues and mucous membranes and regulate the inflammatory response. They are most often associated with allergy and anaphylaxis. 
Basophils and eosinophils are related to neutrophils. They secrete chemical mediators that are involved in defending against parasites and play a role in allergic reactions, such as asthma. Innate lymphoid cells (ILCs) are a group of innate immune cells that are derived from the common lymphoid progenitor and belong to the lymphoid lineage. These cells are defined by the absence of an antigen-specific B- or T-cell receptor (TCR), because they lack the recombination-activating gene. ILCs do not express myeloid or dendritic cell markers. Natural killer cells (NK cells) are lymphocytes and a component of the innate immune system that does not directly attack invading microbes. Rather, NK cells destroy compromised host cells, such as tumor cells or virus-infected cells, recognizing such cells by a condition known as "missing self". This term describes cells with low levels of a cell-surface marker called MHC I (major histocompatibility complex)—a situation that can arise in viral infections of host cells. Normal body cells are not recognized and attacked by NK cells because they express intact self MHC antigens. Those MHC antigens are recognized by killer cell immunoglobulin receptors, which essentially put the brakes on NK cells. Inflammation Inflammation is one of the first responses of the immune system to infection. The symptoms of inflammation are redness, swelling, heat, and pain, which are caused by increased blood flow into tissue. Inflammation is produced by eicosanoids and cytokines, which are released by injured or infected cells. Eicosanoids include prostaglandins that produce fever and the dilation of blood vessels associated with inflammation, and leukotrienes that attract certain white blood cells (leukocytes). Common cytokines include interleukins that are responsible for communication between white blood cells; chemokines that promote chemotaxis; and interferons that have antiviral effects, such as shutting down protein synthesis in the host cell. Growth factors and cytotoxic factors may also be released. These cytokines and other chemicals recruit immune cells to the site of infection and promote the healing of any damaged tissue following the removal of pathogens. The pattern-recognition receptors called inflammasomes are multiprotein complexes (consisting of an NLR, the adaptor protein ASC, and the effector molecule pro-caspase-1) that form in response to cytosolic PAMPs and DAMPs, whose function is to generate active forms of the inflammatory cytokines IL-1β and IL-18. Humoral defenses The complement system is a biochemical cascade that attacks the surfaces of foreign cells. It contains over 20 different proteins and is named for its ability to "complement" the killing of pathogens by antibodies. Complement is the major humoral component of the innate immune response. Many species have complement systems, including non-mammals like plants, fish, and some invertebrates. In humans, this response is activated by complement binding to antibodies that have attached to these microbes or the binding of complement proteins to carbohydrates on the surfaces of microbes. This recognition signal triggers a rapid killing response. The speed of the response is a result of signal amplification that occurs after sequential proteolytic activation of complement molecules, which are also proteases. After complement proteins initially bind to the microbe, they activate their protease activity, which in turn activates other complement proteases, and so on. 
This produces a catalytic cascade that amplifies the initial signal by controlled positive feedback. The cascade results in the production of peptides that attract immune cells, increase vascular permeability, and opsonize (coat) the surface of a pathogen, marking it for destruction. This deposition of complement can also kill cells directly by disrupting their plasma membrane via the formation of a membrane attack complex. Adaptive immune system The adaptive immune system evolved in early vertebrates and allows for a stronger immune response as well as immunological memory, where each pathogen is "remembered" by a signature antigen. The adaptive immune response is antigen-specific and requires the recognition of specific "non-self" antigens during a process called antigen presentation. Antigen specificity allows for the generation of responses that are tailored to specific pathogens or pathogen-infected cells. The ability to mount these tailored responses is maintained in the body by "memory cells". Should a pathogen infect the body more than once, these specific memory cells are used to quickly eliminate it. Recognition of antigen The cells of the adaptive immune system are special types of leukocytes, called lymphocytes. B cells and T cells are the major types of lymphocytes and are derived from hematopoietic stem cells in the bone marrow. B cells are involved in the humoral immune response, whereas T cells are involved in the cell-mediated immune response. Killer T cells only recognize antigens coupled to Class I MHC molecules, while helper T cells and regulatory T cells only recognize antigens coupled to Class II MHC molecules. These two mechanisms of antigen presentation reflect the different roles of the two types of T cell. A third, minor subtype is the γδ T cells, which recognize intact antigens that are not bound to MHC receptors. The double-positive T cells are exposed to a wide variety of self-antigens in the thymus, where iodine is necessary for thymus development and activity. In contrast, the B cell antigen-specific receptor is an antibody molecule on the B cell surface and recognizes native (unprocessed) antigen without any need for antigen processing. Such antigens may be large molecules found on the surfaces of pathogens, but can also be small haptens (such as penicillin) attached to a carrier molecule. Each lineage of B cell expresses a different antibody, so the complete set of B cell antigen receptors represents all the antibodies that the body can manufacture. When B or T cells encounter their related antigens, they multiply and many "clones" of the cells are produced that target the same antigen. This is called clonal selection. Antigen presentation to T lymphocytes Both B cells and T cells carry receptor molecules that recognize specific targets. T cells recognize a "non-self" target, such as a pathogen, only after antigens (small fragments of the pathogen) have been processed and presented in combination with a "self" receptor called a major histocompatibility complex (MHC) molecule. Cell mediated immunity There are two major subtypes of T cells: the killer T cell and the helper T cell. In addition there are regulatory T cells, which have a role in modulating the immune response. Killer T cells Killer T cells are a sub-group of T cells that kill cells that are infected with viruses (and other pathogens), or are otherwise damaged or dysfunctional. As with B cells, each type of T cell recognizes a different antigen.
Killer T cells are activated when their T-cell receptor binds to this specific antigen in a complex with the MHC Class I receptor of another cell. Recognition of this MHC:antigen complex is aided by a co-receptor on the T cell, called CD8. The T cell then travels throughout the body in search of cells where the MHC I receptors bear this antigen. When an activated T cell contacts such cells, it releases cytotoxins, such as perforin, which form pores in the target cell's plasma membrane, allowing ions, water and toxins to enter. The entry of granzymes, serine proteases released alongside perforin, then induces the target cell to undergo apoptosis. T cell killing of host cells is particularly important in preventing the replication of viruses. T cell activation is tightly controlled and generally requires a very strong MHC/antigen activation signal, or additional activation signals provided by "helper" T cells (see below). Helper T cells Helper T cells regulate both the innate and adaptive immune responses and help determine which immune responses the body makes to a particular pathogen. These cells have no cytotoxic activity and do not kill infected cells or clear pathogens directly. They instead control the immune response by directing other cells to perform these tasks. Helper T cells express T cell receptors that recognize antigen bound to Class II MHC molecules. The MHC:antigen complex is also recognized by the helper cell's CD4 co-receptor, which recruits molecules inside the T cell (such as Lck) that are responsible for the T cell's activation. Helper T cells have a weaker association with the MHC:antigen complex than observed for killer T cells, meaning many receptors (around 200–300) on the helper T cell must be bound by an MHC:antigen to activate the helper cell, while killer T cells can be activated by engagement of a single MHC:antigen molecule. Helper T cell activation also requires a longer duration of engagement with an antigen-presenting cell. The activation of a resting helper T cell causes it to release cytokines that influence the activity of many cell types. Cytokine signals produced by helper T cells enhance the microbicidal function of macrophages and the activity of killer T cells. In addition, helper T cell activation causes an upregulation of molecules expressed on the T cell's surface, such as CD40 ligand (also called CD154), which provide extra stimulatory signals typically required to activate antibody-producing B cells. Gamma delta T cells Gamma delta T cells (γδ T cells) possess an alternative T-cell receptor (TCR), as opposed to CD4+ and CD8+ (αβ) T cells, and share the characteristics of helper T cells, cytotoxic T cells and NK cells. The conditions that produce responses from γδ T cells are not fully understood. Like other 'unconventional' T cell subsets bearing invariant TCRs, such as CD1d-restricted natural killer T cells, γδ T cells straddle the border between innate and adaptive immunity. On one hand, γδ T cells are a component of adaptive immunity as they rearrange TCR genes to produce receptor diversity and can also develop a memory phenotype. On the other hand, the various subsets are also part of the innate immune system, as restricted TCR or NK receptors may be used as pattern recognition receptors. For example, large numbers of human Vγ9/Vδ2 T cells respond within hours to common molecules produced by microbes, and highly restricted Vδ1+ T cells in epithelia respond to stressed epithelial cells.
Humoral immune response A B cell identifies pathogens when antibodies on its surface bind to a specific foreign antigen. This antigen/antibody complex is taken up by the B cell and processed by proteolysis into peptides. The B cell then displays these antigenic peptides on its surface MHC class II molecules. This combination of MHC and antigen attracts a matching helper T cell, which releases lymphokines and activates the B cell. As the activated B cell then begins to divide, its offspring (plasma cells) secrete millions of copies of the antibody that recognizes this antigen. These antibodies circulate in blood plasma and lymph, bind to pathogens expressing the antigen, and mark them for destruction by complement activation or for uptake and destruction by phagocytes. Antibodies can also neutralize challenges directly, by binding to bacterial toxins or by interfering with the receptors that viruses and bacteria use to infect cells. Newborn infants have no prior exposure to microbes and are particularly vulnerable to infection. Several layers of passive protection are provided by the mother. During pregnancy, a particular type of antibody, called IgG, is transported from mother to baby directly through the placenta, so human babies have high levels of antibodies even at birth, with the same range of antigen specificities as their mother. Breast milk or colostrum also contains antibodies that are transferred to the gut of the infant and protect against bacterial infections until the newborn can synthesize its own antibodies. This is passive immunity because the fetus does not actually make any memory cells or antibodies—it only borrows them. This passive immunity is usually short-term, lasting from a few days up to several months. In medicine, protective passive immunity can also be transferred artificially from one individual to another. Immunological memory When B cells and T cells are activated and begin to replicate, some of their offspring become long-lived memory cells. Throughout the lifetime of an animal, these memory cells remember each specific pathogen encountered and can mount a strong response if the pathogen is detected again. T cells recognize pathogens by small protein-based infection signals, called antigens, that bind directly to T cell surface receptors. B cells use an immunoglobulin protein to recognize pathogens by their antigens. This is "adaptive" because it occurs during the lifetime of an individual as an adaptation to infection with that pathogen and prepares the immune system for future challenges. Immunological memory can be in the form of either passive short-term memory or active long-term memory. Physiological regulation The immune system is involved in many aspects of physiological regulation in the body. The immune system interacts intimately with other systems, such as the endocrine and the nervous systems. The immune system also plays a crucial role in embryogenesis (development of the embryo), as well as in tissue repair and regeneration. Hormones Hormones can act as immunomodulators, altering the sensitivity of the immune system. For example, female sex hormones are known immunostimulators of both adaptive and innate immune responses. Some autoimmune diseases such as lupus erythematosus strike women preferentially, and their onset often coincides with puberty. By contrast, male sex hormones such as testosterone seem to be immunosuppressive.
Other hormones appear to regulate the immune system as well, most notably prolactin, growth hormone and vitamin D. Vitamin D Although cellular studies indicate that vitamin D has receptors and probable functions in the immune system, there is no clinical evidence to prove that vitamin D deficiency increases the risk for immune diseases or that vitamin D supplementation lowers immune disease risk. A 2011 United States Institute of Medicine report stated that "outcomes related to ... immune functioning and autoimmune disorders, and infections ... could not be linked reliably with calcium or vitamin D intake and were often conflicting." Sleep and rest The immune system is affected by sleep and rest, and sleep deprivation is detrimental to immune function. Complex feedback loops involving cytokines, such as interleukin-1 and tumor necrosis factor-α produced in response to infection, appear to also play a role in the regulation of non-rapid eye movement (non-REM) sleep. Thus the immune response to infection may result in changes to the sleep cycle, including an increase in slow-wave sleep relative to REM sleep. In people with sleep deprivation, active immunizations may have a diminished effect and may result in lower antibody production and a lower immune response than would be noted in a well-rested individual. Additionally, proteins such as NFIL3, which have been shown to be closely intertwined with both T-cell differentiation and circadian rhythms, can be affected by the disturbance of natural light and dark cycles caused by sleep deprivation. These disruptions can lead to an increase in chronic conditions such as heart disease, chronic pain, and asthma. In addition to the negative consequences of sleep deprivation, sleep and the intertwined circadian system have been shown to have strong regulatory effects on immunological functions affecting both innate and adaptive immunity. First, during the early slow-wave-sleep stage, a sudden drop in blood levels of cortisol, epinephrine, and norepinephrine causes increased blood levels of the hormones leptin, pituitary growth hormone, and prolactin. These signals induce a pro-inflammatory state through the production of the pro-inflammatory cytokines interleukin-1, interleukin-12, TNF-alpha and IFN-gamma. These cytokines then stimulate immune functions such as immune cell activation, proliferation, and differentiation. During this time of a slowly evolving adaptive immune response, there is a peak in undifferentiated or less differentiated cells, like naïve and central memory T cells. In addition to these effects, the milieu of hormones produced at this time (leptin, pituitary growth hormone, and prolactin) supports the interactions between APCs and T cells, a shift of the Th1/Th2 cytokine balance towards one that supports Th1, an increase in overall Th cell proliferation, and naïve T cell migration to lymph nodes. This is also thought to support the formation of long-lasting immune memory through the initiation of Th1 immune responses. During wake periods, differentiated effector cells, such as cytotoxic natural killer cells and cytotoxic T lymphocytes, peak to elicit an effective response against any intruding pathogens. Anti-inflammatory molecules, such as cortisol and catecholamines, also peak during awake active times. Inflammation would cause serious cognitive and physical impairments if it were to occur during wake times, so inflammation may instead occur during sleep times, aided by the presence of melatonin.
Inflammation causes a great deal of oxidative stress, and the presence of melatonin during sleep times could actively counteract free radical production during this time. Physical exercise Physical exercise has a positive effect on the immune system and, depending on the frequency and intensity, can moderate the pathogenic effects of diseases caused by bacteria and viruses. Immediately after intense exercise there is a transient immunodepression, where the number of circulating lymphocytes decreases and antibody production declines. This may give rise to a window of opportunity for infection and reactivation of latent virus infections, but the evidence is inconclusive. Changes at the cellular level During exercise there is an increase in circulating white blood cells of all types. This is caused by the frictional force of blood flowing on the endothelial cell surface and by catecholamines affecting β-adrenergic receptors (βARs). The number of neutrophils in the blood increases and remains raised for up to six hours, and immature forms are present. Although the increase in neutrophils ("neutrophilia") is similar to that seen during bacterial infections, after exercise the cell population returns to normal by around 24 hours. The number of circulating lymphocytes (mainly natural killer cells) decreases during intense exercise but returns to normal after 4 to 6 hours. Although up to 2% of the cells die, most migrate from the blood to the tissues, mainly the intestines and lungs, where pathogens are most likely to be encountered. Some monocytes leave the blood circulation and migrate to the muscles, where they differentiate and become macrophages. These cells differentiate into two types: proliferative macrophages, which are responsible for increasing the number of stem cells, and restorative macrophages, which are involved in the maturation of those cells into muscle cells. Repair and regeneration The immune system, particularly the innate component, plays a decisive role in tissue repair after an insult. Key actors include macrophages and neutrophils, but other cellular actors, including γδ T cells, innate lymphoid cells (ILCs), and regulatory T cells (Tregs), are also important. The plasticity of immune cells and the balance between pro-inflammatory and anti-inflammatory signals are crucial aspects of efficient tissue repair. Immune components and pathways are involved in regeneration as well, for example in amphibians such as the axolotl, where they contribute to limb regeneration. According to one hypothesis, organisms that can regenerate (e.g., axolotls) could be less immunocompetent than organisms that cannot regenerate. Disorders of human immunity Failures of host defense occur and fall into three broad categories: immunodeficiencies, autoimmunity, and hypersensitivities. Immunodeficiencies Immunodeficiencies occur when one or more of the components of the immune system are inactive. The ability of the immune system to respond to pathogens is diminished in both the young and the elderly, with immune responses beginning to decline at around 50 years of age due to immunosenescence. In developed countries, obesity, alcoholism, and drug use are common causes of poor immune function, while malnutrition is the most common cause of immunodeficiency in developing countries. Diets lacking sufficient protein are associated with impaired cell-mediated immunity, complement activity, phagocyte function, IgA antibody concentrations, and cytokine production.
Additionally, the loss of the thymus at an early age through genetic mutation or surgical removal results in severe immunodeficiency and a high susceptibility to infection. Immunodeficiencies can also be inherited or 'acquired'. Severe combined immunodeficiency is a rare genetic disorder characterized by the disturbed development of functional T cells and B cells caused by numerous genetic mutations. Chronic granulomatous disease, where phagocytes have a reduced ability to destroy pathogens, is an example of an inherited, or congenital, immunodeficiency. AIDS and some types of cancer cause acquired immunodeficiency. Autoimmunity Overactive immune responses form the other end of immune dysfunction, particularly the autoimmune diseases. Here, the immune system fails to properly distinguish between self and non-self, and attacks part of the body. Under normal circumstances, many T cells and antibodies react with "self" peptides. One of the functions of specialized cells (located in the thymus and bone marrow) is to present young lymphocytes with self antigens produced throughout the body and to eliminate those cells that recognize self-antigens, preventing autoimmunity. Common autoimmune diseases include Hashimoto's thyroiditis, rheumatoid arthritis, diabetes mellitus type 1, and systemic lupus erythematosus. Hypersensitivity Hypersensitivity is an immune response that damages the body's own tissues. It is divided into four classes (Types I–IV) based on the mechanisms involved and the time course of the hypersensitive reaction. Type I hypersensitivity is an immediate or anaphylactic reaction, often associated with allergy. Symptoms can range from mild discomfort to death. Type I hypersensitivity is mediated by IgE, which triggers degranulation of mast cells and basophils when cross-linked by antigen. Type II hypersensitivity occurs when antibodies bind to antigens on the individual's own cells, marking them for destruction. This is also called antibody-dependent (or cytotoxic) hypersensitivity, and is mediated by IgG and IgM antibodies. Immune complexes (aggregations of antigens, complement proteins, and IgG and IgM antibodies) deposited in various tissues trigger Type III hypersensitivity reactions. Type IV hypersensitivity (also known as cell-mediated or delayed type hypersensitivity) usually takes between two and three days to develop. Type IV reactions are involved in many autoimmune and infectious diseases, but may also be involved in contact dermatitis. These reactions are mediated by T cells, monocytes, and macrophages. Idiopathic inflammation Inflammation is one of the first responses of the immune system to infection, but it can also appear without known cause, in which case it is termed idiopathic. As described above, it is produced by eicosanoids and cytokines released by injured or infected cells, which recruit immune cells to the affected site and promote the healing of any damaged tissue.
Manipulation in medicine The immune response can be manipulated to suppress unwanted responses resulting from autoimmunity, allergy, and transplant rejection, and to stimulate protective responses against pathogens that largely elude the immune system (see immunization) or cancer. Immunosuppression Immunosuppressive drugs are used to control autoimmune disorders or inflammation when excessive tissue damage occurs, and to prevent rejection after an organ transplant. Anti-inflammatory drugs are often used to control the effects of inflammation. Glucocorticoids are the most powerful of these drugs and can have many undesirable side effects, such as central obesity, hyperglycemia, and osteoporosis. Their use is tightly controlled. Lower doses of anti-inflammatory drugs are often used in conjunction with cytotoxic or immunosuppressive drugs such as methotrexate or azathioprine. Cytotoxic drugs inhibit the immune response by killing dividing cells such as activated T cells. This killing is indiscriminate, so other constantly dividing cells and the organs that contain them are also affected, which causes toxic side effects. Immunosuppressive drugs such as cyclosporin prevent T cells from responding to signals correctly by inhibiting signal transduction pathways. Immunostimulation Claims made by marketers of various products and alternative health providers, such as chiropractors, homeopaths, and acupuncturists, to be able to stimulate or "boost" the immune system generally lack meaningful explanation and evidence of effectiveness. Vaccination Long-term active memory is acquired following infection by activation of B and T cells. Active immunity can also be generated artificially, through vaccination. The principle behind vaccination (also called immunization) is to introduce an antigen from a pathogen to stimulate the immune system and develop specific immunity against that particular pathogen without causing disease associated with that organism. This deliberate induction of an immune response is successful because it exploits the natural specificity of the immune system, as well as its inducibility. With infectious disease remaining one of the leading causes of death in the human population, vaccination represents the most effective manipulation of the immune system mankind has developed. Many vaccines are based on acellular components of micro-organisms, including harmless toxin components. Since many antigens derived from acellular vaccines do not strongly induce the adaptive response, most bacterial vaccines are provided with additional adjuvants that activate the antigen-presenting cells of the innate immune system and maximize immunogenicity. Tumor immunology Another important role of the immune system is to identify and eliminate tumors. This is called immune surveillance. The transformed cells of tumors express antigens that are not found on normal cells. To the immune system, these antigens appear foreign, and their presence causes immune cells to attack the transformed tumor cells. The antigens expressed by tumors have several sources; some are derived from oncogenic viruses like human papillomavirus, which causes cancer of the cervix, vulva, vagina, penis, anus, mouth, and throat, while others are the organism's own proteins that occur at low levels in normal cells but reach high levels in tumor cells. One example is an enzyme called tyrosinase that, when expressed at high levels, transforms certain skin cells (for example, melanocytes) into tumors called melanomas.
A third possible source of tumor antigens is proteins normally important for regulating cell growth and survival that commonly mutate into cancer-inducing molecules called oncogenes. The main response of the immune system to tumors is to destroy the abnormal cells using killer T cells, sometimes with the assistance of helper T cells. Tumor antigens are presented on MHC class I molecules in a similar way to viral antigens. This allows killer T cells to recognize the tumor cell as abnormal. NK cells also kill tumorous cells in a similar way, especially if the tumor cells have fewer MHC class I molecules on their surface than normal; this is a common phenomenon with tumors. Sometimes antibodies are generated against tumor cells, allowing for their destruction by the complement system. Some tumors evade the immune system and go on to become cancers. Tumor cells often have a reduced number of MHC class I molecules on their surface, thus avoiding detection by killer T cells. Some tumor cells also release products that inhibit the immune response; for example, by secreting the cytokine TGF-β, which suppresses the activity of macrophages and lymphocytes. In addition, immunological tolerance may develop against tumor antigens, so the immune system no longer attacks the tumor cells. Paradoxically, macrophages can promote tumor growth when tumor cells send out cytokines that attract macrophages, which then generate cytokines and growth factors such as tumor-necrosis factor alpha that nurture tumor development or promote stem-cell-like plasticity. In addition, a combination of hypoxia in the tumor and a cytokine produced by macrophages induces tumor cells to decrease production of a protein that blocks metastasis, thereby assisting the spread of cancer cells. Anti-tumor M1 macrophages are recruited in the early phases of tumor development but progressively differentiate to M2 macrophages with pro-tumor effects, an immunosuppressive switch. Hypoxia reduces the cytokine production needed for the anti-tumor response, and the macrophages progressively acquire pro-tumor M2 functions driven by signals in the tumor microenvironment, including IL-4 and IL-10. Cancer immunotherapy covers the medical ways to stimulate the immune system to attack cancer tumors. Predicting immunogenicity Some drugs can cause a neutralizing immune response, meaning that the immune system produces neutralizing antibodies that counteract the action of the drugs, particularly if the drugs are administered repeatedly, or in larger doses. This limits the effectiveness of drugs based on larger peptides and proteins (which are typically larger than 6000 Da). In some cases, the drug itself is not immunogenic, but may be co-administered with an immunogenic compound, as is sometimes the case for Taxol. Computational methods have been developed to predict the immunogenicity of peptides and proteins, which are particularly useful in designing therapeutic antibodies, assessing likely virulence of mutations in viral coat particles, and validation of proposed peptide-based drug treatments. Early techniques relied mainly on the observation that hydrophilic amino acids are overrepresented in epitope regions relative to hydrophobic amino acids; however, more recent developments rely on machine learning techniques using databases of existing known epitopes, usually on well-studied virus proteins, as a training set. A publicly accessible database has been established for the cataloguing of epitopes from pathogens known to be recognizable by B cells.
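To make the early hydrophilicity-based approach concrete, the sketch below scores a peptide with a sliding window over the Hopp–Woods hydrophilicity scale, a classic propensity scale of the kind those early techniques used. The scale values and six-residue window are the commonly cited choices for this scale, but the example peptide and the idea of simply reporting the top-scoring window (rather than a calibrated threshold) are illustrative assumptions, not a description of any particular published tool.

```python
# Minimal sketch of classic hydrophilicity-based B-cell epitope scanning
# (Hopp-Woods scale, 6-residue sliding window). Illustrative only.

HOPP_WOODS = {  # per-residue hydrophilicity values (Hopp & Woods, 1981)
    'R': 3.0, 'D': 3.0, 'E': 3.0, 'K': 3.0, 'S': 0.3, 'N': 0.2, 'Q': 0.2,
    'G': 0.0, 'P': 0.0, 'T': -0.4, 'A': -0.5, 'H': -0.5, 'C': -1.0,
    'M': -1.3, 'V': -1.5, 'I': -1.8, 'L': -1.8, 'Y': -2.3, 'F': -2.5,
    'W': -3.4,
}

def hydrophilicity_profile(sequence: str, window: int = 6) -> list[float]:
    """Average Hopp-Woods score for every window of the given length."""
    scores = [HOPP_WOODS[aa] for aa in sequence]
    return [sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)]

def top_candidate_epitope(sequence: str, window: int = 6) -> tuple[str, float]:
    """Return the most hydrophilic window as a crude epitope candidate."""
    profile = hydrophilicity_profile(sequence, window)
    best = max(range(len(profile)), key=lambda i: profile[i])
    return sequence[best:best + window], profile[best]

# Hypothetical peptide, chosen only so charged residues score high:
peptide = "MALWTRKDEESNPAVFILG"
print(top_candidate_epitope(peptide))  # highest-scoring window, e.g. 'RKDEES'
```

Modern predictors replace this single-scale heuristic with machine-learned models trained on curated epitope databases, as noted above.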
The emerging field of bioinformatics-based studies of immunogenicity is referred to as immunoinformatics. Immunoproteomics is the study of large sets of proteins (proteomics) involved in the immune response. Evolution and other mechanisms Evolution of the immune system It is likely that a multicomponent, adaptive immune system arose with the first vertebrates, as invertebrates do not generate lymphocytes or an antibody-based humoral response. Immune systems evolved in deuterostomes. Many species, however, use mechanisms that appear to be precursors of these aspects of vertebrate immunity. Immune systems appear even in the structurally simplest forms of life, with bacteria using a unique defense mechanism, called the restriction modification system, to protect themselves from viral pathogens, called bacteriophages. Prokaryotes (bacteria and archaea) also possess acquired immunity, through a system that uses CRISPR sequences to retain fragments of the genomes of phages that they have come into contact with in the past, which allows them to block virus replication through a form of RNA interference. Prokaryotes also possess other defense mechanisms. Offensive elements of the immune systems are also present in unicellular eukaryotes, but studies of their roles in defense are few. Pattern recognition receptors are proteins used by nearly all organisms to identify molecules associated with pathogens. Antimicrobial peptides called defensins are an evolutionarily conserved component of the innate immune response found in all animals and plants, and represent the main form of invertebrate systemic immunity. The complement system and phagocytic cells are also used by most forms of invertebrate life. Ribonucleases and the RNA interference pathway are conserved across all eukaryotes, and are thought to play a role in the immune response to viruses. Unlike animals, plants lack phagocytic cells, but many plant immune responses involve systemic chemical signals that are sent through a plant. Individual plant cells respond to molecules associated with pathogens known as pathogen-associated molecular patterns or PAMPs. When a part of a plant becomes infected, the plant produces a localized hypersensitive response, whereby cells at the site of infection undergo rapid apoptosis to prevent the spread of the disease to other parts of the plant. Systemic acquired resistance is a type of defensive response used by plants that renders the entire plant resistant to a particular infectious agent. RNA silencing mechanisms are particularly important in this systemic response as they can block virus replication. Alternative adaptive immune system Evolution of the adaptive immune system occurred in an ancestor of the jawed vertebrates. Many of the classical molecules of the adaptive immune system (for example, immunoglobulins and T-cell receptors) exist only in jawed vertebrates. A distinct lymphocyte-derived molecule has been discovered in primitive jawless vertebrates, such as the lamprey and hagfish. These animals possess a large array of molecules called variable lymphocyte receptors (VLRs) that, like the antigen receptors of jawed vertebrates, are produced from only a small number (one or two) of genes. These molecules are believed to bind pathogenic antigens in a similar way to antibodies, and with the same degree of specificity. Manipulation by pathogens The success of any pathogen depends on its ability to elude host immune responses.
Therefore, pathogens evolved several methods that allow them to successfully infect a host, while evading detection or destruction by the immune system. Bacteria often overcome physical barriers by secreting enzymes that digest the barrier, for example, by using a type II secretion system. Alternatively, using a type III secretion system, they may insert a hollow tube into the host cell, providing a direct route for proteins to move from the pathogen to the host. These proteins are often used to shut down host defenses. An evasion strategy used by several pathogens to avoid the innate immune system is to hide within the cells of their host (also called intracellular pathogenesis). Here, a pathogen spends most of its life-cycle inside host cells, where it is shielded from direct contact with immune cells, antibodies and complement. Some examples of intracellular pathogens include viruses, the food poisoning bacterium Salmonella and the eukaryotic parasites that cause malaria (Plasmodium spp.) and leishmaniasis (Leishmania spp.). Other bacteria, such as Mycobacterium tuberculosis, live inside a protective capsule that prevents lysis by complement. Many pathogens secrete compounds that diminish or misdirect the host's immune response. Some bacteria form biofilms to protect themselves from the cells and proteins of the immune system. Such biofilms are present in many successful infections, such as the chronic Pseudomonas aeruginosa and Burkholderia cenocepacia infections characteristic of cystic fibrosis. Other bacteria generate surface proteins that bind to antibodies, rendering them ineffective; examples include Streptococcus (protein G), Staphylococcus aureus (protein A), and Peptostreptococcus magnus (protein L). The mechanisms used to evade the adaptive immune system are more complicated. The simplest approach is to rapidly change non-essential epitopes (amino acids and/or sugars) on the surface of the pathogen, while keeping essential epitopes concealed. This is called antigenic variation. An example is HIV, which mutates rapidly, so the proteins on its viral envelope that are essential for entry into its host target cell are constantly changing. These frequent changes in antigens may explain the failures of vaccines directed at this virus. The parasite Trypanosoma brucei uses a similar strategy, constantly switching one type of surface protein for another, allowing it to stay one step ahead of the antibody response. Masking antigens with host molecules is another common strategy for avoiding detection by the immune system. In HIV, the envelope that covers the virion is formed from the outermost membrane of the host cell; such "self-cloaked" viruses make it difficult for the immune system to identify them as "non-self" structures. History of immunology Immunology is a science that examines the structure and function of the immune system. It originates from medicine and early studies on the causes of immunity to disease. The earliest known reference to immunity was during the plague of Athens in 430 BC. Thucydides noted that people who had recovered from a previous bout of the disease could nurse the sick without contracting the illness a second time. In the 18th century, Pierre-Louis Moreau de Maupertuis experimented with scorpion venom and observed that certain dogs and mice were immune to this venom. 
In the 10th century, Persian physician al-Razi (also known as Rhazes) wrote the first recorded theory of acquired immunity, noting that a smallpox bout protected its survivors from future infections. Although he explained the immunity in terms of "excess moisture" being expelled from the blood—therefore preventing a second occurrence of the disease—this theory explained many observations about smallpox known during this time. These and other observations of acquired immunity were later exploited by Louis Pasteur in his development of vaccination and his proposed germ theory of disease. Pasteur's theory was in direct opposition to contemporary theories of disease, such as the miasma theory. It was not until Robert Koch's 1891 proofs, for which he was awarded a Nobel Prize in 1905, that microorganisms were confirmed as the cause of infectious disease. Viruses were confirmed as human pathogens in 1901, with the discovery of the yellow fever virus by Walter Reed. Immunology made a great advance towards the end of the 19th century, through rapid developments in the study of humoral immunity and cellular immunity. Particularly important was the work of Paul Ehrlich, who proposed the side-chain theory to explain the specificity of the antigen-antibody reaction; his contributions to the understanding of humoral immunity were recognized by the award of a joint Nobel Prize in 1908, along with the founder of cellular immunology, Elie Metchnikoff. In 1974, Niels Kaj Jerne developed the immune network theory; he shared a Nobel Prize in 1984 with Georges J. F. Köhler and César Milstein for theories related to the immune system. See also Fc receptor List of human cell types Neuroimmune system Original antigenic sin – when the immune system uses immunological memory upon encountering a slightly different pathogen Plant disease resistance Polyclonal response
Immune system
[ "Biology" ]
10,417
[ "Immune system", "Organ systems" ]
14,959
https://en.wikipedia.org/wiki/Immunology
Immunology is a branch of biology and medicine that covers the study of immune systems in all organisms. Immunology charts, measures, and contextualizes the physiological functioning of the immune system in states of both health and disease; malfunctions of the immune system in immunological disorders (such as autoimmune diseases, hypersensitivities, immune deficiency, and transplant rejection); and the physical, chemical, and physiological characteristics of the components of the immune system in vitro, in situ, and in vivo. Immunology has applications in numerous disciplines of medicine, particularly in the fields of organ transplantation, oncology, rheumatology, virology, bacteriology, parasitology, psychiatry, and dermatology. The term was coined by Russian biologist Ilya Ilyich Mechnikov, who advanced studies on immunology and received the Nobel Prize for his work in 1908 with Paul Ehrlich "in recognition of their work on immunity". He pinned small thorns into starfish larvae and noticed unusual cells surrounding the thorns. This was the active response of the body trying to maintain its integrity. It was Mechnikov who first observed the phenomenon of phagocytosis, in which the body defends itself against a foreign body. Ehrlich accustomed mice to the poisons ricin and abrin. After feeding them with small but increasing dosages of ricin, he ascertained that they had become "ricin-proof". Ehrlich interpreted this as immunization and observed that it was abruptly initiated after a few days and was still in existence after several months. Prior to the designation of immunity, from the etymological root immunis, which is Latin for 'exempt', early physicians characterized organs that would later be proven as essential components of the immune system. The important lymphoid organs of the immune system are the thymus, bone marrow, and chief lymphatic tissues such as spleen, tonsils, lymph vessels, lymph nodes, adenoids, and liver. However, many components of the immune system are cellular in nature, and not associated with specific organs, but rather embedded or circulating in various tissues located throughout the body. Classical immunology Classical immunology ties in with the fields of epidemiology and medicine. It studies the relationship between the body systems, pathogens, and immunity. The earliest written mention of immunity can be traced back to the plague of Athens in 430 BCE. Thucydides noted that people who had recovered from a previous bout of the disease could nurse the sick without contracting the illness a second time. Many other ancient societies have references to this phenomenon, but it was not until the 19th and 20th centuries that the concept developed into scientific theory. The study of the molecular and cellular components that comprise the immune system, including their function and interaction, is the central science of immunology. The immune system has been divided into a more primitive innate immune system and, in vertebrates, an acquired or adaptive immune system. The latter is further divided into humoral (or antibody) and cell-mediated components. The immune system has the capability of self and non-self recognition. An antigen is a substance that ignites the immune response. The cells involved in recognizing the antigen are lymphocytes. Once they recognize an antigen, they secrete antibodies. Antibodies are proteins that neutralize the disease-causing microorganisms.
Antibodies do not directly kill pathogens, but instead identify antigens as targets for destruction by other immune cells such as phagocytes or NK cells. The humoral (antibody) response is defined as the interaction between antibodies and antigens. Antibodies are specific proteins released from a certain class of immune cells known as B lymphocytes, while antigens are defined as anything that elicits the generation of antibodies (antibody generators). Immunology rests on an understanding of the properties of these two biological entities and the cellular response to both. It is now becoming clear that immune responses contribute to the development of many common disorders not traditionally viewed as immunologic, including metabolic, cardiovascular, cancer, and neurodegenerative conditions like Alzheimer's disease. In addition, the immune system is directly implicated in infectious diseases (tuberculosis, malaria, hepatitis, pneumonia, dysentery, and helminth infestations). Hence, research in the field of immunology is of prime importance for the advancements in the fields of modern medicine, biomedical research, and biotechnology. Immunological research continues to become more specialized, pursuing non-classical models of immunity and functions of cells, organs and systems not previously associated with the immune system (Yemeserach 2010). Diagnostic immunology The specificity of the bond between antibody and antigen has made the antibody an excellent tool for the detection of substances by a variety of diagnostic techniques. Antibodies specific for a desired antigen can be conjugated with an isotopic (radio) or fluorescent label or with a color-forming enzyme in order to detect it. However, the similarity between some antigens can lead to false positives and other errors in such tests by antibodies cross-reacting with antigens that are not exact matches. Immunotherapy The use of immune system components or antigens to treat a disease or disorder is known as immunotherapy. Immunotherapy is most commonly used to treat allergies, autoimmune disorders such as Crohn's disease, Hashimoto's thyroiditis and rheumatoid arthritis, and certain cancers. Immunotherapy is also often used for patients who are immunosuppressed (such as those with HIV) and people with other immune deficiencies. This includes regulating factors such as IL-2, IL-10, GM-CSF, and IFN-α. Clinical immunology Clinical immunology is the study of diseases caused by disorders of the immune system (failure, aberrant action, and malignant growth of the cellular elements of the system). It also involves diseases of other systems, where immune reactions play a part in the pathology and clinical features. The diseases caused by disorders of the immune system fall into two broad categories: immunodeficiency, in which parts of the immune system fail to provide an adequate response (examples include chronic granulomatous disease and primary immune diseases); autoimmunity, in which the immune system attacks its own host's body (examples include systemic lupus erythematosus, rheumatoid arthritis, Hashimoto's disease and myasthenia gravis). Other immune system disorders include various hypersensitivities (such as in asthma and other allergies) that respond inappropriately to otherwise harmless compounds.
The most well-known disease that affects the immune system itself is AIDS, an immunodeficiency characterized by the suppression of CD4+ ("helper") T cells, dendritic cells and macrophages by the human immunodeficiency virus (HIV). Clinical immunologists also study ways to prevent the immune system's attempts to destroy allografts (transplant rejection). Clinical immunology and allergy is usually a subspecialty of internal medicine or pediatrics. Fellows in Clinical Immunology are typically exposed to many of the different aspects of the specialty and treat allergic conditions, primary immunodeficiencies and systemic autoimmune and autoinflammatory conditions. As part of their training fellows may do additional rotations in rheumatology, pulmonology, otorhinolaryngology, dermatology and the immunologic lab. Clinical and pathology immunology When health conditions worsen to emergency status, portions of immune system organs, including the thymus, spleen, bone marrow, lymph nodes, and other lymphatic tissues, can be surgically excised for examination while patients are still alive. Theoretical immunology Immunology is strongly experimental in everyday practice but is also characterized by an ongoing theoretical attitude. Many theories have been suggested in immunology from the end of the nineteenth century up to the present time. The end of the 19th century and the beginning of the 20th century saw a battle between "cellular" and "humoral" theories of immunity. According to the cellular theory of immunity, represented in particular by Elie Metchnikoff, it was cells – more precisely, phagocytes – that were responsible for immune responses. In contrast, the humoral theory of immunity, held by Robert Koch and Emil von Behring, among others, stated that the active immune agents were soluble components (molecules) found in the organism's "humors" rather than its cells. In the mid-1950s, Macfarlane Burnet, inspired by a suggestion made by Niels Jerne, formulated the clonal selection theory (CST) of immunity. On the basis of CST, Burnet developed a theory of how an immune response is triggered according to the self/nonself distinction: "self" constituents (constituents of the body) do not trigger destructive immune responses, while "nonself" entities (e.g., pathogens, an allograft) trigger a destructive immune response. The theory was later modified to reflect new discoveries regarding histocompatibility or the complex "two-signal" activation of T cells. The self/nonself theory of immunity and the self/nonself vocabulary have been criticized, but remain very influential. More recently, several theoretical frameworks have been suggested in immunology, including "autopoietic" views, "cognitive immune" views, the "danger model" (or "danger theory"), and the "discontinuity" theory. The danger model, suggested by Polly Matzinger and colleagues, has been very influential, arousing many comments and discussions. Developmental immunology The body's capability to react to antigens depends on a person's age, antigen type, maternal factors and the area where the antigen is presented. Neonates are said to be in a state of physiological immunodeficiency, because both their innate and adaptive immunological responses are greatly suppressed. Once born, a child's immune system responds favorably to protein antigens while not as well to glycoproteins and polysaccharides. In fact, many of the infections acquired by neonates are caused by low virulence organisms like Staphylococcus and Pseudomonas. 
In neonates, opsonic activity and the ability to activate the complement cascade are very limited. For example, the mean level of C3 in a newborn is approximately 65% of that found in the adult. Phagocytic activity is also greatly impaired in newborns. This is due to lower opsonic activity, as well as diminished up-regulation of integrin and selectin receptors, which limit the ability of neutrophils to interact with adhesion molecules in the endothelium. Their monocytes are slow and have a reduced ATP production, which also limits the newborn's phagocytic activity. Although the number of total lymphocytes is significantly higher than in adults, cellular and humoral immunity are also impaired. Antigen-presenting cells in newborns have a reduced capability to activate T cells. Also, T cells of a newborn proliferate poorly and produce very small amounts of cytokines like IL-2, IL-4, IL-5, IL-12, and IFN-γ, which limits their capacity to activate the humoral response as well as the phagocytic activity of macrophages. B cells develop early during gestation but are not fully active. Maternal factors also play a role in the body's immune response. At birth, most of the immunoglobulin present is maternal IgG. These antibodies are transferred across the placenta to the fetus using the FcRn (neonatal Fc receptor). Because IgM, IgD, IgE and IgA do not cross the placenta, they are almost undetectable at birth. Some IgA is provided by breast milk. These passively acquired antibodies can protect the newborn for up to 18 months, but their response is usually short-lived and of low affinity. These antibodies can also produce a negative response. If a child is exposed to the antibody for a particular antigen before being exposed to the antigen itself, then the child will produce a dampened response. Passively acquired maternal antibodies can suppress the antibody response to active immunization. Similarly, the response of T cells to vaccination differs in children compared to adults, and vaccines that induce Th1 responses in adults do not readily elicit these same responses in neonates. Between six and nine months after birth, a child's immune system begins to respond more strongly to glycoproteins, but there is usually no marked improvement in their response to polysaccharides until they are at least one year old. This can be the reason for the distinct time frames found in vaccination schedules. During adolescence, the human body undergoes various physical, physiological and immunological changes triggered and mediated by hormones, of which the most significant in females is 17-β-estradiol (an estrogen) and, in males, is testosterone. Estradiol usually begins to act around the age of 10 and testosterone some months later. There is evidence that these steroids not only act directly on the primary and secondary sexual characteristics but also have an effect on the development and regulation of the immune system, including an increased risk of developing pubescent and post-pubescent autoimmunity. There is also some evidence that cell surface receptors on B cells and macrophages may detect sex hormones in the system. The female sex hormone 17-β-estradiol has been shown to regulate the level of immunological response, while some male androgens such as testosterone seem to suppress the stress response to infection. Other androgens, however, such as DHEA, increase immune response.
As in females, the male sex hormones seem to have more control of the immune system during puberty and post-puberty than during the rest of a male's adult life. Physical changes during puberty such as thymic involution also affect immunological response. Ecoimmunology and behavioural immunity Ecoimmunology, or ecological immunology, explores the relationship between the immune system of an organism and its social, biotic and abiotic environment. More recent ecoimmunological research has focused on host defences against pathogens that are traditionally considered "non-immunological", such as pathogen avoidance, self-medication, symbiont-mediated defenses, and fecundity trade-offs. Behavioural immunity, a phrase coined by Mark Schaller, specifically refers to psychological pathogen avoidance drivers, such as disgust aroused by stimuli encountered around pathogen-infected individuals, such as the smell of vomit. More broadly, "behavioural" ecological immunity has been demonstrated in multiple species. For example, the monarch butterfly often lays its eggs on certain toxic milkweed species when infected with parasites. These toxins reduce parasite growth in the offspring of the infected monarch. However, when uninfected monarch butterflies are forced to feed only on these toxic plants, they suffer a fitness cost in the form of a reduced lifespan relative to other uninfected monarch butterflies. This indicates that laying eggs on toxic plants is a costly behaviour in monarchs which has probably evolved to reduce the severity of parasite infection. Symbiont-mediated defenses are also heritable across host generations, even though the transmission itself has a direct, non-genetic basis. Aphids, for example, rely on several different symbionts for defense from key parasites, and can vertically transmit their symbionts from parent to offspring. Therefore, a symbiont that successfully confers protection from a parasite is more likely to be passed to the host offspring, allowing coevolution with parasites attacking the host in a way similar to traditional immunity. The preserved immune tissues of extinct species, such as the thylacine (Thylacinus cynocephalus), can also provide insights into their biology. Cancer immunology The study of the interaction of the immune system with cancer cells can lead to diagnostic tests and therapies with which to find and fight cancer. Inflammation is an immune response that has been observed in many types of cancers. Reproductive immunology This area of immunology is devoted to the study of immunological aspects of the reproductive process, including fetus acceptance. The term has also been used by fertility clinics to address fertility problems, recurrent miscarriages, premature deliveries and dangerous complications such as pre-eclampsia. See also List of immunologists Immunomics International Reviews of Immunology Outline of immunology History of immunology Osteoimmunology
Immunology
[ "Biology" ]
3,546
[ "Immunology" ]
15,022
https://en.wikipedia.org/wiki/Infrared
Infrared (IR; sometimes called infrared light) is electromagnetic radiation (EMR) with wavelengths longer than that of visible light but shorter than microwaves. The infrared spectral band begins with waves that are just longer than those of red light (the longest waves in the visible spectrum), so IR is invisible to the human eye. IR is generally understood to include wavelengths from around 780 nm to 1 mm. IR is commonly divided between longer-wavelength thermal IR, emitted from terrestrial sources, and shorter-wavelength IR or near-IR, part of the solar spectrum. Longer IR wavelengths (30–100 μm) are sometimes included as part of the terahertz radiation band. Almost all black-body radiation from objects near room temperature is in the IR band. As a form of EMR, IR carries energy and momentum, exerts radiation pressure, and has properties corresponding to both those of a wave and of a particle, the photon. It was long known that fires emit invisible heat; in 1681 the pioneering experimenter Edme Mariotte showed that glass, though transparent to sunlight, obstructed radiant heat. In 1800 the astronomer Sir William Herschel discovered that infrared radiation is a type of invisible radiation in the spectrum lower in energy than red light, by means of its effect on a thermometer. Slightly more than half of the energy from the Sun was eventually found, through Herschel's studies, to arrive on Earth in the form of infrared. The balance between absorbed and emitted infrared radiation has an important effect on Earth's climate. Infrared radiation is emitted or absorbed by molecules when changing rotational-vibrational movements. It excites vibrational modes in a molecule through a change in the dipole moment, making it a useful frequency range for study of these energy states for molecules of the proper symmetry. Infrared spectroscopy examines absorption and transmission of photons in the infrared range. Infrared radiation is used in industrial, scientific, military, commercial, and medical applications. Night-vision devices using active near-infrared illumination allow people or animals to be observed without the observer being detected. Infrared astronomy uses sensor-equipped telescopes to penetrate dusty regions of space such as molecular clouds, to detect objects such as planets, and to view highly red-shifted objects from the early days of the universe. Infrared thermal-imaging cameras are used to detect heat loss in insulated systems, to observe changing blood flow in the skin, to assist firefighting, and to detect the overheating of electrical components. Military and civilian applications include target acquisition, surveillance, night vision, homing, and tracking. Humans at normal body temperature radiate chiefly at wavelengths around 10 μm. Non-military uses include thermal efficiency analysis, environmental monitoring, industrial facility inspections, detection of grow-ops, remote temperature sensing, short-range wireless communication, spectroscopy, and weather forecasting. Definition and relationship to the electromagnetic spectrum There is no universally accepted definition of the range of infrared radiation. Typically, it is taken to extend from the nominal red edge of the visible spectrum at 780 nm to 1 mm. This range of wavelengths corresponds to a frequency range of approximately 430 THz down to 300 GHz. Beyond infrared is the microwave portion of the electromagnetic spectrum.
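Because the band edges are quoted here both as wavelengths and as frequencies, it helps to make the conversion f = c/λ explicit. The short sketch below uses only standard physics (the function name is our own). It also shows why the quoted figures are approximate: 430 THz corresponds to a red edge near 700 nm, while the 780 nm edge corresponds to about 384 THz, consistent with the later remark that the onset of infrared is variously placed between 700 and 800 nm.

```python
# Convert a vacuum wavelength to frequency: f = c / wavelength.

C = 299_792_458.0  # speed of light, in m/s

def wavelength_to_frequency(wavelength_m: float) -> float:
    return C / wavelength_m

for label, wavelength in [("700 nm (red edge, one convention)", 700e-9),
                          ("780 nm (red edge, another convention)", 780e-9),
                          ("1 mm (microwave boundary)", 1e-3)]:
    f = wavelength_to_frequency(wavelength)
    print(f"{label}: {f / 1e12:.2f} THz")
# 700 nm -> ~428 THz (~430 THz), 780 nm -> ~384 THz, 1 mm -> 0.30 THz (300 GHz)
```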
Increasingly, terahertz radiation is counted as part of the microwave band, not infrared, moving the band edge of infrared to 0.1 mm (3 THz). Nature Sunlight, at an effective temperature of 5,780 K (5,510 °C, 9,940 °F), is composed of near-thermal-spectrum radiation that is slightly more than half infrared. At zenith, sunlight provides an irradiance of just over 1 kW per square meter at sea level. Of this energy, 527 W is infrared radiation, 445 W is visible light, and 32 W is ultraviolet radiation. Nearly all the infrared radiation in sunlight is near infrared, shorter than 4 μm. On the surface of Earth, at far lower temperatures than the surface of the Sun, some thermal radiation consists of infrared in the mid-infrared region, much longer than in sunlight. Black-body, or thermal, radiation is continuous: it radiates at all wavelengths. Of these natural thermal radiation processes, only lightning and natural fires are hot enough to produce much visible energy, and fires produce far more infrared than visible-light energy. Regions In general, objects emit infrared radiation across a spectrum of wavelengths, but sometimes only a limited region of the spectrum is of interest because sensors usually collect radiation only within a specific bandwidth. Thermal infrared radiation also has a maximum emission wavelength, which is inversely proportional to the absolute temperature of the object, in accordance with Wien's displacement law. The infrared band is often subdivided into smaller sections, although how the IR spectrum is thereby divided varies between different areas in which IR is employed. Visible limit Infrared radiation is generally considered to begin with wavelengths longer than visible by the human eye. There is no hard wavelength limit to what is visible, as the eye's sensitivity decreases rapidly but smoothly for wavelengths exceeding about 700 nm. Therefore, wavelengths just longer than that can be seen if they are sufficiently bright, though they may still be classified as infrared according to usual definitions. Light from a near-IR laser may thus appear dim red and can present a hazard since it may actually be quite bright. Even IR at wavelengths up to 1,050 nm from pulsed lasers can be seen by humans under certain conditions. Commonly used subdivision scheme A commonly used subdivision scheme is: near-infrared (NIR, 0.75–1.4 μm), short-wavelength infrared (SWIR, 1.4–3 μm), mid-wavelength infrared (MWIR, 3–8 μm), long-wavelength infrared (LWIR, 8–15 μm), and far infrared (FIR, 15–1,000 μm). NIR and SWIR together are sometimes called "reflected infrared", whereas MWIR and LWIR are sometimes referred to as "thermal infrared". CIE division scheme The International Commission on Illumination (CIE) recommended the division of infrared radiation into the following three bands: IR-A (780 nm–1.4 μm), IR-B (1.4–3 μm), and IR-C (3 μm–1 mm). ISO 20473 scheme ISO 20473 specifies the following scheme: near-infrared (NIR, 0.78–3 μm), mid-infrared (MIR, 3–50 μm), and far infrared (FIR, 50–1,000 μm). Astronomy division scheme Astronomers typically divide the infrared spectrum as follows: near (0.7–5 μm), mid (5 to about 25–40 μm), and far (about 25–40 to 200–350 μm). These divisions are not precise and can vary depending on the publication. The three regions are used for observation of different temperature ranges, and hence different environments in space. The most common photometric system used in astronomy allocates capital letters to different spectral regions according to the filters used; I, J, H, and K cover the near-infrared wavelengths; L, M, N, and Q refer to the mid-infrared region. These letters are commonly understood in reference to atmospheric windows and appear, for instance, in the titles of many papers.
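All of these schemes sit on top of the Wien's-law relation noted under Regions: the peak emission wavelength scales inversely with absolute temperature. A minimal sketch of that relation using the standard displacement constant (the example temperatures are illustrative):

```python
# Wien's displacement law, lambda_max = b / T: hotter bodies peak at shorter
# wavelengths, which is why the Sun peaks in the visible while terrestrial
# objects peak in the LWIR band.
B_WIEN = 2.897771955e-3  # Wien's displacement constant, m*K

def peak_wavelength_um(temperature_k: float) -> float:
    """Peak black-body emission wavelength (micrometres) at temperature (K)."""
    return B_WIEN / temperature_k * 1e6

for label, t in [("Sun, 5,780 K", 5780.0),
                 ("lamp filament, 2,800 K", 2800.0),
                 ("room temperature, 293 K", 293.0)]:
    print(f"{label}: peak near {peak_wavelength_um(t):.2f} um")
# ~0.50 um (visible), ~1.03 um (NIR) and ~9.89 um (LWIR) respectively.
```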
Sensor response division scheme A third scheme divides up the band based on the response of various detectors: Near-infrared: from 0.7 to 1.0 μm (from the approximate end of the response of the human eye to that of silicon). Short-wave infrared: 1.0 to 3 μm (from the cut-off of silicon to that of the MWIR atmospheric window). InGaAs covers wavelengths up to about 1.8 μm; the less sensitive lead salts cover the rest of this region. Cryogenically cooled MCT detectors can cover the region of 1.0–2.5 μm. Mid-wave infrared: 3 to 5 μm (defined by the atmospheric window and covered by indium antimonide, InSb, and mercury cadmium telluride, HgCdTe, and partially by lead selenide, PbSe). Long-wave infrared: 8 to 12, or 7 to 14 μm (this is the atmospheric window covered by HgCdTe and microbolometers). Very-long wave infrared (VLWIR): 12 to about 30 μm (covered by doped silicon). Near-infrared is the region closest in wavelength to the radiation detectable by the human eye; mid- and far-infrared are progressively further from the visible spectrum. Other definitions follow different physical mechanisms (emission peaks vs. bands, water absorption), and the newest follow technical reasons (the common silicon detectors are sensitive to about 1,050 nm, while InGaAs's sensitivity starts around 950 nm and ends between 1,700 and 2,600 nm, depending on the specific configuration). No international standards for these specifications are currently available. The onset of infrared is defined (according to different standards) at various values typically between 700 nm and 800 nm, but the boundary between visible and infrared light is not precisely defined. The human eye is markedly less sensitive to light above 700 nm wavelength, so longer wavelengths make insignificant contributions to scenes illuminated by common light sources. Particularly intense near-IR light (e.g., from lasers, LEDs or bright daylight with the visible light filtered out) can be detected up to approximately 780 nm, and will be perceived as red light. Intense light sources providing wavelengths as long as 1,050 nm can be seen as a dull red glow, causing some difficulty in near-IR illumination of scenes in the dark (usually this practical problem is solved by indirect illumination). Leaves are particularly bright in the near IR, and if all visible light that leaks from around an IR filter is blocked, and the eye is given a moment to adjust to the extremely dim image coming through a visually opaque IR-passing photographic filter, it is possible to see the Wood effect, which consists of IR-glowing foliage. Telecommunication bands In optical communications, the part of the infrared spectrum that is used is divided into seven bands based on availability of light sources, transmitting/absorbing materials (fibers), and detectors: The C-band is the dominant band for long-distance telecommunications networks. The S and L bands are based on less well established technology, and are not as widely deployed. Heat Infrared radiation is popularly known as "heat radiation", but light and electromagnetic waves of any frequency will heat surfaces that absorb them. Infrared light from the Sun accounts for 49% of the heating of Earth, with the rest being caused by visible light that is absorbed then re-radiated at longer wavelengths. Visible light or ultraviolet-emitting lasers can char paper and incandescently hot objects emit visible radiation.
Objects at room temperature will emit radiation concentrated mostly in the 8 to 25 μm band, but this is not distinct from the emission of visible light by incandescent objects and ultraviolet by even hotter objects (see black body and Wien's displacement law). Heat is energy in transit that flows due to a temperature difference. Unlike heat transmitted by thermal conduction or thermal convection, thermal radiation can propagate through a vacuum. Thermal radiation is characterized by a particular spectrum of many wavelengths that are associated with emission from an object, due to the vibration of its molecules at a given temperature. Thermal radiation can be emitted from objects at any wavelength, and at very high temperatures such radiation is associated with spectra far above the infrared, extending into visible, ultraviolet, and even X-ray regions (e.g. the solar corona). Thus, the popular association of infrared radiation with thermal radiation is only a coincidence based on typical (comparatively low) temperatures often found near the surface of planet Earth. The concept of emissivity is important in understanding the infrared emissions of objects. This is a property of a surface that describes how its thermal emissions deviate from the ideal of a black body. To further explain, two objects at the same physical temperature may not show the same infrared image if they have differing emissivity. For example, for any pre-set emissivity value, objects with higher emissivity will appear hotter, and those with a lower emissivity will appear cooler (assuming, as is often the case, that the surrounding environment is cooler than the objects being viewed). When an object has less than perfect emissivity, it obtains properties of reflectivity and/or transparency, and so the temperature of the surrounding environment is partially reflected by and/or transmitted through the object. If the object were in a hotter environment, then a lower emissivity object at the same temperature would likely appear to be hotter than a more emissive one. For that reason, incorrect selection of emissivity and not accounting for environmental temperatures will give inaccurate results when using infrared cameras and pyrometers. Applications Night vision Infrared is used in night vision equipment when there is insufficient visible light to see. Night vision devices operate through a process involving the conversion of ambient light photons into electrons that are then amplified by a chemical and electrical process and then converted back into visible light. Infrared light sources can be used to augment the available ambient light for conversion by night vision devices, increasing in-the-dark visibility without actually using a visible light source. The use of infrared light and night vision devices should not be confused with thermal imaging, which creates images based on differences in surface temperature by detecting infrared radiation (heat) that emanates from objects and their surrounding environment. Thermography Infrared radiation can be used to remotely determine the temperature of objects (if the emissivity is known). This is termed thermography, or in the case of very hot objects in the NIR or visible it is termed pyrometry. Thermography (thermal imaging) is mainly used in military and industrial applications but the technology is reaching the public market in the form of infrared cameras on cars due to greatly reduced production costs. 
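To make the emissivity caveat concrete, here is a hedged sketch, a deliberate simplification that treats the sensor as reading total radiance M = εσT⁴ and ignores the reflected-background and band-limiting effects a real camera must handle, of how a wrong emissivity setting skews a reading:

```python
# A total-radiance pyrometer effectively solves
#   eps_set * sigma * T_read^4 = eps_true * sigma * T_true^4,
# so a mismatched emissivity setting biases the reported temperature.
def indicated_temperature_k(t_true_k: float, eps_true: float, eps_set: float) -> float:
    """Temperature reported when the instrument assumes emissivity eps_set."""
    return t_true_k * (eps_true / eps_set) ** 0.25

T_TRUE = 373.15  # object actually at 100 degC
print(indicated_temperature_k(T_TRUE, eps_true=0.95, eps_set=0.95))  # ~373.2 K, correct
print(indicated_temperature_k(T_TRUE, eps_true=0.60, eps_set=0.95))  # ~332.7 K, ~40 K low
```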
Thermographic cameras detect radiation in the infrared range of the electromagnetic spectrum (roughly 9,000–14,000 nm or 9–14 μm) and produce images of that radiation. Since infrared radiation is emitted by all objects based on their temperatures, according to the black-body radiation law, thermography makes it possible to "see" one's environment with or without visible illumination. The amount of radiation emitted by an object increases with temperature, therefore thermography allows one to see variations in temperature (hence the name). Hyperspectral imaging A hyperspectral image is a "picture" containing a continuous spectrum through a wide spectral range at each pixel. Hyperspectral imaging is gaining importance in the field of applied spectroscopy, particularly with the NIR, SWIR, MWIR, and LWIR spectral regions. Typical applications include biological, mineralogical, defence, and industrial measurements. Thermal infrared hyperspectral imaging can be similarly performed using a thermographic camera, with the fundamental difference that each pixel contains a full LWIR spectrum. Consequently, chemical identification of the object can be performed without a need for an external light source such as the Sun or the Moon. Such cameras are typically applied for geological measurements, outdoor surveillance and UAV applications. Other imaging In infrared photography, infrared filters are used to capture the near-infrared spectrum. Digital cameras often use infrared blockers. Cheaper digital cameras and camera phones have less effective filters and can view intense near-infrared, appearing as a bright purple-white color. This is especially pronounced when taking pictures of subjects near IR-bright areas (such as near a lamp), where the resulting infrared interference can wash out the image. There is also a technique called 'T-ray' imaging, which is imaging using far-infrared or terahertz radiation. Lack of bright sources can make terahertz photography more challenging than most other infrared imaging techniques. Recently, T-ray imaging has been of considerable interest due to a number of new developments such as terahertz time-domain spectroscopy. Tracking Infrared tracking, also known as infrared homing, refers to a passive missile guidance system, which uses the emission from a target of electromagnetic radiation in the infrared part of the spectrum to track it. Missiles that use infrared seeking are often referred to as "heat-seekers" since infrared (IR) is just below the visible spectrum of light in frequency and is radiated strongly by hot bodies. Many objects such as people, vehicle engines, and aircraft generate and retain heat, and as such are especially visible in the infrared wavelengths of light compared to objects in the background. Heating Infrared radiation can be used as a deliberate heating source. For example, it is used in infrared saunas to heat the occupants. It may also be used in other heating applications, such as to remove ice from the wings of aircraft (de-icing). Infrared heating is also becoming more popular in industrial manufacturing processes, e.g. curing of coatings, forming of plastics, annealing, plastic welding, and print drying. In these applications, infrared heaters replace convection ovens and contact heating. Cooling A variety of technologies or proposed technologies take advantage of infrared emissions to cool buildings or other systems.
The LWIR (8–15 μm) region is especially useful since some radiation at these wavelengths can escape into space through the atmosphere's infrared window. This is how passive daytime radiative cooling (PDRC) surfaces are able to achieve sub-ambient cooling temperatures under direct solar intensity, enhancing terrestrial heat flow to outer space with zero energy consumption or pollution. PDRC surfaces maximize shortwave solar reflectance to lessen heat gain while maintaining strong longwave infrared (LWIR) thermal radiation heat transfer. When imagined on a worldwide scale, this cooling method has been proposed as a way to slow and even reverse global warming, with some estimates proposing a global surface area coverage of 1–2% to balance global heat fluxes. Communications IR data transmission is also employed in short-range communication among computer peripherals and personal digital assistants. These devices usually conform to standards published by IrDA, the Infrared Data Association. Remote controls and IrDA devices use infrared light-emitting diodes (LEDs) to emit infrared radiation that may be concentrated by a lens into a beam that the user aims at the detector. The beam is modulated, i.e. switched on and off, according to a code which the receiver interprets. Usually very near-IR is used (below 800 nm) for practical reasons. This wavelength is efficiently detected by inexpensive silicon photodiodes, which the receiver uses to convert the detected radiation to an electric current. That electrical signal is passed through a high-pass filter which retains the rapid pulsations due to the IR transmitter but filters out slowly changing infrared radiation from ambient light. Infrared communications are useful for indoor use in areas of high population density. IR does not penetrate walls and so does not interfere with other devices in adjoining rooms. Infrared is the most common way for remote controls to command appliances. Infrared remote-control protocols such as RC-5 and SIRC are used for this communication. Free-space optical communication using infrared lasers can be a relatively inexpensive way to install a communications link in an urban area operating at up to 4 gigabit/s, compared to the cost of burying fiber-optic cable; a drawback is the potential for eye damage from the laser radiation: "Since the eye cannot detect IR, blinking or closing the eyes to help prevent or reduce damage may not happen." Infrared lasers are used to provide the light for optical fiber communications systems. Wavelengths around 1,330 nm (least dispersion) or 1,550 nm (best transmission) are the best choices for standard silica fibers. IR data transmission of audio versions of printed signs is being researched as an aid for visually impaired people through the Remote Infrared Audible Signage project. Transmitting IR data from one device to another is sometimes referred to as beaming. IR is sometimes used for assistive audio as an alternative to an audio induction loop. Spectroscopy Infrared vibrational spectroscopy (see also near-infrared spectroscopy) is a technique that can be used to identify molecules by analysis of their constituent bonds. Each chemical bond in a molecule vibrates at a frequency characteristic of that bond. A group of atoms in a molecule (e.g., CH2) may have multiple modes of oscillation caused by the stretching and bending motions of the group as a whole. If an oscillation leads to a change in dipole in the molecule then it will absorb a photon that has the same frequency.
The vibrational frequencies of most molecules correspond to the frequencies of infrared light. Typically, the technique is used to study organic compounds using light radiation from the mid-infrared, 4,000–400 cm−1. A spectrum of all the frequencies of absorption in a sample is recorded. This can be used to gain information about the sample composition in terms of chemical groups present and also its purity (for example, a wet sample will show a broad O-H absorption around 3,200 cm−1). The unit for expressing radiation in this application, cm−1, is the spectroscopic wavenumber. It is the frequency divided by the speed of light in vacuum. Thin film metrology In the semiconductor industry, infrared light can be used to characterize materials such as thin films and periodic trench structures. By measuring the reflectance of light from the surface of a semiconductor wafer, the index of refraction (n) and the extinction coefficient (k) can be determined via the Forouhi–Bloomer dispersion equations. The reflectance from the infrared light can also be used to determine the critical dimension, depth, and sidewall angle of high-aspect-ratio trench structures. Meteorology Weather satellites equipped with scanning radiometers produce thermal or infrared images, which can then enable a trained analyst to determine cloud heights and types, to calculate land and surface water temperatures, and to locate ocean surface features. The scanning is typically in the range 10.3–12.5 μm (IR4 and IR5 channels). Clouds with high and cold tops, such as cyclones or cumulonimbus clouds, are often displayed as red or black, lower warmer clouds such as stratus or stratocumulus are displayed as blue or grey, with intermediate clouds shaded accordingly. Hot land surfaces are shown as dark-grey or black. One disadvantage of infrared imagery is that low clouds such as stratus or fog can have a temperature similar to the surrounding land or sea surface and do not show up. However, using the difference in brightness of the IR4 channel (10.3–11.5 μm) and the near-infrared channel (1.58–1.64 μm), low clouds can be distinguished, producing a fog satellite picture. The main advantage of infrared is that images can be produced at night, allowing a continuous sequence of weather to be studied. These infrared pictures can depict ocean eddies or vortices and map currents such as the Gulf Stream, which are valuable to the shipping industry. Fishermen and farmers are interested in knowing land and water temperatures to protect their crops against frost or increase their catch from the sea. Even El Niño phenomena can be spotted. Using color-digitized techniques, the gray-shaded thermal images can be converted to color for easier identification of desired information. The main water vapour channel at 6.40 to 7.08 μm can be imaged by some weather satellites and shows the amount of moisture in the atmosphere. Climatology In the field of climatology, atmospheric infrared radiation is monitored to detect trends in the energy exchange between the Earth and the atmosphere. These trends provide information on long-term changes in Earth's climate. It is one of the primary parameters studied in research into global warming, together with solar radiation. A pyrgeometer is utilized in this field of research to perform continuous outdoor measurements. This is a broadband infrared radiometer with sensitivity for infrared radiation between approximately 4.5 μm and 50 μm.
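Returning to the spectroscopic wavenumber defined in the spectroscopy passage above, the unit conversions are simple enough to verify directly; a minimal sketch:

```python
# A wavenumber in cm^-1 is the reciprocal of the vacuum wavelength in cm;
# multiplying by the speed of light gives the frequency.
C_CM_PER_S = 2.99792458e10  # speed of light, cm/s

def wavenumber_to_wavelength_um(wavenumber_per_cm: float) -> float:
    return 1e4 / wavenumber_per_cm  # 1 cm = 1e4 um

def wavenumber_to_frequency_thz(wavenumber_per_cm: float) -> float:
    return wavenumber_per_cm * C_CM_PER_S / 1e12

for wn in (4000.0, 3200.0, 400.0):
    print(f"{wn:6.0f} cm^-1 = {wavenumber_to_wavelength_um(wn):5.2f} um = "
          f"{wavenumber_to_frequency_thz(wn):5.1f} THz")
# The mid-IR range 4,000-400 cm^-1 spans 2.5-25 um; the broad O-H band near
# 3,200 cm^-1 sits at about 3.1 um.
```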
Astronomy Astronomers observe objects in the infrared portion of the electromagnetic spectrum using optical components, including mirrors, lenses and solid-state digital detectors. For this reason it is classified as part of optical astronomy. To form an image, the components of an infrared telescope need to be carefully shielded from heat sources, and the detectors are chilled using liquid helium. The sensitivity of Earth-based infrared telescopes is significantly limited by water vapor in the atmosphere, which absorbs a portion of the infrared radiation arriving from space outside of selected atmospheric windows. This limitation can be partially alleviated by placing the telescope observatory at a high altitude, or by carrying the telescope aloft with a balloon or an aircraft. Space telescopes do not suffer from this handicap, and so outer space is considered the ideal location for infrared astronomy. The infrared portion of the spectrum has several useful benefits for astronomers. Cold, dark molecular clouds of gas and dust in our galaxy will glow with radiated heat as they are irradiated by embedded stars. Infrared can also be used to detect protostars before they begin to emit visible light. Stars emit a smaller portion of their energy in the infrared spectrum, so nearby cool objects such as planets can be more readily detected. (In the visible light spectrum, the glare from the star will drown out the reflected light from a planet.) Infrared light is also useful for observing the cores of active galaxies, which are often cloaked in gas and dust. Distant galaxies with a high redshift will have the peak portion of their spectrum shifted toward longer wavelengths, so they are more readily observed in the infrared. Cleaning Infrared cleaning is a technique used by some motion picture film scanners, film scanners and flatbed scanners to reduce or remove the effect of dust and scratches upon the finished scan. It works by collecting an additional infrared channel from the scan at the same position and resolution as the three visible color channels (red, green, and blue). The infrared channel, in combination with the other channels, is used to detect the location of scratches and dust. Once located, those defects can be corrected by scaling or replaced by inpainting. Art conservation and analysis Infrared reflectography can be applied to paintings to reveal underlying layers in a non-destructive manner, in particular the artist's underdrawing or outline drawn as a guide. Art conservators use the technique to examine how the visible layers of paint differ from the underdrawing or layers in between (such alterations are called pentimenti when made by the original artist). This is very useful information in deciding whether a painting is the prime version by the original artist or a copy, and whether it has been altered by over-enthusiastic restoration work. In general, the more pentimenti, the more likely a painting is to be the prime version. It also gives useful insights into working practices. Reflectography often reveals the artist's use of carbon black, which shows up well in reflectograms, as long as it has not also been used in the ground underlying the whole painting. Recent progress in the design of infrared-sensitive cameras makes it possible to discover and depict not only underpaintings and pentimenti, but entire paintings that were later overpainted by the artist.
Notable examples are Picasso's Woman Ironing and Blue Room, where in both cases a portrait of a man has been made visible under the painting as it is known today. Similar uses of infrared are made by conservators and scientists on various types of objects, especially very old written documents such as the Dead Sea Scrolls, the Roman works in the Villa of the Papyri, and the Silk Road texts found in the Dunhuang Caves. Carbon black used in ink can show up extremely well. Biological systems The pit viper has a pair of infrared sensory pits on its head. There is uncertainty regarding the exact thermal sensitivity of this biological infrared detection system. Other organisms that have thermoreceptive organs are pythons (family Pythonidae), some boas (family Boidae), the Common Vampire Bat (Desmodus rotundus), a variety of jewel beetles (Melanophila acuminata), darkly pigmented butterflies (Pachliopta aristolochiae and Troides rhadamantus plateni), and possibly blood-sucking bugs (Triatoma infestans). By detecting the heat that it emits, crotaline and boid snakes identify and capture their prey using their IR-sensitive pit organs. Similarly, IR-sensitive pits on the Common Vampire Bat (Desmodus rotundus) aid in the identification of blood-rich regions on its warm-blooded victims. The jewel beetle Melanophila acuminata locates forest fires via infrared pit organs and deposits its eggs on recently burnt trees. Thermoreceptors on the wings and antennae of butterflies with dark pigmentation, such as Pachliopta aristolochiae and Troides rhadamantus plateni, shield them from heat damage as they bask in the sun. Additionally, it is hypothesised that thermoreceptors let bloodsucking bugs (Triatoma infestans) locate their warm-blooded victims by sensing their body heat. Some fungi like Venturia inaequalis require near-infrared light for ejection. Although near-infrared vision (780–1,000 nm) has long been deemed impossible due to noise in visual pigments, sensation of near-infrared light was reported in the common carp and in three cichlid species. Fish use NIR to capture prey and for phototactic swimming orientation. NIR sensation in fish may be relevant under poor lighting conditions during twilight and in turbid surface waters. Photobiomodulation Near-infrared light, or photobiomodulation, is used for the treatment of chemotherapy-induced oral ulceration as well as wound healing. There is some work relating to anti-herpes-virus treatment. Research projects include work on central nervous system healing effects via cytochrome c oxidase upregulation and other possible mechanisms. Health hazards Strong infrared radiation in certain industrial high-heat settings may be hazardous to the eyes, resulting in damage or blindness to the user. Since the radiation is invisible, special IR-proof goggles must be worn in such places. Scientific history The discovery of infrared radiation is ascribed to William Herschel, the astronomer, in the early 19th century. Herschel published his results in 1800 before the Royal Society of London. Herschel used a prism to refract light from the sun and detected the infrared, beyond the red part of the spectrum, through an increase in the temperature recorded on a thermometer. He was surprised at the result and called them "Calorific Rays". The term "infrared" did not appear until the late 19th century. An earlier experiment in 1790 by Marc-Auguste Pictet demonstrated the reflection and focusing of radiant heat via mirrors in the absence of visible light.
Other important dates include: 1830: Leopoldo Nobili made the first thermopile IR detector. 1840: John Herschel produces the first thermal image, called a thermogram. 1860: Gustav Kirchhoff formulated the blackbody theorem. 1873: Willoughby Smith discovered the photoconductivity of selenium. 1878: Samuel Pierpont Langley invents the first bolometer, a device which is able to measure small temperature fluctuations, and thus the power of far infrared sources. 1879: The Stefan–Boltzmann law is formulated empirically, stating that the power radiated by a blackbody is proportional to T⁴. 1880s and 1890s: Lord Rayleigh and Wilhelm Wien solved part of the blackbody equation, but both solutions diverged in parts of the electromagnetic spectrum. This problem was called the "ultraviolet catastrophe and infrared catastrophe". 1892: Willem Henri Julius published infrared spectra of 20 organic compounds measured with a bolometer in units of angular displacement. 1901: Max Planck published the blackbody equation and theorem. He solved the problem by quantizing the allowable energy transitions. 1905: Albert Einstein developed the theory of the photoelectric effect. 1905–1908: William Coblentz published infrared spectra in units of wavelength (micrometers) for several chemical compounds in Investigations of Infra-Red Spectra. 1917: Theodore Case developed the thallous sulfide detector, which helped produce the first infrared search and track device able to detect aircraft at a range of one mile (1.6 km). 1935: Lead salts – early missile guidance in World War II. 1938: Yeou Ta predicted that the pyroelectric effect could be used to detect infrared radiation. 1945: The Zielgerät 1229 "Vampir" infrared weapon system was introduced as the first portable infrared device for military applications. 1952: Heinrich Welker grew synthetic InSb crystals. 1950s and 1960s: Nomenclature and radiometric units defined by Fred Nicodemus, G. J. Zissis and R. Clark; Robert Clark Jones defined D*. 1958: W. D. Lawson (Royal Radar Establishment in Malvern) discovered the IR detection properties of mercury cadmium telluride (HgCdTe). 1958: Falcon and Sidewinder missiles were developed using infrared technology. 1960s: Paul Kruse and his colleagues at Honeywell Research Center demonstrate the use of HgCdTe as an effective compound for infrared detection. 1962: J. Cooper demonstrated pyroelectric detection. 1964: W. G. Evans discovered infrared thermoreceptors in a pyrophile beetle. 1965: First IR handbook; first commercial imagers (Barnes, Agema (now part of FLIR Systems Inc.)); Richard Hudson's landmark text; F4 TRAM FLIR by Hughes; phenomenology pioneered by Fred Simmons and A. T. Stair; U.S. Army's night vision lab formed (now Night Vision and Electronic Sensors Directorate (NVESD)), and Ratches develops detection, recognition and identification modeling there. 1970: Willard Boyle and George E. Smith proposed CCD at Bell Labs for picture phone. 1973: Common module program started by NVESD. 1978: Infrared imaging astronomy came of age, observatories planned, IRTF on Mauna Kea opened; 32 × 32 and 64 × 64 arrays produced using InSb, HgCdTe and other materials. 2013: On 14 February, researchers developed a neural implant that gives rats the ability to sense infrared light, which for the first time provides living creatures with new abilities, instead of simply replacing or augmenting existing abilities.
See also Notes References External links Infrared: A Historical Perspective (Omega Engineering) Infrared Data Association, a standards organization for infrared data interconnection SIRC Protocol How to build a USB infrared receiver to control PCs remotely Infrared Waves: detailed explanation of infrared light (NASA) Herschel's original paper from 1800 announcing the discovery of infrared light The Thermographic's Library, a collection of thermograms Infrared reflectography in analysis of paintings at ColourLex Molly Faries, Techniques and Applications – Analytical Capabilities of Infrared Reflectography: An Art Historian's Perspective, in Scientific Examination of Art: Modern Techniques in Conservation and Analysis, Sackler NAS Colloquium, 2005 Electromagnetic spectrum
Infrared
[ "Physics" ]
7,051
[ "Infrared", "Spectrum (physical sciences)", "Electromagnetic spectrum" ]
624,406
https://en.wikipedia.org/wiki/Amagat%27s%20law
Amagat's law or the law of partial volumes describes the behaviour and properties of mixtures of ideal (as well as some cases of non-ideal) gases. It is of use in chemistry and thermodynamics. It is named after Émile Amagat. Overview Amagat's law states that the extensive volume of a gas mixture is equal to the sum of the volumes of the component gases, if the temperature and the pressure remain the same: $V(T,p) = \sum_{i=1}^{N} V_i(T,p)$. This is the experimental expression of volume as an extensive quantity. According to Amagat's law of partial volume, the total volume of a non-reacting mixture of gases at constant temperature and pressure should be equal to the sum of the individual partial volumes of the constituent gases. So if $V_1, V_2, \dots, V_N$ are considered to be the partial volumes of the components in the gaseous mixture, then the total volume would be represented as $V = V_1 + V_2 + \dots + V_N = \sum_{i=1}^{N} V_i$. Both Amagat's and Dalton's law predict the properties of gas mixtures. Their predictions are the same for ideal gases. However, for real (non-ideal) gases, the results differ. Dalton's law of partial pressures assumes that the gases in the mixture are non-interacting (with each other) and each gas independently applies its own pressure, the sum of which is the total pressure. Amagat's law assumes that the volumes of the component gases (again at the same temperature and pressure) are additive; the interactions of the different gases are the same as the average interactions of the components. The interactions can be interpreted in terms of a second virial coefficient, $B(T)$, for the mixture. For two components, the second virial coefficient for the mixture can be expressed as $B(T) = x_1^2 B_1 + 2 x_1 x_2 B_{1,2} + x_2^2 B_2$, where the subscripts refer to components 1 and 2, the $x_i$ are the mole fractions, and the $B_i$ are the second virial coefficients. The cross term $B_{1,2}$ of the mixture is given by $B_{1,2} = 0$ for Dalton's law and $B_{1,2} = \frac{B_1 + B_2}{2}$ for Amagat's law. When the volumes of each component gas (same temperature and pressure) are very similar, then Amagat's law becomes mathematically equivalent to Vegard's law for solid mixtures. Ideal gas mixture When Amagat's law is valid and the gas mixture is made of ideal gases, $V_i = x_i V = \frac{n_i R T}{p}$, where: $p$ is the pressure of the gas mixture, $V_i$ is the volume of the i-th component of the gas mixture, $V$ is the total volume of the gas mixture, $n_i$ is the amount of substance of the i-th component of the gas mixture (in mol), $n$ is the total amount of substance of the gas mixture (in mol), $R$ is the ideal, or universal, gas constant, equal to the product of the Boltzmann constant and the Avogadro constant, $T$ is the absolute temperature of the gas mixture (in K), $x_i = n_i/n$ is the mole fraction of the i-th component of the gas mixture. It follows that the mole fraction and volume fraction are the same. This is true also for other equations of state. References Eponymous laws of physics Gas laws Gases
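As a numerical illustration of the ideal-gas case above, the partial volumes $V_i = n_i RT/p$ sum to the total volume, and the volume fractions equal the mole fractions. A minimal sketch (the air-like composition is only an example):

```python
# Amagat's law for an ideal-gas mixture: V = sum of V_i with V_i = n_i*R*T/p,
# which makes volume fraction identical to mole fraction.
R = 8.314462618  # gas constant, J mol^-1 K^-1

def partial_volumes_m3(moles: list[float], t_k: float, p_pa: float) -> list[float]:
    return [n * R * t_k / p_pa for n in moles]

moles = [0.78, 0.21, 0.01]                     # roughly N2, O2, Ar in 1 mol of dry air
vols = partial_volumes_m3(moles, 298.15, 101_325.0)
v_total = sum(vols)
for n, v in zip(moles, vols):
    assert abs(v / v_total - n / sum(moles)) < 1e-12   # volume fraction == mole fraction
print(f"total volume: {v_total * 1e3:.2f} L")          # ~24.47 L at 25 degC and 1 atm
```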
Amagat's law
[ "Physics", "Chemistry" ]
595
[ "Matter", "Phases of matter", "Gas laws", "Statistical mechanics", "Gases" ]
624,666
https://en.wikipedia.org/wiki/List%20of%20partition%20topics
Generally, a partition is a division of a whole into non-overlapping parts. Among the kinds of partitions considered in mathematics are partition of a set or an ordered partition of a set, partition of a graph, partition of an integer, partition of an interval, partition of unity, partition of a matrix (see block matrix), partition of the sum of squares in statistics problems (especially in the analysis of variance), and quotition and partition, two ways of viewing the operation of division of integers. Integer partitions Composition (combinatorics) Ewens's sampling formula Ferrers graph Glaisher's theorem Landau's function Partition function (number theory) Pentagonal number theorem Plane partition Quotition and partition Rank of a partition Crank of a partition Solid partition Young tableau Young's lattice Set partitions Bell number Bell polynomials Dobinski's formula Cumulant Data clustering Equivalence relation Exact cover Knuth's Algorithm X Dancing Links Exponential formula Faà di Bruno's formula Feshbach–Fano partitioning Foliation Frequency partition Graph partition Kernel of a function Lamination (topology) Matroid partitioning Multipartition Multiplicative partition Noncrossing partition Ordered partition of a set Partition calculus Partition function (quantum field theory) Partition function (statistical mechanics) Derivation of the partition function Partition of an interval Partition of a set Ordered partition Partition refinement Disjoint-set data structure Partition problem 3-partition problem Partition topology Quotition and partition Recursive partitioning Stirling number Stirling transform Stratification (mathematics) Tverberg partition Twelvefold way In probability and stochastic processes Chinese restaurant process Dobinski's formula Ewens's sampling formula Law of total cumulance Partition Partition topics
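Two of the most elementary objects in the lists above, the integer partition numbers p(n) and the Bell numbers B(n) counting set partitions, can be computed with standard recurrences; a minimal sketch:

```python
# p(n): number of integer partitions of n, via the classic coin-change style DP.
# B(n): number of partitions of an n-element set, via the Bell triangle.

def partition_count(n: int) -> int:
    p = [1] + [0] * n
    for part in range(1, n + 1):        # allow parts of size `part`
        for total in range(part, n + 1):
            p[total] += p[total - part]
    return p[n]

def bell_number(n: int) -> int:
    row = [1]
    for _ in range(n):
        new = [row[-1]]                  # each row starts with the previous row's last entry
        for x in row:
            new.append(new[-1] + x)
        row = new
    return row[0]

print([partition_count(n) for n in range(7)])  # [1, 1, 2, 3, 5, 7, 11]
print([bell_number(n) for n in range(7)])      # [1, 1, 2, 5, 15, 52, 203]
```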
List of partition topics
[ "Mathematics" ]
359
[ "Enumerative combinatorics", "Combinatorics" ]
624,714
https://en.wikipedia.org/wiki/Thomson%20scattering
Thomson scattering is the elastic scattering of electromagnetic radiation by a free charged particle, as described by classical electromagnetism. It is the low-energy limit of Compton scattering: the particle's kinetic energy and photon frequency do not change as a result of the scattering. This limit is valid as long as the photon energy is much smaller than the mass energy of the particle: $h\nu \ll mc^2$, or equivalently, if the wavelength of the light is much greater than the Compton wavelength of the particle (e.g., for electrons, longer wavelengths than hard x-rays). Description of the phenomenon Thomson scattering is a model for the effect of electromagnetic fields on electrons when the field energy is much less than the rest mass of the electron, $mc^2$. In the model the electric field of the incident wave accelerates the charged particle, causing it, in turn, to emit radiation at the same frequency as the incident wave, and thus the wave is scattered. Thomson scattering is an important phenomenon in plasma physics and was first explained by the physicist J. J. Thomson. As long as the motion of the particle is non-relativistic (i.e. its speed is much less than the speed of light), the main cause of the acceleration of the particle will be due to the electric field component of the incident wave. In a first approximation, the influence of the magnetic field can be neglected. The particle will move in the direction of the oscillating electric field, resulting in electromagnetic dipole radiation. The moving particle radiates most strongly in a direction perpendicular to its acceleration and that radiation will be polarized along the direction of its motion. Therefore, depending on where an observer is located, the light scattered from a small volume element may appear to be more or less polarized. The electric fields of the incoming and observed wave (i.e. the outgoing wave) can be divided up into those components lying in the plane of observation (formed by the incoming and observed waves) and those components perpendicular to that plane. Those components lying in the plane are referred to as "radial" and those perpendicular to the plane are "tangential". (It is difficult to make these terms seem natural, but it is standard terminology.) The diagram on the right depicts the plane of observation. It shows the radial component of the incident electric field, which causes the charged particles at the scattering point to exhibit a radial component of acceleration (i.e., a component tangent to the plane of observation). It can be shown that the amplitude of the observed wave will be proportional to the cosine of χ, the angle between the incident and observed waves. The intensity, which is the square of the amplitude, will then be diminished by a factor of cos²(χ). It can be seen that the tangential components (perpendicular to the plane of the diagram) will not be affected in this way. The scattering is best described by an emission coefficient, ε, defined such that ε dt dV dΩ dλ is the energy scattered by a volume element $dV$ in time dt into solid angle dΩ between wavelengths λ and λ+dλ. From the point of view of an observer, there are two emission coefficients, εr corresponding to radially polarized light and εt corresponding to tangentially polarized light. For unpolarized incident light, these are given by: $\epsilon_r = \frac{3\sigma_t}{16\pi}\, n \langle I \rangle \cos^2\chi$ and $\epsilon_t = \frac{3\sigma_t}{16\pi}\, n \langle I \rangle$, where $n$ is the density of charged particles at the scattering point, $\langle I \rangle$ is the incident flux (i.e. energy/time/area/wavelength), $\chi$ is the angle between the incident and scattered photons (see figure above) and $\sigma_t$ is the Thomson cross section for the charged particle, defined below. The total energy radiated by a volume element in time dt between wavelengths λ and λ+dλ is found by integrating the sum of the emission coefficients over all directions (solid angle): $\epsilon = \int (\epsilon_r + \epsilon_t)\, d\Omega = \sigma_t n \langle I \rangle$. The Thomson differential cross section, related to the sum of the emissivity coefficients, is given by $\frac{d\sigma_t}{d\Omega} = \left(\frac{q^2}{4\pi\varepsilon_0 m c^2}\right)^2 \frac{1 + \cos^2\chi}{2}$, expressed in SI units; q is the charge per particle, m the mass of the particle, and $\varepsilon_0$ a constant, the permittivity of free space. (To obtain an expression in cgs units, drop the factor of $4\pi\varepsilon_0$.) Integrating over the solid angle, we obtain the Thomson cross section $\sigma_t = \frac{8\pi}{3} \left(\frac{q^2}{4\pi\varepsilon_0 m c^2}\right)^2$ in SI units. The important feature is that the cross section is independent of light frequency. The cross section is proportional by a simple numerical factor to the square of the classical radius of a point particle of mass m and charge q, namely $r_e = \frac{q^2}{4\pi\varepsilon_0 m c^2}$. Alternatively, this can be expressed in terms of $\lambda_C$, the Compton wavelength, and the fine-structure constant: $\sigma_t = \frac{8\pi}{3} \left(\frac{\alpha \lambda_C}{2\pi}\right)^2$. For an electron, the Thomson cross-section is numerically given by $\sigma_t \approx 6.652 \times 10^{-29}\ \mathrm{m}^2$. Examples of Thomson scattering The cosmic microwave background contains a small linearly-polarized component attributed to Thomson scattering. That polarized component mapping out the so-called E-modes was first detected by DASI in 2002. The solar K-corona is the result of the Thomson scattering of solar radiation from solar coronal electrons. The ESA and NASA SOHO mission and the NASA STEREO mission generate three-dimensional images of the electron density around the Sun by measuring this K-corona from three separate satellites. In tokamaks, the corona of ICF targets and other experimental fusion devices, the electron temperatures and densities in the plasma can be measured with high accuracy by detecting the effect of Thomson scattering of a high-intensity laser beam. An upgraded Thomson scattering system in the Wendelstein 7-X stellarator uses Nd:YAG lasers to emit multiple pulses in quick succession. The intervals within each burst can range from 2 ms to 33.3 ms, permitting up to twelve consecutive measurements. Synchronization with plasma events is made possible by a newly added trigger system that facilitates real-time analysis of transient plasma events. In the Sunyaev–Zeldovich effect, where the photon energy is much less than the electron rest mass, the inverse-Compton scattering can be approximated as Thomson scattering in the rest frame of the electron. Models for X-ray crystallography are based on Thomson scattering. See also Compton scattering Kapitsa–Dirac effect Klein–Nishina formula References Further reading External links Thomson scattering notes Thomson scattering: principle and measurements Atomic physics Scattering Plasma diagnostics
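As a numerical check of the electron value quoted above, the cross-section formula can be evaluated directly; a minimal sketch using CODATA constants:

```python
# Evaluate sigma_t = (8*pi/3) * (q^2 / (4*pi*eps0*m*c^2))^2 for an electron.
import math

EPS0 = 8.8541878128e-12    # vacuum permittivity, F/m
Q_E = 1.602176634e-19      # elementary charge, C
M_E = 9.1093837015e-31     # electron mass, kg
C = 299_792_458.0          # speed of light, m/s

def thomson_cross_section_m2(q: float, m: float) -> float:
    r_classical = q**2 / (4 * math.pi * EPS0 * m * C**2)  # classical particle radius
    return (8 * math.pi / 3) * r_classical**2

print(f"{thomson_cross_section_m2(Q_E, M_E):.4e} m^2")  # ~6.6525e-29 m^2
# Note the result contains no frequency: the cross section is independent of
# the wavelength of the light, as stated above.
```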
Thomson scattering
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering" ]
1,265
[ "Nuclear physics", "Plasma physics", "Quantum mechanics", "Measuring instruments", "Plasma diagnostics", "Scattering", "Condensed matter physics", "Atomic physics", "Particle physics", "Atomic", " molecular", " and optical physics" ]
625,226
https://en.wikipedia.org/wiki/Reversible%20reaction
A reversible reaction is a reaction in which the conversion of reactants to products and the conversion of products to reactants occur simultaneously. \mathit aA{} + \mathit bB <=> \mathit cC{} + \mathit dD A and B can react to form C and D or, in the reverse reaction, C and D can react to form A and B. This is distinct from a reversible process in thermodynamics. Weak acids and bases undergo reversible reactions. For example, carbonic acid: H2CO3(aq) + H2O(l) ⇌ HCO3−(aq) + H3O+(aq). The concentrations of reactants and products in an equilibrium mixture are determined by the analytical concentrations of the reagents (A and B or C and D) and the equilibrium constant, K. The magnitude of the equilibrium constant depends on the Gibbs free energy change for the reaction. So, when the free energy change is large (more than about 30 kJ mol−1), the equilibrium constant is large (log K > 3) and the concentrations of the reactants at equilibrium are very small. Such a reaction is sometimes considered to be an irreversible reaction, although small amounts of the reactants are still expected to be present in the reacting system. A truly irreversible chemical reaction is usually achieved when one of the products exits the reacting system, for example, as does carbon dioxide (volatile) in the reaction CaCO3 + 2HCl → CaCl2 + H2O + CO2↑ History The concept of a reversible reaction was introduced by Claude Louis Berthollet in 1803, after he had observed the formation of sodium carbonate crystals at the edge of a salt lake (one of the natron lakes in Egypt, in limestone): 2NaCl + CaCO3 → Na2CO3 + CaCl2 He recognized this as the reverse of the familiar reaction Na2CO3 + CaCl2→ 2NaCl + CaCO3 Until then, chemical reactions were thought to always proceed in one direction. Berthollet reasoned that the excess of salt in the lake helped push the "reverse" reaction towards the formation of sodium carbonate. In 1864, Peter Waage and Cato Maximilian Guldberg formulated their law of mass action which quantified Berthollet's observation. Between 1884 and 1888, Le Chatelier and Braun formulated Le Chatelier's principle, which extended the same idea to a more general statement on the effects of factors other than concentration on the position of the equilibrium. Reaction kinetics For the reversible reaction A⇌B, the forward step A→B has a rate constant $k_f$ and the backwards step B→A has a rate constant $k_b$. The concentration of A obeys the following differential equation: $\frac{d[A]}{dt} = -k_f [A] + k_b [B]$. If we consider that the concentration of product B at any time is equal to the concentration of reactants at time zero minus the concentration of reactants at time $t$, we can set up the following equation: $[B]_t = [A]_0 - [A]_t$. Combining the two expressions, we can write $\frac{d[A]}{dt} = -k_f [A] + k_b \left([A]_0 - [A]\right)$. Separation of variables is possible and using an initial value $[A]_{t=0} = [A]_0$, we obtain: $\int_{[A]_0}^{[A]} \frac{d[A]}{k_b [A]_0 - (k_f + k_b)[A]} = \int_0^t dt$, and after some algebra we arrive at the final kinetic expression: $[A] = \frac{[A]_0}{k_f + k_b}\left(k_b + k_f\, e^{-(k_f + k_b)t}\right)$. The concentration of A and B at infinite time has a behavior as follows: $[A]_\infty = \frac{k_b}{k_f + k_b}[A]_0$ and $[B]_\infty = \frac{k_f}{k_f + k_b}[A]_0$. Thus, the formula can be linearized in order to determine $k_f + k_b$: $\ln\left([A] - [A]_\infty\right) = -(k_f + k_b)\,t + \ln\left([A]_0 - [A]_\infty\right)$. To find the individual constants $k_f$ and $k_b$, the following formula is required: $K = \frac{k_f}{k_b} = \frac{[B]_\infty}{[A]_\infty}$. See also Dynamic equilibrium Chemical equilibrium Irreversibility Microscopic reversibility Static equilibrium References Equilibrium chemistry Physical chemistry
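The closed-form solution derived above is easy to sanity-check numerically; a minimal sketch with arbitrary illustrative rate constants (not from any particular reaction):

```python
# Check the reversible first-order kinetics solution
# [A](t) = [A]0 * (kb + kf*exp(-(kf+kb)*t)) / (kf + kb) and its limits.
import math

def conc_a(t: float, a0: float, kf: float, kb: float) -> float:
    return a0 * (kb + kf * math.exp(-(kf + kb) * t)) / (kf + kb)

a0, kf, kb = 1.0, 0.30, 0.10                          # mol/L and s^-1, illustrative
a_inf = kb * a0 / (kf + kb)                           # equilibrium concentration of A
assert abs(conc_a(0.0, a0, kf, kb) - a0) < 1e-12      # starts at [A]0
assert abs(conc_a(1e3, a0, kf, kb) - a_inf) < 1e-12   # relaxes to [A]_inf
b_inf = a0 - a_inf
print(b_inf / a_inf, kf / kb)  # both 3.0: K = [B]inf/[A]inf = kf/kb
```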
Reversible reaction
[ "Physics", "Chemistry" ]
725
[ "Equilibrium chemistry", "Physical chemistry", "Applied and interdisciplinary physics", "nan" ]
625,341
https://en.wikipedia.org/wiki/Nanopore
A nanopore is a pore of nanometer size. It may, for example, be created by a pore-forming protein or as a hole in synthetic materials such as silicon or graphene. When a nanopore is present in an electrically insulating membrane, it can be used as a single-molecule detector. It can be a biological protein channel in a high electrical resistance lipid bilayer, a pore in a solid-state membrane or a hybrid of these – a protein channel set in a synthetic membrane. The detection principle is based on monitoring the ionic current passing through the nanopore as a voltage is applied across the membrane. When the nanopore is of molecular dimensions, passage of molecules (e.g., DNA) causes interruptions of the "open" current level, leading to a "translocation event" signal. The passage of RNA or single-stranded DNA molecules through the membrane-embedded alpha-hemolysin channel (1.5 nm diameter), for example, causes a ~90% blockage of the current (measured at 1 M KCl solution). It may be considered a Coulter counter for much smaller particles. Types Organic Nanopores may be formed by pore-forming proteins, typically a hollow core passing through a mushroom-shaped protein molecule. Examples of pore-forming proteins are alpha hemolysin, aerolysin, and MspA porin. In typical laboratory nanopore experiments, a single protein nanopore is inserted into a lipid bilayer membrane and single-channel electrophysiology measurements are taken. Newer pore-forming proteins have been extracted from bacteriophages for study into their use as nanopores. These pores are generally selected because their diameter is above 2 nm, the diameter of double-stranded DNA. Larger nanopores can be up to 20 nm in diameter. These pores allow small molecules like oxygen, glucose and insulin to pass; however, they prevent large immune-system molecules such as immunoglobulins from passing. As an example, when rat pancreatic cells are microencapsulated, they receive nutrients and release insulin through nanopores while remaining totally isolated from their neighboring environment, i.e. foreign cells. This knowledge can help to replace nonfunctional islets of Langerhans cells in the pancreas (responsible for producing insulin) with harvested piglet cells. They can be implanted underneath the human skin without the need for immunosuppressants, which put diabetic patients at risk of infection. Inorganic Solid-state nanopores are generally made in silicon compound membranes, one of the most common being silicon nitride. The second type of widely used solid-state nanopores are glass nanopores fabricated by laser-assisted pulling of a glass capillary. Solid-state nanopores can be manufactured with several techniques including ion-beam sculpting, dielectric breakdown, electron-beam exposure using TEM and ion-track etching. More recently, the use of graphene as a material for solid-state nanopore sensing has been explored. Another example of solid-state nanopores is a box-shaped graphene (BSG) nanostructure. The BSG nanostructure is a multilayer system of parallel hollow nanochannels located along the surface and having a quadrangular cross-section. The thickness of the channel walls is approximately equal to 1 nm. The typical width of channel facets is about 25 nm. Size-tunable elastomeric nanopores have been fabricated, allowing accurate measurement of nanoparticles as they occlude the flow of ionic current. This measurement methodology can be used to measure a wide range of particle types.
In contrast to the limitations of solid-state pores, they allow for the optimization of the resistance-pulse magnitude relative to the background current by matching the pore size closely to the particle size. As detection occurs on a particle-by-particle basis, the true average and polydispersity distribution can be determined. Using this principle, the world's only commercial tunable nanopore-based particle detection system has been developed by Izon Science Ltd. The box-shaped graphene (BSG) nanostructure can be used as a basis for building devices with changeable pore sizes. Nanopore-based sequencing The observation that a passing strand of DNA containing different bases corresponds with shifts in current values has led to the development of nanopore sequencing. Nanopore sequencing can occur with bacterial nanopores, as mentioned in the section above, as well as with the nanopore sequencing devices created by Oxford Nanopore Technologies. Monomer identification From a fundamental standpoint, nucleotides from DNA or RNA are identified based on shifts in current as the strand enters the pore. In the approach that Oxford Nanopore Technologies uses for nanopore DNA sequencing, a labeled DNA sample is loaded onto a flow cell containing the nanopores. The DNA fragment is guided to the nanopore and commences the unfolding of the helix. As the unwound helix moves through the nanopore, it is correlated with a change in the current value, which is measured thousands of times per second. Nanopore analysis software can take this changing current value for each base detected and obtain the resulting DNA sequence. Similarly, with the usage of biological nanopores, as a constant voltage is applied to the system, changes in the current can be observed. As DNA, RNA or peptides enter the pore, shifts in the current can be observed through this system that are characteristic of the monomer being identified. Ion current rectification (ICR) is an important phenomenon for nanopores. Ion current rectification can also be used as a drug sensor and be employed to investigate the charge status in polymer membranes. Applications to nanopore sequencing Apart from rapid DNA sequencing, other applications include separation of single-stranded and double-stranded DNA in solution, and the determination of the length of polymers. At this stage, nanopores are making contributions to the understanding of polymer biophysics, single-molecule analysis of DNA-protein interactions, as well as peptide sequencing. When it comes to peptide sequencing, bacterial nanopores like hemolysin can be applied to RNA, DNA and, most recently, protein sequencing. For example, in a study in which peptides with the same glycine-proline-proline repeat were synthesized and then put through nanopore analysis, an accurate sequence was able to be attained. This can also be used to identify differences in the stereochemistry of peptides based on intermolecular ionic interactions. Some conformational changes of proteins can also be observed from the translocation curve. Understanding this also contributes more data to understanding the sequence of the peptide fully in its environment. Usage of another bacteria-derived nanopore, the aerolysin nanopore, has shown similar ability in distinguishing residues within a peptide; it has also shown the ability to identify toxins present even in proclaimed "very pure" protein samples, while demonstrating stability over varying pH values.
A limitation to the usage of bacterial nanopores is that, while peptides as short as six residues were accurately detected, larger, more negatively charged peptides resulted in more background signal that is not representative of the molecule. Alternate applications Since the discovery of track-etching technology in the late 1960s, filter membranes with the needed diameter have found application potential in various fields including food safety, environmental pollution, biology, medicine, fuel cells, and chemistry. These track-etched membranes are typically made in a polymer membrane through a track-etching procedure, during which the polymer membrane is first irradiated by a heavy-ion beam to form tracks, and then cylindrical or asymmetric pores are created along the tracks by wet etching. Just as important as the fabrication of filter membranes with proper diameters are the characterization and measurement of these materials. Until now, a number of methods have been developed, which can be classified into the following categories according to the physical mechanisms they exploit: imaging methods such as scanning electron microscopy (SEM), transmission electron microscopy (TEM), and atomic force microscopy (AFM); fluid transport such as bubble point and gas transport; fluid adsorption such as nitrogen adsorption/desorption (BET), mercury porosimetry, liquid-vapor equilibrium (BJH), gas-liquid equilibrium (permporometry) and liquid-solid equilibrium (thermoporometry); electronic conductance; ultrasonic spectroscopy; and molecular transport. More recently, the use of a light-transmission technique as a method for nanopore size measurement has been proposed. See also Coulomb blockade Hemolysin Nanofluidics Nanometre Nanopore sequencing Nanoporous materials Pore-forming toxin References Further reading External links Computer simulations of nanopore devices Conical Nanopore Sensors Biomimetic Channels and Ionic Devices Nanotechnology
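At its simplest, the resistive-pulse detection principle described throughout this article reduces to finding transient dips in the open-pore current. The following hedged sketch, not the algorithm of any particular instrument and with all current levels and event durations made up for illustration, detects such events in a synthetic trace by thresholding:

```python
# Detect "translocation events" as excursions below a threshold between the
# open-pore level and the blocked level (~90% blockage, as for alpha-hemolysin).
import random

random.seed(0)
OPEN_PA, BLOCKED_PA, NOISE_PA = 100.0, 10.0, 2.0
trace = [OPEN_PA + random.gauss(0.0, NOISE_PA) for _ in range(1000)]
for start, duration in [(200, 30), (600, 45)]:        # two synthetic events
    for i in range(start, start + duration):
        trace[i] = BLOCKED_PA + random.gauss(0.0, NOISE_PA)

THRESHOLD = 0.5 * OPEN_PA
events, event_start = [], None
for i, current in enumerate(trace):
    if current < THRESHOLD and event_start is None:
        event_start = i                                # event begins
    elif current >= THRESHOLD and event_start is not None:
        events.append((event_start, i - event_start))  # (start, dwell samples)
        event_start = None
print(events)  # approximately [(200, 30), (600, 45)]
```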
Nanopore
[ "Materials_science", "Engineering" ]
1,829
[ "Nanotechnology", "Materials science" ]
626,196
https://en.wikipedia.org/wiki/Electromagnetic%20propulsion
Electromagnetic propulsion (EMP) is the principle of accelerating an object by the utilization of a flowing electrical current and magnetic fields. The electrical current is used either to create an opposing magnetic field, or to charge a field, which can then be repelled. When a current flows through a conductor in a magnetic field, an electromagnetic force known as a Lorentz force pushes the conductor in a direction perpendicular to the conductor and the magnetic field. This repulsing force is what causes propulsion in a system designed to take advantage of the phenomenon. The term electromagnetic propulsion (EMP) can be described by its individual components: electromagnetic (using electricity to create a magnetic field) and propulsion (the process of propelling something). When a fluid (liquid or gas) is employed as the moving conductor, the propulsion may be termed magnetohydrodynamic drive. One key difference between EMP and propulsion achieved by electric motors is that the electrical energy used for EMP is not used to produce rotational energy for motion, though both use magnetic fields and a flowing electrical current. The science of electromagnetic propulsion does not have origins with any one individual, and it has application in many different fields. The idea of using magnets for propulsion has been dreamed of since at least 1897, when John Munro published his fictional story "A Trip to Venus"; today it can be seen in maglev trains and military railguns. Other applications that remain not widely used or are still in development include ion thrusters for low-orbiting satellites and magnetohydrodynamic drives for ships and submarines. History One of the first recorded discoveries regarding electromagnetic propulsion was in 1889, when Professor Elihu Thomson made public his work with electromagnetic waves and alternating currents. A few years later Emile Bachelet proposed the idea of a metal carriage levitated in air above the rails of a modern railway, which he showcased in the early 1890s. In the 1960s Eric Roberts Laithwaite developed the linear induction motor, which built upon these principles and introduced the first practical application of electromagnetic propulsion. In 1966 James R. Powell and Gordon Danby patented the superconducting maglev transportation system, and after this engineers around the world raced to create the first high-speed rail. From 1984 to 1995 the first commercial automated maglev system ran in Birmingham. It was a low-speed maglev shuttle that ran from Birmingham International Airport to Birmingham International railway station. In the USSR at the beginning of the 1960s, at the Institute of Hydrodynamics in Novosibirsk, Prof. V. F. Minin laid down the experimental foundations of electromagnetically accelerating bodies to hypersonic velocity. Uses Trains Electromagnetic propulsion is utilized in transportation systems to minimize friction and maximize speed over long distances. This has mainly been implemented in high-speed rail systems that use a linear induction motor to power trains by magnetic currents. It has also been utilized in theme parks to create high-speed roller coasters and water rides. Maglev In a maglev train the primary coil assembly lies below the reaction plate. There is a 1–10 cm (0.39–3.93 inch) air gap between them that eliminates friction, allowing for speeds up to 500 km/h (310 mph). An alternating electric current is supplied to the coils, which creates a change in polarity of the magnetic field.
This pulls the train forward from the front and thrusts it forward from the back. A typical maglev train costs three cents per passenger mile, or seven cents per ton mile, to run (not including construction costs). This compares to 15 cents per passenger mile for travel by plane and 30 cents per ton mile for travel by intercity truck. Maglev tracks have high longevity due to minimal friction and an even distribution of weight. Most last for at least 50 years and require little maintenance during this time. Maglev trains are promoted for their energy efficiency since they run on electricity, which can be produced by coal, nuclear, hydro, fusion, wind or solar power without requiring oil. On average, maglev trains travel at 483 km/h (300 mph) and use 0.4 megajoules per passenger mile. As a comparison, a 20 mi/gallon car carrying 1.8 people typically travels at 97 km/h (60 mph) and uses 4 megajoules per passenger mile. Carbon dioxide emissions depend upon the method of electrical production and fuel use. Many renewable electrical production methods generate little or no carbon dioxide during production (although carbon dioxide may be released during manufacture of the components, e.g. the steel used in wind turbines). A running maglev train is also significantly quieter than other trains, trucks or airplanes. Assembly: Linear Induction Motor A linear induction motor consists of two parts: the primary coil assembly and the reaction plate. The primary coil assembly consists of phase windings surrounded by steel laminations, and includes a thermal sensor within a thermal epoxy. The reaction plate consists of a 3.2 mm (0.125 inch) thick aluminum or copper plate bonded to a 6.4 mm (0.25 inch) thick cold-rolled steel sheet. There is an air gap between these two parts that creates the frictionless property an electromagnetic propulsion system encompasses. Functioning of a linear induction motor begins with an alternating current supplied to the coil windings within the primary coil assembly. This creates a traveling magnetic field that induces a current in the reaction plate, which then creates its own magnetic field. The magnetic fields in the primary coil assembly and reaction plate alternate, which generates force and direct linear motion. Spacecraft There are multiple applications for EMP technologies in the field of aerospace. Many of these applications are conceptual as of now; however, several range from near term to next century. One such application is the use of EMP to control fine adjustments of orbiting satellites. One of these systems is based on the direct interaction of the vehicle's own electromagnetic field with the magnetic field of the Earth. The thrust force may be thought of as an electrodynamic force of interaction of the electric current inside its conductors with the applied natural field of the Earth. To attain a greater force of interaction, the magnetic field must be propagated further from the flight craft. The advantages of such systems are the very precise and instantaneous control over the thrust force. In addition, the expected electrical efficiencies are far greater than those of current chemical rockets, which attain propulsion through the intermediate use of heat; this results in low efficiencies and large amounts of gaseous pollutants. The electrical energy in the coil of the EMP system is translated to potential and kinetic energy through direct energy conversion. 
This results in the system having the same high efficiencies as other electrical machines while excluding the ejection of any substance into the environment. The current thrust-to-mass ratios of these systems are relatively low. Nevertheless, since they do not require reaction mass, the vehicle mass stays constant. Also, the thrust can be continuous with relatively low electric consumption. The biggest limitation is mainly the electrical conductance of available materials, which bounds the current that can be produced in the propulsion system. Ships and Submarines EMP and its applications for seagoing ships and submarines have been investigated since at least 1958, when Warren Rice filed a patent describing the technology. The technology described by Rice considered charging the hull of the vessel itself. The design was later refined by allowing the water to flow through thrusters, as described in a later patent by James Meng. The arrangement consists of a water channel open at both ends extending longitudinally through or attached to the ship, a means for producing a magnetic field throughout the water channel, electrodes at each side of the channel, and a source of power to send direct current through the channel at right angles to the magnetic flux, in accordance with the Lorentz force. Elevators Cable-free elevators using EMP, capable of moving both vertically and horizontally, have been developed by the German engineering firm ThyssenKrupp for use in high-rise, high-density buildings. See also Coilgun Magnetohydrodynamics Railgun References Propulsion Electromagnetic components Space elevator
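The force balance described above (a current-carrying conductor in a magnetic field) reduces to F = B·I·L for a straight conductor, and a linear induction motor's traveling field moves at a synchronous speed set by the supply frequency and pole pitch (v = 2·τ·f). The following is a minimal Python sketch of both back-of-envelope calculations; all numeric values are illustrative assumptions, not specifications quoted from any actual system.

```python
# Back-of-envelope electromagnetic propulsion estimates.
# All numeric values below are illustrative assumptions.

def lorentz_thrust(b_field_t: float, current_a: float, length_m: float) -> float:
    """Force (N) on a straight conductor of given length carrying a current
    perpendicular to a uniform magnetic field: F = B * I * L."""
    return b_field_t * current_a * length_m

def lim_synchronous_speed(pole_pitch_m: float, frequency_hz: float) -> float:
    """Speed (m/s) of the traveling magnetic field in a linear induction
    motor: v = 2 * tau * f, where tau is the pole pitch."""
    return 2.0 * pole_pitch_m * frequency_hz

if __name__ == "__main__":
    # Hypothetical MHD thruster channel: 2 T field, 5 kA current, 0.5 m electrodes.
    print(f"MHD channel thrust: {lorentz_thrust(2.0, 5_000, 0.5):.0f} N")
    # Hypothetical LIM: 0.25 m pole pitch driven at 280 Hz -> 140 m/s (~504 km/h),
    # roughly the speed range cited for maglev trains above.
    v = lim_synchronous_speed(0.25, 280.0)
    print(f"LIM field speed: {v:.0f} m/s ({v * 3.6:.0f} km/h)")
```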
Electromagnetic propulsion
[ "Astronomy", "Technology" ]
1,640
[ "Exploratory engineering", "Astronomical hypotheses", "Space elevator" ]
627,040
https://en.wikipedia.org/wiki/Electronic%20signature
An electronic signature, or e-signature, is data that is logically associated with other data and which is used by the signatory to sign the associated data. This type of signature has the same legal standing as a handwritten signature as long as it adheres to the requirements of the specific regulation under which it was created (e.g., eIDAS in the European Union, NIST-DSS in the USA or ZertES in Switzerland). Electronic signatures are a legal concept distinct from digital signatures, a cryptographic mechanism often used to implement electronic signatures. While an electronic signature can be as simple as a name entered in an electronic document, digital signatures are increasingly used in e-commerce and in regulatory filings to implement electronic signatures in a cryptographically protected way. Standardization agencies like NIST or ETSI provide standards for their implementation (e.g., NIST-DSS, XAdES or PAdES). The concept itself is not new, with common law jurisdictions having recognized telegraph signatures as far back as the mid-19th century and faxed signatures since the 1980s. Description The USA's E-Sign Act, signed into law on June 30, 2000, by President Clinton, was described months later as "more like a seal than a signature." An electronic signature is intended to provide a secure and accurate identification method for the signatory during a transaction. Definitions of electronic signatures vary depending on the applicable jurisdiction. A common denominator in most countries is the level of an advanced electronic signature requiring that: The signatory can be uniquely identified and linked to the signature The signatory must have sole control of the private key that was used to create the electronic signature The signature must be capable of identifying if its accompanying data has been tampered with after the message was signed In the event that the accompanying data has been changed, the signature must be invalidated Electronic signatures may be created with increasing levels of security, with each having its own set of requirements and means of creation on various levels that prove the validity of the signature. To provide an even stronger probative value than the above described advanced electronic signature, some countries like member states of the European Union or Switzerland introduced the qualified electronic signature. It is difficult to challenge the authorship of a statement signed with a qualified electronic signature: the statement is non-repudiable. Technically, a qualified electronic signature is implemented through an advanced electronic signature that utilizes a digital certificate, which has been encrypted through a security signature-creating device and which has been authenticated by a qualified trust service provider. In contract law Since well before the American Civil War began in 1861, Morse code was used to send messages electrically via the telegraph. Some of these messages were agreements to terms that were intended as enforceable contracts. An early acceptance of the enforceability of telegraphic messages as electronic signatures came from a New Hampshire Supreme Court case, Howley v. Whipple, in 1869. In the 1980s, many companies and even some individuals began using fax machines for high-priority or time-sensitive delivery of documents. Although the original signature on the original document was on paper, the image of the signature and its transmission was electronic. 
Courts in various jurisdictions have decided that enforceable electronic signatures can include agreements made by email, entering a personal identification number (PIN) into a bank ATM, signing a credit or debit slip with a digital pen pad device (an application of graphics tablet technology) at a point of sale, installing software with a clickwrap software license agreement on the package, and signing electronic documents online. The first agreement signed electronically by two sovereign nations was a Joint Communiqué recognizing the growing importance of the promotion of electronic commerce, signed by the United States and Ireland in 1998. Enforceability In 1996 the United Nations published the UNCITRAL Model Law on Electronic Commerce. Article 7 of the UNCITRAL Model Law on Electronic Commerce was highly influential in the development of electronic signature laws around the world, including in the US. In 2001, UNCITRAL concluded work on a dedicated text, the UNCITRAL Model Law on Electronic Signatures, which has been adopted in some 30 jurisdictions. Article 9, paragraph 3 of the United Nations Convention on the Use of Electronic Communications in International Contracts (2005) establishes a mechanism for functional equivalence between electronic and handwritten signatures at the international level, as well as for their cross-border recognition. The latest UNCITRAL text dealing with electronic signatures is article 16 of the UNCITRAL Model Law on the Use and Cross-border Recognition of Identity Management and Trust Services (2022). Canadian law (PIPEDA) attempts to clarify the situation by first defining a generic electronic signature as "a signature that consists of one or more letters, characters, numbers or other symbols in digital form incorporated in, attached to or associated with an electronic document," then defining a secure electronic signature as an electronic signature with specific properties. PIPEDA's secure electronic signature regulations refine the definition as being a digital signature applied and verified in a specific manner. In the European Union, EU Regulation No 910/2014 on electronic identification and trust services for electronic transactions in the European internal market (eIDAS) sets the legal frame for electronic signatures. It repeals Directive 1999/93/EC. The current and applicable version of eIDAS was published by the European Parliament and the European Council on July 23, 2014. Under Article 25 (1) of the eIDAS regulation, an advanced electronic signature shall "not be denied legal effect and admissibility as evidence in legal proceedings". However, it reaches a higher probative value when enhanced to the level of a qualified electronic signature. By requiring the use of a qualified electronic signature creation device and being based on a certificate that has been issued by a qualified trust service provider, the upgraded advanced signature then carries, according to Article 25 (2) of the eIDAS Regulation, the same legal value as a handwritten signature. However, this is only regulated in the European Union and similarly through ZertES in Switzerland. A qualified electronic signature is not defined in the United States. The U.S. Code defines an electronic signature for the purpose of US law as "an electronic sound, symbol, or process, attached to or logically associated with a contract or other record and executed or adopted by a person with the intent to sign the record." 
It may be an electronic transmission of the document which contains the signature, as in the case of facsimile transmissions, or it may be an encoded message, such as telegraphy using Morse code. In the United States, the definition of what qualifies as an electronic signature is wide and is set out in the Uniform Electronic Transactions Act ("UETA") released by the National Conference of Commissioners on Uniform State Laws (NCCUSL) in 1999. It was influenced by ABA committee white papers and the uniform law promulgated by NCCUSL. Under UETA, the term means "an electronic sound, symbol, or process, attached to or logically associated with a record and executed or adopted by a person with the intent to sign the record." This definition and many other core concepts of UETA are echoed in the U.S. ESign Act of 2000. 48 US states, the District of Columbia, and the US Virgin Islands have enacted UETA. Only New York and Illinois have not enacted UETA, but each of those states has adopted its own electronic signatures statute. On June 11, 2020, the Washington State Office of the CIO adopted UETA. In Australia, an electronic signature is recognised as "not necessarily the writing in of a name, but may be any mark which identifies it as the act of the party." Under the Electronic Transactions Acts in each Federal, State and Territory jurisdiction, an electronic signature may be considered enforceable if (a) there was a method used to identify the person and to indicate that person's intention in respect of the information communicated and the method was either: (i) as reliable as appropriate for the purpose for which the electronic communication was generated or communicated, in light of all the circumstances, including the relevant agreement; or (ii) proven in fact to have fulfilled the functions above by itself or together with further evidence and the person to whom the signature is required to be given consents to that method. Legal definitions Various laws have been passed internationally to facilitate commerce by using electronic records and signatures in interstate and foreign commerce. The intent is to ensure the validity and legal effect of contracts entered into electronically. For instance, PIPEDA (Canadian federal law) (1) An electronic signature is "a signature that consists of one or more letters, characters, numbers or other symbols in digital form incorporated in, attached to or associated with an electronic document"; (2) A secure electronic signature is an electronic signature that (a) is unique to the person making the signature; (b) the technology or process used to make the signature is under the sole control of the person making the signature; (c) the technology or process can be used to identify the person using the technology or process; and (d) the electronic signature can be linked with an electronic document in such a way that it can be used to determine whether the electronic document has been changed since the electronic signature was incorporated in, attached to, or associated with the electronic document. ESIGN Act Sec 106 (US federal law) (2) ELECTRONIC- The term 'electronic' means relating to technology having electrical, digital, magnetic, wireless, optical, electromagnetic, or similar capabilities. (4) ELECTRONIC RECORD- The term 'electronic record' means a contract or other record created, generated, sent, communicated, received, or stored by electronic means. 
(5) ELECTRONIC SIGNATURE- The term 'electronic signature' means an electronic sound, symbol, or process, attached to or logically associated with a contract or other record and executed or adopted by a person with the intent to sign the record. Regulation No 910/2014 on electronic identification and trust services for electronic transactions in the internal market Art 3 (European Union regulation) (10) ‘electronic signature’ means data in electronic form which is attached to or logically associated with other data in electronic form and which is used by the signatory to sign; (11) ‘advanced electronic signature’ means an electronic signature which meets the requirements set out in Article 26; (12) ‘qualified electronic signature’ means an advanced electronic signature that is created by a qualified electronic signature creation device, and which is based on a qualified certificate for electronic signatures; GPEA Sec 1710 (US federal law) (1) ELECTRONIC SIGNATURE.—the term "electronic signature" means a method of signing an electronic message that— (A) identifies and authenticates a particular person as the source of the electronic message; and (B) indicates such person's approval of the information contained in the electronic message. UETA Sec 2 (US state law) (5) "Electronic" means relating to technology having electrical, digital, magnetic, wireless, optical, electromagnetic, or similar capabilities. (6) "Electronic agent" means a computer program or an electronic or other automated means used independently to initiate an action or respond to electronic records or performances in whole or in part, without review or action by an individual. (7) "Electronic record" means a record created, generated, sent, communicated, received, or stored by electronic means. (8) "Electronic signature" means an electronic sound, symbol, or process attached to or logically associated with a record and executed or adopted by a person with the intent to sign the record. Federal Reserve 12 CFR 202 (US federal regulation) refers to the ESIGN Act Commodity Futures Trading Commission 17 CFR Part 1 Sec. 1.3 (US federal regulations) (tt) Electronic signature means an electronic sound, symbol, or process attached to or logically associated with a record and executed or adopted by a person with the intent to sign the record. Food and Drug Administration 21 CFR Sec. 11.3 (US federal regulations) (5) Digital signature means an electronic signature based upon cryptographic methods of originator authentication, computed by using a set of rules and a set of parameters such that the signer's identity and the integrity of the data can be verified. (7) Electronic signature means a computer data compilation of any symbol or series of symbols executed, adopted, or authorized by an individual to be the legally binding equivalent of the individual's handwritten signature. United States Patent and Trademark Office 37 CFR Sec. 1.4 (federal regulation) (d)(2) S-signature. An S-signature is a signature inserted between forwarding slash marks, but not a handwritten signature ... (i)The S-signature must consist only of letters, or Arabic numerals, or both, with appropriate spaces and commas, periods, apostrophes, or hyphens for punctuation... (e.g., /Dr. James T. Jones, Jr./)... (iii) The signer's name must be: (A) Presented in printed or typed form preferably immediately below or adjacent to the S-signature, and (B) Reasonably specific enough so that the identity of the signer can be readily recognized. 
Laws regarding their use Australia - Electronic Transactions Act 1999 (which incorporates amendments from Electronic Transactions Amendment Act 2011), Section 10 - Signatures specifically relates to electronic signatures. Azerbaijan - Electronic Signature and Electronic Document Law (2004) Brazil - 2020 Electronic signature Law (Lei de assinaturas eletrônicas); Brazil's National Public Key Certificate Infrastructure Act (Infraestrutura de Chaves Públicas Brasileira - ICP-Brasil) Bulgaria - Electronic Document and Electronic Certification Services Act Canada - PIPEDA, its regulations, and the Canada Evidence Act. China - Law of the People's Republic of China on Electronic Signature (effective April 1, 2005) Costa Rica - Digital Signature Law 8454 (2005) Croatia 2002, updated 2008 Czech Republic – currently directly applicable eIDAS and Zákona o službách vytvářejících důvěru pro elektronické transakce - 297/2016 Sb. (effective from 19 September 2016), formerly Zákon o elektronickém podpisu - 227/2000 Sb. (effective from 1 October 2000 until 19 September 2016 when it was derogated) Ecuador – Ley de Comercio Electronico Firmas y Mensajes de Datos European Union - eIDAS regulation on implementation within the EU is set out in the Digital Signatures and the Law. India - Information Technology Act Indonesia - Law No. 11/2008 on Information and Electronic Transactions Iraq - Electronic Transactions and Electronic Signature Act No 78 in 2012 Ireland - Electronic Commerce Act 2000 Japan - Law Concerning Electronic Signatures and Certification Services, 2000 Kazakhstan - Law on Electronic Document and Electronic Signature (07.01.2003) Lithuania - Law on Electronic Identification and Trust Services for Electronic Transactions Mexico - E-Commerce Act [2000] Malaysia - Digital Signature Act 1997 and Digital Signature Regulation 1998 (https://www.mcmc.gov.my/sectors/digital-signature) Moldova - Privind semnătura electronică şi documentul electronic (http://lex.justice.md/md/353612/) New Zealand - Contract and Commercial Law Act 2017 Paraguay - Ley 4017: De validez jurídica de la Firma Electrónica, la Firma Digital, los Mensajes de Datos y el Expediente Electrónico (12/23/2010) , Ley 4610: Que modifica y amplia la Ley 4017/10 (05/07/2012) Peru - Ley Nº 27269. Ley de Firmas y Certificados Digitales (28MAY2000) the Philippines - Electronic Commerce Act of 2000 Poland - Ustawa o podpisie elektronicznym (Dziennik Ustaw z 2001 r. Nr 130 poz. 1450) Romania - LEGE nr. 214 din 5 iulie 2024 privind utilizarea semnăturii electronice, a mărcii temporale și prestarea serviciilor de încredere bazate pe acestea Russian Federation - Federal Law of Russian Federation about Electronic Signature (06.04.2011) Singapore - Electronic Transactions Act (2010) (background information, differences between ETA 1998 and ETA 2010) Slovakia - Zákon č.215/2002 o elektronickom podpise Slovenia - Slovene Electronic Commerce and Electronic Signature Act South Africa - Electronic Communications and Transactions Act [No. 25 of 2002] Spain - Ley 6/2020, de 11 de noviembre, reguladora de determinados aspectos de los servicios electrónicos de confianza Switzerland - ZertES Republika Srpska (entity of the Bosnia and Herzegovina) 2005 Thailand - Electronic Transactions Act B.E.2544 (2001) Turkey - Electronic Signature Law Ukraine - Electronic Signature Law, 2003 UK - s.7 Electronic Communications Act 2000 U.S. - Electronic Signatures in Global and National Commerce Act U.S. - Uniform Electronic Transactions Act - adopted by 48 states U.S. 
- Government Paperwork Elimination Act (GPEA) U.S. - The Uniform Commercial Code (UCC) Usage In 2016, Aberdeen Strategy and Research reported that 73% of "best-in-class" and 34% of all other respondents surveyed made use of electronic signature processes in supply chain and procurement, delivering benefits in the speed and efficiency of key procurement activities. The percentages of their survey respondents using electronic signatures in accounts payable and accounts receivable processes were a little lower, 53% of "best-in-class" respondents in each case. Technological implementations (underlying technology) Digital signature Digital signatures are cryptographic implementations of electronic signatures used as a proof of authenticity, data integrity and non-repudiation of communications conducted over the Internet. When implemented in compliance with digital signature standards, digital signing should offer end-to-end privacy, with the signing process being user-friendly and secure. Digital signatures are generated and verified through standardized frameworks such as the Digital Signature Algorithm (DSA) by NIST or in compliance with the XAdES, PAdES or CAdES standards, specified by the ETSI. There are typically three algorithms involved with the digital signature process: Key generation – This algorithm provides a private key along with its corresponding public key. Signing – This algorithm produces a signature upon receiving a private key and the message that is being signed. Verification – This algorithm checks for the message's authenticity by verifying it along with the signature and public key. The process of digital signing requires that the signature generated from the fixed message and the private key can then be authenticated by the accompanying public key. Using these cryptographic algorithms, the user's signature cannot be replicated without having access to their private key. A secure channel is not typically required. By applying asymmetric cryptography methods, the digital signature process prevents several common attacks in which an attacker attempts to forge a signature or tamper with signed data. The most relevant standards on digital signatures with respect to the size of domestic markets are the Digital Signature Standard (DSS) by the National Institute of Standards and Technology (NIST) and the eIDAS Regulation enacted by the European Parliament. OpenPGP is a non-proprietary protocol for email encryption through public key cryptography. It is supported by PGP and GnuPG and some of the S/MIME IETF standards, and has evolved into the most popular email encryption standard in the world. Biometric signature An electronic signature may also refer to electronic forms of processing or verifying identity through the use of biometric "signatures" or biologically identifying qualities of an individual. Such signatures use the approach of attaching some biometric measurement to a document as evidence. Biometric signatures include fingerprints, hand geometry (finger lengths and palm size), iris patterns, voice characteristics, retinal patterns, or any other human body property. All of these are collected using electronic sensors of some kind. Biometric measurements of this type are useless as passwords because they can't be changed if compromised. However, they might be serviceable, except that to date they have been so easily deceived that they can carry little assurance that the person who purportedly signed a document was actually the person who did. 
For example, the electronic signal produced and submitted to the computer system responsible for 'affixing' a signature to a document can be collected via wiretapping techniques and replayed later. Many commercially available fingerprint sensors have low resolution and can be deceived with inexpensive household items (for example, gummy bear candy gel). In the case of a user's face image, researchers in Vietnam successfully demonstrated in late 2017 how a specially crafted mask could beat Apple's Face ID on the iPhone X. See also Authentication Long-term validation UNCITRAL Model Law on Electronic Signatures (MLES) References External links E-Sign Final Report (2005, European Union) Judicial Studies Board Digital Signature Guidelines Dynamic signatures Authentication methods Biometrics Cryptography Computer law Electronic identification Signature Records management technology
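The key-generation / signing / verification triple described in the digital signature section above can be made concrete in a few lines of code. The following is a minimal sketch using Ed25519 keys via Python's widely used `cryptography` package; it illustrates the three-algorithm structure only, and is not a statutory or eIDAS-qualified implementation (which would additionally require certificates from a qualified trust service provider).

```python
# Minimal illustration of the three digital-signature algorithms:
# key generation, signing, and verification (Ed25519, via the
# "cryptography" package). Illustrative sketch only.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# 1. Key generation: a private key and its corresponding public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# 2. Signing: the private key plus the message produce a signature.
message = b"I agree to the terms of this contract."
signature = private_key.sign(message)

# 3. Verification: the public key checks message and signature together.
try:
    public_key.verify(signature, message)
    print("Signature valid: document is authentic and unmodified.")
except InvalidSignature:
    print("Signature invalid.")

# Tampering with the signed data invalidates the signature, matching the
# "advanced electronic signature" requirement described above.
try:
    public_key.verify(signature, message + b" (altered)")
except InvalidSignature:
    print("Altered document correctly rejected.")
```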
Electronic signature
[ "Mathematics", "Technology", "Engineering" ]
4,347
[ "Cybersecurity engineering", "Cryptography", "Applied mathematics", "Computer law", "Computing and society", "Electronic identification" ]
627,206
https://en.wikipedia.org/wiki/Transhuman%20Space
Transhuman Space (THS) is a role-playing game by David Pulver, published by Steve Jackson Games as part of the "Powered by GURPS" (Generic Universal Role-Playing System) line. Set in the year 2100, the game depicts a humanity that has begun to colonize the Solar System. The pursuit of transhumanism is now in full swing, as more and more people reach fully posthuman states. Transhuman Space was one of the first role-playing games to tackle postcyberpunk and transhumanist themes. In 2002, the Transhuman Space adventure "Orbital Decay" received an Origins Award nomination for Best Role-Playing Game Adventure. Transhuman Space won the 2003 Grog d'Or Award for Best Role-playing Game, Game Line or RPG Setting. Setting The game assumes that no cataclysm — natural or human-induced — swept Earth in the 21st century. Instead, constant developments in information technology, genetic engineering, nanotechnology and nuclear physics have generally improved the condition of average human life. Plagues of the 20th century (like cancer or AIDS) have been suppressed, the ozone layer is being restored and Earth's ecosystems are recovering (although thermal emission by fusion power plants poses an environmental threat—albeit a much lesser one than previous sources of energy). Thanks to modern medicine, humans live biblical lifespans surrounded by various artificially intelligent helper applications and robots (cybershells), sensory experience broadcasts (future TV) and cyberspace telepresence. Thanks to cheap and clean fusion energy, humanity has the power to fuel all these wonders, restore and transform its home planet and finally settle on other heavenly bodies. Human genetic engineering has advanced to the point that anyone—single individuals, same-sex couples, groups of three or more—can reproduce. The embryos can be allowed to develop naturally, or they can undergo three levels of tinkering: 1. Genefixing, which corrects defects; 2. Upgrades, which boost natural abilities (Ishtar Upgrades are slightly more attractive than usual, Metanoia Upgrades are more intelligent, etc.); and 3. Full transition to parahuman status (Nyx Parahumans only need a few hours of sleep per week, Aquamorphs can live underwater, etc.). Another type of human genetic engineering, far more controversial, is the creation of bioroids, fully sentient slave races. People can "upload" by recording a simulation of their brains on computer disks. The emulated individual then becomes a ghost, an infomorph very easily confused with a "sapient artificial intelligence". However, this technology has several problems: the solely available "brainpeeling" technique is fatal to the original biological lifeform being simulated, has a significant failure rate, and the philosophical questions regarding personal identity remain equivocal. Any infomorph, regardless of its origin, can be plugged into a "cybershell" (robotic or cybernetic body), or a biological body, or "bioshell". Or, the individual can illegally make multiple "xoxes", or copies of themselves, and scatter them throughout the system, exponentially increasing the odds that at least one of them will live for centuries more, if not forever. This is also a time of space colonization. First, humanity (specifically China, followed by the United States and others) colonized Mars in a fashion resembling that outlined in the Mars Direct project. The Moon, Lagrangian points, inner planets and asteroids soon followed. 
In the late 21st century even some of Saturn's moons have been settled as bases for that planet's helium-3 scooping operations. Transhuman Space's setting is neither utopia nor dystopia, however: several problems have arisen from these otherwise beneficial developments. The generation gap has become a chasm as lifespans increase. No longer do the elite fear death, and no longer can the young hope to replace them. While it seemed that outworld colonies would offer accommodation and work for those young ones, they are being replaced by genetically tailored bioroids and AI-powered cybershells. The concept of humanity is no longer clear in a world where even some animals speak of their rights and the dead haunt both cyberspace and reality (in the form of infomorph-controlled bioshells or cybershells). And the wonders of high science are not universally shared — some countries merely struggle with informatization while others suffer from nanoplagues, defective drugs, implants and software tested on their populace. In some poor countries high-tech tyrants oppress their backward people. And in outer space all sorts of modern crime thrive, barely suppressed by military forces. Publication history After the initial set of GURPS books that were published using GURPS Lite, later publications such as Transhuman Space by David Pulver were labelled simply "Powered by GURPS" without using the name "GURPS" in the book title. Transhuman Space received a significant amount of supporting publications, and was the largest original background setting that Steve Jackson Games had produced in 15 years. Shannon Appelcline noted that by its inclusion of posthuman characters, the book began to show the limits of the GURPS system as it was, which is something that Pulver would address soon thereafter. Steve Jackson Games has not updated the core book (GURPS Transhuman Space) to 4th edition, although the supplement Transhuman Space: Changing Times provides a path for migrating to 4th edition. It has produced several 4th edition supplements for the setting: Transhuman Space: Bioroid Bazaar, Transhuman Space: Cities on the Edge, Transhuman Space: Martial Arts 2100, Transhuman Space: Personnel Files 2-5, Transhuman Space: Shell-Tech, GURPS Spaceships 8: Transhuman Spacecraft, Transhuman Space: Transhuman Mysteries, and Transhuman Space: Wings of the Rising Sun. Reception In a review of Transhuman Space in Black Gate, William Stoddard said "Transhuman Space was a richly detailed setting; if it had imperfections, it had enough depth to make up for them. I think it has the potential to become a classic in its field. Perhaps a campaign set in its default start year of 2100 could leave the early twenty-first century blurry enough to avoid obvious incongruities." Reviews Review in Vol. 20, No. 1 of Prometheus, the journal of the Libertarian Futurist Society. 
See also Eclipse Phase Orion's Arm Hard science fiction List of GURPS books GURPS Basic Set Pyramid, a monthly online magazine with GURPS support References External links Transhuman Space Official web site Review of Transhuman Space at RPGnet 2003 Grog d'Or Announcement Fiction about artificial intelligence Fiction about augmented reality Biopunk Biorobotics in fiction Fiction about brain–computer interface Fiction about cyborgs Fiction about consciousness transfer Fiction about robots Fiction about the Solar System Fiction about genetic engineering GURPS 3rd edition GURPS 4th edition GURPS books Nanopunk Fiction about nanotechnology Postcyberpunk Fiction about prosthetics Role-playing game supplements introduced in 2002 Science fiction role-playing games Steve Jackson Games games Transhumanism in fiction Fiction about virtual reality
Transhuman Space
[ "Materials_science", "Engineering", "Biology" ]
1,530
[ "Fiction about cyborgs", "Genetic engineering", "Fiction about genetic engineering", "Fiction about nanotechnology", "Cyborgs", "Nanotechnology" ]
627,501
https://en.wikipedia.org/wiki/Littlewood%20conjecture
In mathematics, the Littlewood conjecture is an open problem in Diophantine approximation, proposed by John Edensor Littlewood around 1930. It states that for any two real numbers α and β, $\liminf_{n\to\infty} n\,\|n\alpha\|\,\|n\beta\| = 0$, where $\|x\|$ denotes the distance from $x$ to the nearest integer. Formulation and explanation This means the following: take a point (α, β) in the plane, and then consider the sequence of points (2α, 2β), (3α, 3β), ... . For each of these, multiply the distance to the closest line with integer x-coordinate by the distance to the closest line with integer y-coordinate. This product will certainly be at most 1/4. The conjecture makes no statement about whether this sequence of values will converge; it typically does not, in fact. The conjecture states something about the limit inferior, and says that there is a subsequence for which the products decay faster than the reciprocal of n, i.e. they are o(1/n) in the little-o notation. Connection to further conjectures It is known that this would follow from a result in the geometry of numbers, about the minimum on a non-zero lattice point of a product of three linear forms in three real variables: the implication was shown in 1955 by Cassels and Swinnerton-Dyer. This can be formulated another way, in group-theoretic terms. There is now another conjecture, expected to hold for n ≥ 3: it is stated in terms of G = SLn(R), Γ = SLn(Z), and the subgroup D of diagonal matrices in G. Conjecture: for any g in G/Γ such that Dg is relatively compact (in G/Γ), Dg is closed. This in turn is a special case of a general conjecture of Margulis on Lie groups. Partial results Borel showed in 1909 that the exceptional set of real pairs (α,β) violating the statement of the conjecture is of Lebesgue measure zero. Manfred Einsiedler, Anatole Katok and Elon Lindenstrauss have shown that it must have Hausdorff dimension zero; and in fact is a union of countably many compact sets of box-counting dimension zero. The result was proved by using a measure classification theorem for diagonalizable actions of higher-rank groups, and an isolation theorem proved by Lindenstrauss and Barak Weiss. These results imply that non-trivial pairs satisfying the conjecture exist: indeed, given a real number α such that $\liminf_{n\to\infty} n\,\|n\alpha\| > 0$ (the only non-trivial case, since for any other α the conjecture holds with every β), it is possible to construct an explicit β such that (α,β) satisfies the conjecture. See also Littlewood polynomial References Further reading Diophantine approximation Conjectures Unsolved problems in mathematics
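The limit inferior in the statement can be explored numerically. The following is a small Python sketch, assuming α = √2 and β = √3 as sample inputs; double-precision floats limit how far n can be trusted (roughly n up to 10^6 or so), so this only illustrates the slow decay of the record products and proves nothing.

```python
# Track record lows of n * ||n*alpha|| * ||n*beta|| for sample alpha, beta.
# Floating-point only: illustrative, not a proof; round-off dominates for
# very large n, so the loop is kept modest.
from math import sqrt

def dist_to_nearest_int(x: float) -> float:
    """||x||: distance from x to the nearest integer."""
    return abs(x - round(x))

alpha, beta = sqrt(2.0), sqrt(3.0)  # assumed sample inputs
record = float("inf")
for n in range(1, 1_000_000):
    value = n * dist_to_nearest_int(n * alpha) * dist_to_nearest_int(n * beta)
    if value < record:
        record = value
        print(f"n = {n:>7}  n*||n*a||*||n*b|| = {value:.6f}")
# The Littlewood conjecture asserts that these record values tend to 0
# for *every* pair (alpha, beta); no violating pair is known.
```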
Littlewood conjecture
[ "Mathematics" ]
557
[ "Unsolved problems in mathematics", "Diophantine approximation", "Conjectures", "Mathematical relations", "Mathematical problems", "Approximations", "Number theory" ]
2,270,854
https://en.wikipedia.org/wiki/Polyethylene%20naphthalate
Polyethylene naphthalate (poly(ethylene 2,6-naphthalate) or PEN) is a polyester derived from naphthalene-2,6-dicarboxylic acid and ethylene glycol. As such it is related to poly(ethylene terephthalate), but with superior barrier properties. Production Two major manufacturing routes exist for polyethylene naphthalate (PEN), i.e. an ester or an acid process, named according to whether the starting monomer is a diester or a diacid derivative, respectively. In both cases for PEN, the glycol monomer is ethylene glycol. Solid-state polymerization (SSP) of the melt-produced resin pellets is the preferred process to increase the average molecular weight of PEN. Applications Because it provides a very good oxygen barrier, it is well-suited for bottling beverages that are susceptible to oxidation, such as beer. It is also used in making high performance sailcloth. Significant commercial markets have been developed for its application in textile and industrial fibers, films, and foamed articles, containers for carbonated beverages, water and other liquids, and thermoformed applications. It is also an emerging material for modern electronic devices. PEN was the medium for Advanced Photo System film (discontinued in 2011). PEN is used for manufacturing high performance fibers that have very high modulus and better dimensional stability than PET or Nylon fibers. PEN is used as the substrate for most Linear Tape-Open (LTO) cartridges. It also has been found to show excellent scintillation properties and is expected to replace classic plastic scintillators. Benefits when compared to polyethylene terephthalate The two condensed aromatic rings of PEN confer on it improvements in strength and modulus, chemical and hydrolytic resistance, gaseous barrier, thermal and thermo-oxidative resistance and ultraviolet (UV) light barrier resistance compared to polyethylene terephthalate (PET). PEN is intended as a PET replacement, especially when used as a substrate for flexible integrated circuits. References Polyesters Plastics
Polyethylene naphthalate
[ "Physics" ]
477
[ "Amorphous solids", "Unsolved problems in physics", "Plastics" ]
2,270,916
https://en.wikipedia.org/wiki/Duru%E2%80%93Kleinert%20transformation
The Duru–Kleinert transformation, named after İsmail Hakkı Duru and Hagen Kleinert, is a mathematical method for solving path integrals of physical systems with singular potentials, which is necessary for the solution of all atomic path integrals due to the presence of Coulomb potentials (singular like $1/r$). The Duru–Kleinert transformation replaces the diverging time-sliced path integral of Richard Feynman (which thus does not exist) by a well-defined convergent one. Papers H. Duru and H. Kleinert, Solution of the Path Integral for the H-Atom, Phys. Letters B 84, 185 (1979) H. Duru and H. Kleinert, Quantum Mechanics of H-Atom from Path Integrals, Fortschr. d. Phys. 30, 401 (1982) H. Kleinert, Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets 3. ed., World Scientific (Singapore, 2004) (read book here) Quantum mechanics
Duru–Kleinert transformation
[ "Physics" ]
212
[ "Theoretical physics", "Quantum mechanics", "Quantum physics stubs" ]
2,272,185
https://en.wikipedia.org/wiki/Millman%27s%20theorem
In electrical engineering, Millman's theorem (or the parallel generator theorem) is a method to simplify the solution of a circuit. Specifically, Millman's theorem is used to compute the voltage at the ends of a circuit made up of only branches in parallel. It is named after Jacob Millman, who proved the theorem. Explanation Let $e_1, \dots, e_n$ be the generators' voltages. Let $R_1, \dots, R_n$ be the resistances on the branches with voltage generators $e_1, \dots, e_n$. Then Millman states that the voltage at the ends of the circuit is given by: $v = \left(\sum_{k=1}^{n} e_k/R_k\right) \big/ \left(\sum_{k=1}^{n} 1/R_k\right)$. That is, the sum of the short-circuit currents in each branch divided by the sum of the conductances of each branch. It can be proved by considering the circuit as a single supernode. Then, according to Ohm and Kirchhoff, the voltage between the ends of the circuit is equal to the total current entering the supernode divided by the total equivalent conductance of the supernode. The total current is the sum of the currents in each branch. The total equivalent conductance of the supernode is the sum of the conductances of each branch, since all the branches are in parallel. Branch variations Current sources One method of deriving Millman's theorem starts by converting all the branches to current sources (which can be done using Norton's theorem). A branch that is already a current source is simply not converted. In the expression above, this is equivalent to replacing the $e_k/R_k$ term in the numerator with the current of the current generator, where the kth branch is the branch with the current generator. The parallel conductance of the current source is added to the denominator, as for the series conductance of the voltage sources. An ideal current source has zero conductance (infinite resistance) and so adds nothing to the denominator. Ideal voltage sources If one of the branches is an ideal voltage source, Millman's theorem cannot be used, but in this case the solution is trivial: the voltage at the output is forced to the voltage of the ideal voltage source. The theorem does not work with ideal voltage sources because such sources have zero resistance (infinite conductance), so the summations in both the numerator and the denominator are infinite and the result is indeterminate. See also Analysis of resistive circuits References Bakshi, U.A.; Bakshi, A.V., Network Analysis, Technical Publications, 2009. Ghosh, S.P.; Chakraborty, A.K., Network Analysis and Synthesis, Tata McGraw-Hill, 2010. Singh, S.N., Basic Electrical Engineering, PHI Learning, 2010. Wadhwa, C.L., Network Analysis and Synthesis, New Age International. Electrical engineering Circuit theorems
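The formula above translates directly into a few lines of code. The following is a minimal Python sketch; the branch values in the example are illustrative assumptions, and ideal current sources are handled as the "Branch variations" section describes: their current is added to the numerator while contributing nothing to the denominator.

```python
# Millman's theorem: node voltage across parallel branches.
# v = (sum of short-circuit branch currents) / (sum of branch conductances)

def millman(voltage_branches, current_sources=()):
    """voltage_branches: iterable of (emf_volts, resistance_ohms) pairs.
    current_sources: iterable of ideal source currents in amperes
    (infinite internal resistance, so they add no conductance)."""
    numerator = sum(e / r for e, r in voltage_branches)
    numerator += sum(current_sources)
    denominator = sum(1.0 / r for _, r in voltage_branches)
    return numerator / denominator

# Illustrative example: 10 V behind 2 ohm, 5 V behind 1 ohm,
# plus an ideal 1 A current source in parallel.
v = millman([(10.0, 2.0), (5.0, 1.0)], current_sources=[1.0])
print(f"Node voltage: {v:.3f} V")  # (5 + 5 + 1) / (0.5 + 1.0) = 7.333 V
```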
Millman's theorem
[ "Physics", "Engineering" ]
563
[ "Electrical engineering", "Equations of physics", "Circuit theorems", "Physics theorems" ]
2,272,402
https://en.wikipedia.org/wiki/Natural%20region
A natural region (landscape unit) is a basic geographic unit. Usually, it is a region which is distinguished by its common natural features of geography, geology, and climate. From the ecological point of view, the naturally occurring flora and fauna of the region are likely to be significantly influenced by geographical and geological factors, such as soil and water availability. Thus most natural regions are homogeneous ecosystems. Human impact can be an important factor in the shaping and destiny of a particular natural region. Main terms The concept "natural region" can denote a large basic geographical unit, like the vast boreal forest region. The term may also be used generically, as in alpine tundra, or specifically to refer to a particular place. The term is particularly useful where there is no corresponding or coterminous official region. The Fens of eastern England, the Thai highlands, and the Pays de Bray in Normandy are examples of this. Others might include regions with particular geological characteristics, like badlands, such as the Bardenas Reales, an upland massif of acidic rock, or The Burren, in Ireland. By country Natural regions of Burundi Natural regions of Chile Natural regions of Colombia Natural regions of Germany Natural regions of Venezuela References External links Natural regions of Texas Alberta's Natural Regions Natural regions in Valencia Ecology
Natural region
[ "Biology" ]
266
[ "Ecology" ]
2,273,689
https://en.wikipedia.org/wiki/Commons
The commons is the cultural and natural resources accessible to all members of a society, including natural materials such as air, water, and a habitable Earth. These resources are held in common even when owned privately or publicly. Commons can also be understood as natural resources that groups of people (communities, user groups) manage for individual and collective benefit. Characteristically, this involves a variety of informal norms and values (social practice) employed for a governance mechanism. Commons can also be defined as a social practice of governing a resource not by state or market but by a community of users that self-governs the resource through institutions that it creates. Definition and modern use The Digital Library of the Commons defines "commons" as "a general term for shared resources in which each stakeholder has an equal interest". The term "commons" derives from the traditional English legal term for common land, which are also known as "commons", and was popularised in the modern sense as a shared resource term by the ecologist Garrett Hardin in an influential 1968 article called "The Tragedy of the Commons". As Frank van Laerhoven and Elinor Ostrom have stated: "Prior to the publication of Hardin's article on the tragedy of the commons (1968), titles containing the words 'the commons', 'common pool resources', or 'common property' were very rare in the academic literature." Some texts make a distinction in usage between common ownership of the commons and collective ownership among a group of colleagues, such as in a producers' cooperative. The precision of this distinction is not always maintained. Others conflate open access areas with commons; however, open access areas can be used by anybody while the commons has a defined set of users. The use of "commons" for natural resources has its roots in European intellectual history, where it referred to shared agricultural fields, grazing lands and forests that were, over a period of several hundred years, enclosed, claimed as private property for private use. In European political texts, the common wealth was the totality of the material riches of the world, such as the air, the water, the soil and the seed, all nature's bounty regarded as the inheritance of humanity as a whole, to be shared together. In this context, one may go back further, to the Roman legal category res communis, applied to things common to all to be used and enjoyed by everyone, as opposed to res publica, applied to public property managed by the government. Types Environmental resource The examples below illustrate types of environmental commons. European land use Originally in medieval England the common was an integral part of the manor, and was thus legally part of the estate in land owned by the lord of the manor, but over which certain classes of manorial tenants and others held certain rights. By extension, the term "commons" has come to be applied to other resources which a community has rights or access to. The older texts use the word "common" to denote any such right, but more modern usage is to refer to particular rights of common, and to reserve the name "common" for the land over which the rights are exercised. A person who has a right in, or over, common land jointly with another or others is called a commoner. In middle Europe, commons (relatively small-scale agriculture in, especially, southern Germany, Austria, and the alpine countries) were kept, in some parts, until the present. 
Some studies have compared the German and English dealings with the commons between late medieval times and the agrarian reforms of the 18th and 19th centuries. The UK was quite radical in doing away with and enclosing former commons, while southwestern Germany (and the alpine countries such as Switzerland) had the most advanced commons structures, and were more inclined to keep them. The Lower Rhine region took an intermediate position. However, the UK and the former dominions still have a large amount of Crown land today, which is often used for community or conservation purposes. Mongolian grasslands Based on a research project by the Environmental and Cultural Conservation in Inner Asia (ECCIA) from 1992 to 1995, satellite images were used to compare the amount of land degradation due to livestock grazing in the regions of Mongolia, Russia, and China. In Mongolia, where shepherds were permitted to move collectively between seasonal grazing pastures, degradation remained relatively low at approximately 9%. Comparatively, Russia and China, which mandated state-owned pastures involving immobile settlements and in some cases privatization by household, had much higher degradation, at around 75% and 33% respectively. A collaborative effort on the part of Mongolians proved much more efficient in preserving grazing land. United States Trawl fisheries of New York A trawl fishery in the Bight region, located in New York, provides a completely different example of a type of community-based solution to what is sometimes referred to as the dilemma or "tragedy of the commons". The multitude of fishermen in the region make up a fishing cooperative who specialize in harvesting whiting. Being a part of the cooperative gives them consistent access to the best whiting grounds in the area, which allows them to be highly successful, sometimes even dominating regional whiting markets during the winter season. Membership in the cooperative comes at a relatively high price, which limits entry, and catch quotas are established for members. They prevent unlimited entry to cap the number of members allowed in the club. This is done through a closed membership policy, as well as having control over the docking spaces, and it effectively excludes outsiders from the regional whiting market. The "quotas" are established based on what they estimate can be sold to the regional markets. This directly contrasts with government-imposed regulations, which local fishermen typically consider inflexible. The cooperative, on the other hand, is considered effective and flexible in its sustainable use of the resources in the region. Lobster fishery of Maine The widespread success of the Maine lobster industry is often attributed to the willingness of Maine's lobstermen to uphold and support lobster conservation rules. These rules include harbor territories not recognized by the state, informal trap limits, and laws imposed by the state of Maine (which are largely influenced by lobbying from the lobster industry itself). Lobster is another resource that is sometimes considered vulnerable to overharvesting, and many people within the industry itself have been predicting a collapse for years. Nonetheless, the lobster industry has remained relatively unscathed by resource depletion. The state government of Maine establishes certain regulations, but it does not limit the number of licenses. 
In practice there are many restrictive exclusionary systems that are generated, dictated, and upheld by the community through a series of "traditional fishing rights" that have been locally grandfathered in. One must obtain confirmation from the community to actually be granted access to fish. Once an individual is granted access, they are still only able to access the territories held by that community. Outsiders may even be deterred by threats of violence. It is impossible to know if the lobster resource would have been sustainably used if there was more regulation, or without the internal regulation, but it is certainly being used sustainably in its current state of affairs. It also seems to be run relatively efficiently. This case study of Maine lobster fisheries reflects how a group was able to restrict outsiders' access to a resource while regulating communal use in an effective manner. This has allowed the local communities to reap the rewards of their restraint for decades. Essentially, the local lobster fishers collaborate without much government intervention to sustain their common-pool resource. Irrigation systems of New Mexico Acequia is a method of collective responsibility and management for irrigation systems in desert areas. In New Mexico, a community-run organization known as Acequia Associations supervises water in terms of diversion, distribution, utilization, and recycling, in order to reinforce agricultural traditions and preserve water as a common resource for future generations. The Congreso de las Acequias, which has existed since the 1990s, is a statewide federation that represents several hundred acequia systems in New Mexico. Community forests in Nepal In the late 1980s, Nepal chose to decentralize government control over forests. Community forest programs work by giving local areas a financial stake in nearby woodlands, and thereby increasing the incentive to protect them from overuse. Local institutions regulate harvesting and selling of timber and land, and must use any profit towards community development and preservation of the forests. In twenty years, some locals, especially in the middle hills, have noticed a visible increase in the number of trees, although other places have not seen tangible results, especially where opportunity costs to land are high. Community forestry may also contribute to community development in rural areas – for instance school construction, irrigation and drinking water channel construction, and road construction. Community forestry has proven conducive to democratic practices at the grassroots level. Many Nepalese forest user groups generate income from the community forests, although the amount can vary widely among groups and is often invested in the community rather than flowing directly to individual households. Such income is generated from external sources involving the sales of timber from thinned pine plantations such as in the community forest user groups of Sindhu Palchok and Rachma, and internally in Nepal's mid-hills' broad leaf forests from membership fees, penalties and fines on rule-breakers, in addition to the sales of forest products. Some of the most significant benefits are that locals are able to use the products they gather directly in their own homes for subsistence use. Beaver hunting in James Bay, Quebec, Canada Wildlife hunting territories in James Bay, Quebec, located in the northeastern part of Canada, provide an example of resources being effectively shared by a community. 
There is an extensive heritage of local customary practices that are used to effectively regulate beaver hunting in the region. Beaver has been an important source of food and commerce for the area since the fur trade began in 1670. Unfortunately, beavers are an easy target for resource degradation and depletion due to their colonies being easily spotted. Luckily, the area has grandfathered in many traditions and stewards of the land who safeguard the populations of certain territories. In the 1920s there was a massive influx of non-native trappers in the region due to a new railroad coming to the area, as well as an increase in fur prices at the time. The Amerindian communities lost control over these territories for a short time during this period, which helped lead to what is known as a "tragedy of the commons". In the 1930s conservation laws were enacted which prohibited outsiders from trapping in the area, and reinforced locals' customary laws. This led to a restoration of the population and commerce that beavers provided by the 1950s. The experience of the 1920s is not an isolated incident in the community either. Business conflicts among fur trading companies have led to a few other episodes of resource overuse, but gradually resource use was restored to a proper balance once local control was restored. This case study reflects how communal resource sharing can be effectively propagated by a community. Free drinkable water fountains in Paris In Paris, France, there are over 1,200 free drinkable water fountains distributed throughout the city. The first 100, called the Wallace fountains, were donated by Englishman Sir Richard Wallace (1818–1890) in 1872, and since then the Parisian water company "Eau de Paris" has installed more of them around the city, giving people living in Paris and tourists from all around the world access to free drinkable fresh water. Since then, many other countries, like Spain, Brazil, Italy or Portugal, have installed such fountains on a smaller scale. Allotment gardens in Stockholm In the Stockholm region, green spaces are predominantly owned and managed in either private or municipal forms, allotment gardens being the most common form. The system provides cultural ecosystem services to lot holders, as well as vegetables, fruits, and ornamental flowers. The majority of allotment land in Stockholm is owned by the local municipality, and leaseholds are set for extended periods of time (up to 25 years). The local allotment association makes the decisions about who gets land rights. Only residents of multifamily homes inside the municipality were permitted to sign contracts, signifying a commitment to the original goals of allotments, which were to enhance the health of city dwellers in outdoor settings. Land is organised and managed cooperatively; outside enterprises are not involved in any way. The allotment association recognises lot holders as official members, granting them equal voting rights and shares. In turn, the association represents the land owners in various administrative proceedings. Urban green commons in Cape Town In the post-apartheid metropolis of Cape Town, South Africa, the history of land rights is particularly noticeable since a large number of residents have vivid memories of being forcibly evacuated from their homes or of being assigned to live in specific regions. 
In 2005, the city re-zoned the northern shore of Zeekoevlei – a seasonal lake and wetland area – into smaller parcels of land that were bought by people from Grassy Park who shared experiences of oppression and marginalization during apartheid. After ten years of use as a landfill, the area was covered in "non-indigenous" plants. While constructing their homes, the locals decided to do something different: rather than erecting security walls to demarcate and guard their individual properties, they would restore the fynbos and wetland ecology and establish a public communal garden. As stated by the locals, the initial plan was to create a "blueprint" for communal gardening that would serve as an example for other abandoned green areas, with the goal of "correcting the imbalances of apartheid" and "beautifying and dignifying". The nine residents and the city's conservation managers signed an agreement that allowed the residents to incorporate the public shoreline area into the rehabilitation project, even though the city retained the area closest to the shoreline as public property. Meanwhile, the city saw an opportunity to restore the fynbos and provided labour and plants for clearing and planting. About 50,000 plants were planted (and "weeds" eradicated) along Bottom Road over the course of four years, drawing bees, birds, dragonflies, and toads, in addition to the humans attracted by the new walkways, benches, and barbecue areas. Here, management is done by the locals themselves, often with assistance from the local government, through paid employees and voluntary labour. Because of its considerable size – the project currently spans 6–7 ha, potentially more – governance is difficult. Its proximity to a busy road and hundreds of residential homes exacerbates traffic problems. In addition to the disregard shown by the city administration, the neighbourhood has deteriorated as a result of people setting up barbecues at random and cars driving around freely, both of which have been linked to criminal activity. Cultural and intellectual commons Today, the commons are also understood within a cultural sphere. These commons include literature, music, arts, design, film, video, television, radio, information, software and sites of heritage. Wikipedia is an example of the production and maintenance of common goods by a contributor community: encyclopedic knowledge that can be freely accessed by anyone without a central authority. A tragedy of the commons is avoided there through community control exercised by individual authors within the Wikipedia community. The information commons may also help protect the users of commons: initiatives such as the Corporate Toxics Information Project, and information like the Toxic 100, a list of the top 100 polluters, help people learn what corporations that pollute the environment are doing. Digital commons Mayo Fuster Morell proposed a definition of digital commons as "information and knowledge resources that are collectively created and owned or shared between or among a community and that tend to be non-exclusive, that is, be (generally freely) available to third parties. Thus, they are oriented to favor use and reuse, rather than to exchange as a commodity. Additionally, the community of people building them can intervene in the governing of their interaction processes and of their shared resources."
Examples of digital commons are Wikipedia, free software and open-source hardware projects. Following the narrative of post-growth, the digital commons can present a model of progress that guides commoners in building counter-power in the economic and political field. Being able to share knowledge and resources digitally through internet platforms is a new capacity that challenges traditional hierarchical structures of production, allowing for a higher collective benefit and a sustainable management of resources. Non-material resources are digitally reproducible and can therefore be shared at low cost, in contrast to physical resources, which are quite limited. Shared resources in this context are data, information, culture and knowledge that are produced and made accessible online. In accordance with the "design global, manufacture local" approach, the digital commons may link traditional commons theory with existing physical infrastructures. It further connects with degrowth communities, since it envisions transformations in use-value creation through new technologies, decoupling society from GDP growth and lowering CO2 emissions. Moreover, as a decentralized approach, there is a strong emphasis on inclusion and democratic regulation, which has positioned the commons as an alternative, emancipatory and emerging form of social organization that goes beyond democratic capitalism. Accordingly, through the cooperation of diverse stakeholders and the equitable distribution of the means of production, technological development becomes more accessible and bottom-up projects are fostered in communities. Urban commons Urban commons present an opportunity for citizens to gain power over the management of urban resources and to reframe the costs of city life based on use value and maintenance costs rather than market-driven value. Urban commons situate citizens as key players, rather than public authorities, private markets and technologies. David Harvey (2012) draws a distinction between public spaces and urban commons: the former are not to be equated automatically with the latter. Public spaces and goods in the city become a commons when part of the citizenry takes political action. Syntagma Square in Athens, Tahrir Square in Cairo, Maidan Nezalezhnosti in Kyiv, and the Plaza de Catalunya in Barcelona were public spaces that transformed into urban commons as people protested there to support their political statements. Streets are public spaces that have often become an urban commons through social action and revolutionary protest. Urban commons operate in cities in a way that is complementary to the state and the market. Some examples are community gardening, urban farms on rooftops, and cultural spaces. More recently, participatory studies of commons and infrastructures under conditions of financial crisis have emerged. Knowledge commons In 2007, Elinor Ostrom and her colleague Charlotte Hess extended the commons debate to knowledge, approaching knowledge as a complex ecosystem that operates as a common – a shared resource subject to social dilemmas and political debates. The focus here was on the ready availability of digital forms of knowledge and the associated possibilities to store, access and share it as a common.
The connection between knowledge and commons may be made by identifying typical problems associated with natural-resource commons – such as congestion, overharvesting, pollution and inequities – which also apply to knowledge. Effective alternative solutions (community-based, non-private, non-state), in line with those for natural commons (involving social rules, appropriate property rights and management structures), are then proposed. Thus, the commons metaphor is applied to social practice around knowledge. It is in this context that scholars discuss the creation of depositories of knowledge through the organised, voluntary contributions of scholars (the research community, itself a social common), the problems that such knowledge commons might face (such as free-riding or disappearing assets), and the protection of knowledge commons from enclosure and commodification (in the form of intellectual-property legislation, patenting, licensing and overpricing). At this point, it is important to note the nature of knowledge and its complex, multi-layered qualities of non-rivalry and non-excludability. Natural commons are both rival and excludable (only one person can use any one item or portion at a time, and in so doing consumes it) and characterised by scarcity (they can be replenished, but there are limits to this, such that consumption or destruction may overtake production or creation). Knowledge commons, by contrast, are characterised by abundance: they are non-rival and non-excludable and thus, in principle, not scarce, so they neither impel competition nor compel governance. This abundance of knowledge commons has been celebrated through alternative models of knowledge production, such as commons-based peer production (CBPP), and embodied in the free software movement. The CBPP model showed the power of networked, open collaboration and non-material incentives to produce better quality products (mainly software). Kopli 93 Community Center in Tallinn, Estonia Kopli 93 is a historic building in Tallinn that has undergone several transformations, notably evolving from a Soviet army sailors' club into a vibrant community center equipped with a garden, an apiary, and a workshop. Originally opened in 1937 as a cultural and educational hub, it was repurposed by the Soviet army in 1940 to serve as a sailors' club. In the 1990s, the building transitioned to private ownership, housing a university until 2011. In 2019, it was acquired by the Salme Cultural Center, ushering in its current chapter as a community hub. The transformation of Kopli 93 into a community center was driven by a commitment to sustainable development and fostering community engagement. The center organizes workshops, training sessions, and events that encourage community members to collaborate and contribute to a sustainable future for the building and its garden. One notable initiative, "Community Wednesday", takes place every Wednesday at 6:00 PM, inviting locals to participate in gardening and connect with one another. Furthermore, the workshop operates on Mondays, Wednesdays, and Thursdays from 5:00 PM to 9:00 PM, providing free access to community members who bring their own materials. Participants receive guidance from the in-house foreman and are required to clean up after their sessions, including dedicating 15 minutes to tidying the workshop.
This initiative is part of the CENTRINNO 2020–2024 project, supported by the Horizon 2020 innovation funding programme of the European Commission (H2020 grant no. 869595). The project began with the establishment of a community garden and later expanded to include a workshop and cultural events. Commoning as a process Scholars such as David Harvey have adopted the term commoning, which as a verb serves to emphasize an understanding of the commons as a process and a practice rather than as "a particular kind of thing" or static entity: "The common is not to be construed, therefore, as a particular kind of thing, asset or even social process, but as an unstable and malleable social relation between a particular self-defined social group and those aspects of its actually existing or yet-to-be-created social and/or physical environment deemed crucial to its life and livelihood. There is, in effect, a social practice of commoning. This practice produces or establishes a social relation with a common whose uses are either exclusive to a social group or partially or fully open to all and sundry. At the heart of the practice of commoning lies the principle that the relation between the social group and that aspect of the environment being treated as a common shall be both collective and non-commodified, off-limits to the logic of market exchange and market valuations." Some authors distinguish between the resources shared (the common-pool resources), the community that governs them, and commoning, that is, the process of coming together to manage such resources. Commoning thus adds another dimension to the commons, acknowledging the social practices entailed in establishing and governing a commons. These practices entail, for the community of commoners, the creation of a new way of living and acting together, and thus a collective psychological shift; it also entails a process of subjectivization, in which the commoners produce themselves as common subjects. Economic theories Tragedy of the commons A commons failure theory, now called the tragedy of the commons, originated in the 19th century. In 1833 William Forster Lloyd introduced the concept with a hypothetical example of herders overusing a shared parcel of land on which each is entitled to let his cows graze, to the detriment of all users of the common land. The same concept has been called the "tragedy of the fishers" when over-fishing causes stocks to plummet. Lloyd's pamphlet was little known, and it was not until 1968, with the publication of the ecologist Garrett Hardin's article "The Tragedy of the Commons", that the term gained prominence. Hardin presented this tragedy as a social dilemma and aimed to expose what he saw as the inevitability of failure in the commons. However, Hardin's (1968) argument has been widely criticized on the grounds that he conflated the commons – resources held and managed in common by a community – with open access, that is, resources open to everyone, for which it is difficult to restrict access or establish rules. In the case of the commons, the community manages the resource and sets the rules of access and use: having a commons does not mean that anyone is free to use the resource as they like. Studies by Ostrom and others have shown that managing a resource as a commons often has positive outcomes and avoids the so-called tragedy of the commons, a fact that Hardin overlooked.
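To make the dilemma concrete, here is a minimal Python sketch with purely illustrative numbers (the gain and damage parameters are assumptions chosen for this example, not drawn from any cited study):

```python
# Minimal sketch of the commons dilemma: each herder gains privately
# from adding a cow, while the damage to the pasture is shared by all.
# All numbers are illustrative.

def payoff(my_cows: int, total_cows: int, gain_per_cow: float = 1.0,
           damage_per_cow: float = 0.5, herders: int = 10) -> float:
    """Private benefit minus this herder's share of the shared damage."""
    return my_cows * gain_per_cow - (total_cows * damage_per_cow) / herders

# With 10 herders each grazing 5 cows, compare keeping 5 vs adding a 6th.
base   = payoff(5, 50)   # 5.0 - 2.50 = 2.50
defect = payoff(6, 51)   # 6.0 - 2.55 = 3.45
print(f"stay at 5: {base:.2f}, add a cow: {defect:.2f}")
# Adding a cow raises the individual payoff even though total damage grows,
# so absent shared rules every herder has an incentive to overgraze.
```

The sketch only illustrates Hardin's open-access scenario; as the studies cited above show, communities with rules of access and use routinely escape this payoff structure.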
It has been said that the dissolution of the traditional land commons played a watershed role in the development of landscapes, cooperative land-use patterns and property rights. However, as in the British Isles, such changes took place over several centuries as a result of land enclosure. Economist Peter Barnes has proposed a "sky trust" to address this tragedy in global commons. He claims that the sky belongs to all people, and that companies do not have a right to over-pollute it. It is a type of cap-and-dividend program, whose ultimate goal is to make polluting excessively more expensive than cleaning up what is put into the atmosphere. Successful commons While the original work on the tragedy of the commons concept suggested that all commons were doomed to failure, commons remain important in the modern world. Work by later economists has found many examples of successful commons, and Elinor Ostrom won the Nobel Memorial Prize in Economic Sciences for analysing situations where they operate successfully. For example, Ostrom found that grazing commons in the Swiss Alps have been run successfully for many hundreds of years by the farmers there. Allied to this is the "comedy of the commons" concept, in which users of the commons develop mechanisms to police their use in order to maintain, and possibly improve, the state of the commons. The term was coined in a 1986 essay by the legal scholar Carol M. Rose. Notable theorists Peter Barnes Yochai Benkler David Bollier Murray Bookchin Iain Boal George Caffentzis Barry Commoner Silvia Federici Henry George Garrett Hardin Michael Hardt David Harvey Silke Helfrich Lewis Hyde Lawrence Lessig Peter Linebaugh Karl Linn Vasilis Kostakis William Forster Lloyd William Morris Fred Moten Antonio Negri Elinor Ostrom Raj Patel John Platt (see Social trap) Joachim Radkau Kenneth Rexroth Gerrard Winstanley Monica White Michel Bauwens Feminist perspectives Silvia Federici articulates a feminist perspective on the commons in her essay "Feminism and the Politics of the Commons". Since the language of the commons has been largely appropriated by the World Bank as it sought to re-brand itself "the environmental guardian of the planet", she argues that it is important to adopt a commons discourse that actively resists this re-branding. Secondly, articulations of the commons, although historically present and multiple, have struggled to come together as a unified front. For the latter to happen, she argues, a "commoning" or "commons" movement capable of effectively resisting capitalist forms of organizing labour and livelihoods must look to women to take the lead in organizing the collectivization of daily life and the means of production. Women and the struggle for the Commons Women have traditionally been at the forefront of struggles for commoning, "as primary subjects of reproductive work". This proximity to, and dependence on, communal natural resources has made women the most vulnerable to their privatization, and their staunchest defenders. Examples include subsistence agriculture, credit associations such as tontines (money commons), and the collectivizing of reproductive labor. In "Caliban and the Witch", Federici interprets the ascent of capitalism as a reactionary move to subvert the rising tide of communalism and to retain the basic social contract.
"Feminist Reconstructions" of the Commons The process of commoning the material means of reproduction of human life is most promising in the struggle to "disentangle our livelihoods not only from the world market but also from the war machine and prison system". One of the main aims of the process of commoning is to create "common subjects" that are responsible to their communities. The notion of community is not understood as a "gated community", but as "a quality of relations, a principle of cooperation and responsibility to each other and the earth, the forests, the seas, the animals. In communalizing housework, one of the supporting pillars of human activity, it is imperative that this sphere is "not negated but revolutionized". Communalizing housework also serves to de-naturalize it as women's labour, which has been an important part of the feminist struggle. Feminist Commons Movement Abortion and Birth Control As reproductive rights over unwanted pregnancies have been denied in many countries for many years, several resistance groups used diverse commoning strategies in order to provide women safe and affordable abortion. Care, knowledge, and pills have been made commons against abortion restriction. In New York, U.S., the group Haven Coalition volunteer provide pre and post abortion care for people who have to travel for abortion which is considered illegal in their places of origins, and with New York Abortion Access Fund, they are able to provide them with medical and financial assistance. Underground networks outside medical service establishments are where women's networks oversee the abortion and assist each other physically or emotionally by sharing the knowledge of herbalism or home abortion. These underground groups operate under codenames like Jane Collective in Chicago or Renata in Arizona. Some groups like Women on Waves from Netherlands use international waters to conduct abortion. Also, in Italy, Obiezione Respinta movement collaboratively map spaces related to birth control such as pharmacies, consultors, hospitals, etc., through which users share their knowledge and experience of the place and provide access to information that is difficult to obtain. Historical land commons movements The Carlist Wars The Diggers Kett's Rebellion Contemporary commons movements Abahlali baseMjondolo in South Africa The Bhumi Uchhed Pratirodh Committee in India Electronic Frontier Foundation The EZLN in Mexico Fanmi Lavalas in Haiti Geolibertarianism primarily in the US The Homeless Workers' Movement in Brazil The Land is Ours in the UK The Landless Workers' Movement in Brazil Movement for Justice en el Barrio in the United States of America Narmada Bachao Andolan in India Take Back the Land in the US Cosmopolitan localism or cosmolocalism See also Citizen's dividend Common good Common ownership Creative Commons Copyleft Common land – Account of historical and present common land use, mainly British Isles. Enclosure Global commons Game theory Homo reciprocans Network effect "The Magic Cauldron" – essay on the open source economic model Tragedy of the anticommons International Association for the Study of the Commons Municipalization Nationalization Patentleft Public good (economics) Public land Reproductive labor Social ownership State ownership Tyranny of small decisions The Goose and the Common References Further reading Basu, S (2016). Knowledge production, Agriculture and Commons: The case of Generation Challenge Programme. (PhD Thesis). Netherlands: Wageningen University. 
Basu, S (2014). An alternative imagination to study commons: beyond state and beyond scientific establishment. Paper presented at the 2nd International Conference on Knowledge Commons for Sustainable Agricultural Innovations. Maringá, Brazil: Maringá State University. Bowers, Chet. (2006). Revitalizing the Commons: Cultural and Educational Sites of Resistance and Affirmation. Lexington Books. Bowers, Chet. (2012). The Way Forward: Educational Reforms that Focus on the Cultural Commons and the Linguistic Roots of the Ecological Crisis. Eco-Justice Press. Bresnihan, P. and Byrne, M. (2015). Escape into the city: Everyday practices of commoning and the production of urban space in Dublin. Antipode 47(1), pp. 36–54. Dalakoglou, Dimitris. "Infrastructural gap: Commons, State and Anthropology". City 20(6). Dellenbaugh, et al. (2015). Urban Commons: Moving beyond State and Market. Birkhäuser. Fourier, Charles. (1996). The Theory of the Four Movements (Cambridge University Press) Gregg, Pauline. (2001). Free-Born John: A Biography of John Lilburne (Phoenix Press) Harvey, Neil. (1998). The Chiapas Rebellion: The Struggle for Land and Democracy (Duke University Press) Hill, Christopher. (1984). The World Turned Upside Down: Radical Ideas During the English Revolution (Penguin) Hill, Christopher. (2006). Winstanley: 'The Law of Freedom' and Other Writings (Cambridge University Press) Hyde, Lewis. (2010). Common as Air: Revolution, Art and Ownership (Farrar, Straus and Giroux) Kennedy, Geoff. (2008). Diggers, Levellers, and Agrarian Capitalism: Radical Political Thought in 17th Century England (Lexington Books) Kostakis, Vasilis and Bauwens, Michel. (2014). Network Society and Future Scenarios for a Collaborative Economy. (Basingstoke, UK: Palgrave Macmillan). (wiki) Leaming, Hugo P. (1995). Hidden Americans: Maroons of Virginia and the Carolinas (Routledge) Linebaugh, Peter, and Marcus Rediker. (2000). The Many-Headed Hydra: Sailors, Slaves, Commoners, and the Hidden History of the Revolutionary Atlantic (Boston: Beacon Press) Linebaugh, Peter. (2008). The Magna Carta Manifesto: Liberties and Commons for All (University of California Press) Lummis, Douglas. (1997). Radical Democracy (Cornell University Press) Mitchel, John Hanson. (1998). Trespassing: An Inquiry into the Private Ownership of Land (Perseus Books) Neeson, J. M. (1996). Commoners: Common Right, Enclosure and Social Change in England, 1700–1820 (Cambridge University Press) Negri, Antonio, and Michael Hardt. (2009). Commonwealth. Harvard University Press. Newfont, Kathryn. (2012). Blue Ridge Commons: Environmental Activism and Forest History in Western North Carolina (The University of Georgia Press) Patel, Raj. (2010). The Value of Nothing (Portobello Books) Price, Richard, ed. (1979). Maroon Societies: Rebel Slave Communities in the Americas (The Johns Hopkins University Press) Proudhon, Pierre-Joseph. (1994). What is Property? (Cambridge University Press) Rexroth, Kenneth. (1974). Communalism: From Its Origins to the Twentieth Century (Seabury Press) Rowe, Jonathan. (2013). Our Common Wealth: The Hidden Economy That Makes Everything Else Work (Berrett-Koehler) Shantz, Jeff. (2013). Commonist Tendencies: Mutual Aid Beyond Communism. (Punctum) Simon, Martin. (2014). Your Money or Your Life: Time for Both. Social Commons.
(Freedom Favours) External links IASC – The International Association for the Study of the Commons – an international association dedicated to the international and interdisciplinary study of commons and commons issues Foundation for common land – A gathering of those across Great Britain and beyond with a stake in pastoral commons and their future International Journal of the Commons – an interdisciplinary, peer-reviewed, open-access journal dedicated to furthering the understanding of institutions for the use and management of resources that are (or could be) enjoyed collectively. Infrastructuring the Commons – Aalto University Special Interest Group (SIG) in the Commons (peer-production, co-production, co-governance, co-creation) and public services. The SIG addresses the relevance of the Commons as a framework to expand the understanding of emerging considerations for the design, provision and maintenance of public services and urban space. Helsinki, Finland. On the Commons – dedicated to exploring ideas and action about the commons, which encompasses natural assets such as oceans and clean air as well as cultural endowments like the Internet, scientific research and the arts. The Peer to Peer Foundation and the Economics and the Commons Conference. The Commons Strategies Group. The Commons Transition Primer. P2P Lab Environmental social science concepts
Commons
[ "Environmental_science" ]
7,690
[ "Environmental social science concepts", "Environmental social science" ]
16,364,229
https://en.wikipedia.org/wiki/Arakelov%20theory
In mathematics, Arakelov theory (or Arakelov geometry) is an approach to Diophantine geometry, named for Suren Arakelov. It is used to study Diophantine equations in higher dimensions. Background The main motivation behind Arakelov geometry is that there is a correspondence between prime ideals $\mathfrak{p} \in \operatorname{Spec}(\mathbb{Z})$ and finite places $v_p : \mathbb{Q}^* \to \mathbb{R}$, but there also exists a place at infinity $v_\infty$, given by the Archimedean valuation, which doesn't have a corresponding prime ideal. Arakelov geometry gives a technique for compactifying $\operatorname{Spec}(\mathbb{Z})$ into a complete space $\overline{\operatorname{Spec}(\mathbb{Z})}$ which has a prime lying at infinity. Arakelov's original construction studies one such theory, where a definition of divisors is constructed for a scheme of relative dimension 1 over $\operatorname{Spec}(\mathcal{O}_K)$ such that it extends to a Riemann surface for every valuation at infinity. In addition, he equips these Riemann surfaces with Hermitian metrics on holomorphic vector bundles over $X(\mathbb{C})$, the complex points of $X$. This extra Hermitian structure is applied as a substitute for the failure of the scheme $\operatorname{Spec}(\mathbb{Z})$ to be a complete variety. Note that other techniques exist for constructing a complete space extending $\operatorname{Spec}(\mathbb{Z})$, which is the basis of F1 geometry. Original definition of divisors Let $K$ be a field, $\mathcal{O}_K$ its ring of integers, and $C$ a genus $g$ curve over $K$ with a non-singular model $X \to \operatorname{Spec}(\mathcal{O}_K)$, called an arithmetic surface. Also, let $\infty : K \to \mathbb{C}$ be an inclusion of fields (which is supposed to represent a place at infinity), and let $X_\infty$ be the associated Riemann surface obtained from the base change to $\mathbb{C}$. Using this data, one can define a c-divisor as a formal linear combination $D = \sum_i k_i C_i + \sum_\infty \lambda_\infty X_\infty$ where $C_i$ is an irreducible closed subset of $X$ of codimension 1, $k_i \in \mathbb{Z}$, and $\lambda_\infty \in \mathbb{R}$, and the sum $\sum_\infty$ runs over every real embedding $K \to \mathbb{R}$ and over one embedding for each pair of complex embeddings $K \to \mathbb{C}$. The set of c-divisors forms a group $\operatorname{Div}_c(X)$. Results Arakelov (1974, 1975) defined an intersection theory on the arithmetic surfaces attached to smooth projective curves over number fields, with the aim of proving certain results, known in the case of function fields, in the case of number fields. Faltings (1984) extended Arakelov's work by establishing results such as a Riemann–Roch theorem, a Noether formula, a Hodge index theorem and the nonnegativity of the self-intersection of the dualizing sheaf in this context. Arakelov theory was used by Paul Vojta (1991) to give a new proof of the Mordell conjecture, and by Faltings (1991) in his proof of Serge Lang's generalization of the Mordell conjecture. Deligne (1987) developed a more general framework to define the intersection pairing defined on an arithmetic surface over the spectrum of a ring of integers by Arakelov. Zhang (1992) developed a theory of positive line bundles and proved a Nakai–Moishezon type theorem for arithmetic surfaces. Further developments in the theory of positive line bundles by Zhang culminated in a proof of the Bogomolov conjecture by Ullmo (1998) and Zhang (1998). Arakelov's theory was generalized by Henri Gillet and Christophe Soulé to higher dimensions. That is, Gillet and Soulé defined an intersection pairing on an arithmetic variety. One of the main results of Gillet and Soulé is the arithmetic Riemann–Roch theorem of Gillet and Soulé (1992), an extension of the Grothendieck–Riemann–Roch theorem to arithmetic varieties. For this one defines arithmetic Chow groups $\widehat{\mathrm{CH}}^p(X)$ of an arithmetic variety $X$, and defines Chern classes for Hermitian vector bundles over $X$ taking values in the arithmetic Chow groups. The arithmetic Riemann–Roch theorem then describes how the Chern class behaves under pushforward of vector bundles under a proper map of arithmetic varieties. A complete proof of this theorem was only published recently by Gillet, Rössler and Soulé.
Arakelov's intersection theory for arithmetic surfaces was developed further by Bost (1999). The theory of Bost is based on the use of Green functions which, up to logarithmic singularities, belong to the Sobolev space $L^2_1$. In this context, Bost obtains an arithmetic Hodge index theorem and uses this to obtain Lefschetz theorems for arithmetic surfaces. Arithmetic Chow groups An arithmetic cycle of codimension p is a pair $(Z, g)$ where $Z \in Z^p(X)$ is a p-cycle on $X$ and $g$ is a Green current for $Z$, a higher-dimensional generalization of a Green function. The arithmetic Chow group $\widehat{\mathrm{CH}}^p(X)$ of codimension p is the quotient of the group of arithmetic cycles of codimension p by the subgroup generated by certain "trivial" cycles. The arithmetic Riemann–Roch theorem The usual Grothendieck–Riemann–Roch theorem describes how the Chern character ch behaves under pushforward of sheaves, and states that $\mathrm{ch}(f_*(E)) = f_*(\mathrm{ch}(E)\,\mathrm{Td}_{X/Y})$, where $f$ is a proper morphism from $X$ to $Y$ and $E$ is a vector bundle over $X$. The arithmetic Riemann–Roch theorem is similar, except that the Todd class gets multiplied by a certain power series. The arithmetic Riemann–Roch theorem states $$\hat{\mathrm{ch}}(f_*([E])) = f_*\big(\hat{\mathrm{ch}}(E)\,\widehat{\mathrm{Td}}_R(T_{X/Y})\big)$$ where $X$ and $Y$ are regular projective arithmetic schemes, $f$ is a smooth proper map from $X$ to $Y$, $E$ is an arithmetic vector bundle over $X$, $\hat{\mathrm{ch}}$ is the arithmetic Chern character, $T_{X/Y}$ is the relative tangent bundle, $\widehat{\mathrm{Td}}_R$ is the arithmetic Todd class, given by $\widehat{\mathrm{Td}}_R(T_{X/Y}) = \widehat{\mathrm{Td}}(T_{X/Y})\big(1 - \epsilon(R(T_{X/Y}))\big)$, and $R$ is the additive characteristic class associated to the formal power series $$R(X) = \sum_{m>0,\, m\ \mathrm{odd}} \frac{X^m}{m!}\left(2\zeta'(-m) + \zeta(-m)\left(1 + \frac{1}{2} + \cdots + \frac{1}{m}\right)\right).$$ See also Hodge–Arakelov theory Hodge theory P-adic Hodge theory Adelic group Notes References External links Original paper Arakelov geometry preprint archive Algebraic geometry Diophantine geometry
Arakelov theory
[ "Mathematics" ]
1,160
[ "Fields of abstract algebra", "Algebraic geometry" ]
16,364,924
https://en.wikipedia.org/wiki/Chemical%20bonding%20model
A chemical bonding model is a theoretical model used to explain atomic bonding structure, molecular geometry, properties, and reactivity of physical matter. This can refer to: VSEPR theory, a model of molecular geometry. Valence bond theory, which describes molecular electronic structure with localized bonds and lone pairs. Molecular orbital theory, which describes molecular electronic structure with delocalized molecular orbitals. Crystal field theory, an electrostatic model for transition metal complexes. Ligand field theory, the application of molecular orbital theory to transition metal complexes. Chemical bonding
Chemical bonding model
[ "Physics", "Chemistry", "Materials_science" ]
109
[ "Chemical bonding", "Condensed matter physics", "nan" ]
16,367,693
https://en.wikipedia.org/wiki/Forensic%20polymer%20engineering
Forensic polymer engineering is the study of failure in polymeric products. The topic includes the fracture of plastic products, or any other reason why such a product fails in service or fails to meet its specification. The subject focuses on the material evidence from crime or accident scenes, seeking defects in those materials that might explain why an accident occurred, or the source of a specific material to identify a criminal. Many analytical methods used for polymer identification may be used in investigations, the exact set being determined by the nature of the polymer in question, be it thermoset, thermoplastic, elastomeric or composite in nature. One aspect is the analysis of trace evidence such as skid marks on exposed surfaces, where contact between dissimilar materials leaves traces of one material on the other. Provided the traces can be analyzed successfully, an accident or crime can often be reconstructed. Methods of analysis Thermoplastics can be analysed using infra-red spectroscopy, ultraviolet–visible spectroscopy, nuclear magnetic resonance spectroscopy and the environmental scanning electron microscope. Failed samples can either be dissolved in a suitable solvent and examined directly (UV, IR and NMR spectroscopy) or examined as a thin film, cast from solvent or cut using microtomy from the solid product. Infra-red spectroscopy is especially useful for assessing oxidation of polymers, such as the polymer degradation caused by faulty injection moulding. In one such case, the spectrum showed the characteristic carbonyl group produced by oxidation of polypropylene, which had made the product brittle. The product was a critical part of a forearm crutch, and when it failed, the user fell and injured herself very seriously. The spectrum was obtained from a thin film cast from a solution of a sample of the plastic taken from the failed crutch. Microtomy is preferable since there are no complications from solvent absorption, and the integrity of the sample is partly preserved. Thermosets, composites and elastomers can often be examined only by microtomy, owing to the insoluble nature of these materials. Fracture Fractured products can be examined using fractography, an especially useful method for all broken components, using macrophotography and optical microscopy. Although polymers usually possess quite different properties from metals, ceramics and glasses, they are just as susceptible to failure from mechanical overload, fatigue and stress corrosion cracking if products are poorly designed or manufactured. Environmental scanning electron microscopy (ESEM) is especially useful for examining fracture surfaces and can also provide elemental analysis of viewed parts of the sample being investigated. It is effectively a technique of microanalysis and is valuable for the examination of trace evidence. On the other hand, colour rendition is absent in ESEM, and no information is provided about the way in which those elements are bonded to one another. Specimens are exposed to a partial vacuum, so any volatiles may be removed, and surfaces may be contaminated by substances used to attach the sample to the mount. Examples Many polymers are attacked by specific chemicals in the environment, and serious problems can arise, including road accidents and personal injury. Polymer degradation leads to sample embrittlement and fracture under low applied loads. Ozone cracking Polymers, for example, can be attacked by aggressive chemicals, and if the material is under load, cracks will grow by the mechanism of stress corrosion cracking.
Perhaps the oldest known example is the ozone cracking of rubbers, where traces of ozone in the atmosphere attack double bonds in the chains of the materials. Elastomers with double bonds in their chains include natural rubber, nitrile rubber and styrene-butadiene rubber. They are all highly susceptible to ozone attack, which can cause problems like vehicle fires (from cracked rubber fuel lines) and tyre blow-outs. Nowadays, anti-ozonants are widely added to these polymers, so the incidence of cracking has dropped. However, not all safety-critical rubber products are protected, and, since parts-per-billion concentrations of ozone suffice to start attack, failures still occur. Chlorine-induced cracking Another highly reactive gas is chlorine, which will attack susceptible polymers such as acetal resin and polybutylene pipework. There have been many examples of such pipes and acetal fittings failing in homes in the US as a result of chlorine-induced cracking. Essentially, the gas attacks sensitive parts of the chain molecules (especially secondary, tertiary or allylic carbon atoms), oxidising the chains and ultimately causing chain cleavage. The root cause is traces of chlorine in the water supply, added for its anti-bacterial action, attack occurring even at parts-per-million concentrations of the dissolved gas. The chlorine attacks weak parts of a product; in one case of an acetal resin junction in a water supply system, it was the thread roots that were attacked first, causing a brittle crack to grow. The discoloration on the fracture surface was caused by deposition of carbonates from the hard water supply, showing that the joint had been in a critical state for many months. Hydrolysis Most step-growth polymers can suffer hydrolysis in the presence of water, often a reaction catalysed by acid or alkali. Nylon, for example, will degrade and crack rapidly if exposed to strong acids, a phenomenon well known to people who accidentally spill acid onto their tights. In one case, a broken fuel pipe caused a serious accident when diesel fuel poured out from a van onto the road. A following car skidded, and the driver was seriously injured when she collided with an oncoming lorry. Scanning electron microscopy (SEM) showed that the nylon connector had fractured by stress corrosion cracking due to a small leak of battery acid. Nylon is susceptible to hydrolysis in contact with sulfuric acid, and only a small leak of acid would have sufficed to start a brittle crack in the injection-moulded connector by the mechanism known as stress corrosion cracking (SCC). The crack took about 7 days to grow across the diameter of the tube, so the van driver should have seen the leak well before the crack grew to a critical size. He did not, and the accident resulted. The fracture surface showed a mainly brittle surface with striations indicating progressive growth of the crack across the diameter of the pipe. Once the crack had penetrated the inner bore, fuel started leaking onto the road. Diesel is especially hazardous on road surfaces because it forms a thin oily film that cannot be seen easily by drivers. It is akin to black ice in lubricity, so skids are common when diesel leaks occur. The insurers of the van driver admitted liability and the injured driver was compensated. Polycarbonate is susceptible to alkali hydrolysis, the reaction simply depolymerising the material.
Polyesters are prone to degradation when treated with strong acids, and in all these cases care must be taken to dry the raw materials before processing at high temperatures, to prevent the problem occurring. UV degradation Many polymers are also attacked by UV radiation at vulnerable points in their chain structures. Thus polypropylene suffers severe cracking in sunlight unless anti-oxidants are added. The point of attack is the tertiary carbon atom present in every repeat unit, where oxidation and finally chain breakage occur. Polyethylene is also susceptible to UV degradation, especially those variants that are branched polymers, such as LDPE. The branch points are tertiary carbon atoms, so polymer degradation starts there and results in chain cleavage and embrittlement. In one example, carbonyl groups were easily detected by IR spectroscopy of a cast thin film. The product was a road cone that had cracked in service, and many similar cones also failed, because an anti-UV additive had not been used. See also Applied spectroscopy Catastrophic failure Circumstantial evidence Environmental stress cracking Forensic chemistry Forensic electrical engineering Forensic evidence Forensic photography Forensic engineering Forensic materials engineering Forensic science Fractography Ozone cracking Polymer degradation Skid mark Stress corrosion cracking Trace evidence UV degradation References Peter R Lewis and Sarah Hainsworth, Fuel Line Failure from stress corrosion cracking, Engineering Failure Analysis, 13 (2006) 946–962. Lewis, Peter Rhys, Reynolds, K, Gagg, C, Forensic Materials Engineering: Case Studies, CRC Press (2004). Wright, D.C., Environmental Stress Cracking of Plastics, RAPRA (2001). Ezrin, Meyer, Plastics Failure Guide: Cause and Prevention, Hanser-SPE (1996). Lewis, Peter Rhys, Forensic Polymer Engineering: Why Polymer Products Fail in Service, 2nd Edition, Elsevier-Woodhead (2016). Polymers Polymer engineering Materials degradation
Forensic polymer engineering
[ "Chemistry", "Materials_science", "Engineering" ]
1,741
[ "Polymers", "Materials degradation", "Materials science", "Polymer chemistry" ]
16,372,307
https://en.wikipedia.org/wiki/Combinatory%20categorial%20grammar
Combinatory categorial grammar (CCG) is an efficiently parsable, yet linguistically expressive grammar formalism. It has a transparent interface between surface syntax and the underlying semantic representation, including predicate–argument structure, quantification and information structure. The formalism generates constituency-based structures (as opposed to dependency-based ones) and is therefore a type of phrase structure grammar (as opposed to a dependency grammar). CCG relies on combinatory logic, which has the same expressive power as the lambda calculus, but builds its expressions differently. The first linguistic and psycholinguistic arguments for basing the grammar on combinators were put forth by Steedman and Szabolcsi. More recent prominent proponents of the approach are Pauline Jacobson and Jason Baldridge. In these newer approaches, the combinator B (the compositor) is useful in creating long-distance dependencies, as in "Who do you think Mary is talking about?", and the combinator W (the duplicator) is useful as the lexical interpretation of reflexive pronouns, as in "Mary talks about herself". Together with I (the identity mapping) and C (the permutator), these form a set of primitive, non-interdefinable combinators. Jacobson interprets personal pronouns as the combinator I, and their binding is aided by a complex combinator Z, as in "Mary lost her way". Z is definable using W and B. Parts of the formalism The CCG formalism defines a number of combinators (application, composition, and type-raising being the most common). These operate on syntactically typed lexical items, by means of natural-deduction-style proofs. The goal of the proof is to find some way of applying the combinators to a sequence of lexical items until no lexical item is unused in the proof. The resulting type after the proof is complete is the type of the whole expression. Thus, proving that some sequence of words is a sentence of some language amounts to proving that the words reduce to the type S. Syntactic types The syntactic type of a lexical item can be either a primitive type, such as S, N, or NP, or complex, such as S\NP or NP/N. The complex types, schematizable as X/Y and X\Y, denote functor types that take an argument of type Y and return an object of type X. A forward slash denotes that the argument should appear to the right, while a backslash denotes that the argument should appear on the left. Any type can stand in for the X and Y here, making syntactic types in CCG a recursive type system. Application combinators The application combinators, often denoted by > for forward application and < for backward application, apply a lexical item with a functor type to an argument with an appropriate type. The definition of application is given as: X/Y Y ⇒ X (forward application, >) and Y X\Y ⇒ X (backward application, <). Composition combinators The composition combinators, often denoted by B> for forward composition and B< for backward composition, are similar to function composition from mathematics, and can be defined as follows: X/Y Y/Z ⇒ X/Z (forward composition, B>) and Y\Z X\Y ⇒ X\Z (backward composition, B<). Type-raising combinators The type-raising combinators, often denoted as T> for forward type-raising and T< for backward type-raising, take argument types (usually primitive types) to functor types, which take as their argument the functors that, before type-raising, would have taken them as arguments: X ⇒ T/(T\X) (forward type-raising, T>) and X ⇒ T\(T/X) (backward type-raising, T<). Example The sentence "the dog bit John" has a number of different possible proofs. Below are a few of them; a minimal executable sketch of the application rules is also given after this paragraph. The variety of proofs demonstrates the fact that in CCG, sentences don't have a single structure, as in other models of grammar.
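As a concrete illustration, here is a minimal Python sketch of forward and backward application (illustrative code written for this explanation, not an existing CCG library); its category assignments match the lexicon given in the next paragraph.

```python
# Minimal sketch of CCG forward (>) and backward (<) application.
# Categories are either primitive strings ("NP", "N", "S") or
# (result, slash, argument) triples; all names are illustrative.

def forward_apply(left, right):
    """X/Y  Y  =>  X"""
    if isinstance(left, tuple) and left[1] == "/" and left[2] == right:
        return left[0]
    return None

def backward_apply(left, right):
    """Y  X\\Y  =>  X"""
    if isinstance(right, tuple) and right[1] == "\\" and right[2] == left:
        return right[0]
    return None

# Lexicon for "the dog bit John"
the  = ("NP", "/", "N")                  # the  : NP/N
dog  = "N"                               # dog  : N
bit  = (("S", "\\", "NP"), "/", "NP")    # bit  : (S\NP)/NP
john = "NP"                              # John : NP

np_subj = forward_apply(the, dog)        # the dog      : NP
vp      = forward_apply(bit, john)       # bit John     : S\NP
s       = backward_apply(np_subj, vp)    # the dog bit John : S
print(s)  # -> "S"
```

The sketch mechanizes only the two application rules; composition and type-raising would be added analogously for the incremental derivation discussed below.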
Let the types of these lexical items be the := NP/N, dog := N, bit := (S\NP)/NP, and John := NP. We can perform the simplest proof (changing notation slightly for brevity) as follows: the and dog combine by forward application to give the dog : NP; bit and John combine by forward application to give bit John : S\NP; and the two results combine by backward application to give the dog bit John : S. Opting to type-raise and compose, we could instead get a fully incremental, left-to-right proof: the dog : NP is type-raised to S/(S\NP), forward-composed with bit : (S\NP)/NP to give S/NP, and then applied to John : NP to yield S. The ability to construct such a proof is an argument for the psycholinguistic plausibility of CCG, because listeners do in fact construct partial interpretations (syntactic and semantic) of utterances before they have been completed. Formal properties CCGs are known to be able to generate the language {a^n b^n c^n d^n : n ≥ 0} (which is a non-context-free indexed language). A grammar for this language can be found in Vijay-Shanker and Weir (1994). Vijay-Shanker and Weir (1994) demonstrate that Linear Indexed Grammars, Combinatory Categorial Grammars, Tree-adjoining Grammars, and Head Grammars are weakly equivalent formalisms, in that they all define the same string languages. Kuhlmann et al. (2015) show that this equivalence, and the ability of CCG to describe {a^n b^n c^n d^n : n ≥ 0}, rely crucially on the ability to restrict the use of the combinatory rules to certain categories, in ways not explained above. See also Categorial grammar Combinatory logic Embedded pushdown automaton Link grammar Type shifter References Baldridge, Jason (2002), "Lexically Specified Derivational Control in Combinatory Categorial Grammar." PhD Dissertation. Univ. of Edinburgh. Curry, Haskell B. and Richard Feys (1958), Combinatory Logic, Vol. 1. North-Holland. Jacobson, Pauline (1999), "Towards a variable-free semantics." Linguistics and Philosophy 22, 117–184. Steedman, Mark (1987), "Combinatory grammars and parasitic gaps". Natural Language and Linguistic Theory 5, 403–439. Steedman, Mark (1996), Surface Structure and Interpretation. The MIT Press. Steedman, Mark (2000), The Syntactic Process. The MIT Press. Szabolcsi, Anna (1989), "Bound variables in syntax (are there any?)." Semantics and Contextual Expression, ed. by Bartsch, van Benthem, and van Emde Boas. Foris, 294–318. Szabolcsi, Anna (1992), "Combinatory grammar and projection from the lexicon." Lexical Matters. CSLI Lecture Notes 24, ed. by Sag and Szabolcsi. Stanford, CSLI Publications. 241–269. Szabolcsi, Anna (2003), "Binding on the fly: Cross-sentential anaphora in variable-free semantics". Resource Sensitivity in Binding and Anaphora, ed. by Kruijff and Oehrle. Kluwer, 215–229. Further reading Michael Moortgat, Categorial Type Logics, Chapter Two in J. van Benthem and A. ter Meulen (eds.) Handbook of Logic and Language. Elsevier, 1997, homepages.inf.ed.ac.uk External links The Combinatory Categorial Grammar Site The ACL CCG wiki page (likely to be more up-to-date than this one) Semantic Parsing with Combinatory Categorial Grammars – tutorial describing general principles for building semantic parsers Grammar frameworks Combinatory logic Type theory
Combinatory categorial grammar
[ "Mathematics" ]
1,470
[ "Type theory", "Mathematical logic", "Mathematical structures", "Mathematical objects" ]
16,373,249
https://en.wikipedia.org/wiki/Structured-light%203D%20scanner
A structured-light 3D scanner is a device that measures the three-dimensional shape of an object by projecting light patterns, such as grids or stripes, onto it and capturing their deformation with cameras. This technique allows for precise surface reconstruction by analyzing the displacement of the projected patterns, which is processed into detailed 3D models using specialized algorithms. Due to their high resolution and rapid scanning capabilities, structured-light 3D scanners are utilized in various fields, including industrial design, quality control, cultural heritage preservation, augmented reality gaming, and medical imaging. Compared to 3D laser scanning, structured-light scanners can offer advantages in speed and safety by using non-coherent light sources such as LEDs or projectors instead of lasers. This approach allows for relatively quick data capture over large areas and reduces potential safety concerns associated with laser use. However, structured-light scanners can be affected by ambient lighting conditions and the reflective properties of the scanned objects. Principle Projecting a narrow band of light onto a three-dimensionally shaped surface produces a line of illumination that appears distorted from perspectives other than that of the projector, and this distortion can be used for geometric reconstruction of the surface shape (light section). A faster and more versatile method is the projection of patterns consisting of many stripes at once, or of arbitrary fringes, as this allows for the acquisition of a multitude of samples simultaneously. Seen from different viewpoints, the pattern appears geometrically distorted due to the surface shape of the object. Although many other variants of structured-light projection are possible, patterns of parallel stripes are widely used. A single stripe projected onto a simple 3D surface, for example, is geometrically deformed by the surface, and the displacement of the stripes allows for an exact retrieval of the 3D coordinates of any details on the object's surface. Generation of light patterns Two major methods of stripe pattern generation have been established: laser interference and projection. The laser interference method works with two wide, planar laser beam fronts. Their interference results in regular, equidistant line patterns. Different pattern sizes can be obtained by changing the angle between the beams. The method allows for the exact and easy generation of very fine patterns with unlimited depth of field. Disadvantages are the high cost of implementation, difficulties in providing the ideal beam geometry, and laser-typical effects such as speckle noise and possible self-interference with beam parts reflected from objects. Typically, there is also no means of modulating individual stripes, such as with Gray codes. The projection method uses incoherent light and basically works like a video projector. Patterns are usually generated by passing light through a digital spatial light modulator, typically based on one of the three currently most widespread digital projection technologies: transmissive liquid crystal, reflective liquid crystal on silicon (LCOS), or digital light processing (DLP; moving micro-mirror) modulators, which have various comparative advantages and disadvantages for this application. Other methods of projection could be, and have been, used, however. Patterns generated by digital display projectors have small discontinuities due to the pixel boundaries in the displays.
Sufficiently small boundaries can practically be neglected, however, as they are evened out by the slightest defocus. A typical measuring assembly consists of one projector and at least one camera. For many applications, two cameras on opposite sides of the projector have been established as useful. Invisible (or imperceptible) structured light uses structured light without interfering with other computer vision tasks for which the projected pattern would be confusing. Example methods include the use of infrared light or of extremely high framerates alternating between two exactly opposite patterns. Calibration Geometric distortions by optics and perspective must be compensated by a calibration of the measuring equipment, using special calibration patterns and surfaces. A mathematical model is used to describe the imaging properties of the projector and cameras. Essentially based on the simple geometric properties of a pinhole camera, the model also has to take into account the geometric distortions and optical aberrations of the projector and camera lenses. The parameters of the camera, as well as its orientation in space, can be determined by a series of calibration measurements, using photogrammetric bundle adjustment. Analysis of stripe patterns There are several depth cues contained in the observed stripe patterns. The displacement of any single stripe can directly be converted into 3D coordinates. For this purpose, the individual stripe has to be identified, which can, for example, be accomplished by tracing or counting stripes (pattern recognition method). Another common method projects alternating stripe patterns, resulting in binary Gray code sequences identifying the number of each individual stripe hitting the object. An important depth cue also results from the varying stripe widths along the object surface. Stripe width is a function of the steepness of a surface part, i.e. the first derivative of the elevation. Stripe frequency and phase deliver similar cues and can be analyzed by a Fourier transform. Finally, the wavelet transform has recently been discussed for the same purpose. In many practical implementations, series of measurements combining pattern recognition, Gray codes and Fourier transform are obtained for a complete and unambiguous reconstruction of shapes. Another method, also belonging to the area of fringe projection, has been demonstrated, utilizing the depth of field of the camera. It is also possible to use projected patterns primarily as a means of inserting structure into scenes, for an essentially photogrammetric acquisition. Precision and range The optical resolution of fringe projection methods depends on the width of the stripes used and their optical quality. It is also limited by the wavelength of light. An extreme reduction of stripe width proves inefficient due to limitations in depth of field, camera resolution and display resolution. Therefore, the phase shift method has been widely established: a number of at least 3, and typically about 10, exposures are taken with slightly shifted stripes. The first theoretical deductions of this method relied on stripes with a sine-wave-shaped intensity modulation, but the method works with "rectangularly" modulated stripes, as delivered by LCD or DLP displays, as well. By phase shifting, surface detail of e.g. 1/10 of the stripe pitch can be resolved.
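A minimal NumPy sketch of the N-step phase-shift computation just described might look as follows (the function and array names are illustrative, not from any scanner SDK; real pipelines add phase unwrapping and calibration on top of this):

```python
import numpy as np

def wrapped_phase(images):
    """N-step phase shifting (N >= 3): images[n] is the camera frame
    captured with the sinusoidal stripe pattern shifted by 2*pi*n/N.
    Returns the wrapped phase in (-pi, pi] per pixel, plus a modulation
    map usable as a confidence mask (low modulation = unreliable pixel)."""
    I = np.asarray(images, dtype=np.float64)           # shape (N, H, W)
    N = I.shape[0]
    n = np.arange(N).reshape(-1, 1, 1)
    s = np.sum(I * np.sin(2 * np.pi * n / N), axis=0)  # ~ -B*sin(phi)*N/2
    c = np.sum(I * np.cos(2 * np.pi * n / N), axis=0)  # ~  B*cos(phi)*N/2
    return -np.arctan2(s, c), 2.0 * np.hypot(s, c) / N

# Synthetic self-check: 4 shifted frames of a fringe with known phase.
H, W, N = 4, 8, 4
true_phase = np.tile(np.linspace(-1.5, 1.5, W), (H, 1))
frames = [100 + 50 * np.cos(true_phase + 2 * np.pi * k / N) for k in range(N)]
phase, modulation = wrapped_phase(frames)
assert np.allclose(phase, true_phase) and np.allclose(modulation, 50.0)
```

The atan2 form falls out of the sinusoidal fringe model I_n = A + B·cos(phi + 2*pi*n/N): summing against sine and cosine isolates B·sin(phi) and B·cos(phi), so the per-pixel phase survives unknown offset A and amplitude B.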
With phase shifting, current optical stripe-pattern profilometry thus allows for detail resolutions down to the wavelength of light, below 1 micrometer in practice, or, with larger stripe patterns, to approximately 1/10 of the stripe width. Concerning level accuracy, interpolating over several pixels of the acquired camera image can yield a reliable height resolution, and also accuracy, down to 1/50 of a pixel. Arbitrarily large objects can be measured with correspondingly large stripe patterns and setups. Practical applications are documented involving objects several meters in size. Typical accuracy figures are: Planarity of a wide surface, to . Shape of a motor combustion chamber to (elevation), yielding a volume accuracy 10 times better than with volumetric dosing. Shape of an object large, to about Radius of a blade edge of e.g. , to ±0.4 μm Navigation As the method can measure shapes from only one perspective at a time, complete 3D shapes have to be combined from measurements taken from different angles. This can be accomplished by attaching marker points to the object and combining perspectives afterwards by matching these markers. The process can be automated by mounting the object on a motorized turntable or CNC positioning device. Markers can also be applied to the positioning device instead of the object itself. The 3D data gathered can be used to retrieve CAD (computer-aided design) data and models from existing components (reverse engineering), hand-formed samples or sculptures, natural objects or artifacts. Challenges As with all optical methods, reflective or transparent surfaces raise difficulties. Reflections cause light to be reflected either away from the camera or right into its optics. In both cases, the dynamic range of the camera can be exceeded. Transparent or semi-transparent surfaces also cause major difficulties. In these cases, coating the surfaces with a thin opaque lacquer, just for measuring purposes, is a common practice. A recent method handles highly reflective and specular objects by inserting a one-dimensional diffuser between the light source (e.g., projector) and the object to be scanned. Alternative optical techniques have been proposed for handling perfectly transparent and specular objects. Double reflections and inter-reflections can cause the stripe pattern to be overlaid with unwanted light, entirely eliminating the chance for proper detection. Reflective cavities and concave objects are therefore difficult to handle. It is also hard to handle translucent materials, such as skin, marble, wax, plants and human tissue, because of the phenomenon of sub-surface scattering. Recently, there has been an effort in the computer vision community to handle such optically complex scenes by re-designing the illumination patterns. These methods have shown promising 3D scanning results for traditionally difficult objects, such as highly specular metal concavities and translucent wax candles. Speed Although several patterns have to be captured per picture in most structured-light variants, high-speed implementations are available for a number of applications, for example: Inline precision inspection of components during the production process. Health care applications, such as live measurement of human body shapes or the microstructures of human skin. Motion picture applications have also been proposed, for example the acquisition of spatial scene data for three-dimensional television.
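Complementing the phase sketch above, the binary Gray-code stripe identification described under "Analysis of stripe patterns" can also be decoded in a few lines. The function below is an illustrative sketch assuming idealized, already-binarized captures, not any particular scanner's pipeline:

```python
import numpy as np

def decode_gray(bit_images):
    """bit_images[k] is a binarized camera frame (H, W) of the k-th
    projected Gray-code pattern, most significant bit first. Returns the
    integer stripe index of each pixel by converting Gray code to binary:
    b[0] = g[0], b[k] = b[k-1] XOR g[k]."""
    bits = np.asarray(bit_images, dtype=np.uint8)
    binary = np.zeros_like(bits)
    binary[0] = bits[0]
    for k in range(1, bits.shape[0]):
        binary[k] = binary[k - 1] ^ bits[k]
    # Pack the binary bits into stripe indices, MSB first.
    weights = 1 << np.arange(bits.shape[0] - 1, -1, -1)
    return np.tensordot(weights, binary.astype(np.int64), axes=1)

# Check with the 3-bit Gray sequence 000,001,011,010,110,111,101,100,
# which should decode to stripe indices 0..7 (one image row of pixels).
gray = np.array([[0, 0, 0, 0, 1, 1, 1, 1],
                 [0, 0, 1, 1, 1, 1, 0, 0],
                 [0, 1, 1, 0, 0, 1, 1, 0]], dtype=np.uint8)
assert decode_gray(gray[:, None, :]).tolist() == [[0, 1, 2, 3, 4, 5, 6, 7]]
```

In a combined system, these coarse Gray-code indices resolve the 2*pi ambiguity of the wrapped phase from the previous sketch, giving each pixel an absolute stripe position.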
Applications Industrial Optical Metrology Systems (ATOS) from GOM GmbH utilize Structured Light technology to achieve high accuracy and scalability in measurements. These systems feature self-monitoring for calibration status, transformation accuracy, environmental changes, and part movement to ensure high-quality measuring data. Google Project Tango SLAM (simultaneous localization and mapping) using depth technologies, including Structured Light, Time of Flight, and Stereo. Time of Flight requires the use of an infrared (IR) projector and IR sensor; Stereo does not. MainAxis srl produces a 3D scanner utilizing an advanced patented technology that enables 3D scanning in full color and with an acquisition time of a few microseconds, used in medical and other applications. A technology by PrimeSense, used in an early version of Microsoft Kinect, used a pattern of projected infrared points to generate a dense 3D image. (Later on, the Microsoft Kinect switched to using a time-of-flight camera instead of structured light.) Occipital Structure Sensor uses a pattern of projected infrared points, calibrated to minimize distortion, to generate a dense 3D image. Structure Core uses a stereo camera that matches against a random pattern of projected infrared points to generate a dense 3D image. Intel RealSense camera projects a series of infrared patterns to obtain the 3D structure. Face ID system works by projecting more than 30,000 infrared dots onto a face and producing a 3D facial map. VicoVR sensor uses a pattern of infrared points for skeletal tracking. Chiaro Technologies uses a single engineered pattern of infrared points called Symbolic Light to stream 3D point clouds for industrial applications. Made-to-measure fashion retailing 3D automated optical inspection Precision shape measurement for production control (e.g. turbine blades) Reverse engineering (obtaining precision CAD data from existing objects) Volume measurement (e.g. combustion chamber volume in motors) Classification of grinding materials and tools Precision structure measurement of ground surfaces Radius determination of cutting tool blades Precision measurement of planarity Documenting objects of cultural heritage Capturing environments for augmented reality gaming Skin surface measurement for cosmetics and medicine Body shape measurement Forensic science inspections Road pavement structure and roughness Wrinkle measurement on cloth and leather Structured Illumination Microscopy Measurement of topography of solar cells 3D vision system enables DHL's e-fulfillment robot Software 3DUNDERWORLD SLS – open-source DIY 3D scanner based on structured light and stereo vision, in the Python language SLStudio – open-source real-time structured light See also Depth map Kinect Laser Dynamic Range Imager (LDRI) Lidar Light stage Range imaging Virtual cinematography References Sources Fechteler, P., Eisert, P., Rurainsky, J.: Fast and High Resolution 3D Face Scanning, Proc. of ICIP 2007 Fechteler, P., Eisert, P.: Adaptive Color Classification for Structured Light Systems, Proc. of CVPR 2008 Kai Liu, Yongchang Wang, Daniel L. Lau, Qi Hao, Laurence G. Hassebrook: Gamma Model and its Analysis for Phase Measuring Profilometry. J. Opt. Soc. Am. A, 27: 553–562, 2010 Yongchang Wang, Kai Liu, Daniel L. Lau, Qi Hao, Laurence G. Hassebrook: Maximum SNR Pattern Strategy for Phase Shifting Methods in Structured Light Illumination, J. Opt. Soc. Am. A, 27(9), pp.
1962–1971, 2010 Hof, C., Hopermann, H.: Comparison of Replica- and In Vivo-Measurement of the Microtopography of Human Skin University of the Federal Armed Forces, Hamburg Frankowski, G., Chen, M., Huth, T.: Real-time 3D Shape Measurement with Digital Stripe Projection by Texas Instruments Micromirror Devices (DMD) Proc. SPIE-Vol. 3958(2000), pp. 90–106 Frankowski, G., Chen, M., Huth, T.: Optical Measurement of the 3D-Coordinates and the Combustion Chamber Volume of Engine Cylinder Heads Proc. Of "Fringe 2001", pp. 593–598 Elena Stoykova, Jana Harizanova, Venteslav Sainov: Pattern Projection Profilometry for 3D Coordinates Measurement of Dynamic Scenes. In: Three Dimensional Television, Springer, 2008, Song Zhang, Peisen Huang: High-resolution, Real-time 3-D Shape Measurement (PhD Dissertation, Stony Brook Univ., 2005) Tao Peng: Algorithms and models for 3-D shape measurement using digital fringe projections (Ph.D. Dissertation, University of Maryland, USA. 2007) W. Wilke: Segmentierung und Approximation großer Punktwolken (Dissertation Univ. Darmstadt, 2000) G. Wiora: Optische 3D-Messtechnik Präzise Gestaltvermessung mit einem erweiterten Streifenprojektionsverfahren (Dissertation Univ. Heidelberg, 2001) Klaus Körner, Ulrich Droste: Tiefenscannende Streifenprojektion (DSFP) University of Stuttgart (further English references on the site) R. Morano, C. Ozturk, R. Conn, S. Dubin, S. Zietz, J. Nissano, "Structured light using pseudorandom codes", IEEE Transactions on Pattern Analysis and Machine Intelligence 20 (3)(1998)322–327 Further reading Fringe, 2005, The 5th International Workshop on Automatic Processing of Fringe Patterns, Berlin: Springer, 2006. 3D imaging Computer vision
Structured-light 3D scanner
[ "Engineering" ]
3,007
[ "Artificial intelligence engineering", "Packaging machinery", "Computer vision" ]
16,376,341
https://en.wikipedia.org/wiki/Gas%20diffusion%20electrode
Gas diffusion electrodes (GDE) are electrodes at which solid, liquid and gaseous interfaces meet, with an electrically conducting catalyst supporting an electrochemical reaction between the liquid and the gaseous phase. Principle GDEs are used in fuel cells, where oxygen and hydrogen react at the gas diffusion electrodes to form water, while converting the chemical bond energy into electrical energy. Usually the catalyst is fixed in a porous foil, so that the liquid and the gas can interact. Besides these wetting characteristics, the gas diffusion electrode must, of course, offer optimal electric conductivity, in order to enable electron transport with low ohmic resistance. An important prerequisite for the operation of gas diffusion electrodes is that both the liquid and the gaseous phase coexist in the pore system of the electrodes, which can be described with the Young–Laplace equation: Δp = 2γ·cos(θ)/r. The gas pressure Δp needed to displace the liquid from a pore of the pore system is related to the pore radius r, the surface tension γ of the liquid and the contact angle θ. This equation is to be taken as a guide rather than as an exact determination, because there are too many unknown, or difficult to measure, parameters. When the surface tension is considered, the difference in surface tension between the solid and the liquid has to be taken into account. But the surface tension of catalysts such as platinum on carbon or silver is hardly measurable. The contact angle on a flat surface can be determined with a microscope. A single pore, however, cannot be examined, so it is necessary to characterize the pore system of an entire electrode. Thus, in order to create regions in the electrode for both liquid and gas, one can either create different pore radii r or create different wetting angles θ. Sintered electrode In this image of a sintered electrode it can be seen that three different grain sizes were used. The different layers were: top layer of fine-grained material layer from different groups gas distribution layer of coarse-grained material Most of the electrodes that were manufactured from 1950 to 1970 with the sintering method were for use in fuel cells. This type of production was dropped for economic reasons: the electrodes were thick and heavy, with a common thickness of 2 mm, while the individual layers had to be very thin and without defects. The sales price was too high and the electrodes could not be produced continuously. Principle of operation The principle of gas diffusion is illustrated in this diagram. The so-called gas distribution layer is located in the middle of the electrode. With only a small gas pressure, the electrolyte is displaced from this pore system. A small flow resistance ensures that the gas can flow freely inside the electrode. At a slightly higher gas pressure, the electrolyte in the pore system is restricted to the work layer. The surface layer itself has such fine pores that, even at pressure peaks, gas cannot flow through the electrode into the electrolyte. Such electrodes were produced by scattering and subsequent sintering or hot pressing. To produce multi-layered electrodes, a fine-grained material was scattered in a mold and smoothed. Then the other materials were applied in multiple layers and put under pressure. The production was not only error-prone but also time-consuming and difficult to automate.
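The Young–Laplace relation above also explains the graded pore sizes of the sintered electrode: the gas pressure needed to push the electrolyte out of a pore grows as the pore radius shrinks. A minimal Python sketch, with illustrative numbers only (a water-like electrolyte with γ ≈ 0.072 N/m, assumed to wet the electrode perfectly, θ = 0°):

```python
import math

def entry_pressure(gamma, theta_deg, radius):
    """Young-Laplace pressure (Pa) needed to displace a wetting liquid
    from a cylindrical pore: dp = 2*gamma*cos(theta)/r."""
    return 2 * gamma * math.cos(math.radians(theta_deg)) / radius

# Assumed values for illustration: gamma = 0.072 N/m, theta = 0 degrees.
for radius_um in (50.0, 5.0, 0.5):   # coarse gas layer ... fine surface layer
    dp = entry_pressure(0.072, 0.0, radius_um * 1e-6)
    print(f"pore radius {radius_um:5.1f} um -> entry pressure {dp / 1e3:6.1f} kPa")
```

With these numbers, a working gas pressure of a few kPa empties only the coarse gas-distribution pores (about 3 kPa for a 50 μm pore radius), while the fine pores of the surface layer would require hundreds of kPa and therefore stay flooded, which is the behaviour described above.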
Bonded electrode Since about 1970, PTFE, which is chemically stable, has been used as a binder to produce electrodes having both hydrophilic and hydrophobic properties. This means that, in places with a high proportion of PTFE, no electrolyte can penetrate the pore system, and vice versa. In that case the catalyst itself must not be hydrophobic. Variations There are two technical variations for producing PTFE–catalyst mixtures: Dispersion of water, PTFE, catalyst, emulsifiers, thickening agents... Dry mixture of PTFE powder and catalyst powder The dispersion route is chosen mainly for electrodes with polymer electrolytes, as successfully introduced in the proton exchange membrane fuel cell (PEM fuel cell) and in proton exchange membrane (PEM) or hydrochloric acid (HCl) membrane electrolysis. When used with a liquid electrolyte, the dry process is more appropriate. In the dispersion route (through evaporation of water and sintering of the PTFE at 340 °C), mechanical pressing is skipped and the produced electrodes are very porous. With fast drying methods, cracks can form in the electrodes, which can then be penetrated by the liquid electrolyte. For applications with liquid electrolytes, such as the zinc-air battery or the alkaline fuel cell, the dry mixture method is therefore used. Catalyst In acidic electrolytes the catalysts are usually precious metals like platinum, ruthenium, iridium and rhodium. In alkaline electrolytes, as in zinc-air batteries and alkaline fuel cells, it is usual to use less expensive catalysts like carbon, manganese, silver, nickel foam or nickel mesh. Application Solid electrodes were first used in the Grove cell; Francis Thomas Bacon was the first to use gas diffusion electrodes, for the Bacon fuel cell, converting hydrogen and oxygen at high temperature into electricity. Over the years, gas diffusion electrodes have been adapted for various other processes, such as: Zinc-air battery since 1980 Nickel-metal hydride battery since 1990 Chlorine production by electrolysis of waste hydrochloric acid Chloralkali process Electrochemical reduction of carbon dioxide Production GDEs are produced at all scales. They are used not only by research and development firms but also by larger companies in the production of membrane electrode assemblies (MEA), which are in most cases used in a fuel cell or battery apparatus. Companies that specialize in high-volume production of GDEs include Johnson Matthey, Gore and Gaskatel. However, there are many companies which produce custom or low-quantity GDEs, allowing different shapes, catalysts and loadings to be evaluated as well; these include FuelCellStore, FuelCellsEtc, and many others. See also Anion exchange membrane Concentration cell Electrode potential Glossary of fuel cell terms Ion transport number Ion selective electrode Liquid junction potential References Electrodes Fuel cells
Gas diffusion electrode
[ "Chemistry" ]
1,284
[ "Electrochemistry", "Electrodes" ]
13,642,373
https://en.wikipedia.org/wiki/Upstream%20and%20downstream%20%28DNA%29
In molecular biology and genetics, upstream and downstream both refer to relative positions of genetic code in DNA or RNA. Each strand of DNA or RNA has a 5' end and a 3' end, so named for the carbon position on the deoxyribose (or ribose) ring. By convention, upstream and downstream are defined relative to the 5' to 3' direction in which RNA transcription takes place. Upstream is toward the 5' end of the RNA molecule, and downstream is toward the 3' end. When considering double-stranded DNA, upstream is toward the 5' end of the coding strand for the gene in question and downstream is toward the 3' end. Due to the anti-parallel nature of DNA, this means the 3' end of the template strand is upstream of the gene and the 5' end is downstream. Some genes on the same DNA molecule may be transcribed in opposite directions. This means the upstream and downstream regions of the molecule may change depending on which gene is used as the reference. The terms upstream and downstream are sometimes also applied to a polypeptide sequence, where upstream refers to a region N-terminal of, and downstream to residues C-terminal of, a reference point. See also Upstream and downstream (transduction) References Molecular biology Orientation (geometry)
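A toy Python sketch (the sequences are invented for the example) makes the anti-parallel convention above concrete: the region upstream of a gene on the coding strand lies alongside the 3' end of the template strand.

```python
# Hypothetical sequences for illustration only.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

upstream, gene = "TATAAT", "ATGGCC"    # invented upstream region + gene start
coding = upstream + gene               # upstream sits toward the 5' end
template = coding.translate(COMPLEMENT)

print("coding   5'-" + coding + "-3'")    # upstream region on the left
print("template 3'-" + template + "-5'")  # pairs with the template's 3' side
```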
Upstream and downstream (DNA)
[ "Physics", "Chemistry", "Mathematics", "Biology" ]
260
[ "Molecular biology stubs", "Topology", "Space", "Geometry", "Molecular biology", "Biochemistry", "Spacetime", "Orientation (geometry)" ]
13,643,452
https://en.wikipedia.org/wiki/Pole%20cell
In early Drosophila development, the embryo passes through thirteen nuclear divisions (karyokinesis) without cytokinesis, resulting in a multinucleate cell (generally referred to as a syncytium, but strictly a coenocyte). Pole cells are the cells that form at the posterior pole of the Drosophila egg and give rise to the adult germ cells. Pole plasm functions to bud the pole cells, as well as to restore fertility, even when the cell was previously sterile. Formation During early development of Drosophila, pole plasm assembles at the posterior pole of the embryo, allowing determination of the abdominal patterning. Late in oogenesis, polar organelles, which are electro-negative granules, are present in the pole plasm. As the pole plasm matures, it continues to contain polar granules through the formation of the germ cells, which develop into adult germ cells. Serine protease activity begins less than 2 hours after the budding of the pole cells from the pole plasm and ends just prior to the movement of the pole cells via gastrulation. The patterning of the pole cells is determined by the activation of oskar, which acts in the determination of body patterning segments. Pole cells begin their migration in a cluster in the midgut primordium. To reach their final destination, pole cells must migrate through the epithelial wall. It is known that the cells migrate through the epithelial wall, but little is known about the mechanisms used to do so. References Mitosis
Pole cell
[ "Biology" ]
334
[ "Cellular processes", "Mitosis" ]
13,646,381
https://en.wikipedia.org/wiki/Open%20Virtualization%20Format
Open Virtualization Format (OVF) is an open standard for packaging and distributing virtual appliances or, more generally, software to be run in virtual machines. The standard describes an "open, secure, portable, efficient and extensible format for the packaging and distribution of software to be run in virtual machines". The OVF standard is not tied to any particular hypervisor or instruction set architecture. The unit of packaging and distribution is a so-called OVF Package, which may contain one or more virtual systems, each of which can be deployed to a virtual machine. History In September 2007, VMware, Dell, HP, IBM, Microsoft and XenSource submitted to the Distributed Management Task Force (DMTF) a proposal for OVF, then named "Open Virtual Machine Format". The DMTF subsequently released the OVF Specification V1.0.0 as a preliminary standard in September 2008, and V1.1.0 in January 2010. In January 2013, DMTF released the second version of the standard, OVF 2.0, which applies to emerging cloud use cases and provides important developments over OVF 1.0, including improved network configuration support and package encryption capabilities for safe delivery. ANSI has ratified OVF 1.1.0 as ANSI standard INCITS 469-2010. OVF 1.1 was adopted in August 2011 by ISO/IEC JTC 1/SC 38 of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) as International Standard ISO/IEC 17203. OVF 2.0 brings an enhanced set of capabilities to the packaging of virtual machines, making the standard applicable to a broader range of cloud use cases that are emerging as the industry enters the cloud era. The most significant improvements include support for network configuration along with the ability to encrypt the package to ensure safe delivery. Design An OVF package consists of several files placed in one directory. An OVF package always contains exactly one OVF descriptor (a file with extension .ovf). The OVF descriptor is an XML file which describes the packaged virtual machine; it contains the metadata for the OVF package, such as name, hardware requirements, references to the other files in the OVF package, and human-readable descriptions. In addition to the OVF descriptor, the OVF package will typically contain one or more disk images, and optionally certificate files and other auxiliary files. The entire directory can be distributed as an Open Virtual Appliance (OVA) package, which is a tar archive file with the OVF directory inside. Industry support OVF has generally been broadly accepted. Several virtualization players in the industry have announced support for OVF. See also VHD (file format) VMDK References External links DMTF OVF Whitepaper VMware OVF Whitepaper Computer standards DMTF standards ISO/IEC standards Open standards Virtualization
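Because an OVA is nothing more than a tar archive of the OVF package directory, producing one can be sketched in a few lines of Python. This is a minimal illustration rather than a reference implementation; the paths are hypothetical, and it follows the convention that the .ovf descriptor is stored as the first member of the archive.

```python
import tarfile
from pathlib import Path

def make_ova(ovf_dir, ova_path):
    """Pack an OVF package directory into an OVA (a plain tar archive)."""
    ovf_dir = Path(ovf_dir)
    descriptors = sorted(ovf_dir.glob("*.ovf"))
    if len(descriptors) != 1:
        raise ValueError("an OVF package contains exactly one .ovf descriptor")
    with tarfile.open(ova_path, "w") as tar:       # OVA = uncompressed tar
        tar.add(str(descriptors[0]), arcname=descriptors[0].name)  # descriptor first
        for f in sorted(ovf_dir.iterdir()):        # disk images, manifest, ...
            if f != descriptors[0]:
                tar.add(str(f), arcname=f.name)

# Usage (hypothetical paths): make_ova("my-appliance/", "my-appliance.ova")
```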
Open Virtualization Format
[ "Technology", "Engineering" ]
630
[ "Computer standards", "DMTF standards", "Virtualization", "Computer networks engineering" ]
13,649,480
https://en.wikipedia.org/wiki/Syndicat%20de%20l%27Architecture
The Syndicat de l'Architecture is a French labor union for architects co-founded by Jean Nouvel. External links Website Architecture organizations Trade unions in France
Syndicat de l'Architecture
[ "Engineering" ]
39
[ "Architecture organizations", "Architecture" ]
13,651,046
https://en.wikipedia.org/wiki/Double%20layer%20%28plasma%20physics%29
A double layer is a structure in a plasma consisting of two parallel layers of opposite electrical charge. The sheets of charge, which are not necessarily planar, produce localised excursions of electric potential, resulting in a relatively strong electric field between the layers and weaker but more extensive compensating fields outside, which restore the global potential. Ions and electrons within the double layer are accelerated, decelerated, or deflected by the electric field, depending on their direction of motion. Double layers can be created in discharge tubes, where sustained energy is provided within the layer for electron acceleration by an external power source. Double layers are claimed to have been observed in the aurora and are invoked in astrophysical applications. Similarly, a double layer in the auroral region requires some external driver to produce electron acceleration. Electrostatic double layers are especially common in current-carrying plasmas, and are very thin (typically tens of Debye lengths) compared to the sizes of the plasmas that contain them. Other names for a double layer are electrostatic double layer, electric double layer, and plasma double layer. The term ‘electrostatic shock’ in the magnetosphere has been applied to electric fields oriented at an oblique angle to the magnetic field, in such a way that the perpendicular electric field is much stronger than the parallel electric field. In laser physics, a double layer is sometimes called an ambipolar electric field. Double layers are conceptually related to the concept of a 'sheath' (see Debye sheath). An early review of double layers from laboratory experiments and simulations is provided by Torvén. Classification Double layers may be classified in the following ways: Weak and strong double layers. The strength of a double layer is expressed as the ratio of the potential drop to the plasma's equivalent thermal energy, or to the rest mass energy of the electrons. A double layer is said to be strong if the potential drop within the layer is greater than the equivalent thermal energy of the plasma's components. Relativistic or non-relativistic double layers. A double layer is said to be relativistic if the potential drop within the layer is comparable to the rest mass energy of the electron (~511 keV). Double layers of such energy are to be found in laboratory experiments. The charge density is low between the two opposing potential regions, and in that respect the double layer is similar to the charge distribution in a capacitor. Current carrying double layers These double layers may be generated by current-driven plasma instabilities that amplify variations of the plasma density. One example of these instabilities is the Farley–Buneman instability, which occurs when the streaming velocity of electrons (basically the current density divided by the electron density) exceeds the electron thermal velocity of the plasma. It occurs in collisional plasmas having a neutral component, and is driven by drift currents. Current-free double layers These occur at the boundary between plasma regions with different plasma properties. A plasma may have a higher electron temperature, and thermal velocity, on one side of a boundary layer than on the other. The same may apply for plasma densities. Charged particles exchanged between the regions may enable potential differences to be maintained between them locally. The overall charge density, as in all double layers, will be neutral.
Potential imbalance will be neutralised by electron (1&3) and ion (2&4) migration, unless the potential gradients are sustained by an external energy source. Under most laboratory situations, unlike outer space conditions, charged particles may effectively originate within the double layer, by ionization at the anode or cathode, and be sustained. The figure shows the localised perturbation of potential produced by an idealised double layer consisting of two oppositely charged discs. The perturbation is zero at a distance from the double layer in every direction. If an incident charged particle, such as a precipitating auroral electron, encounters such a static or quasistatic structure in the magnetosphere, provided that the particle energy exceeds half the electric potential difference within the double layer, it will pass through without any net change in energy. Incident particles with less energy than this will also experience no net change in energy but will undergo more overall deflection. Four distinct regions of a double layer can be identified, which affect charged particles passing through it, or within it: A positive potential side of the double layer where electrons are accelerated towards it; A positive potential within the double layer where electrons are decelerated; A negative potential within the double layer where electrons are decelerated; and A negative potential side of the double layer where electrons are accelerated. Double layers will tend to be transient in the magnetosphere, as any charge imbalance will become neutralised, unless there is a sustained external source of energy to maintain them as there is under laboratory conditions. Formation mechanisms The details of the formation mechanism depend on the environment of the plasma (e.g. double layers in the laboratory, ionosphere, solar wind, nuclear fusion, etc.). Proposed mechanisms for their formation have included: 1971: Between plasmas of different temperatures 1976: In laboratory plasmas 1982: Disruption of a neutral current sheet 1983: Injection of non-neutral electron current into a cold plasma 1985: Increasing the current density in a plasma 1986: In the accretion column of a neutron star 1986: By pinches in cosmic plasma regions 1987: In a plasma constrained by a magnetic mirror 1988: By an electrical discharge 1988: Current-driven instabilities (strong double layers) 1988: Spacecraft-ejected electron beams 1989: From shock waves in a plasma 2000: Laser radiation 2002: When magnetic field-aligned currents encounter density cavities 2003: By the incidence of plasma on the dark side of the Moon's surface. See picture. Features and characteristics Thickness: The production of a double layer requires regions with a significant excess of positive or negative charge, that is, where quasi-neutrality is violated. In general, quasi-neutrality can only be violated on scales of the Debye length. The thickness of a double layer is of the order of ten Debye lengths, which is a few centimeters in the ionosphere, a few tens of meters in the interplanetary medium, and tens of kilometers in the intergalactic medium. Electrostatic potential distribution: As described under double layer classification above, there are effectively four distinct regions of a double layer where incoming charged particles will be accelerated or decelerated along their trajectory . Within the double layer the two opposing charge distributions will tend to become neutralised by internal charged particle motion. 
Particle flux: For non-relativistic current-carrying double layers, electrons carry most of the current. The Langmuir condition states that the ratio of the electron to the ion current across the layer is given by the square root of the mass ratio of the ions to the electrons, i.e. I_e/I_i = √(m_i/m_e), which is about 43 for hydrogen. For relativistic double layers the current ratio is 1; i.e. the current is carried equally by electrons and ions. Energy supply: The instantaneous voltage drop across a current-carrying double layer is proportional to the total current, and is similar to that across a resistive element (or load), which dissipates energy in an electric circuit. A double layer cannot supply net energy on its own. Stability: Double layers in laboratory plasmas may be stable or unstable depending on the parameter regime. Various types of instabilities may occur, often arising due to the formation of beams of ions and electrons. Unstable double layers are noisy in the sense that they produce oscillations across a wide frequency band. A lack of plasma stability may also lead to a sudden change in configuration, often referred to as an explosion (and hence exploding double layer). In one example, the region enclosed in the double layer rapidly expands and evolves. An explosion of this type was first discovered in mercury arc rectifiers used in high-power direct-current transmission lines, where the voltage drop across the device was seen to increase by several orders of magnitude. Double layers may also drift, usually in the direction of the emitted electron beam, and in this respect are natural analogues of the smooth-bore magnetron. Magnetised plasmas: Double layers can form in both magnetised and unmagnetised plasmas. Cellular nature: While double layers are relatively thin, they will spread over the entire cross-section of a laboratory container. Likewise, where adjacent plasma regions have different properties, double layers will form and tend to cellularise the different regions. Energy transfer: Double layers can facilitate the transfer of electrical energy into kinetic energy, dW/dt = I·ΔV, where I is the electric current dissipating energy into a double layer with a voltage drop of ΔV. Alfvén points out that the current may well consist exclusively of low-energy particles. Torvén et al. have postulated that plasma may spontaneously transfer magnetically stored energy into kinetic energy by electric double layers. No credible mechanism for producing such double layers has been presented, however. Ion thrusters can provide a more direct case of energy transfer from opposing potentials in the form of double layers produced by an external electric field. Oblique double layer: An oblique double layer has electric fields that are not parallel to the ambient magnetic field; i.e., it is not field-aligned. Simulation: Double layers may be modelled using kinetic computer models like particle-in-cell (PIC) simulations. In some cases the plasma is treated as essentially one- or two-dimensional to reduce the computational cost of a simulation. Bohm criterion: A double layer cannot exist under all circumstances. In order to produce an electric field that vanishes at the boundaries of the double layer, an existence criterion says that there is a maximum to the temperature of the ambient plasma. This is the so-called Bohm criterion. Bio-physical analogy: A model of plasma double layers has been used to investigate its applicability to understanding ion transport across biological cell membranes.
Brazilian researchers have noted that "Concepts like charge neutrality, Debye length, and double layer are very useful to explain the electrical properties of a cellular membrane." Plasma physicist Hannes Alfvén also noted the association of double layers with cellular structure, as had Irving Langmuir before him, who coined the term "plasma" for its resemblance to blood plasma. History It was already known in the 1920s that a plasma has a limited capacity for current maintenance; Irving Langmuir characterized double layers in the laboratory and called these structures double-sheaths. In the 1950s a thorough study of double layers started in the laboratory. Many groups are still working on this topic theoretically, experimentally and numerically. It was first proposed by Hannes Alfvén (the developer of magnetohydrodynamics from laboratory experiments) that the polar lights, or Aurora Borealis, are created by electrons accelerated in the magnetosphere of the Earth. He supposed that the electrons were accelerated electrostatically by an electric field localized in a small volume bounded by two charged regions, and that this so-called double layer would accelerate electrons earthwards. Since then other mechanisms involving wave-particle interactions have been proposed as being feasible, from extensive spatial and temporal in situ studies of auroral particle characteristics. Many investigations of the magnetosphere and auroral regions have been made using rockets and satellites. McIlwain discovered from a rocket flight in 1960 that the energy spectrum of auroral electrons exhibited a peak that was then thought to be too sharp to be produced by a random process, which suggested, therefore, that an ordered process was responsible. It was reported in 1977 that satellites had detected the signature of double layers as electrostatic shocks in the magnetosphere. Indications of electric fields parallel to the geomagnetic field lines were obtained by the Viking satellite, which measured the differential potential structures in the magnetosphere with probes mounted on 40 m long booms. These probes measured the local particle density and the potential difference between two points 80 m apart. Asymmetric potential excursions with respect to 0 V were measured, and interpreted as a double layer with a net potential within the region. Magnetospheric double layers typically have a strength (where the electron temperature is assumed to lie in the range ) and are therefore weak. A series of such double layers would tend to merge, much like a string of bar magnets, and dissipate, even within a rarefied plasma. It has yet to be explained how any overall localised charge distribution in the form of double layers might provide a source of energy for auroral electrons precipitated into the atmosphere. Interpretation of the FAST spacecraft data proposed strong double layers in the auroral acceleration region. Strong double layers have also been reported in the downward current region by Andersson et al. Parallel electric fields with amplitudes reaching nearly 1 V/m were inferred to be confined to a thin layer of approximately 10 Debye lengths. It is stated that the structures moved ‘at roughly the ion acoustic speed in the direction of the accelerated electrons, i.e., anti-earthward.’ That raises the question of what role, if any, double layers might play in accelerating auroral electrons that are precipitated downwards into the atmosphere from the magnetosphere.
Double layers have also been found in the Earth's magnetosphere by the space missions Cluster and MMS. The possible role of precipitating electrons of 1–10 keV in themselves generating such observed double layers or electric fields has seldom been considered or analysed. Equally, the general question of how such double layers might be generated from an alternative source of energy, or what the spatial distribution of electric charge might be to produce net energy changes, is seldom addressed. Under laboratory conditions an external power supply is available. In the laboratory, double layers can be created in different devices. They are investigated in double plasma machines, triple plasma machines, and Q-machines. The stationary potential structures that can be measured in these machines agree very well with what one would expect theoretically. An example of a laboratory double layer can be seen in the figure below, taken from Torvén and Lindberg (1980), which shows how well defined and confined the potential drop of a double layer in a double plasma machine is. One of the interesting aspects of the experiment by Torvén and Lindberg (1980) is that not only did they measure the potential structure in the double plasma machine but they also found high-frequency fluctuating electric fields at the high-potential side of the double layer (also shown in the figure). These fluctuations are probably due to a beam-plasma interaction outside the double layer, which excites plasma turbulence. Their observations are consistent with experiments on electromagnetic radiation emitted by double layers in a double plasma machine by Volwerk (1993), who, however, also observed radiation from the double layer itself. The power of these fluctuations has a maximum around the plasma frequency of the ambient plasma. It was later reported that the electrostatic high-frequency fluctuations near the double layer can be concentrated in a narrow region, sometimes called the hf-spike. Subsequently, both radio emissions, near the plasma frequency, and whistler waves at much lower frequencies were seen to emerge from this region. Similar whistler wave structures were observed together with electron beams near Saturn's moon Enceladus, suggesting the possible presence of a double layer at lower altitude. A recent development in double layer experiments in the laboratory is the investigation of so-called stairstep double layers. It has been observed that a potential drop in a plasma column can be divided into different parts. Transitions from a single double layer into two-, three-, or greater-step double layers are strongly sensitive to the boundary conditions of the plasma. Unlike experiments in the laboratory, the concept of such double layers in the magnetosphere, and of any role for them in creating the aurora, suffers from the fact that no steady source of energy to sustain them has so far been identified. The electric potential characteristic of double layers might, however, indicate that those observed in the auroral zone are a secondary product of precipitating electrons that have been energized in other ways, such as by electrostatic waves. Some scientists have suggested a role for double layers in solar flares. Such a role is even harder to verify than that of double layers as accelerators of auroral electrons within the Earth's magnetosphere, and serious questions have been raised about their role even there.
Footnotes External links Numerical modeling of low-pressure plasmas: applications to electric double layers (2006, PDF), A. Meige, PhD thesis References Alfvén, H., On the theory of magnetic storms and aurorae, Tellus, 10, 104, 1958. Peratt, A., Physics of the Plasma Universe, 1991 Raadu, M. A., The physics of double layers and their role in astrophysics, Physics Reports, 178, 25–97, 1989. Plasma phenomena
Double layer (plasma physics)
[ "Physics" ]
3,464
[ "Plasma phenomena", "Physical phenomena", "Plasma physics" ]
13,651,338
https://en.wikipedia.org/wiki/Interface%20and%20colloid%20science
Interface and colloid science is an interdisciplinary intersection of branches of chemistry, physics, nanoscience and other fields dealing with colloids: heterogeneous systems consisting of a mechanical mixture of particles between 1 nm and 1000 nm dispersed in a continuous medium. A colloidal solution is a heterogeneous mixture in which the particle size of the substance is intermediate between that of a true solution and that of a suspension, i.e. between 1 and 1000 nm. Smoke from a fire is an example of a colloidal system, in which tiny particles of solid float in air. Just as in true solutions, colloidal particles are small and cannot be seen by the naked eye. They easily pass through filter paper, but colloidal particles are big enough to be blocked by parchment paper or an animal membrane. Interface and colloid science has applications and ramifications in the chemical industry, pharmaceuticals, biotechnology, ceramics, minerals, nanotechnology, and microfluidics, among others. There are many books dedicated to this scientific discipline, and there is a glossary of terms, Nomenclature in Dispersion Science and Technology, published by the US National Institute of Standards and Technology. See also Interface (matter) Electrokinetic phenomena Surface science References External links Max Planck Institute of Colloids and Interfaces American Chemical Society division of Colloid & Surface Chemistry Chemical mixtures Colloidal chemistry Condensed matter physics
Interface and colloid science
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
282
[ "Colloidal chemistry", "Phases of matter", "Materials science", "Colloids", "Surface science", "Chemical mixtures", "Condensed matter physics", "nan", "Matter" ]
7,422,711
https://en.wikipedia.org/wiki/Energy%20budget
An energy budget is a balance sheet of energy income against expenditure. It is studied in the field of energetics, which deals with the study of energy transfer and transformation from one form to another. The calorie is the basic unit of measurement. An organism in a laboratory experiment is an open thermodynamic system, exchanging energy with its surroundings in three ways - heat, work and the potential energy of biochemical compounds. Organisms use ingested food resources (C=consumption) as building blocks in the synthesis of tissues (P=production) and as fuel in the metabolic processes that power this synthesis and other physiological processes (R=respiratory loss). Some of the resources are lost as waste products (F=faecal loss, U=urinary loss). All these aspects of metabolism can be represented in energy units. The basic model of an energy budget may be written as: P = C - R - U - F or P = C - (R + U + F) or C = P + R + U + F All the aspects of metabolism can be represented in energy units (e.g. joules (J); 1 calorie ≈ 4.2 J, 1 kcal ≈ 4.2 kJ). The energy used for metabolism will be R = C - (F + U + P) and the energy used in maintenance will be R + F + U = C - P Endothermy and ectothermy Energy budget allocation varies for endotherms and ectotherms. Ectotherms rely on the environment as a heat source, while endotherms maintain their body temperature through the regulation of metabolic processes. The heat produced in association with metabolic processes facilitates the active lifestyles of endotherms and their ability to travel far distances over a range of temperatures in search of food. Ectotherms are limited by the ambient temperature of the environment around them, but the lack of substantial metabolic heat production accounts for an energetically inexpensive metabolic rate. The energy demands of ectotherms are generally one tenth of those of endotherms. References Kumar, Ranjan (1999): Studies on bioenergetics modelling in a fresh water fish, Mystus vittatus (Bloch), Ph.D. thesis, Magadh University, Bodh Gaya. Braaten, B.R. (1976): Bioenergetics - a review on methodology. In: Halver, J. E. and K. Tiews (eds). Finfish Nutrition and Finfish Technology, vol. II, pp. 461–504. Berlin, Hennemann. Brett, J. R. and T. D. D. Groves (1979): Physiological energetics. In: W.S. Hoar, D.J. Randall and J. R. Brett (eds). Fish Physiology, Vol. VII, pp. 279–352. N.Y.; A.P. Cui, Y. and R. J. Wootton (1988): Bioenergetics of growth of a cyprinid Phoxinus phoxinus: the effect of ration, temperature, and body size on food consumption, faecal and nitrogen excretion. J. Fish Biol., 33: 431–443. Elliott, J.M. and L. Persson (1978): The estimation of daily rates of food consumption for fish. J. Anim. Ecol. 47, 977. Fischer, Z. (1983): The elements of energy balance in grass carp (Ctenopharyngodon idella), part IV, consumption rate of grass carp fed on different types of food. Kerr, S.R. (1982): Estimating the energy budgets of actively predatory fish. Can. J. Fish. Aquat. Sci., 39: 371. Kleiber, M. (1961): The Fire of Life - An Introduction to Animal Energetics. Wiley, New York. Prabhakar, A. K. (1997): Studies on energy budget in a siluroid fish, Heteropneustes fossilis (Bloch), Ph.D. thesis, Magadh University, Bodh Gaya. Ray, A. K. and B. C. Patra (1987): Method for collecting fish faeces for studying the digestibility of feeds. J. Inland Fish. Soc. India. 19(I): 71–73. Sengupta, A. and Amitta Moitra (1996): Energy budget in relation to various dietary conditions in snake-headed murrel, Channa punctatus. Proc. 83rd ISCA: ABS No.
95: pp. 56. Staples, D.J. and M. Nomura (1976): Influence of body size and food ration on the energy budget of rainbow trout, Salmo gairdneri (Richardson). J. Fish Biol. 9, 29. Von Bertalanffy, L. (1957): Quantitative laws in metabolism and growth. Quart. Rev. Biol. 32: 217–231. Warren, C.E. and G.E. Davies (1967): Laboratory studies on the feeding bioenergetics and growth of fish. In: Gerking, S.D. (ed.). The Biological Basis for Freshwater Fish Production. pp. 175–214. Oxford, Blackwell. Budgets Biology
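A short worked example of the budget identities above (all numbers are invented for illustration):

```python
# Hypothetical daily budget for a laboratory fish, all values in joules.
C = 1000.0   # consumption
F = 200.0    # faecal loss
U = 50.0     # urinary loss
P = 250.0    # production (new tissue)

R = C - (F + U + P)           # respiratory loss closes the budget: 500.0 J
maintenance = R + F + U       # energy not fixed as tissue: 750.0 J = C - P
assert C == P + R + U + F     # the balance sheet must sum to consumption
```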
Energy budget
[ "Biology" ]
1,114
[ "Physiology" ]
7,428,961
https://en.wikipedia.org/wiki/Hadamard%20code
The Hadamard code is an error-correcting code named after the French mathematician Jacques Hadamard that is used for error detection and correction when transmitting messages over very noisy or unreliable channels. In 1971, the code was used to transmit photos of Mars back to Earth from the NASA space probe Mariner 9. Because of its unique mathematical properties, the Hadamard code is not only used by engineers, but also intensely studied in coding theory, mathematics, and theoretical computer science. The Hadamard code is also known under the names Walsh code, Walsh family, and Walsh–Hadamard code in recognition of the American mathematician Joseph Leonard Walsh. The Hadamard code is an example of a linear code of length over a binary alphabet. Unfortunately, this term is somewhat ambiguous as some references assume a message length while others assume a message length of . In this article, the first case is called the Hadamard code while the second is called the augmented Hadamard code. The Hadamard code is unique in that each non-zero codeword has a Hamming weight of exactly , which implies that the distance of the code is also . In standard coding theory notation for block codes, the Hadamard code is a -code, that is, it is a linear code over a binary alphabet, has block length , message length (or dimension) , and minimum distance . The block length is very large compared to the message length, but on the other hand, errors can be corrected even in extremely noisy conditions. The augmented Hadamard code is a slightly improved version of the Hadamard code; it is a -code and thus has a slightly better rate while maintaining the relative distance of , and is thus preferred in practical applications. In communication theory, this is simply called the Hadamard code and it is the same as the first order Reed–Muller code over the binary alphabet. Normally, Hadamard codes are based on Sylvester's construction of Hadamard matrices, but the term “Hadamard code” is also used to refer to codes constructed from arbitrary Hadamard matrices, which are not necessarily of Sylvester type. In general, such a code is not linear. Such codes were first constructed by Raj Chandra Bose and Sharadchandra Shankar Shrikhande in 1959. If n is the size of the Hadamard matrix, the code has parameters , meaning it is a not-necessarily-linear binary code with 2n codewords of block length n and minimal distance n/2. The construction and decoding scheme described below apply for general n, but the property of linearity and the identification with Reed–Muller codes require that n be a power of 2 and that the Hadamard matrix be equivalent to the matrix constructed by Sylvester's method. The Hadamard code is a locally decodable code, which provides a way to recover parts of the original message with high probability, while only looking at a small fraction of the received word. This gives rise to applications in computational complexity theory and particularly in the design of probabilistically checkable proofs. Since the relative distance of the Hadamard code is 1/2, normally one can only hope to recover from at most a 1/4 fraction of error. Using list decoding, however, it is possible to compute a short list of possible candidate messages as long as fewer than of the bits in the received word have been corrupted. In code-division multiple access (CDMA) communication, the Hadamard code is referred to as Walsh Code, and is used to define individual communication channels. 
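The orthogonality that lets Walsh codewords define separate CDMA channels, and the minimum distance of n/2 noted above, can be checked directly with a small Python/NumPy sketch of Sylvester's doubling construction (a minimal illustration; the variable names are ours):

```python
import numpy as np

def sylvester(k):
    """2^k x 2^k Hadamard matrix from Sylvester's doubling construction."""
    h = np.array([[1]])
    for _ in range(k):
        h = np.block([[h, h], [h, -h]])
    return h

H = sylvester(5)                            # 32 x 32, as used for Walsh codes
assert (H @ H.T == 32 * np.eye(32)).all()   # rows are mutually orthogonal

# In 0/1 form, any two distinct rows differ in exactly n/2 = 16 positions.
bits = (1 - H) // 2
assert all((bits[i] != bits[j]).sum() == 16
           for i in range(32) for j in range(32) if i != j)
```

The zero dot product between distinct rows is what makes a signal modulated with one Walsh codeword average out to noise in a receiver correlating against a different codeword.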
It is usual in the CDMA literature to refer to codewords as “codes”. Each user will use a different codeword, or “code”, to modulate their signal. Because Walsh codewords are mathematically orthogonal, a Walsh-encoded signal appears as random noise to a CDMA capable mobile terminal, unless that terminal uses the same codeword as the one used to encode the incoming signal. History Hadamard code is the name that is most commonly used for this code in the literature. However, in modern use these error correcting codes are referred to as Walsh–Hadamard codes. There is a reason for this: Jacques Hadamard did not invent the code himself, but he defined Hadamard matrices around 1893, long before the first error-correcting code, the Hamming code, was developed in the 1940s. The Hadamard code is based on Hadamard matrices, and while there are many different Hadamard matrices that could be used here, normally only Sylvester's construction of Hadamard matrices is used to obtain the codewords of the Hadamard code. James Joseph Sylvester developed his construction of Hadamard matrices in 1867, which actually predates Hadamard's work on Hadamard matrices. Hence the name Hadamard code is disputed and sometimes the code is called Walsh code, honoring the American mathematician Joseph Leonard Walsh. An augmented Hadamard code was used during the 1971 Mariner 9 mission to correct for picture transmission errors. The binary values used during this mission were 6 bits long, which represented 64 grayscale values. Because of limitations of the quality of the alignment of the transmitter at the time (due to Doppler Tracking Loop issues) the maximum useful data length was about 30 bits. Instead of using a repetition code, a [32, 6, 16] Hadamard code was used. Errors of up to 7 bits per 32-bit word could be corrected using this scheme. Compared to a 5-repetition code, the error correcting properties of this Hadamard code are much better, yet its rate is comparable. The efficient decoding algorithm was an important factor in the decision to use this code. The circuitry used was called the "Green Machine". It employed the fast Fourier transform which can increase the decoding speed by a factor of three. Since the 1990s use of this code by space programs has more or less ceased, and the NASA Deep Space Network does not support this error correction scheme for its dishes that are greater than 26 m. Constructions While all Hadamard codes are based on Hadamard matrices, the constructions differ in subtle ways for different scientific fields, authors, and uses. Engineers, who use the codes for data transmission, and coding theorists, who analyse extremal properties of codes, typically want the rate of the code to be as high as possible, even if this means that the construction becomes mathematically slightly less elegant. On the other hand, for many applications of Hadamard codes in theoretical computer science it is not so important to achieve the optimal rate, and hence simpler constructions of Hadamard codes are preferred since they can be analyzed more elegantly. Construction using inner products When given a binary message of length , the Hadamard code encodes the message into a codeword using an encoding function This function makes use of the inner product of two vectors , which is defined as follows: Then the Hadamard encoding of is defined as the sequence of all inner products with : As mentioned above, the augmented Hadamard code is used in practice since the Hadamard code itself is somewhat wasteful. 
This is because, if the first bit of is zero, , then the inner product contains no information whatsoever about , and hence, it is impossible to fully decode from those positions of the codeword alone. On the other hand, when the codeword is restricted to the positions where , it is still possible to fully decode . Hence it makes sense to restrict the Hadamard code to these positions, which gives rise to the augmented Hadamard encoding of ; that is, . Construction using a generator matrix The Hadamard code is a linear code, and all linear codes can be generated by a generator matrix . This is a matrix such that holds for all , where the message is viewed as a row vector and the vector-matrix product is understood in the vector space over the finite field . In particular, an equivalent way to write the inner product definition for the Hadamard code arises by using the generator matrix whose columns consist of all strings of length , that is, where is the -th binary vector in lexicographical order. For example, the generator matrix for the Hadamard code of dimension is: The matrix is a -matrix and gives rise to the linear operator . The generator matrix of the augmented Hadamard code is obtained by restricting the matrix to the columns whose first entry is one. For example, the generator matrix for the augmented Hadamard code of dimension is: Then is a linear mapping with . For general , the generator matrix of the augmented Hadamard code is a parity-check matrix for the extended Hamming code of length and dimension , which makes the augmented Hadamard code the dual code of the extended Hamming code. Hence an alternative way to define the Hadamard code is in terms of its parity-check matrix: the parity-check matrix of the Hadamard code is equal to the generator matrix of the Hamming code. Construction using general Hadamard matrices Hadamard codes are obtained from an n-by-n Hadamard matrix H. In particular, the 2n codewords of the code are the rows of H and the rows of −H. To obtain a code over the alphabet {0,1}, the mapping −1 ↦ 1, 1 ↦ 0, or, equivalently, x ↦ (1 − x)/2, is applied to the matrix elements. That the minimum distance of the code is n/2 follows from the defining property of Hadamard matrices, namely that their rows are mutually orthogonal. This implies that two distinct rows of a Hadamard matrix differ in exactly n/2 positions, and, since negation of a row does not affect orthogonality, that any row of H differs from any row of −H in n/2 positions as well, except when the rows correspond, in which case they differ in n positions. To get the augmented Hadamard code above with , the chosen Hadamard matrix H has to be of Sylvester type, which gives rise to a message length of . Distance The distance of a code is the minimum Hamming distance between any two distinct codewords, i.e., the minimum number of positions at which two distinct codewords differ. Since the Walsh–Hadamard code is a linear code, the distance is equal to the minimum Hamming weight among all of its non-zero codewords. All non-zero codewords of the Walsh–Hadamard code have a Hamming weight of exactly by the following argument. Let be a non-zero message. Then the following value is exactly equal to the fraction of positions in the codeword that are equal to one: The fact that the latter value is exactly is called the random subsum principle. To see that it is true, assume without loss of generality that . Then, when conditioned on the values of , the event is equivalent to for some depending on and . 
The probability that happens is exactly . Thus, in fact, all non-zero codewords of the Hadamard code have relative Hamming weight , and thus, its relative distance is . The relative distance of the augmented Hadamard code is as well, but it no longer has the property that every non-zero codeword has weight exactly since the all s vector is a codeword of the augmented Hadamard code. This is because the vector encodes to . Furthermore, whenever is non-zero and not the vector , the random subsum principle applies again, and the relative weight of is exactly . Local decodability A locally decodable code is a code that allows a single bit of the original message to be recovered with high probability by only looking at a small portion of the received word. A code is -query locally decodable if a message bit, , can be recovered by checking bits of the received word. More formally, a code, , is -locally decodable, if there exists a probabilistic decoder, , such that (Note: represents the Hamming distance between vectors and ): , implies that Theorem 1: The Walsh–Hadamard code is -locally decodable for all . Lemma 1: For all codewords, in a Walsh–Hadamard code, , , where represent the bits in in positions and respectively, and represents the bit at position . Proof of lemma 1 Let be the codeword in corresponding to message . Let be the generator matrix of . By definition, . From this, . By the construction of , . Therefore, by substitution, . Proof of theorem 1 To prove theorem 1 we will construct a decoding algorithm and prove its correctness. Algorithm Input: Received word For each : Pick uniformly at random. Pick such that , where is the -th standard basis vector and is the bitwise xor of and . . Output: Message Proof of correctness For any message, , and received word such that differs from on at most fraction of bits, can be decoded with probability at least . By lemma 1, . Since and are picked uniformly, the probability that is at most . Similarly, the probability that is at most . By the union bound, the probability that either or do not match the corresponding bits in is at most . If both and correspond to , then lemma 1 will apply, and therefore, the proper value of will be computed. Therefore, the probability is decoded properly is at least . Therefore, and for to be positive, . Therefore, the Walsh–Hadamard code is locally decodable for . Optimality For k ≤ 7 the linear Hadamard codes have been proven optimal in the sense of minimum distance. See also Zadoff–Chu sequence — improve over the Walsh–Hadamard codes References Further reading (xiv+225 pages) Coding theory Error detection and correction
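To make the encoding and the 2-query local decoding described above concrete, here is a minimal Python sketch (ours; it follows the inner-product construction and the randomized decoder outlined in this article, with the number of trials and the corruption level chosen arbitrarily for the demonstration):

```python
import random

def encode(m):
    """Hadamard encoding: one bit <m, x> (mod 2) for every x in {0,1}^k."""
    k = len(m)
    return [sum(mi & ((x >> j) & 1) for j, mi in enumerate(m)) % 2
            for x in range(2 ** k)]

def decode_bit(y, k, i, trials=101):
    """2-query local decoder for message bit i (majority over random trials)."""
    votes = 0
    for _ in range(trials):
        x = random.randrange(2 ** k)
        votes += y[x] ^ y[x ^ (1 << i)]   # <m,x> + <m,x+e_i> = m_i (mod 2)
    return int(votes > trials // 2)

# Corrupt a 1/8 fraction (< 1/4) of the codeword; decoding still succeeds
# with overwhelming probability.
m = [1, 0, 1, 1, 0]
y = encode(m)
for pos in random.sample(range(len(y)), len(y) // 8):
    y[pos] ^= 1
assert [decode_bit(y, len(m), i) for i in range(len(m))] == m
```

Each trial returns m_i whenever both queried positions are uncorrupted, which by the union bound happens with probability at least 1 − 2δ for a δ-corrupted word; majority voting over independent trials then drives the per-bit error probability down, mirroring the local-decodability argument sketched above.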
Hadamard code
[ "Mathematics", "Engineering" ]
2,842
[ "Discrete mathematics", "Coding theory", "Reliability engineering", "Error detection and correction" ]
7,430,174
https://en.wikipedia.org/wiki/Discrete%20dipole%20approximation
Discrete dipole approximation (DDA), also known as coupled dipole approximation, is a method for computing scattering of radiation by particles of arbitrary shape and by periodic structures. Given a target of arbitrary geometry, one seeks to calculate its scattering and absorption properties by approximating the continuum target with a finite array of small polarizable dipoles. This technique is used in a variety of applications including nanophotonics, radar scattering, aerosol physics and astrophysics. Basic concepts The basic idea of the DDA was introduced in 1964 by DeVoe, who applied it to study the optical properties of molecular aggregates; retardation effects were not included, so DeVoe's treatment was limited to aggregates that were small compared with the wavelength. The DDA, including retardation effects, was proposed in 1973 by Purcell and Pennypacker, who used it to study interstellar dust grains. Simply stated, the DDA is an approximation of the continuum target by a finite array of polarizable points. The points acquire dipole moments in response to the local electric field. The dipoles interact with one another via their electric fields, so the DDA is also sometimes referred to as the coupled dipole approximation. Nature provides the physical inspiration for the DDA - in 1909 Lorentz showed that the dielectric properties of a substance could be directly related to the polarizabilities of the individual atoms of which it was composed, with a particularly simple and exact relationship, the Clausius-Mossotti (or Lorentz-Lorenz) relation, when the atoms are located on a cubical lattice. We may expect that, just as a continuum representation of a solid is appropriate on length scales that are large compared with the interatomic spacing, an array of polarizable points can accurately approximate the response of a continuum target on length scales that are large compared with the interdipole separation. For a finite array of point dipoles the scattering problem may be solved exactly, so the only approximation that is present in the DDA is the replacement of the continuum target by an array of N point dipoles. The replacement requires specification of both the geometry (location of the dipoles) and the dipole polarizabilities. For monochromatic incident waves the self-consistent solution for the oscillating dipole moments may be found; from these the absorption and scattering cross sections are computed. If DDA solutions are obtained for two independent polarizations of the incident wave, then the complete amplitude scattering matrix can be determined. Alternatively, the DDA can be derived from the volume integral equation for the electric field. This highlights that the approximation of point dipoles is equivalent to discretizing the integral equation, and that the discretization error thus decreases with decreasing dipole size. With the recognition that the polarizabilities may be tensors, the DDA can readily be applied to anisotropic materials. The extension of the DDA to treat materials with nonzero magnetic susceptibility is also straightforward, although for most applications magnetic effects are negligible. There are several reviews of the DDA method. The method was improved by Draine, Flatau, and Goodman, who applied the fast Fourier transform to solve fast convolution problems arising in the discrete dipole approximation (DDA). This allowed for the calculation of scattering by large targets. They distributed an open-source code, DDSCAT.
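The essence of that FFT acceleration can be shown in a one-dimensional toy version (ours, not code from DDSCAT). Because the interaction between two dipoles on a regular grid depends only on their separation, applying the interaction matrix is a discrete convolution; embedding it in a circulant matrix of twice the size (the doubling with mirrored negative lags described below) lets FFTs evaluate the product in O(N log N) instead of O(N^2).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
g = 1.0 / (1.0 + np.arange(n))   # toy interaction kernel g(|i - j|)
p = rng.standard_normal(n)       # toy dipole moments

# Direct O(N^2) product with the Toeplitz interaction matrix G_ij = g(|i-j|).
G = g[np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])]
direct = G @ p

# FFT route: first column of the length-2n circulant embedding (zero pad,
# then mirror the kernel to supply the negative lags), multiply spectra.
c = np.concatenate([g, [0.0], g[:0:-1]])
fast = np.fft.ifft(np.fft.fft(c) * np.fft.fft(p, 2 * n)).real[:n]

assert np.allclose(direct, fast)
```

The production codes apply the same idea axis by axis in 3D with the tensor (dyadic) free-space Green's function, and use the fast matrix-vector product inside the conjugate gradient iterations discussed below.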
There are now several DDA implementations, as well as extensions to periodic targets and to particles placed on or near a plane substrate. Comparisons with exact techniques have also been published. Other aspects, such as the validity criteria of the discrete dipole approximation, have also been published. The DDA was also extended to employ rectangular or cuboid dipoles, which are more efficient for highly oblate or prolate particles. Fast Fourier Transform for fast convolution calculations The fast Fourier transform (FFT) method was introduced in 1991 by Goodman, Draine, and Flatau for the discrete dipole approximation. They utilized a 3D FFT code, GPFA, written by Clive Temperton. The interaction matrix was extended to twice its original size, by mirroring and reversing it, to incorporate negative lags; this allows the matrix–vector products needed by the solver to be evaluated as circular convolutions (a toy illustration appears below). Several variants have been developed since then. Barrowes, Teixeira, and Kong in 2001 developed a code that uses block reordering, zero padding, and a reconstruction algorithm, claiming minimal memory usage. McDonald, Golden, and Jennings in 2009 used a 1D FFT code and extended the interaction matrix in the x, y, and z directions of the FFT calculations, suggesting memory savings due to this approach. This variant was also implemented in the 2021 MATLAB code by Shabaninezhad and Ramakrishna. Other techniques to accelerate convolutions have been suggested in a general context, along with faster evaluations of the fast Fourier transforms arising in DDA problem solvers. Conjugate gradient iteration schemes and preconditioning Some of the early calculations of the polarization vector were based on direct inversion and on the implementation of the conjugate gradient method by Petravic and Kuo-Petravic. Subsequently, many other conjugate gradient methods have been tested. Advances in the preconditioning of the linear systems of equations arising in the DDA setup have also been reported. Thermal discrete dipole approximation The thermal discrete dipole approximation is an extension of the original DDA to simulations of near-field heat transfer between 3D arbitrarily shaped objects. Discrete dipole approximation codes Most of the codes apply to arbitrarily shaped inhomogeneous nonmagnetic particles and particle systems in free space or a homogeneous dielectric host medium. The calculated quantities typically include the Mueller matrices, integral cross-sections (extinction, absorption, and scattering), internal fields and angle-resolved scattered fields (phase function). There are some published comparisons of existing DDA codes. General-purpose open-source DDA codes These codes typically use regular grids (cubical or rectangular cuboid), conjugate gradient methods to solve the large system of linear equations, and FFT acceleration of the matrix–vector products, which exploits the convolution theorem. The complexity of this approach is almost linear in the number of dipoles, for both time and memory. Specialized DDA codes This list includes codes that do not qualify for the previous section. The reasons may include the following: source code is not available, FFT acceleration is absent or reduced, or the code focuses on specific applications not allowing easy calculation of standard scattering quantities. Gallery of shapes See also Computational electromagnetics Mie theory Finite-difference time-domain method Method of moments (electromagnetics) References Computational science Electrodynamics Scattering Scattering, absorption and radiative transfer (optics) Computational electromagnetics
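As a toy illustration of the circulant-embedding idea mentioned in the FFT section above (a generic one-dimensional analogue, not taken from any of the codes cited; real DDA solvers apply the same trick blockwise in 3D), the following shows how a Toeplitz matrix–vector product — the structure of the dipole-interaction sum on a regular grid — can be evaluated with FFTs after doubling the array so that negative lags wrap around:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
t = rng.normal(size=2 * n - 1)   # t[k] encodes T[i, j] for lag i - j = k - (n - 1)
x = rng.normal(size=n)

# Direct dense Toeplitz product, for reference.
T = np.array([[t[i - j + n - 1] for j in range(n)] for i in range(n)])
y_dense = T @ x

# Circulant embedding of size 2n: the first column holds lags 0..n-1,
# then a zero, then the negative lags in reverse ("mirrored") order.
c = np.concatenate([t[n - 1:], [0.0], t[:n - 1]])
y_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, 2 * n)).real[:n]

assert np.allclose(y_dense, y_fft)
print("FFT-accelerated Toeplitz product matches the dense product.")
```

The dense product costs O(n²) per application, while the embedded version costs O(n log n), which is the source of the near-linear scaling quoted for the general-purpose codes.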
Discrete dipole approximation
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
1,364
[ "Computational electromagnetics", " absorption and radiative transfer (optics)", "Applied mathematics", "Computational physics", "Computational science", "Scattering", "Condensed matter physics", "Particle physics", "Nuclear physics", "Electrodynamics", "Dynamical systems" ]
12,083,818
https://en.wikipedia.org/wiki/Filling%20area%20conjecture
In differential geometry, Mikhail Gromov's filling area conjecture asserts that the hemisphere has minimum area among the orientable surfaces that fill a closed curve of given length without introducing shortcuts between its points. Definitions and statement of the conjecture Every smooth surface M or curve in Euclidean space is a metric space, in which the (intrinsic) distance d(x, y) between two points x and y of M is defined as the infimum of the lengths of the curves that go from x to y along M. For example, on a closed curve C of length 2L, for each point x of the curve there is a unique other point of the curve (called the antipodal of x) at distance L from x. A compact surface M fills a closed curve C if its border (also called boundary, denoted ∂M) is the curve C. The filling M is said to be isometric if for any two points x, y of the boundary curve C, the distance d_M(x, y) between them along M is the same as (not less than) the distance d_C(x, y) along the boundary. In other words, to fill a curve isometrically is to fill it without introducing shortcuts. Question: How small can the area of a surface that isometrically fills its boundary curve, of given length, be? For example, in three-dimensional Euclidean space, the circle C of length 2π is filled by the flat disk, which is not an isometric filling, because any straight chord along it is a shortcut. In contrast, the hemisphere is an isometric filling of the same circle C, and it has twice the area of the flat disk. Is this the minimum possible area? The surface can be imagined as made of a flexible but non-stretchable material, which allows it to be moved around and bent in Euclidean space. None of these transformations modifies the area of the surface nor the length of the curves drawn on it, which are the magnitudes relevant to the problem. The surface can be removed from Euclidean space altogether, obtaining a Riemannian surface, which is an abstract smooth surface with a Riemannian metric that encodes the lengths and area. Reciprocally, according to the Nash–Kuiper theorem, any Riemannian surface with boundary can be embedded in Euclidean space preserving the lengths and area specified by the Riemannian metric. Thus the filling problem can be stated equivalently as a question about Riemannian surfaces, which are not placed in Euclidean space in any particular way. Conjecture (Gromov's filling area conjecture, 1983): The hemisphere has minimum area among the orientable compact Riemannian surfaces that isometrically fill their boundary curve of given length. Gromov's proof for the case of Riemannian disks In the same paper where Gromov stated the conjecture, he proved that the hemisphere has least area among the Riemannian surfaces that isometrically fill a circle of given length and are homeomorphic to a disk. Proof: Let M be a Riemannian disk that isometrically fills its boundary of length 2L. Glue each point x of the boundary with its antipodal point x′, defined as the unique point of the boundary that is at the maximum possible distance L from x. Gluing in this way, we obtain a closed Riemannian surface that is homeomorphic to the real projective plane and whose systole (the length of the shortest non-contractible curve) is equal to L. (And reciprocally, if we cut open a projective plane along a shortest noncontractible loop of length L, we obtain a disk that isometrically fills its boundary of length 2L.) Thus the minimum area that the isometric filling can have is equal to the minimum area that a Riemannian projective plane of systole L can have. 
But then Pu's systolic inequality asserts precisely that a Riemannian projective plane of given systole has minimum area if and only if it is round (that is, obtained from a Euclidean sphere by identifying each point with its opposite). The area of this round projective plane equals the area of the hemisphere (because each of them has half the area of the sphere). The proof of Pu's inequality relies, in turn, on the uniformization theorem. Fillings with Finsler metrics In 2001, Sergei Ivanov presented another way to prove that the hemisphere has smallest area among isometric fillings homeomorphic to a disk. His argument does not employ the uniformization theorem and is based instead on the topological fact that two curves on a disk must cross if their four endpoints are on the boundary and interlaced. Moreover, Ivanov's proof applies more generally to disks with Finsler metrics, which differ from Riemannian metrics in that they need not satisfy the Pythagorean equation at the infinitesimal level. The area of a Finsler surface can be defined in various inequivalent ways, and the one employed here is the Holmes–Thompson area, which coincides with the usual area when the metric is Riemannian. What Ivanov proved is the following: The hemisphere has minimum Holmes–Thompson area among Finsler disks that isometrically fill a closed curve of given length. Let M be a Finsler disk that isometrically fills its boundary of length 2L. We may assume that M is the standard round disk in R², and the Finsler metric is smooth and strongly convex. The Holmes–Thompson area of the filling can be computed by the formula area(M) = (1/π) ∫_M |B*_x| dx, where for each point x in M, the set B*_x is the dual unit ball of the norm at x (the unit ball of the dual norm), and |B*_x| is its usual area as a subset of the cotangent plane, identified with R². Choose a collection of n boundary points p_1, ..., p_n, listed in counterclockwise order. For each point p_i, we define on M the scalar function f_i(x) = dist(p_i, x). These functions have the following properties: Each function f_i is Lipschitz on M and therefore (by Rademacher's theorem) differentiable at almost every point x of M. If f_i is differentiable at an interior point x of M, then there is a unique shortest curve from p_i to x (parametrized with unit speed) that arrives at x with a speed v. The differential df_i at x has norm 1 and is the unique covector φ_i such that φ_i(v) = 1. At each point x where all the functions f_i are differentiable, the covectors φ_i are distinct and placed in counterclockwise order on the dual unit sphere ∂B*_x. Indeed, they must be distinct because different geodesics cannot arrive at x with the same speed. Also, if three of these covectors φ_i, φ_j, φ_k (for some i < j < k) appeared in inverted order, then two of the three shortest curves from the points p_i, p_j, p_k to x would cross each other, which is not possible. In summary, for almost every interior point x of M, the covectors φ_i are vertices, listed in counterclockwise order, of a convex polygon inscribed in the dual unit ball B*_x. The area of this polygon is (1/2) Σ_i φ_i ∧ φ_{i+1} (where the index i + 1 is computed modulo n). Integrating over M, we therefore have a lower bound for the area of the filling: area(M) ≥ (1/2π) ∫_M Σ_i df_i ∧ df_{i+1}. If we define the 1-form η = Σ_i f_i df_{i+1}, whose differential is dη = Σ_i df_i ∧ df_{i+1}, then we can rewrite this lower bound using the Stokes formula as area(M) ≥ (1/2π) ∫_{∂M} η. The boundary integral that appears here is defined in terms of the distance functions f_i restricted to the boundary, which do not depend on the isometric filling. The result of the integral therefore depends only on the placement of the points p_i on the circle of length 2L. We omit the computation, and express the result in terms of the lengths L_i of each counterclockwise boundary arc from a point p_i to the following point p_{i+1}. The computation is valid only if each L_i < L. 
In summary, our lower bound for the area of the Finsler isometric filling converges to 2L²/π, the area of the hemisphere, as the collection of boundary points is densified. This implies that area(M) ≥ 2L²/π, as we had to prove. Unlike the Riemannian case, there is a great variety of Finsler disks that isometrically fill a closed curve and have the same Holmes–Thompson area as the hemisphere. If the Hausdorff area is used instead, then the minimality of the hemisphere still holds, but the hemisphere becomes the unique minimizer. This follows from Ivanov's theorem since the Hausdorff area of a Finsler manifold is never less than the Holmes–Thompson area, and the two areas are equal if and only if the metric is Riemannian. Non-minimality of the hemisphere among rational fillings with Finsler metrics A Euclidean disk that fills a circle can be replaced, without decreasing the distances between boundary points, by a Finsler disk that fills the same circle N = 10 times (in the sense that its boundary wraps around the circle N times), but whose Holmes–Thompson area is less than N times the area of the disk. For the hemisphere, a similar replacement can be found. In other words, the filling area conjecture is false if Finsler 2-chains with rational coefficients are allowed as fillings, instead of orientable surfaces (which can be considered as 2-chains with integer coefficients). Riemannian fillings of genus one and hyperellipticity An orientable Riemannian surface of genus one that isometrically fills the circle cannot have less area than the hemisphere. The proof in this case again starts by gluing antipodal points of the boundary. The non-orientable closed surface obtained in this way has an orientable double cover of genus two, and is therefore hyperelliptic. The proof then exploits a formula by J. Hersch from integral geometry. Namely, consider the family of figure-8 loops on a football, with the self-intersection point at the equator. Hersch's formula expresses the area of a metric in the conformal class of the football as an average of the energies of the figure-8 loops from the family. An application of Hersch's formula to the hyperelliptic quotient of the Riemann surface proves the filling area conjecture in this case. Almost flat manifolds are minimal fillings of their boundary distances If a Riemannian manifold M (of any dimension) is almost flat (more precisely, M is a region of Euclidean space with a Riemannian metric that is sufficiently near the standard Euclidean metric), then M is a volume minimizer: it cannot be replaced by an orientable Riemannian manifold that fills the same boundary and has less volume without reducing the distance between some boundary points. This implies that if a piece of sphere is sufficiently small (and therefore, nearly flat), then it is a volume minimizer. If this theorem can be extended to large regions (namely, to the whole hemisphere), then the filling area conjecture is true. It has been conjectured that all simple Riemannian manifolds (those that are convex at their boundary, and where every two points are joined by a unique geodesic) are volume minimizers. The proof that each almost flat manifold M is a volume minimizer involves embedding M in a suitable Banach space of functions on its boundary, and then showing that any isometric replacement of M can also be mapped into the same space and projected onto the embedded M without increasing its volume. This implies that the replacement has no less volume than the original manifold M. 
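As a worked check of the constants appearing above (a routine computation, not part of the cited proofs): a circle of length 2L has radius r = L/π, so

```latex
% Flat disk vs hemisphere filling a circle of length 2L (radius r = L/pi):
\[
  \operatorname{area}(\text{disk}) = \pi r^{2} = \frac{L^{2}}{\pi},
  \qquad
  \operatorname{area}(\text{hemisphere}) = 2\pi r^{2} = \frac{2L^{2}}{\pi}.
\]
% The hemisphere therefore has exactly twice the area of the flat disk,
% and 2L^2/pi is the value to which Ivanov's lower bound converges.
```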
See also Filling radius Pu's inequality Systolic geometry References Conjectures Unsolved problems in geometry Riemannian geometry Differential geometry Differential geometry of surfaces Surfaces Area Systolic geometry
Filling area conjecture
[ "Physics", "Mathematics" ]
2,233
[ "Scalar physical quantities", "Unsolved problems in mathematics", "Geometry problems", "Physical quantities", "Quantity", "Unsolved problems in geometry", "Size", "Conjectures", "Wikipedia categories named after physical quantities", "Mathematical problems", "Area" ]
12,084,179
https://en.wikipedia.org/wiki/Schotten%E2%80%93Baumann%20reaction
The Schotten–Baumann reaction is a method to synthesize amides from amines and acid chlorides. The term Schotten–Baumann reaction also refers to the conversion of acid chlorides to esters. The reaction was first described in 1883 by German chemists Carl Schotten and Eugen Baumann. The name "Schotten–Baumann reaction conditions" often indicates the use of a two-phase solvent system, consisting of water and an organic solvent. The base within the water phase neutralizes the acid generated in the reaction, while the starting materials and product remain in the organic phase, often dichloromethane or diethyl ether. Applications The Schotten–Baumann reaction or its reaction conditions are widely used in organic chemistry. Examples: synthesis of N-vanillyl nonanamide, also known as synthetic capsaicin; synthesis of a benzamide from benzoyl chloride and phenethylamine; synthesis of flutamide, a nonsteroidal antiandrogen; acylation of a benzylamine with acetyl chloride (acetic anhydride is an alternative). In the Fischer peptide synthesis (Emil Fischer, 1903), an α-chloro acid chloride is condensed with the ester of an amino acid. The ester is then hydrolyzed and the acid converted to the acid chloride, enabling the extension of the peptide chain by another unit. In a final step the chlorine atom is replaced by an amino group, completing the peptide synthesis. See also Lumière–Barbier method References Carbon-heteroatom bond forming reactions Amide synthesis reactions 1883 in science 1883 in Germany Name reactions
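As a schematic summary (a generic scheme rather than a specific literature example; R and R′ stand for arbitrary organic groups, and NaOH is one common choice of aqueous base), the amide-forming variant under Schotten–Baumann conditions can be written as:

```latex
% Generic Schotten-Baumann amide synthesis: the acid chloride reacts with
% the amine in the organic phase, while the aqueous base consumes the HCl
% by-product, preventing protonation of the unreacted amine.
\[
  \mathrm{R{-}COCl} + \mathrm{R'{-}NH_2}
  \;\longrightarrow\;
  \mathrm{R{-}CO{-}NH{-}R'} + \mathrm{HCl},
  \qquad
  \mathrm{HCl} + \mathrm{NaOH} \;\longrightarrow\; \mathrm{NaCl} + \mathrm{H_2O}
\]
```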
Schotten–Baumann reaction
[ "Chemistry" ]
348
[ "Organic reactions", "Name reactions", "Amide synthesis reactions", "Carbon-heteroatom bond forming reactions", "Condensation reactions" ]
12,086,708
https://en.wikipedia.org/wiki/Bebugging
Bebugging (or fault seeding or error seeding) is a software engineering technique, popular in the 1970s, used to measure test coverage. Known bugs are randomly added to a program's source code, and the software tester is tasked with finding them. The proportion of the seeded bugs that are not found gives an indication of the proportion of real bugs that remain (a minimal version of this estimate is sketched below). The term "bebugging" was first mentioned in The Psychology of Computer Programming (1970), where Gerald M. Weinberg described the use of the method as a way of training, motivating, and evaluating programmers, not as a measure of faults remaining in a program. The approach was borrowed from the SAGE system, where it was used to keep operators watching radar screens alert. An early application of bebugging was Harlan Mills's fault seeding approach, which was later refined by stratified fault-seeding. These techniques worked by adding a number of known faults to a software system for the purpose of monitoring the rate of detection and removal. This assumed that it is possible to estimate the number of remaining faults in a software system still to be detected by a particular test methodology. Bebugging is a type of fault injection. See also Fault injection Mutation testing References Software testing
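A minimal sketch of the underlying estimate (a Lincoln–Petersen-style capture–recapture calculation; the function name and the numbers are illustrative, not taken from any standard tool) might look like this:

```python
# Fault-seeding estimate: if testing uncovers s of S seeded faults and n real
# faults, and seeded faults are assumed to be found at the same rate as real
# ones, the total number of real faults is estimated as n * S / s.
def estimate_total_real_faults(seeded: int, seeded_found: int, real_found: int) -> float:
    """Estimate the total number of real faults from a bebugging run."""
    if seeded_found == 0:
        raise ValueError("no seeded faults found; detection rate unknown")
    detection_rate = seeded_found / seeded
    return real_found / detection_rate

# Example: 20 faults seeded, 16 recovered, 30 real faults found.
total = estimate_total_real_faults(seeded=20, seeded_found=16, real_found=30)
print(f"estimated real faults: {total:.1f}, so about {total - 30:.1f} remain")
# -> estimated real faults: 37.5, so about 7.5 remain
```

The key (and fragile) assumption is that seeded faults are representative of real ones; stratified fault-seeding, mentioned above, is one attempt to make that assumption more defensible.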
Bebugging
[ "Engineering" ]
261
[ "Software engineering", "Software testing" ]
12,092,048
https://en.wikipedia.org/wiki/Capillary%20length
The capillary length or capillary constant is a length scaling factor that relates gravity and surface tension. It is a fundamental physical property that governs the behavior of menisci, and is found when body forces (gravity) and surface forces (Laplace pressure) are in equilibrium. The pressure of a static fluid does not depend on the shape, total mass or surface area of the fluid. It is directly proportional to the fluid's specific weight – the force exerted by gravity over a specific volume – and its vertical height. However, a fluid also experiences pressure that is induced by surface tension, commonly referred to as the Young–Laplace pressure. Surface tension originates from cohesive forces between molecules, and in the bulk of the fluid, molecules experience attractive forces from all directions. The surface of a fluid is curved because exposed molecules on the surface have fewer neighboring interactions, resulting in a net force that contracts the surface. There exists a pressure difference either side of this curvature, and when this balances out the pressure due to gravity, one can rearrange to find the capillary length. In the case of a fluid–fluid interface, for example a drop of water immersed in another liquid, the capillary length, denoted λ_c, is most commonly given by the formula λ_c = √(γ / (Δρ g)), where γ is the surface tension of the fluid interface, g is the gravitational acceleration and Δρ is the mass density difference of the fluids. The capillary length is sometimes denoted κ⁻¹ in relation to the mathematical notation for curvature. The term capillary constant is somewhat misleading, because it is important to recognize that λ_c is a composition of variable quantities; for example, the value of surface tension will vary with temperature, and the density difference will change depending on the fluids involved at an interface interaction. However, if these conditions are known, the capillary length can be considered a constant for any given liquid, and be used in numerous fluid mechanical problems to scale the derived equations such that they are valid for any fluid. For molecular fluids, the interfacial tensions and density differences are typically of the order of tens of mN m⁻¹ and of the order of 1 g mL⁻¹ respectively, resulting in a capillary length of about 2.7 mm for water and air at room temperature on Earth. On the other hand, the capillary length would be about 6.6 mm for water–air on the Moon. For a soap bubble, the surface tension must be divided by the mean thickness, resulting in a capillary length of the order of meters in air! The equation for λ_c can also be found with an extra √2 factor, most often used when normalising the capillary height. Origin Theoretical One way to theoretically derive the capillary length is to imagine a liquid droplet at the point where surface tension balances gravity. Let there be a spherical droplet with radius R. The characteristic Laplace pressure, due to surface tension, is equal to P_γ = 2γ/R, where γ is the surface tension. The pressure due to gravity (hydrostatic pressure) of a column of liquid is given by P_h = ρgh, where ρ is the droplet density, g the gravitational acceleration, and h is the height of the droplet. At the point where the Laplace pressure balances out the pressure due to gravity, P_γ = P_h, and taking the height comparable to the radius, h ≈ R, gives 2γ/R = ρgR, so that R = √(2γ/(ρg)) – the capillary length, up to the extra √2 factor noted above. Relationship with the Eötvös number The above derivation can be used when dealing with the Eötvös number, a dimensionless quantity that represents the ratio between the gravitational forces and surface tension of the liquid. 
Despite the number being introduced by Loránd Eötvös in 1886, his name has since become fairly dissociated from it, having been replaced with that of Wilfrid Noel Bond, such that it is now referred to as the Bond number in recent literature. The Bond number can be written so that it includes a characteristic length – normally the radius of curvature of a liquid – and the capillary length: Bo = (R/λ_c)², with R the radius of curvature and λ_c the capillary length. If the Bond number is set to 1, then the characteristic length is the capillary length. Experimental The capillary length can also be found through the manipulation of many different physical phenomena. One method is to focus on capillary action, which is the attraction of a liquid's surface to a surrounding solid. Association with Jurin's law Jurin's law is a quantitative law that shows that the maximum height that can be achieved by a liquid in a capillary tube is inversely proportional to the diameter of the tube. The law can be illustrated mathematically during capillary uplift, which is a traditional experiment measuring the height of a liquid in a capillary tube. When a capillary tube is inserted into a liquid, the liquid will rise or fall in the tube, due to an imbalance in pressure. The characteristic height h is the distance from the bottom of the meniscus to the base, and exists when the Laplace pressure and the pressure due to gravity are balanced. One can reorganize to show the capillary length as a function of surface tension and gravity: λ_c² = hr / (2 cos θ), with h the height of the liquid, r the radius of the capillary tube, and θ the contact angle. The contact angle is defined as the angle formed by the intersection of the liquid–solid interface and the liquid–vapour interface. The size of the angle quantifies the wettability of the liquid, i.e., the interaction between the liquid and the solid surface. A contact angle of θ = 0 can be considered perfect wetting, for which cos θ = 1. Thus λ_c² = hr/2 forms a cyclical three-factor equation relating h, r and λ_c. This property is usually used by physicists to estimate the height a liquid will rise in a particular capillary tube of known radius, without the need for an experiment. When the characteristic height of the liquid is sufficiently less than the capillary length, the effect of hydrostatic pressure due to gravity can be neglected. Using the same premises of capillary rise, one can also find the capillary length as a function of the volume increase and the wetting perimeter of the capillary walls. Association with a sessile droplet Another way to find the capillary length is to use different pressure points inside a sessile droplet, with each point having a radius of curvature, and relate them through the Laplace pressure equation. This time the equation is solved for the height of the meniscus level, which again can be used to give the capillary length. The shape of a sessile droplet is governed by whether its radius is greater than or less than the capillary length. Microdrops are droplets with radius smaller than the capillary length, and their shape is governed solely by surface tension, forming a spherical cap shape. If a droplet has a radius larger than the capillary length, it is known as a macrodrop and gravitational forces will dominate. Macrodrops will be 'flattened' by gravity and the height of the droplet will be reduced. History The investigations in capillarity stem back as far as Leonardo da Vinci; however, the idea of a capillary length was not developed until much later. 
Fundamentally, the capillary length is a product of the work of Thomas Young and Pierre Laplace. They both appreciated that surface tension arose from cohesive forces between particles and that the shape of a liquid's surface reflected the short range of these forces. At the turn of the 19th century they independently derived pressure equations, but due to notation and presentation, Laplace often gets the credit. The equation showed that the pressure within a curved surface between two static fluids is always greater than that outside of a curved surface, but the pressure will decrease to zero as the radius approaches infinity. Since the force is perpendicular to the surface and acts towards the centre of the curvature, a liquid will rise when the surface is concave and depress when convex. This was a mathematical explanation of the work published by James Jurin in 1719, where he quantified a relationship between the maximum height taken by a liquid in a capillary tube and its diameter – Jurin's law. The capillary length evolved from the use of the Laplace pressure equation at the point it balanced the pressure due to gravity, and is sometimes called the Laplace capillary constant, after being introduced by Laplace in 1806. In nature Bubbles Like a droplet, bubbles are round because cohesive forces pull their molecules into the tightest possible grouping, a sphere. Due to the trapped air inside the bubble, it is impossible for the surface area to shrink to zero, hence the pressure inside the bubble is greater than outside, because if the pressures were equal, then the bubble would simply collapse. This pressure difference can be calculated from Laplace's pressure equation, ΔP = 2γ/R. For a soap bubble, there exist two boundary surfaces, internal and external, and therefore two contributions to the excess pressure, and Laplace's formula doubles to ΔP = 4γ/R. The capillary length can then be worked out the same way, except that the thickness of the film, e, must be taken into account, as the bubble has a hollow center, unlike the droplet, which is solid. Instead of a droplet whose relevant height is its own radius, as in the above derivation, for a bubble the weight is carried by the thin film, so the balance now involves both R and e, the radius and thickness of the bubble respectively. As above, the Laplace and hydrostatic pressures are equated, resulting in a capillary length that scales as γ/(ρge) – of the order of meters for a film of micrometre thickness. Thus the capillary length contributes to a physicochemical limit that dictates the maximum size a soap bubble can take. See also Capillarity Surface tension Pressure Bond number Jurin's law Young–Laplace equation References Fluid dynamics
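A numerical check of the figures quoted above (the property values and variable names are illustrative assumptions; water/air at roughly room temperature is assumed, with the density of air neglected):

```python
import math

def capillary_length(gamma, delta_rho, g):
    """lambda_c = sqrt(gamma / (delta_rho * g))"""
    return math.sqrt(gamma / (delta_rho * g))

def jurin_height(gamma, rho, g, r, theta=0.0):
    """Jurin's law: h = 2 * gamma * cos(theta) / (rho * g * r)"""
    return 2 * gamma * math.cos(theta) / (rho * g * r)

gamma = 0.0728        # N/m, water-air surface tension near 20 C
rho = 998.0           # kg/m^3, water-air density difference
g_earth, g_moon = 9.81, 1.62

print(f"capillary length, Earth: {1e3 * capillary_length(gamma, rho, g_earth):.2f} mm")  # ~2.7 mm
print(f"capillary length, Moon:  {1e3 * capillary_length(gamma, rho, g_moon):.2f} mm")   # ~6.7 mm
# Capillary rise in a 0.2 mm radius tube with perfect wetting (theta = 0):
print(f"Jurin rise height: {1e2 * jurin_height(gamma, rho, g_earth, 2e-4):.1f} cm")      # ~7.4 cm
```

The Moon value is simply the Earth value scaled by √(9.81/1.62) ≈ 2.5, since λ_c varies as g⁻¹ᐟ².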
Capillary length
[ "Chemistry", "Engineering" ]
1,937
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
17,600,071
https://en.wikipedia.org/wiki/Polydioctylfluorene
Polydioctylfluorene (PFO) is an organic compound, a polymer of 9,9-dioctylfluorene, with formula (C13H6(C8H17)2)n. It is an electroluminescent conductive polymer that characteristically emits blue light. Like other polyfluorene polymers, it has been studied as a possible material for light-emitting diodes. Structure The monomer has an aromatic fluorene core -C13H6- with two aliphatic n-octyl -C8H17 tails attached to the central carbon. Polydioctylfluorene (PFO) can be found in liquid-crystalline, glassy, amorphous, semi-crystalline or β-chain form. This variety arises from the intermolecular forces in which PFO can participate. The secondary forces present in PFO are typically van der Waals forces, which are relatively weak. These weak forces make it a solid that can also be used as a film on a substrate. The glassy films formed by PFO chains dissolve in good solvents, meaning the polymer is at least partially soluble. These van der Waals forces also add complexity to the microstructure of PFO, which is why it has a wide range of solid formations. The solid formations, though, typically have low density, due to the low cooling rate of the polymer. The density of polydioctylfluorene is measured using ultraviolet photoelectron spectroscopy. Chain stiffness is also prominent in PFO; because of this, its molecular weight is predicted to be a factor of 2.7 lower than that of polystyrene, which gives an approximation of 190 repeat units in a standard PFO chain. Changing the strain and temperature applied to the polymer's structure results in an alteration of PFO's properties. Thermal treatment such as friction transfer can be applied to the structure as a way to alter these properties: the friction transfer aligns the structure so that it becomes crystalline or liquid crystalline. Polymer 196 is the most commonly studied type of polydioctylfluorene. In studies, polymer 196 has shown the most promising properties and the best crystallinity. Within the crystal structure of polymer 196, octyl side chains are inserted between the layers of the polymer to provide more space for efficiency in structuring the material. In studies, the structure of polydioctylfluorene was observed using grazing-incidence X-ray diffraction after applying friction to the structure. Experiments revealed PFO formed crystalline and liquid-crystalline films after cooling and the use of friction. As a result of the friction exerted, the twofold symmetry of PFO was broken. The friction transfer used to obtain a single-crystal film is important in the process of fabricating polarized light-emitting diodes. Properties Polydioctylfluorene is also known as polymer 196 within the polyfluorene family. The molar mass of PFO ranges between 24,000 and 41,600 g/mol, and because of this varying molar mass, many other properties vary as well. For example, the glass transition temperature can fall somewhere between 72 and 113 degrees Celsius. The peak wavelength emitted by PFO can range between 386 and 389 nm in a solution of CHCl3, and falls around 389 nm in a solution of THF. The peak film wavelength of PFO falls between 380 and 394 nm. The melting point of a crystalline molecule of PFO is predicted to be about 150 degrees Celsius. There have also been reports that some of the solid states of polydioctylfluorene are composed of sheet-like layers which are about 50–100 nm thick. As a result of these sheets, the glassy and semicrystalline states can be formed (but not the amorphous, liquid-crystalline, or β-chain states). 
When cooled quickly, the chains align tightly, giving PFO a close packing factor, though because of the high complexity of the chains the packing sometimes becomes disordered, creating the amorphous state. The parts of the molecule that add this complexity are the carbon rings located in the backbone, which make the molecule large overall. Applications Beta-phase chains in PFO can be formed through dip-pen nanolithography, to produce wavelength changes in metamaterials. The dip-pen technique allows features on a scale of about 500 nm to be resolved. The beta chains can be converted into glassy films by adding extra stress to the main fluorene backbone unit; whether beta chains have formed is determined from peaks in the absorption spectrum. Beta chains can also be confirmed to be present by using solvent/non-solvent mixtures: if a film is dipped into such a mixture for ten seconds, the chains form the beta phase without dissolution of the film. Polydioctylfluorene is used in polymer light-emitting devices (PLEDs), with the hydrocarbon side chains covalently bonded to the backbone. PFO is a derivative of basic polyfluorene, which enables it to release phosphorescent light. The basic fluorene backbone strengthens the molecule on account of the carbon rings. Cross-linking in the polydioctylfluorene structure provides an efficient technique for hole-transport layers to emit light. Also, when a solvent–polymer compound is added, the β-phase crystalline structure can be maintained. Current efficiency can reach a maximum of about 17 cd/A, and the maximum luminance obtained can be approximately 14,000 cd/m². The hole-transport layers (HTLs) improve the polymer's anode hole injection and greatly increase electron blocking. The capability to control the microstructure of phase domains gives an opportunity to optimize the optoelectronic properties of PFO-based products. When the requirements for optoelectronic emission are met in polydioctylfluorene, the electroluminescence given off is dependent on the active layer of the conjugated polymer. Another way to affect the optoelectronic properties is by altering how densely the phase chain segments are ordered: low densities can be achieved through tremendously slow crystallization, while directional crystallization can be achieved by use of thermal gradients. References Organic polymers Conductive polymers
Polydioctylfluorene
[ "Chemistry" ]
1,350
[ "Organic compounds", "Organic polymers", "Molecular electronics", "Conductive polymers" ]
17,600,105
https://en.wikipedia.org/wiki/Scrim%20and%20sarking
Scrim and sarking is a method of interior construction widely used in Australia and New Zealand in the late 19th and early 20th centuries. In this method, wooden panels were nailed over the beams and joists of a house frame, and a heavy, loosely woven cloth, called scrim, was then stapled or tacked over the wood panels. This construction method allowed wallpaper to be applied directly. In New Zealand, the sarking was often the native rimu (red pine), and the scrim was usually either jute or hessian. It is easy to tell whether walls have scrim and sarking as their basis: knocking on the wall produces the sound of the wood, and any wallpaper laid over the top has an uneven finish. In many instances, the scrim will come loose from the sarking, in which case the wallpaper will appear to float loose from the wall. Disuse Compared with more modern forms of interior wall surfacing, scrim and sarking has poor insulation properties and can encourage damp. It is also more costly to insure homes with scrim and sarking walls, as they pose a fire danger. For these reasons, home renovation will often see it replaced with gypsum-based wallboards. References Interior design Construction Timber framing
Scrim and sarking
[ "Physics", "Technology", "Engineering" ]
263
[ "Timber framing", "Materials stubs", "Structural system", "Construction", "Materials", "Matter" ]
17,601,646
https://en.wikipedia.org/wiki/Restoration%20of%20the%20Everglades
An ongoing effort to remedy damage inflicted during the 20th century on the Everglades, a region of tropical wetlands in southern Florida, is the most expensive and comprehensive environmental repair attempt in history. The degradation of the Everglades became an issue in the United States in the early 1970s after a proposal to construct an airport in the Big Cypress Swamp. Studies indicated the airport would have destroyed the ecosystem in South Florida and Everglades National Park. After decades of destructive practices, both state and federal agencies are looking for ways to balance the needs of the natural environment in South Florida with urban and agricultural centers that have recently and rapidly grown in and near the Everglades. In response to floods caused by hurricanes in 1947, the Central and Southern Florida Flood Control Project (C&SF) was established to construct flood control devices in the Everglades. The C&SF built over 1,000 miles (1,600 km) of canals and levees throughout South Florida between the 1950s and 1971. Its last venture was the C-38 canal, which straightened the Kissimmee River and caused catastrophic damage to animal habitats, adversely affecting water quality in the region. The canal became the first C&SF project to be reverted when it began to be backfilled, or refilled with the material excavated from it, in the 1980s. When high levels of phosphorus and mercury were discovered in the waterways in 1986, water quality became a focus for water management agencies. Costly and lengthy court battles were waged between various government entities to determine who was responsible for monitoring and enforcing water quality standards. Governor Lawton Chiles proposed a bill that determined which agencies would have that responsibility, and set deadlines for reducing pollutant levels in the water. Initially the bill was criticized by conservation groups for not being strict enough on polluters, but the Everglades Forever Act was passed in 1994. Since then, the South Florida Water Management District (SFWMD) and the U.S. Army Corps of Engineers have surpassed expectations for achieving lower phosphorus levels. A commission appointed by Governor Chiles published a report in 1995 stating that South Florida was unable to sustain its growth, and that the deterioration of the environment was negatively affecting daily life for residents in South Florida. The environmental decline was predicted to harm tourism and commercial interests if no actions were taken to halt current trends. Results of an eight-year study that evaluated the C&SF were submitted to the United States Congress in 1999. The report warned that if no action was taken the region would rapidly deteriorate. A strategy called the Comprehensive Everglades Restoration Plan (CERP) was enacted to restore portions of the Everglades, Lake Okeechobee, the Caloosahatchee River, and Florida Bay to undo the damage of the past 50 years. It would take 30 years and cost $7.8 billion to complete. Though the plan was passed into law in 2000, it has been compromised by political and funding problems. Background The Everglades are part of a very large watershed that begins in the vicinity of Orlando. The Kissimmee River drains into Lake Okeechobee, a lake with an average depth of 9 feet (2.7 m). During the wet season, when the lake exceeds its capacity, the water leaves the lake in a very wide and shallow river, approximately 100 miles (160 km) long and 60 miles (97 km) wide. This wide and shallow flow is known as sheetflow. 
The land gradually slopes toward Florida Bay, the historical destination of most of the water leaving the Everglades. Before drainage attempts, the Everglades comprised approximately 11,000 square miles (28,000 km²), taking up a third of the Florida peninsula. Since the early 19th century the Everglades have been a subject of interest for agricultural development. The first attempt to drain the Everglades occurred in 1882, when Pennsylvania land developer Hamilton Disston constructed the first canals. Though these attempts were largely unsuccessful, Disston's purchase of land spurred tourism and real estate development of the state. The political motivations of Governor Napoleon Bonaparte Broward resulted in more successful attempts at canal construction between 1906 and 1920. Recently reclaimed wetlands were used for cultivating sugarcane and vegetables, while urban development began in the Everglades. The 1926 Miami Hurricane and the 1928 Okeechobee Hurricane caused widespread devastation and flooding, which prompted the Army Corps of Engineers to construct a dike around Lake Okeechobee. The four-story wall cut off water from the Everglades. Floods from hurricanes in 1947 motivated the US Congress to establish the Central and Southern Florida Flood Control Project (C&SF), responsible for constructing over 1,000 miles (1,600 km) of canals and levees, hundreds of pumping stations, and other water control devices. The C&SF established Water Conservation Areas (WCAs) in 37% of the original Everglades, which acted as reservoirs providing excess water to the South Florida metropolitan area, or flushing it into the Atlantic Ocean or the Gulf of Mexico. The C&SF also established the Everglades Agricultural Area (EAA), which grows the majority of sugarcane crops in the United States. When the EAA was first established, it encompassed approximately 27% of the original Everglades. By the 1960s, urban development and agricultural use had decreased the size of the Everglades considerably. The remaining 25% of the Everglades in its original state is protected in Everglades National Park, but the park was established before the C&SF, and it depended upon the actions of the C&SF to release water. As Miami and other metropolitan areas began to intrude on the Everglades in the 1960s, political battles took place between park management and the C&SF when insufficient water in the park threw ecosystems into chaos. Fertilizers used in the EAA began to alter soil and hydrology in Everglades National Park, causing the proliferation of exotic plant species. A proposition to build a massive jetport in the Big Cypress Swamp in 1969 focused attention on the degraded natural systems in the Everglades. For the first time, the Everglades became a subject of environmental conservation. Everglades as a priority Environmental protection became a national priority in the 1970s. Time magazine declared it the Issue of the Year in January 1971, reporting that it was rated as Americans' "most serious problem confronting their community—well ahead of crime, drugs and poor schools". When South Florida experienced a severe drought from 1970 to 1975, with Miami receiving far less rain than average in 1971, media attention focused on the Everglades. With the assistance of governor's aide Nathaniel Reed and U.S. Fish and Wildlife Service biologist Arthur R. Marshall, politicians began to take action. 
Governor Reubin Askew implemented the Land Conservation Act in 1972, allowing the state to use voter-approved bonds of $240 million to purchase land considered to be environmentally unique and irreplaceable. Since then, Florida has purchased more land for public use than any other state. In 1972 President Richard Nixon declared the Big Cypress Swamp—the intended location for the Miami jetport in 1969—to be federally protected. Big Cypress National Preserve was established in 1974, and Fakahatchee Strand State Preserve was created the same year. In 1976, Everglades National Park was declared an International Biosphere Reserve by UNESCO, which also listed the park as a World Heritage Site in 1979. The Ramsar Convention designated the Everglades a Wetland of International Importance in 1987. Only three locations on Earth have appeared on all three lists: Everglades National Park, Lake Ichkeul in Tunisia, and Srebarna Lake in Bulgaria. Kissimmee River In the 1960s, the C&SF came under increased scrutiny from government overseers and conservation groups. Critics maintained its size was comparable to the Tennessee Valley Authority's dam-building projects during the Great Depression, and that the construction had run into the billions of dollars without any apparent resolution or plan. The projects of the C&SF have been characterized as part of "crisis and response" cycles that "ignored the consequence for the full system, assumed certainty of the future, and succeeded in solving the momentary crisis, but set in motion conditions that exaggerate future crises". The last project, to build a canal to straighten the winding floodplain of the Kissimmee River that had historically fed Lake Okeechobee, which in turn fed the Everglades, began in 1962. Marjory Stoneman Douglas later wrote that the C&SF projects were "interrelated stupidity", crowned by the C-38 canal. Designed to replace the meandering river with a straight channel roughly half its length, the canal was completed in 1971 and cost $29 million. It supplanted extensive marshland with retention ponds, dams, and vegetation. Loss of habitat has caused the region to experience a drastic decrease in waterfowl, wading birds, and game fish. The reclaimed floodplains were taken over by agriculture, bringing fertilizers and insecticides that washed into Lake Okeechobee. Even before the canal was finished, conservation organizations and sport fishing and hunting groups were calling for the restoration of the Kissimmee River. Arthur R. Marshall led the efforts to undo the damage. According to Douglas, Marshall was successful in portraying the Everglades from the Kissimmee Chain of Lakes to Florida Bay—including the atmosphere, climate, and limestone—as a single organism. Rather than remaining the preserve of conservation organizations, the cause of restoring the Everglades became a priority for politicians. Douglas observed, "Marshall accomplished the extraordinary magic of taking the Everglades out of the bleeding-hearts category forever". At the insistent urging of Marshall, newly elected Governor Bob Graham announced the formation of the "Save Our Everglades" campaign in 1983, and in 1985 Graham lifted the first shovel of backfill for a portion of the C-38 canal. Within a year the area was covered with water returning to its original state. Graham declared that by the year 2000, the Everglades would resemble its predrainage state as much as possible. The Kissimmee River Restoration Project was approved by Congress in the Water Resources Development Act of 1992. 
The project was estimated to cost $578 million to convert only 22 miles (35 km) of the canal; the cost was designed to be divided between the state of Florida and the U.S. government, with the state being responsible for purchasing land to be restored. A project manager for the Army Corps of Engineers explained in 2002, "What we're doing on this scale is going to be taken to a larger scale when we do the restoration of the Everglades". The entire project was originally estimated to be completed by 2011, but was completed in July 2021. In all, about 44 miles (71 km) of the Kissimmee River were restored, plus some 20,000 acres (8,100 ha) of wetlands. Water quality Attention was focused on water quality in South Florida in 1986, when a widespread algal bloom occurred in one-fifth of Lake Okeechobee. The bloom was discovered to be the result of fertilizers from the Everglades Agricultural Area. Although laws stated in 1979 that the chemicals used in the EAA should not be deposited into the lake, they were flushed into the canals that fed the Everglades Water Conservation Areas, and eventually pumped into the lake. Microbiologists discovered that, although phosphorus assists plant growth, it destroys periphyton, one of the basic building blocks of marl in the Everglades. Marl is one of two types of Everglades soil, along with peat; it is found where parts of the Everglades are flooded for shorter periods of time, as layers of periphyton dry. Most of the phosphorus compounds also rid peat of dissolved oxygen and promote algae growth, causing native invertebrates to die, and sawgrass to be replaced with invasive cattails that grow too tall and thick to allow nesting for birds and alligators. Tested water showed 500 parts per billion (ppb) of phosphorus near sugarcane fields. State legislation in 1987 mandated a 40% reduction of phosphorus by 1992. Attempts to correct phosphorus levels in the Everglades met with resistance. The sugarcane industry, dominated by two companies, U.S. Sugar and Flo-Sun, was responsible for more than half of the crop in the EAA. They were well represented in state and federal governments by lobbyists who enthusiastically protected their interests. According to the Audubon Society, the sugar industry, nicknamed "Big Sugar", donated more money to political parties and candidates than General Motors. The sugar industry attempted to block government-funded studies of polluted water, and when the federal prosecutor in Miami faulted the sugar industry in legal action to protect Everglades National Park, Big Sugar tried to get the lawsuit withdrawn and the prosecutor fired. A costly legal battle ensued from 1988 to 1992 between the State of Florida, the U.S. government, and the sugar industry to resolve who was responsible for water quality standards and for the maintenance of Everglades National Park and the Arthur R. Marshall Loxahatchee National Wildlife Refuge. A different concern about water quality arose when mercury was discovered in fish during the 1980s. Because mercury is damaging to humans, warnings were posted for fishermen that cautioned against eating fish caught in South Florida, and scientists became alarmed when a Florida panther was found dead near Shark River Slough with mercury levels high enough to be fatal to humans. When mercury is ingested it adversely affects the central nervous system, and can cause brain damage and birth defects. 
Studies of mercury levels found that it is bioaccumulated through the food chain: animals that are lower on the chain have decreased amounts, but as larger animals eat them, the amount of mercury is multiplied. The dead panther's diet consisted of small animals, including raccoons and young alligators. The source of the mercury was found to be waste incinerators and fossil fuel power plants that expelled the element in the atmosphere, which precipitated with rain, or in the dry season, dust. Naturally occurring bacteria in the Everglades that function to reduce sulfur also transform mercury deposits into methylmercury. This process was more dramatic in areas where flooding was not as prevalent. Because of requirements that reduced power plant and incinerator emissions, the levels of mercury found in larger animals decreased as well: approximately a 60% decrease in fish and a 70% decrease in birds, though some levels still remain a health concern for people. Everglades Forever Act In an attempt to resolve the political quagmire over water quality, Governor Lawton Chiles introduced a bill in 1994 to clean up water within the EAA that was being released to the lower Everglades. The bill stated that the "Everglades ecosystem must be restored both in terms of water quality and water quantity and must be preserved and protected in a manner that is long term and comprehensive". It ensured the Florida Department of Environmental Protection (DEP) and the South Florida Water Management District (SFWMD) would be responsible for researching water quality, enforcing water supply improvement, controlling exotic species, and collecting taxes, with the aim of decreasing the levels of phosphorus in the region. It allowed for purchase of land where pollutants would be sent to "treat and improve the quality of waters coming from the EAA". Critics of the bill argued that the deadline for meeting the standards was unnecessarily delayed until 2006—a period of 12 years—to enforce better water quality. They also maintained that it did not force sugarcane farmers, who were the primary polluters, to pay enough of the costs, and increased the threshold of what was an acceptable amount of phosphorus in water from 10 ppb to 50 ppb. Governor Chiles initially named it the Marjory Stoneman Douglas Act, but Douglas was so unimpressed with the action it took against polluters that she wrote to Chiles and demanded her name be stricken from it. Despite criticism, the Florida legislature passed the Act in 1994. The SFWMD stated that its actions have exceeded expectations earlier than anticipated, by creating Stormwater Treatment Areas (STA) within the EAA that contain a calcium-based substance such as lime rock layered between peat, and filled with calcareous periphyton. Early tests by the Army Corps of Engineers revealed this method reduced phosphorus levels from 80 ppb to 10 ppb. The STAs are intended to treat water until the phosphorus levels are low enough to be released into the Loxahatchee National Wildlife Refuge or other WCAs. Wildlife concerns The intrusion of urban areas into wilderness has had a substantial impact on wildlife, and several species of animals are considered endangered in the Everglades region. One animal that has benefited from endangered species protection is the American alligator (Alligator mississippiensis), whose holes give refuge to other animals, often allowing many species to survive during times of drought. 
Once abundant in the Everglades, the alligator was listed as an endangered species in 1967, but a combined effort by federal and state organizations and the banning of alligator hunting allowed it to rebound; it was pronounced fully recovered in 1987 and is no longer an endangered species. However, alligators' territories and average body masses have been found to be generally smaller than in the past, and because populations have been reduced, their role during droughts has become limited. The American crocodile (Crocodylus acutus) is also native to the region and has been designated as endangered since 1975. Unlike their relatives the alligators, crocodiles tend to thrive in brackish or salt-water habitats such as estuarine or marine coasts. Their most significant threat is disturbance by people. Too much contact with humans causes females to abandon their nests, and males in particular are often victims of vehicle collisions while roaming over large territories and attempting to cross U.S. 1 and Card Sound Road in the Florida Keys. There are an estimated 500 to 1,000 crocodiles in southern Florida. The most critically endangered of any animal in the Everglades region is the Florida panther (Puma concolor coryi), a species that once lived throughout the southeastern United States: there were only 25–30 in the wild in 1995. The panther is most threatened by urban encroachment, because males require approximately 200 square miles (520 km²) of breeding territory. A male and two to five females may live within that range. When habitat is lost, panthers will fight over territory. After vehicle collisions, the second most frequent cause of death for panthers is intra-species aggression. In the 1990s urban expansion crowded panthers out of southwestern Florida as Naples and Ft. Myers began to expand into the western Everglades and Big Cypress Swamp. Agencies such as the Army Corps of Engineers and the U.S. Fish and Wildlife Service were responsible for upholding the Clean Water Act and the Endangered Species Act, yet still approved 99% of all permits to build in wetlands and panther territory. A limited genetic pool is also a danger. Biologists introduced eight female Texas cougars (Puma concolor) in 1995 to diversify the gene pool, and there are between 80 and 120 panthers in the wild. Perhaps the most dramatic loss of any group of animals has been to wading birds. Their numbers were estimated by eyewitness accounts to be approximately 2.5 million in the late 19th century. However, snowy egrets (Egretta thula), roseate spoonbills (Platalea ajaja), and reddish egrets (Egretta rufescens) were hunted to the brink of extinction for the colorful feathers used in women's hats. After about 1920, when the fashion passed, their numbers recovered in the 1930s, but over the next 50 years actions by the C&SF further disturbed populations. When the canals were constructed, natural water flow was restricted from the mangrove forests near the coast of Florida Bay. From one wet season to the next, fish were unable to reach traditional locations to repopulate when water was withheld by the C&SF. Birds were forced to fly farther from their nests to forage for food. By the 1970s, bird numbers had decreased by 90%. Many of the birds moved to smaller colonies in the WCAs to be closer to a food source, making them more difficult to count. Yet they remain significantly fewer in number than before the canals were constructed. Invasive species Around 6 million people moved to South Florida between 1940 and 1965. 
With a thousand people moving to Miami each week, urban development quadrupled. As the human population grew rapidly, the problem of exotic plant and animal species also grew. Many species of plants were brought into South Florida from Asia, Central America, or Australia as decorative landscaping. Exotic animals imported by the pet trade have escaped or been released. Biological controls that keep invasive species smaller in size and fewer in number in their native lands often do not exist in the Everglades, and the invaders compete with the embattled native species for food and space. Of imported plant species, melaleuca trees (Melaleuca quinquenervia) have caused the most problems. Melaleucas grow taller on average in the Everglades than in their native Australia. They were brought to southern Florida as windbreaks and deliberately seeded in marsh areas because they absorb vast amounts of water. In a region that is regularly shaped by fire, melaleucas are fire-resistant and their seeds are more efficiently spread by fire. They are too dense for wading birds with large wingspans to nest in, and they choke out native vegetation. Costs of controlling melaleucas topped $2 million in 1998 for Everglades National Park. In Big Cypress National Preserve, melaleucas covered vast areas at their most pervasive in the 1990s. Brazilian pepper (Schinus terebinthifolius) was brought to southern Florida as an ornamental shrub and was dispersed by the droppings of birds and other animals that ate its bright red berries. It thrives on abandoned agricultural land, growing in forests too dense for wading birds to nest in, similar to melaleucas. It grows rapidly, especially after hurricanes, and has invaded pineland forests. Following Hurricane Andrew, scientists and volunteers cleared damaged pinelands of Brazilian pepper so the native trees would be able to return to their natural state. The species that is causing the most impediment to restoration is the Old World climbing fern (Lygodium microphyllum), introduced in 1965. The fern grows rapidly and thickly on the ground, making passage for land animals such as black bears and panthers problematic. The ferns also grow as vines into taller portions of trees, and fires climb the ferns in "fire ladders" to scorch portions of the trees that are not naturally resistant to fire. Several animal species have been introduced to Everglades waterways. Many tropical fish have been released, the most detrimental being the blue tilapia (Oreochromis aureus), which builds large nests in shallow waters. Tilapia also consume vegetation that would normally be used by young native fishes for cover and protection. Reptiles have a particular affinity for the South Florida ecosystem. Virtually all lizards appearing in the Everglades have been introduced, such as the brown anole (Anolis sagrei) and the tropical house gecko (Hemidactylus mabouia). The herbivorous green iguana (Iguana iguana) can reproduce rapidly in wilderness habitats. However, the reptile that has earned media attention for its size and potential to harm children and domestic pets is the Burmese python (Python bivittatus), which has spread quickly throughout the area. The python can grow up to 20 feet (6 m) long and competes with alligators for the top of the food chain. Though exotic birds such as parrots and parakeets are also found in the Everglades, their impact is negligible. Conversely, perhaps the animal that causes the most damage to native wildlife is the domestic or feral cat. 
Across the U.S., cats are responsible for approximately a billion bird deaths annually. They are estimated to number 640 per square mile; cats living in suburban areas have devastating effects on migratory birds and marsh rabbits. Homestead Air Force Base Hurricane Andrew struck Miami in 1992, causing catastrophic damage to Homestead Air Force Base in Homestead. A 1993 plan to rejuvenate the property and convert it into a commercial airport was met with enthusiasm from local municipal and commercial entities hoping to recoup the $480 million and 11,000 jobs lost to the local community through the destruction and subsequent closing of the base. On March 31, 1994, the base was designated as a reserve base, functioning only part-time. A cursory environmental study performed by the Air Force was deemed insufficient by local conservation groups, who threatened to sue to halt the acquisition when 650 flights a day were projected. The groups had previously been alarmed in 1990 by the inclusion of Homestead Air Force Base on a list of the U.S. Government's most polluted properties. Their concerns also included noise and the inevitable collisions with birds using the mangrove forests as rookeries. The Air Force base is located between Everglades National Park and Biscayne National Park, giving it the potential to cause harm to both. In 2000, Secretary of the Interior Bruce Babbitt and the director of the U.S. Environmental Protection Agency expressed their opposition to the project, despite other Clinton Administration agencies previously working to ensure the base would be turned over to local agencies quickly and smoothly as "a model of base disposal". Although attempts were made to make the base more environmentally friendly, in 2001 the local commercial interests promoting the airport lost federal support. Comprehensive Everglades Restoration Plan Sustainable South Florida Despite the successes of the Everglades Forever Act and the decreases in mercury levels, focus on the Everglades intensified in the 1990s as quality of life in the South Florida metropolitan areas diminished. It was becoming clear that urban populations were consuming increasingly unsustainable levels of natural resources. A report entitled "The Governor's Commission for a Sustainable South Florida", submitted to Lawton Chiles in 1995, identified the problems the state and municipal governments were facing. The report remarked that the degradation of the natural quality of the Everglades, Florida Bay, and other bodies of water in South Florida would cause a significant decrease in tourism (12,000 jobs and $200 million annually) and in income from compromised commercial fishing (3,300 jobs and $52 million annually). The report noted that past abuses and neglect of the environment had brought the region to "a precipitous juncture" where the inhabitants of South Florida faced health hazards in polluted air and water; furthermore, crowded and unsafe urban conditions hurt the reputation of the state. It noted that though the population had increased by 90% over the previous two decades, registered vehicles had increased by 166%. On the quality and availability of water, the report stated, "[The] frequent water shortages ... create the irony of a natural system dying of thirst in a subtropical environment with over 53 inches of rain per year". Restoration of the Everglades, however, briefly became a bipartisan cause in national politics.
A controversial penny-a-pound (2 cents/kg) tax on sugar was proposed to fund some of the changes necessary to decrease phosphorus and make other improvements to water quality. State voters were asked to support the tax, and environmentalists spent $15 million promoting the issue. Sugar lobbyists responded with $24 million in advertising to discourage it and succeeded; it became the most expensive ballot issue in state history. How restoration might be funded became a political battleground and seemed to stall without resolution. However, in the 1996 election year, Republican senator Bob Dole proposed that Congress give the State of Florida $200 million to acquire land for the Everglades. Democratic Vice President Al Gore promised that the federal government would purchase land in the EAA to turn over for restoration. Politicking reduced the acreage involved, but both Dole's and Gore's gestures were approved by Congress. Central and South Florida Project Restudy As part of the Water Resources Development Act of 1992, Congress authorized an evaluation of the effectiveness of the Central and Southern Florida Flood Control Project. A report known as the "Restudy", written by the U.S. Army Corps of Engineers and the South Florida Water Management District, was submitted to Congress in 1999. It cited indicators of harm to the system: a 50% reduction in the original Everglades, diminished water storage, harmful timing of water releases, an 85 to 90% decrease in wading bird populations over the past 50 years, and the decline of output from commercial fisheries. Bodies of water including Lake Okeechobee, the Caloosahatchee River, the St. Lucie estuary, Lake Worth Lagoon, Biscayne Bay, Florida Bay, and the Everglades reflected drastic water level changes, hypersalinity, and dramatic changes in marine and freshwater ecosystems. The Restudy attributed the overall decline in water quality over the past 50 years to the loss of wetlands that act as filters for polluted water. It predicted that without intervention the entire South Florida ecosystem would deteriorate. Canals shunted enormous volumes of water to the Atlantic Ocean or the Gulf of Mexico daily, so there was no opportunity for water storage, yet flooding was still a problem. Without changes to the current system, the Restudy predicted water restrictions would be necessary every other year, and annually in some locations. It also warned that revising some portions of the project without dedicating efforts to an overall comprehensive plan would be insufficient and probably detrimental. After evaluating ten plans, the Restudy recommended a comprehensive strategy that would cost $7.8 billion over 20 years. The plan advised taking the following actions: Create surface water storage reservoirs in several locations to capture excess water. Create water preserve areas between the urban areas of Miami-Dade and Palm Beach counties and the eastern Everglades to treat runoff water. Manage Lake Okeechobee as an ecological resource to avoid the drastic rise and fall of water levels in the lake that are harmful to aquatic plant and animal life and disturb the lake sediments. Improve water deliveries to estuaries to reduce the rapid discharge of excess water to the Caloosahatchee and St. Lucie estuaries, which upsets nutrient balances and causes lesions on fish. Stormwater discharge would be sent instead to reservoirs.
Increase underground water storage, holding water in wells or reservoirs in the Floridan Aquifer to be used later in dry periods, a method called Aquifer Storage and Recovery (ASR). Construct treatment wetlands as Stormwater Treatment Areas to decrease the amount of pollutants in the environment. Improve water deliveries to the Everglades, increasing flows into Shark River Slough by approximately 26%. Remove barriers to sheetflow by destroying or removing canals and levees, specifically removing the Miami Canal and reconstructing the Tamiami Trail from a highway to culverts and bridges, to allow sheetflow to return to a more natural rate of water flow into Everglades National Park. Store water in quarries and reuse wastewater, employing existing quarries to supply the South Florida metropolitan area as well as Florida Bay and the Everglades. Construct two wastewater treatment plants to recharge the Biscayne Aquifer. The implementation of all of the advised actions, the report stated, would "result in the recovery of healthy, sustainable ecosystems throughout south Florida". The report admitted that it did not have all the answers, though no plan could. However, it predicted that the plan would restore the "essential defining features of the pre-drainage wetlands over large portions of the remaining system", that populations of all animals would increase, and that animal distribution patterns would return to their natural states. Critics expressed concern over some untested technology; scientists were unsure whether the quarries would hold as much water as was being suggested, and whether the water would harbor harmful bacteria from the quarries. Overtaxing the aquifers was another concern, as the technique had not previously been attempted. Though it was optimistic, the Restudy noted, It is important to understand that the 'restored' Everglades of the future will be different from any version of the Everglades that has existed in the past. While it certainly will be vastly superior to the current ecosystem, it will not completely match the pre-drainage system. This is not possible, in light of the irreversible physical changes that have made (sic) to the ecosystem. It will be an Everglades that is smaller and somewhat differently arranged than the historic ecosystem. But it will be a successfully restored Everglades, because it will have recovered those hydrological and biological patterns which defined the original Everglades, and which made it unique among the world's wetland systems. It will become a place that kindles the wildness and richness of the former Everglades. The report was the result of many cooperating agencies that often had conflicting goals. An initial draft was submitted to Everglades National Park management, who asserted that not enough water would be released to the park quickly enough and that priority went to delivering water to urban areas. When they threatened to refuse to support the plan, it was rewritten to provide more water to the park. However, the Miccosukee Indians have a reservation between the park and the water control devices, and they threatened to sue to ensure their tribal lands and a $50 million casino would not be flooded. Other special interests were also concerned that businesses and residents would take second priority after nature. The Everglades, however, proved to be a bipartisan cause.
The Comprehensive Everglades Restoration Plan (CERP) was authorized by the Water Resources Development Act of 2000 and signed into law by President Bill Clinton on December 11, 2000. It approved the immediate use of $1.3 billion for implementation, the cost to be split between the federal government and other sources. Implementation The State of Florida reports that it has spent more than $2 billion on the various projects since CERP was signed. Stormwater Treatment Areas (STAs) have been constructed to filter phosphorus from Everglades waters. An STA constructed in 2004 became the largest environmental restoration project in the world. Fifty-five percent of the land necessary for restoration has been purchased by the State of Florida. A plan named "Acceler8", to hasten the construction and funding of the project, was put into place, spurring the start of six of eight construction projects, including that of three large reservoirs. Despite the bipartisan goodwill and declarations of the importance of the Everglades, the region still remains in danger. Political maneuvering continues to impede CERP: sugar lobbyists promoted a bill in the Florida legislature in 2003 that increased the acceptable amount of phosphorus in Everglades waterways from 10 ppb to 15 ppb and extended the deadline for the mandated decrease by 20 years. A compromise deadline of 2016 was eventually reached. Environmental organizations express concern that attempts to speed up some of the construction through Acceler8 are politically motivated; the six projects Acceler8 focuses on do not provide more water to natural areas in desperate need of it, but rather serve populated areas bordering the Everglades, suggesting that water is being diverted to make room for more people in an already overtaxed environment. Though Congress had promised half the funds for restoration, after the War in Iraq began and two of CERP's major supporters in Congress retired, the federal role in CERP was left unfulfilled. According to a story in The New York Times, state officials say the restoration is lost in a maze of "federal bureaucracy, a victim of 'analysis paralysis' ". In 2007, the release of $2 billion for Everglades restoration was approved by Congress, overriding President George W. Bush's veto of the broader water development bill the money was part of. Bush's rare veto went against the wishes of Florida Republicans, including his brother, former governor Jeb Bush. A lack of subsequent action by Congress prompted Governor Charlie Crist to travel to Washington, D.C., in February 2008 to inquire about the promised funds. By June 2008, the federal government had spent only $400 million of the $7.8 billion legislated. Carl Hiaasen characterized George W. Bush's attitude toward the environment as "long-standing indifference" in June 2008, exemplified when Bush stated he would not intervene to change the Environmental Protection Agency's (EPA) policy allowing the release of water polluted with fertilizers and phosphorus into the Everglades. Reassessment of CERP Florida still receives a thousand new residents daily, and lands slated for restoration and wetland recovery are often bought and sold before the state has a chance to bid on them. The competitive pricing of real estate also drives it beyond the purchasing ability of the state. Because the State of Florida is assisting with purchasing lands and funding construction, some of the programs under CERP are vulnerable to state budget cuts.
In June 2008 Governor Crist announced that the State of Florida would buy U.S. Sugar for $1.7 billion. The idea came about when sugar lobbyists were trying to persuade Crist to relax restrictions on U.S. Sugar's practice of pumping phosphorus-laden water into the Everglades. According to one of the lobbyists, who characterized it as a "duh moment", Crist said, "If sugar is polluting the Everglades, and we're paying to clean the Everglades, why don't we just get rid of sugar?" The largest producer of cane sugar in the U.S. was to continue operations for six years; when ownership transferred to Florida, the company's land in the Everglades would remain undeveloped to allow it to be restored to its pre-drainage state. In September 2008 the National Research Council (NRC), a nonprofit agency providing science and policy advice to the federal government, submitted a report on the progress of CERP. The report noted "scant progress" in restoration because of problems in budgeting, planning, and bureaucracy. The NRC report called the Everglades one of the "world's treasured ecosystems", one being further endangered by lack of progress: "Ongoing delay in Everglades restoration has not only postponed improvements—it has allowed ecological decline to continue". It cited the shrinking tree islands and the negative population growth of the endangered Everglades snail kite (Rostrhamus sociabilis) and the Cape Sable seaside sparrow (Ammodramus maritimus mirabilis). The lack of water reaching Everglades National Park was characterized as "one of the most discouraging stories" in the implementation of the plan. The NRC recommended improving planning on the state and federal levels, evaluating each CERP project annually, and further acquisition of land for restoration. Everglades restoration was earmarked $96 million in federal funds as part of the American Recovery and Reinvestment Act of 2009, with the intention of providing civil service and construction jobs while simultaneously implementing the legislated repair projects. In January 2010, work began to reconstruct the C-111 canal, built in the 1960s to drain irrigated farmland, so that it would no longer divert water away from Everglades National Park. Two other projects focusing on restoration were also scheduled to start in 2010. Governor Crist announced the same month that $50 million would be earmarked for Everglades restoration. In April of the same year, a federal district court judge sharply criticized both state and federal failures to meet deadlines, describing the cleanup efforts as slowed by "glacial delay" and calling the government's neglect of environmental law enforcement "incomprehensible". See also Draining and development of the Everglades Everglades National Park Geography and ecology of the Everglades History of Miami, Florida Indigenous people of the Everglades region Notes and references Bibliography Barnett, Cynthia (2007). Mirage: Florida and the Vanishing Water of the Eastern U.S. University of Michigan Press. Douglas, Marjory; Rothchild, John (1987). Marjory Stoneman Douglas: Voice of the River. Pineapple Press. Grunwald, Michael (2006). The Swamp: The Everglades, Florida, and the Politics of Paradise. Simon & Schuster. Lodge, Thomas E. (1994). The Everglades Handbook: Understanding the Ecosystem. CRC Press. U.S. Army Corps of Engineers and South Florida Water Management District (April 1999). "Summary", Central and Southern Florida Project Comprehensive Review Study. Further reading Alderson, Doug. 2009. New Dawn for the Kissimmee River.
Gainesville, FL: University Press of Florida. The Everglades in the Time of Marjory Stoneman Douglas Photo exhibit created by the State Archives of Florida External links CERP: A Visual Explanation of the Comprehensive Everglades Restoration Project (SFWMD) C-44 Reservoir Storm Water Treatment Area Project (SFWMD/CERP) Everglades Everglades History of sugar Constructed wetlands Sugar industry of Florida
Restoration of the Everglades
[ "Chemistry", "Engineering", "Biology" ]
8,391
[ "Bioremediation", "Constructed wetlands", "Environmental engineering" ]
17,604,025
https://en.wikipedia.org/wiki/Jam%20nut
A jam nut is a low-profile type of nut, typically half as tall as a standard nut. It is commonly used as a type of locknut, where it is "jammed" up against a standard nut to lock the two in place. It is also used in situations where a standard nut would not fit. The term "jam nut" can also refer to any nut that is used in the same function (even a standard nut used for the jamming purpose). Jam nuts, other types of locknuts, lock washers, and thread-locking fluid are ways to prevent vibration from loosening a bolted joint. Use of two nuts to prevent self-loosening In normal use, a nut-and-bolt joint holds together because the bolt is under a constant tensile stress called the preload. The preload pulls the nut threads against the bolt threads, and the nut face against the bearing surface, with a constant force, so that the nut cannot rotate without overcoming the friction between these surfaces. If the joint is subjected to vibration, however, the preload increases and decreases with each cycle of movement. If the minimum preload during the vibration cycle is not enough to hold the nut firmly in contact with the bolt and the bearing surface, then the nut is likely to become loose. Specialized locking nuts exist to prevent this problem, but sometimes it is sufficient to add a second nut. For this technique to be reliable, each nut must be tightened to the correct torque. The inner nut is tightened to about a quarter to a half of the torque of the outer nut. It is then held in place by a wrench while the outer nut is tightened on top using the full torque. This arrangement causes the two nuts to push against each other, creating a tensile stress in the short section of the bolt that lies between them. Even when the main joint is vibrated, the stress between the two nuts remains constant, thus holding the nut threads in constant contact with the bolt threads and preventing self-loosening. When the joint is assembled correctly, the outer nut bears the full tension of the joint. The inner nut functions merely to add a small additional force to the outer nut and does not need to be as strong, so a thin nut can be used. Because the two nuts are tightened against each other, the jam nut essentially acts as the "other object" that the pair clamps against. Jam nuts can also be used to secure an item on a fastener without applying force to that item. This is achieved by first tightening one of the nuts onto the item. Then the other nut is screwed down on top of the first nut. The inner nut is then slackened back and tightened against the outer nut. Jam nuts can also be used in situations where a threaded rod must be rotated. Since threaded rods have no bolt heads, it is difficult or impossible to apply torque to a threaded rod. A pair of jam nuts tightened against each other creates a point where a wrench may be used. Jam nuts can be unreliable under significant loads. If the inner nut is torqued more than the outer nut, the outer nut may yield. If the outer nut is torqued more than the inner nut, the inner nut may loosen. References Nuts (hardware) Kontermutter
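The quarter-to-half torque rule above can be restated as a quick calculation. The following sketch is a minimal illustration in Python; the 40 N·m joint specification and the midpoint fraction of 0.375 are hypothetical values chosen for the example, not figures from any fastener standard.

```python
def jam_nut_torques(outer_torque_nm: float, inner_fraction: float = 0.375):
    """Return (inner, outer) tightening torques for a two-nut joint.

    The thin inner (jam) nut is tightened first, to roughly a quarter
    to a half of the outer nut's full torque; 0.375 is simply the
    midpoint of that range, used here as an illustrative default.
    """
    if not 0.25 <= inner_fraction <= 0.5:
        raise ValueError("inner-nut torque should be 1/4 to 1/2 of the outer torque")
    return inner_fraction * outer_torque_nm, outer_torque_nm

# Hypothetical example: a joint specified at 40 N*m final torque.
inner, outer = jam_nut_torques(40.0)
print(f"1. Tighten the thin inner nut to about {inner:.0f} N*m.")
print(f"2. Hold the inner nut with a wrench and tighten the outer nut to {outer:.0f} N*m.")
```

Run as written, this prints an inner-nut torque of 15 N·m and an outer-nut torque of 40 N·m, mirroring the assembly order described in the text: thin nut first at partial torque, then the full-height nut on top at full torque.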
Jam nut
[ "Engineering" ]
657
[ "Mechanical engineering stubs", "Mechanical engineering" ]
17,605,257
https://en.wikipedia.org/wiki/Lambda%20diode
A lambda diode is an electronic circuit that combines a complementary pair of junction gate field-effect transistors into a two-terminal device that exhibits an area of negative differential resistance much like a tunnel diode. The term refers to the shape of the V–I curve of the device, which resembles the Greek letter λ (lambda). Lambda diodes work at higher voltages than tunnel diodes. Whereas a typical tunnel diode may exhibit negative differential resistance approximately between 70 mV and 350 mV, in a lambda diode this region occurs approximately between 1.5 V and 6 V, owing to the higher pinch-off voltages of typical JFET devices. A lambda diode therefore cannot replace a tunnel diode directly. Moreover, in a tunnel diode the current reaches a minimum of about 20% of the peak current before rising again towards higher voltages, whereas the lambda diode current approaches zero as voltage increases, before rising quickly again at a voltage high enough to cause gate–source Zener breakdown in the FETs. It is also possible to construct a device similar to a lambda diode by combining an n-channel JFET with a PNP bipolar transistor. A suggested modulatable variant, though somewhat more difficult to build, uses a PNP-based optocoupler and can be adjusted by driving its IR diode. This has the advantage that the device's properties can be fine-tuned with a simple bias driver, making it usable for high-sensitivity radio applications. Sometimes a modified open-can PNP transistor with an IR LED can be used instead. Applications Like the tunnel diode, the negative resistance aspect of the lambda diode lends itself naturally to application in oscillator circuits and amplifiers. In addition, bistable circuits such as memory cells have been described. References Literature Analog circuits
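The λ-shaped curve can be illustrated numerically. The sketch below is a minimal empirical model in Python, assuming a peak current of 1 mA at 1.5 V purely for illustration; it is a smooth function with the qualitative features described above (current rising to a peak, then decaying towards zero through the negative-differential-resistance region), not a physical JFET model, and it ignores the Zener-breakdown upturn at high voltage.

```python
import math

def lambda_diode_current(v: float, i_peak: float = 1e-3, v_peak: float = 1.5) -> float:
    """Empirical lambda-shaped I-V curve (illustration only).

    Rises roughly linearly for small v, peaks at (v_peak, i_peak),
    then decays towards zero: the negative differential resistance
    region. Gate-source Zener breakdown is deliberately not modelled.
    """
    x = v / v_peak
    return i_peak * x * math.exp((1.0 - x * x) / 2.0)

# Locate the negative-resistance region numerically: where dI/dV < 0.
voltages = [i * 0.01 for i in range(1, 601)]  # 0.01 V ... 6.00 V
ndr = [v for v in voltages
       if lambda_diode_current(v + 1e-4) < lambda_diode_current(v - 1e-4)]
print(f"NDR region: {min(ndr):.2f} V to {max(ndr):.2f} V")
# With these assumed parameters the NDR region starts near 1.5 V,
# consistent with the range quoted in the text.
```

The derivative of the model changes sign exactly at the assumed peak voltage, so changing `v_peak` shifts where the negative-resistance region begins; in a real circuit that point is set by the JFET pinch-off voltages.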
Lambda diode
[ "Engineering" ]
370
[ "Analog circuits", "Electronic engineering" ]
17,608,555
https://en.wikipedia.org/wiki/Omaha%20Ford%20Motor%20Company%20Assembly%20Plant
The Omaha Ford Motor Company Assembly Plant is located at 1514-1524 Cuming Street in North Omaha, Nebraska. In its 16 years of operation, the plant employed 1,200 people and built approximately 450,000 cars and trucks. In the 1920s, it was Omaha's second-biggest shipper. History Ford plant The plant was designed by Albert Kahn as a Model T assembly plant, and built in 1916. Its design represents an important step in the development of Ford's assembly process. Previously, each step in the assembly of an automobile had taken place in a different building, which entailed a cost in time and labor to move the product from one building to another. From 1903 to 1916, Kahn designed "all-under-one-roof" buildings for a variety of manufacturers. In such buildings, Ford's usual practice was to begin assembly on the top floor and move downward until the product was finished at ground level. The Omaha plant was an exception to this: assembly began on the lowest floor and moved upward. It is speculated that the roof was used for storage of finished automobiles. In 1917, Kahn designed the first single-floor assembly plant with a continuous moving assembly line at Ford's River Rouge plant. This design supplanted the older one; the Model A, which replaced the Model T, used a continuous line that could not be installed in the Omaha plant. Assembly ceased at the Omaha plant in 1932. Ford continued to use the building as a sales and service center until 1955. Post-Ford After Ford's departure, the building was used as a warehouse by the Western Electric Company from 1956 to 1959. It was then vacant until 1963, when it was occupied by Tip Top Products, an Omaha manufacturer of liquid solder, hair accessories, and other plastic goods founded by Carl W. Renstrom. Tip Top left the building in 1986, after which it was again vacant for several years. It served as a tire warehouse and retail outlet for some time, but then fell vacant again. In 2005, the building was reopened as TipTop Apartments, a mixed-use building with office space on the first floor and 96 loft-style apartments on the upper levels; an adjoining building houses a banquet-and-conference center. See also History of Omaha References Industrial buildings completed in 1916 National Register of Historic Places in Omaha, Nebraska Ford factories Motor vehicle assembly plants in Nebraska Buildings and structures in Omaha, Nebraska History of Downtown Omaha, Nebraska Industrial buildings and structures on the National Register of Historic Places in Nebraska Motor vehicle manufacturing plants on the National Register of Historic Places Transportation buildings and structures on the National Register of Historic Places in Nebraska Defunct manufacturing companies based in Nebraska Mill architecture
Omaha Ford Motor Company Assembly Plant
[ "Engineering" ]
546
[ "Mill architecture", "Architecture" ]
1,053,016
https://en.wikipedia.org/wiki/Paul%20Scherrer%20Institute
The Paul Scherrer Institute (PSI) is a multi-disciplinary research institute for the natural and engineering sciences in Switzerland. It is located in the Canton of Aargau, in the municipalities of Villigen and Würenlingen on either side of the River Aare, and covers an area of over 35 hectares. Like ETH Zurich and EPFL, PSI belongs to the ETH Domain of the Swiss Confederation. PSI employs around 3,000 people. It conducts basic and applied research in the fields of matter and materials, human health, and energy and the environment. About 37% of PSI's research activities focus on materials science, 24% on life sciences, 19% on general energy, 11% on nuclear energy and safety, and 9% on particle physics. PSI develops, builds and operates large and complex research facilities and makes them available to the national and international scientific communities. In 2017, for example, more than 2,500 researchers from 60 different countries came to PSI to take advantage of the concentration of large-scale research facilities in one location, which is unique worldwide. About 1,900 experiments are conducted each year at the approximately 40 measuring stations in these facilities. In recent years, the institute has been one of the largest recipients of money from the Swiss lottery fund. History The institute, named after the Swiss physicist Paul Scherrer, was created in 1988 when EIR (Eidgenössisches Institut für Reaktorforschung, Swiss Federal Institute for Reactor Research, founded in 1960) was merged with SIN (Schweizerisches Institut für Nuklearphysik, Swiss Institute for Nuclear Research, founded in 1968). The two institutes on opposite sides of the River Aare served as national research centres: one focusing on nuclear energy, the other on nuclear and particle physics. Over the years, research at the centres expanded into other areas, and nuclear and reactor physics today accounts for just 11 percent of the research work at PSI. Since Switzerland decided in 2011 to phase out nuclear energy, this research has primarily been concerned with questions of safety, such as how to store radioactive waste safely in a deep geological repository. Since 1984, PSI (initially as SIN) has operated the Centre for Proton Therapy for treating patients with eye melanomas and other tumours located deep inside the body. More than 9,000 patients have been treated there (as of 2020). The institute is also active in space research. For example, in 1990 PSI engineers built the detector of the EUVITA telescope for the Russian satellite Spectrum X-G, and later also supplied NASA and ESA with detectors to analyse radiation in space. In 1992, physicists used accelerator mass spectrometry and radiocarbon methods to determine the age of Ötzi, the mummy found in a glacier in the Ötztal Alps a year earlier, from samples of just a few milligrams of bone, tissue and grass. They were analysed at the TANDEM accelerator on the Hönggerberg near Zurich, which at the time was jointly operated by ETH Zurich and PSI. In 2009, the Indian-born British structural biologist Venkatraman Ramakrishnan was awarded the Nobel Prize in Chemistry for, among other things, his research at the Swiss Light Source (SLS), one of PSI's four large-scale research facilities. His investigations there enabled Ramakrishnan to clarify what ribosomes look like and how they function at the level of individual molecules.
Using the information encoded in the genes, ribosomes produce proteins that control many chemical processes in living organisms. In 2010, an international team of researchers at PSI used negative muons to perform a new measurement of the proton's charge radius and found that it is significantly smaller than previously thought: 0.84184 femtometres instead of 0.8768 femtometres. According to press reports, this result was not only surprising, it could also call previous models in physics into question. The measurements were only possible with PSI's 590 MeV proton accelerator HIPA, because its secondarily generated muon beam is the only one worldwide intense enough to conduct the experiment. In 2011, researchers from PSI and elsewhere succeeded in deciphering the basic structure of the protein molecule rhodopsin with the help of the SLS. This optical pigment acts as a kind of light sensor and plays a decisive role in the process of sight. A so-called 'barrel pixel detector' built at PSI was a central element of the CMS detector at the CERN research centre near Geneva, and was thus involved in detecting the Higgs boson. This discovery, announced on 4 July 2012, was honoured with the Nobel Prize in Physics one year later. In January 2016, 20 kilograms of plutonium were taken from PSI to the USA. According to a newspaper report, the federal government had a secret plutonium storage facility in which the material had been kept since the 1960s in order to construct an atomic bomb, as planned at the time. The Federal Council denied this, maintaining that the plutonium-239 content of the material was below 92 percent, which meant it was not weapons-grade material. The idea was rather to use the material, obtained from reprocessed fuel rods of the Diorit research reactor operated from 1960 to 1977, to develop a new generation of fuel element types for nuclear power plants. This, however, never happened. By the time it was decided, in 2011, to phase out nuclear power, it had become clear that there was no further use for the material in Switzerland. The Federal Council decided at the Nuclear Security Summit in 2014 to close the Swiss plutonium storage facility. A bilateral agreement between the two countries meant the plutonium could then be transferred to the US for further storage. In July 2017, the three-dimensional alignment of the magnetization inside a magnetic object was investigated and visualized with the help of the SLS without affecting the material. The technology is expected to be useful in developing better magnets, for example for motors or data storage. Joël François Mesot, the long-standing director of PSI (2008 to 2018), was elected President of ETH Zurich at the end of 2018. His post was temporarily taken over by the physicist and PSI Chief of Staff Thierry Strässle from January 2019. Since 1 April 2020, the physicist Christian Rüegg has been Director of PSI. He was previously head of the PSI research division Neutrons and Muons. Numerous PSI spin-off companies have been founded over the years to make research findings available to wider society. The largest spin-off, with 120 employees, is DECTRIS AG, founded in 2006 in nearby Baden, which specializes in the development and marketing of X-ray detectors. SwissNeutronics AG in Klingnau, which sells optical components for neutron research facilities, was founded as early as 1999.
Several recent PSI offshoots, such as the metal-organic framework manufacturer novoMOF or the drug developer leadXpro, have settled close to PSI in the Park Innovaare, which was founded in 2015 with the support of several companies and the Canton of Aargau. Research Areas and Departments PSI develops, builds and operates several accelerator facilities, e.g. a 590 MeV high-current cyclotron, which in normal operation supplies a beam current of about 2.2 mA. PSI also operates four large-scale research facilities: a synchrotron light source (SLS), which is particularly brilliant and stable, a spallation neutron source (SINQ), a muon source (SμS) and an X-ray free-electron laser (SwissFEL). This makes PSI currently (2020) the only institute in the world to provide the most important probes for researching the structure and dynamics of condensed matter (neutrons, muons and synchrotron radiation) on one campus for the international user community. In addition, HIPA's target facilities produce the pions that feed the muon source, and the ultracold neutron source UCN produces very slow, ultracold neutrons. All these particle types are used for research in particle physics. Research at PSI is conducted with the help of these facilities. Its focus areas include: Matter and Material All the materials humans work with are made up of atoms. The interaction of atoms and their arrangement determine the properties of a material. Most of the researchers in the field of matter and materials at PSI want to find out more about how the internal structure of different materials relates to their observable properties. Fundamental research in this area contributes to the development of new materials with a wide range of applications, for example in electrical engineering, medicine, telecommunications, mobility, new energy storage systems, quantum computers and spintronics. The phenomena investigated include superconductivity, ferro- and antiferromagnetism, spin liquids and topological insulators. Neutrons are used intensively for materials research at PSI because they provide unique and non-destructive access to the interior of materials on a scale ranging from the size of atoms to objects a centimetre long. They therefore serve as ideal probes for investigating fundamental and applied research topics, such as quantum spin systems and their potential for application in future computer technologies, the functionality of complex lipid membranes and their use for the transport and targeted release of drug substances, and the structure of novel materials for energy storage as key components of intelligent energy networks. In particle physics, PSI researchers are investigating the structure and properties of the innermost layers of matter and what holds them together. Muons, pions and ultracold neutrons are used to test the Standard Model of elementary particles, to determine fundamental natural constants and to test theories that go beyond the Standard Model. Particle physics at PSI holds many records, including the most precise determination of the coupling constants of the weak interaction and the most accurate measurement of the charge radius of the proton. Some experiments aim to find effects that are not foreseen in the Standard Model but could correct inconsistencies in the theory or solve unexplained phenomena from astrophysics and cosmology. Their results so far agree with the Standard Model.
Examples include the upper limit, measured in the MEG experiment, on the hypothetical decay of positive muons into positrons and photons, as well as the upper limit on the permanent electric dipole moment of the neutron. Muons are useful not only in particle physics but also in solid-state physics and materials science. The muon spin spectroscopy method (μSR) is used to investigate the fundamental properties of magnetic and superconducting materials as well as of semiconductors, insulators and semiconductor structures, including technologically relevant applications such as solar cells. Energy and the Environment PSI researchers are addressing all aspects of energy use with the aim of making energy supplies more sustainable. Focus areas include: new technologies for renewable energies, low-loss energy storage, energy efficiency, low-pollution combustion, fuel cells, experimental and model-based assessment of energy and material cycles, environmental impacts of energy production and consumption, and nuclear energy research, in particular reactor safety and waste management. PSI operates the ESI (Energy System Integration) experimental platform to answer specific questions on seasonal energy storage and sector coupling. The platform can be used in research and industry to test promising approaches to integrating renewable energies into the energy system – for example, storing excess electricity from solar or wind power in the form of hydrogen or methane. A method for extracting significantly more methane gas from biowaste was developed at PSI and successfully tested on the ESI platform together with the Zurich power company Energie 360°. The team was awarded the Watt d'Or 2018 of the Swiss Federal Office of Energy. A platform for catalyst research is also maintained at PSI. Catalysis is a central component in various energy conversion processes, for example in fuel cells, water electrolysis and the methanation of carbon dioxide. To test the pollutant emissions of various energy production processes and the behaviour of the corresponding substances in the atmosphere, PSI also operates a smog chamber. Another area of research at PSI concerns the local effects of energy production on the atmosphere, including in the Alps, in the polar regions of the Earth and in China. The Nuclear Energy and Safety Division is dedicated to maintaining a good level of nuclear expertise and thus to training scientists and engineers in nuclear energy. For example, PSI maintains one of the few laboratories in Europe for investigating fuel rods from commercial reactors. The division works closely with ETH Zurich, EPFL and the University of Bern, using, for example, their high-performance computers or the CROCUS research reactor at EPFL. Human health PSI is one of the leading institutions worldwide in the research and application of proton therapy for the treatment of cancer. Since 1984, the Center for Proton Therapy has been successfully treating cancer patients with a special form of radiation therapy. To date, more than 7,500 patients with ocular tumours have been irradiated (as of 2020). The success rate of eye therapy using the OPTIS facility is over 98 percent. In 1996, an irradiation unit (Gantry 1) was equipped for the first time to use the spot-scanning proton technique developed at PSI. With this technique, tumours deep inside the body are scanned three-dimensionally with a proton beam about 5 to 7 mm in width.
By superimposing many individual proton spots – about 10,000 spots per litre of volume – the tumour is evenly exposed to the necessary radiation dose, which is monitored individually for each spot. This allows an extremely precise, homogeneous irradiation that is optimally adapted to the usually irregular shape of the tumour, sparing as much of the surrounding healthy tissue as possible. The first gantry was in operation for patients from 1996 to the end of 2018. In 2013 the second gantry, Gantry 2, developed at PSI, went into operation, and in mid-2018 another treatment station, Gantry 3, was opened. In the field of radiopharmacy, PSI's infrastructure covers the entire spectrum. In particular, PSI researchers are tackling very small tumours distributed throughout the body, which cannot be treated with the usual radiotherapy techniques. New medically applicable radionuclides have, however, been produced with the help of the proton accelerators and the neutron source SINQ at PSI. For therapy, these radioactive isotopes are attached to special biomolecules (antibodies) that selectively and specifically detect tumour cells, forming radiolabelled therapeutic molecules. The radiation can be localized with imaging techniques such as SPECT or PET, which enables the diagnosis of tumours and their metastases, and it can be dosed so that it also destroys the tumour cells. Several such radioactive substances have been developed at PSI. They are currently being tested in clinical trials, in close cooperation with universities, clinics and the pharmaceutical industry. PSI also supplies local hospitals with radiopharmaceuticals if required. Since the opening of the Swiss Light Source (SLS), structural biology has been a further focus of research in the field of human health. Here, the structure and function of biomolecules are investigated, preferably at atomic resolution. The PSI researchers are primarily concerned with proteins. Every living cell needs a myriad of these molecules in order, for example, to be able to metabolise, receive and transmit signals, or to divide. The aim is to understand these life processes better and thus to be able to treat or prevent diseases more effectively. For example, PSI is investigating the structure of microtubules, filamentous structures which, among other things, pull apart the chromosomes during cell division. They consist of long protein chains. When chemotherapy is used to treat cancer, it disturbs the assembly or breakdown of these chains so that the cancer cells can no longer divide. Researchers are closely observing the structure of these proteins and how they change in order to find out exactly where cancer drugs have to attack the microtubules. With the help of PSI's SwissFEL free-electron X-ray laser, which was inaugurated in 2016, researchers have been able to analyse dynamic processes in biomolecules with extremely high time resolution – less than a trillionth of a second (a picosecond). For example, they have detected how certain proteins in the photoreceptors of the retina of our eyes are activated by light. Accelerators and large research facilities at PSI Proton accelerator facility While PSI's proton accelerator, which went into service in 1974, was primarily used in the early days for elementary particle physics, today the focus is on applications for solid-state physics, radiopharmaceuticals and cancer therapy.
Since it started operating, it has been constantly developed further, and its performance today is as much as 2.4 mA, 24 times higher than the initial 100 μA. This is why the facility is now considered a high-performance proton accelerator, or HIPA (High Intensity Proton Accelerator) for short. Basically, it consists of three accelerators in series: the Cockcroft-Walton, the Injector-2 cyclotron, and the Ring Cyclotron. Together they accelerate the protons to around 80 percent of the speed of light. Proton source and Cockcroft-Walton In a proton source based on electron cyclotron resonance, microwaves are used to strip electrons from hydrogen atoms. What remains are the hydrogen nuclei, each consisting of only one proton. These protons leave the source with a potential of 60 kilovolts and are then subjected to a further voltage of 810 kilovolts in an accelerator tube. Both voltages are supplied by a Cockcroft-Walton accelerator. With a total of 870 kilovolts, the protons are accelerated to a speed of 46 million km/h, or 4 percent of the speed of light. The protons are then fed into Injector-2. Injector-1 With Injector-1, operating currents of 170 μA and peak currents of 200 μA could be reached. It was also used for low-energy experiments, for OPTIS eye therapy, and for the LiSoR experiment in the MEGAPIE project. Since December 1, 2010, this ring accelerator has been out of operation. Injector-2 The Injector-2, which was commissioned in 1984 and developed by what was then SIN, replaced Injector-1 as the injection machine for the 590 MeV Ring Cyclotron. Initially it was possible to operate Injector-1 and Injector-2 alternately, but now only Injector-2 is used to feed the proton beam into the ring. The new cyclotron enabled an increase in the beam current from 1 to 2 mA, an absolute record in the 1980s. Today, Injector-2 delivers a beam current of ≈ 2.2 mA in routine operation and 2.4 mA in high-current operation at 72 MeV, which corresponds to about 38 percent of the speed of light. Originally, two resonators were operated at 150 MHz in flat-top mode to enable a clear separation of the proton orbits, but these are now also used for acceleration. Part of the extracted 72 MeV proton beam can be split off for isotope production, while the main part is fed into the Ring Cyclotron for further acceleration. Ring Like Injector-1, the Ring Cyclotron, which has a circumference of about 48 m, went into operation in 1974. It was specially developed at SIN and is at the heart of the PSI proton accelerator facilities. The protons are accelerated to 80 percent of the speed of light along the approximately 4 km long path they cover inside the ring in 186 laps. This corresponds to a kinetic energy of 590 MeV. Only three such rings exist worldwide, namely at TRIUMF in Vancouver, Canada; at LAMPF in Los Alamos, USA; and at PSI. TRIUMF has reached beam currents of only 500 μA and LAMPF 1 mA. In addition to the four original cavities, a smaller fifth cavity was added in 1979. It is operated at 150 megahertz as a flat-top cavity and has enabled a significant increase in the number of extracted particles. Since 2008 all the old aluminium cavities of the Ring Cyclotron have been replaced with new copper cavities. These allow higher voltage amplitudes and thus a greater acceleration of the protons per revolution. The number of revolutions of the protons in the cyclotron could thus be reduced from approximately 200 to 186, and the distance travelled by the protons in the cyclotron decreased from 6 km to 4 km. With a beam current of 2.2 mA, this proton facility at PSI is currently the most powerful continuous particle accelerator in the world. The 1.3 MW proton beam is directed towards the muon source (SμS) and the spallation neutron source (SINQ).
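The beam velocities quoted for the three stages follow directly from relativistic kinematics. As a quick cross-check, the minimal sketch below (Python; the only physical input assumed is the proton rest energy of about 938.272 MeV) converts each stage's kinetic energy into a fraction of the speed of light.

```python
import math

PROTON_REST_ENERGY_MEV = 938.272  # proton rest mass energy, m*c^2

def beta_from_kinetic_energy(t_mev: float) -> float:
    """Speed (as a fraction of c) of a proton with kinetic energy t_mev.

    Uses gamma = 1 + T/(m c^2) and beta = sqrt(1 - 1/gamma^2).
    """
    gamma = 1.0 + t_mev / PROTON_REST_ENERGY_MEV
    return math.sqrt(1.0 - 1.0 / gamma**2)

# The three acceleration stages described in the text:
for label, t_mev in [("Cockcroft-Walton (0.87 MeV)", 0.87),
                     ("Injector-2 (72 MeV)", 72.0),
                     ("Ring Cyclotron (590 MeV)", 590.0)]:
    print(f"{label}: {beta_from_kinetic_energy(t_mev):.3f} c")
# Prints roughly 0.043 c, 0.371 c and 0.789 c, matching the
# "4 percent", "38 percent" and "80 percent" figures (to rounding).
# The quoted beam power is consistent too: 590 MeV * 2.2 mA = 1.3 MW.
```

The same formula also reproduces the "46 million km/h" figure for the Cockcroft-Walton stage: 0.043 times the speed of light is about 46 million km/h.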
Swiss Muon Source (SμS) In the middle of the large experimental hall, the proton beam of the Ring Cyclotron collides with two targets – rings of carbon. In the collisions of the protons with the carbon nuclei, pions are first formed, which then decay into muons after about 26 billionths of a second. Magnets then direct these muons to instruments used in materials science and particle physics. Thanks to the Ring Cyclotron's enormously high proton current, the muon source is able to generate the world's most intense muon beams. These enable researchers to conduct experiments in particle physics and materials science that cannot be carried out anywhere else. The Swiss Muon Source (SμS) has seven beamlines that scientists can use to investigate various aspects of modern physics. Some materials scientists use them for muon spin spectroscopy experiments. PSI is the only place in the world where a muon beam of sufficient intensity is available at a very low energy of only a few kiloelectronvolts – thanks to the muon source's high muon intensity and a special process. The resulting muons are slow enough to be used to analyse thin layers of material and surfaces. Six measuring stations (FLAME (from 2021), DOLLY, GPD, GPS, HAL-9500, and LEM) with instruments for a wide range of applications are available for such investigations. Particle physicists use some of the beamlines to perform high-precision measurements to test the limits of the Standard Model. Swiss Spallation Neutron Source (SINQ) The neutron source SINQ, which has been in operation since 1996, was the first of its kind and is still the strongest. It delivers a continuous neutron flux of 10¹⁴ n cm⁻² s⁻¹. In SINQ the protons from the large particle accelerator strike a lead target and knock neutrons out of the lead nuclei, making them available for experiments. In addition to thermal neutrons, a moderator made of liquid deuterium also enables the production of slow neutrons, which have a lower energy spectrum. The MEGAPIE target (Megawatt Pilot Experiment) came into operation in summer 2006. By replacing the solid target with a target made of a lead-bismuth eutectic, the neutron yield could be increased by about another 80%. Since it would be very costly to dispose of the MEGAPIE target, PSI decided in 2009 not to produce another such target and instead to develop the solid target further, as it had already proven its worth. Based on the findings from the MEGAPIE project, it was possible to obtain almost as large an increase in neutron yield for operation with a solid target. SINQ was one of the first facilities to use specially developed optical guide systems to transport slow neutrons. Metal-coated glass conduits guide neutrons over longer distances (a few tens of metres) by means of total reflection, analogous to the guidance of light in glass fibres, with a low loss of intensity. The efficiency of these neutron guides has steadily increased with advances in manufacturing technology. This is why PSI decided to carry out a comprehensive upgrade in 2019.
When SINQ goes back into operation in summer 2020, it will be able to provide, on average, five times more neutrons for experiments, and in a special case even 30 times more. SINQ's 15 instruments are not only used for PSI research projects but are also available to national and international users. Ultracold Neutron Source (UCN) Since 2011, PSI has also been operating a second spallation neutron source for the generation of ultracold neutrons (UCN). Unlike SINQ, it is pulsed and uses HIPA's full beam, but normally only for 8 seconds every 5 minutes. The design is similar to that of SINQ. In order to cool down the neutrons, however, it uses frozen deuterium at a temperature of 5 kelvin (corresponding to −268 degrees Celsius) as a cold moderator. The UCN generated can be stored in the facility and observed for a few minutes in experiments. COMET cyclotron This superconducting 250 MeV cyclotron has been in operation for proton therapy since 2007 and provides the beam for treating tumours in cancer patients. It was the first superconducting cyclotron worldwide to be used for proton therapy. Previously, part of the proton beam from the Ring Cyclotron was split off for this purpose, but since 2007 the medical facility has been producing its own proton beam independently, which supplies several irradiation stations for therapy. Other components of the facility, the peripheral equipment and the control systems have also been improved in the meantime, so that today the facility is available over 98 percent of the time, with more than 7,000 operating hours per year. Swiss Light Source (SLS) The Swiss Light Source (SLS), an electron synchrotron, has been in operation since 1 August 2001. It works like a kind of combined X-ray machine and microscope for screening a wide variety of substances. In the circular structure, the electrons move on a circular path 288 m in circumference, emitting synchrotron radiation in a tangential direction. A total of 350 magnets hold the electron beam on its course and focus it; acceleration cavities ensure that the beam's speed remains constant. Since 2008, the SLS has been the accelerator with the thinnest electron beam in the world; PSI researchers and technicians worked towards this for eight years, repeatedly adjusting each of the many magnets. The SLS offers a very broad spectrum of synchrotron radiation, from infrared light to hard X-rays. This enables researchers to take microscopic pictures of the interior of objects, materials and tissue in order, for example, to improve materials or develop drugs. In 2017, a new instrument at the SLS made it possible to look inside a computer chip for the first time without destroying it. Structures such as power lines 45 nanometres wide and transistors 34 nanometres high became visible. This technology makes it easier for chip manufacturers to check, for example, whether their products comply with specifications. Currently, under the working title "SLS 2.0", plans are being made to upgrade the SLS and thus create a fourth-generation synchrotron light source. SwissFEL The SwissFEL free-electron laser was officially opened on 5 December 2016 by Federal Councillor Johann Schneider-Ammann. In 2018, the first beamline, ARAMIS, came into operation. The second beamline, ATHOS, is scheduled to follow in autumn 2020. Worldwide, only four comparable facilities are in operation. Training Centre The PSI Education Centre has over 30 years of experience in training and providing further education in technical and interdisciplinary fields.
It trains over 3,000 participants annually. The centre offers a wide range of basic and advanced training courses, both for professionals and for others working with ionising radiation or radioactive materials. The courses, in which participants acquire the relevant expertise, are recognised by the Federal Office of Public Health (FOPH) and the Swiss Federal Nuclear Safety Inspectorate (ENSI). It also runs basic and advanced training courses for PSI's staff and for interested individuals from the ETH Domain. Since 2015, courses on human resources development (such as conflict management, leadership workshops, communication and transferable skills) have also been held. The quality of the PSI Education Centre is certified (ISO 29990:2010). Cooperation with industry PSI holds about 100 active patent families, for example in medicine, covering investigation techniques for proton therapy against cancer and for the detection of prions, the cause of mad cow disease. Other patent families are in the field of photoscience, with special lithography processes for structuring surfaces; in the environmental sciences, for recycling rare earths, for catalysts and for the gasification of biomass; in the materials sciences; and in other fields. PSI maintains its own technology transfer office for patents. Patents have, for example, been granted for detectors used in high-performance X-ray cameras developed for the Swiss Light Source (SLS), which can be used to investigate materials at the atomic level. These provided the basis for founding the company DECTRIS, the largest spin-off to date to emerge from PSI. In 2017, the Lausanne-based company Debiopharm licensed the active substance 177Lu-PSIG-2, which was developed at the Centre for Radiopharmaceutical Sciences at PSI. This substance is effective in treating a type of thyroid cancer. It is to be further developed under the name DEBIO 1124, with the aim of gaining approval and bringing it to market. Another PSI spin-off, GratXray, works with a method based on phase contrast in grating interferometry. The method was originally developed to characterize synchrotron radiation and is expected to become the gold standard in screening for breast cancer. The new technology has already been used in a prototype that PSI developed in collaboration with Philips. See also Science and technology in Switzerland Swiss Innovation Park Proton therapy References External links PSI Homepage Website of SLS Website of SINQ Website of SwissFEL Proton therapy program High-Intensity-Proton-Accelerators at PSI ETH Domain 1988 establishments in Switzerland Physics research institutes Neutron facilities Research institutes in Switzerland Particle physics facilities Accelerator physics Synchrotron radiation Institutes associated with CERN Research institutes established in 1988
Paul Scherrer Institute
[ "Physics" ]
6,287
[ "Accelerator physics", "Applied and interdisciplinary physics", "Experimental physics" ]
1,053,500
https://en.wikipedia.org/wiki/DNA%20extraction
The first isolation of deoxyribonucleic acid (DNA) was done in 1869 by Friedrich Miescher. DNA extraction is the process of isolating DNA from the cells of an organism in a sample, typically a biological sample such as blood, saliva, or tissue. It involves breaking open the cells, removing proteins and other contaminants, and purifying the DNA so that it is free of other cellular components. The purified DNA can then be used for downstream applications such as PCR, sequencing, or cloning. Currently, it is a routine procedure in molecular biology and forensic analyses. The process can be done in several ways, depending on the type of sample and the downstream application; the most common approaches combine mechanical, chemical, or enzymatic lysis with precipitation, purification, and concentration steps. The specific method used to extract the DNA may be, for example, phenol-chloroform extraction, alcohol precipitation, or silica-based purification. For the chemical method, many different kits are used for extraction, and selecting the correct one will save time on kit optimization and extraction procedures; PCR detection sensitivity is one measure by which commercial kits are seen to vary. There are many different methods for extracting DNA, but some common steps include: Lysis: This step involves breaking open the cells to release the DNA. For example, in the case of bacterial cells, a solution of detergent and salt (such as SDS) can be used to disrupt the cell membrane and release the DNA. For plant and animal cells, mechanical or enzymatic methods are often used. Precipitation: Once the DNA is released, proteins and other contaminants must be removed. This is typically done by adding a precipitating agent, such as an alcohol (ethanol or isopropanol) or a salt (such as ammonium acetate). The DNA forms a pellet at the bottom of the solution, while the contaminants remain in the liquid. Purification: After the DNA is precipitated, it is usually further purified using column-based methods. For example, silica-based spin columns can be used to bind the DNA while contaminants are washed away. Alternatively, a centrifugation step can be used to purify the DNA by spinning it down to the bottom of a tube. Concentration: Finally, the concentration of the DNA is usually increased by removing any remaining liquid. This is typically done using a vacuum centrifugation or lyophilization (freeze-drying) step. Some variations on these steps may be used depending on the specific DNA extraction protocol. Additionally, some commercially available kits include reagents and protocols tailored to a specific type of sample. What does it deliver? DNA extraction is frequently a preliminary step in many diagnostic procedures used to identify environmental viruses and bacteria and to diagnose illnesses and hereditary diseases. These methods consist of, but are not limited to: The fluorescence in situ hybridization (FISH) technique, developed in the 1980s. The basic idea is to use a nucleic acid probe to hybridize nuclear DNA from either interphase cells or metaphase chromosomes attached to a microscope slide. It is a molecular method used, among other things, to recognize and count particular bacterial groupings. To recognize, define, and quantify the geographical and temporal patterns in marine bacterioplankton communities, researchers employ a technique called terminal restriction fragment length polymorphism (T-RFLP).
Sequencing: Whole or partial genomes and other chromosomal components, intended for comparison with previously published sequences. Basic procedure Cells that are to be studied need to be collected. Breaking the cell membranes open exposes the DNA along with the cytoplasm within (cell lysis). Lipids from the cell membrane and the nucleus are broken down with detergents and surfactants. Breaking down proteins by adding a protease (optional). Breaking down RNA by adding an RNase (optional). The solution is treated with a concentrated salt solution (saline) to make debris such as broken proteins, lipids, and RNA clump together. Centrifugation of the solution, which separates the clumped cellular debris from the DNA. The DNA is then purified from the detergents, proteins, salts, and reagents used during the cell lysis step. The most commonly used procedures are: Ethanol precipitation, usually with ice-cold ethanol or isopropanol. Since DNA is insoluble in these alcohols, it will aggregate, giving a pellet upon centrifugation. Precipitation of DNA is improved by increasing ionic strength, usually by adding sodium acetate. Phenol–chloroform extraction, in which phenol denatures proteins in the sample. After centrifugation of the sample, denatured proteins stay in the organic phase, while the aqueous phase containing nucleic acid is mixed with chloroform to remove phenol residues from the solution. Minicolumn purification, which relies on the fact that nucleic acids may bind (adsorption) to the solid phase (silica or other) depending on the pH and the salt concentration of the buffer. Cellular and histone proteins bound to the DNA can be removed either by adding a protease, by precipitating the proteins with sodium or ammonium acetate, or by extracting them with a phenol–chloroform mixture before the DNA precipitation. After isolation, the DNA is dissolved in a slightly alkaline buffer, usually a TE buffer, or in ultra-pure water. Common chemicals The most common chemicals used for DNA extraction include: Detergents, such as SDS or Tween-20, which are used to break open cells and release the DNA. Protease enzymes, such as proteinase K, which are used to digest proteins that may be bound to the DNA. Phenol and chloroform, which are used to separate the DNA from other cellular components. Ethanol or isopropanol, which are used to precipitate the DNA. Salt, such as NaCl, which is often used to help dissolve the DNA and maintain its stability. EDTA, which is used to chelate the metal ions that can damage the DNA. Tris-HCl, which is used to maintain the pH at the optimal condition for DNA extraction. Method selection Some of the most common DNA extraction methods include organic extraction, Chelex extraction, and solid phase extraction. These methods consistently yield isolated DNA, but they differ in both the quality and the quantity of DNA yielded. When selecting a DNA extraction method, there are multiple factors to consider, including cost, time, safety, and risk of contamination. Organic extraction involves incubation in multiple different chemical solutions, including a lysis step, a phenol-chloroform extraction, an ethanol precipitation, and washing steps. Organic extraction is often used in laboratories because it is cheap and yields large quantities of pure DNA. Though it is straightforward, there are many steps involved, and it takes longer than other methods. 
It also involves the unfavorable use of the toxic chemicals phenol and chloroform, and there is an increased risk of contamination due to transferring the DNA between multiple tubes. Several protocols based on organic extraction of DNA were effectively developed decades ago, though improved and more practical versions of these protocols have also been developed and published in recent years. The Chelex extraction method involves adding Chelex resin to the sample, boiling the solution, then vortexing and centrifuging it. The cellular materials bind to the Chelex beads, while the DNA is available in the supernatant. The Chelex method is much faster and simpler than organic extraction, and it requires only one tube, which decreases the risk of DNA contamination. Unfortunately, Chelex extraction does not yield as much DNA, and the DNA yielded is single-stranded, which means it can only be used for PCR-based analyses and not for RFLP. Solid phase extraction, such as a spin-column-based extraction method, takes advantage of the fact that DNA binds to silica. The sample containing DNA is added to a column containing a silica gel or silica beads and chaotropic salts. The chaotropic salts disrupt the hydrogen bonding between strands and facilitate the binding of the DNA to silica by causing the nucleic acids to become hydrophobic. This exposes the phosphate residues so they are available for adsorption. The DNA binds to the silica, while the rest of the solution is washed out using ethanol to remove chaotropic salts and other unnecessary constituents. The DNA can then be rehydrated with aqueous low-salt solutions, allowing elution of the DNA from the beads. This method yields high-quality, largely double-stranded DNA which can be used for both PCR and RFLP analysis. The procedure can be automated and has a high throughput, although lower than the phenol-chloroform method. It is a one-step method, i.e. the entire procedure is completed in one tube, which lowers the risk of contamination and makes it very useful for the forensic extraction of DNA. Multiple solid-phase extraction commercial kits are manufactured and marketed by different companies; the main drawback is that they are more expensive than organic extraction or Chelex extraction. Special types Specific techniques must be chosen for the isolation of DNA from some samples. Typical samples with complicated DNA isolation are: archaeological samples containing partially degraded DNA (see ancient DNA); samples containing inhibitors of subsequent analysis procedures, most notably inhibitors of PCR, such as humic acid from soil, indigo and other fabric dyes, or haemoglobin in blood; samples from microorganisms with thick cellular walls, for example yeast; and samples containing mixed DNA from multiple sources. Extrachromosomal DNA is generally easy to isolate; plasmids in particular may be isolated by cell lysis followed by precipitation of proteins, which traps chromosomal DNA in the insoluble fraction; after centrifugation, plasmid DNA can be purified from the soluble fraction. A Hirt DNA extraction is an isolation of all extrachromosomal DNA in a mammalian cell. The Hirt extraction process gets rid of the high-molecular-weight nuclear DNA, leaving only low-molecular-weight mitochondrial DNA and any viral episomes present in the cell. Detection of DNA A diphenylamine (DPA) indicator will confirm the presence of DNA. This procedure involves chemical hydrolysis of DNA: when heated (e.g. 
≥95 °C) in acid, DNA is hydrolysed; the reaction requires a deoxyribose sugar and is therefore specific for DNA. Under these conditions, the 2-deoxyribose is converted to ω-hydroxylevulinyl aldehyde, which reacts with diphenylamine to produce a blue-colored compound. DNA concentration can be determined by measuring the intensity of absorbance of the solution at 600 nm with a spectrophotometer and comparing to a standard curve of known DNA concentrations. Measuring the intensity of absorbance of the DNA solution at wavelengths of 260 nm and 280 nm is used as a measure of DNA purity. DNA can be quantified by cutting the DNA with a restriction enzyme, running it on an agarose gel, staining with ethidium bromide (EtBr) or a different stain, and comparing the intensity of the DNA with a DNA marker of known concentration. Using the Southern blot technique, this quantified DNA can be isolated and examined further using PCR and RFLP analysis. These procedures allow differentiation of the repeated sequences within the genome. It is these techniques which forensic scientists use for comparison, identification, and analysis. High-molecular-weight DNA extraction method In this method, plant nuclei are isolated by physically grinding tissues and reconstituting the intact nuclei in a unique Nuclear Isolation Buffer (NIB). The plastid DNAs are released from organelles and eliminated with an osmotic buffer by washing and centrifugation. The purified nuclei are then lysed and further cleaned by organic extraction, and the genomic DNA is precipitated with a high concentration of CTAB. The highly pure, high-molecular-weight gDNA is extracted from the nuclei and dissolved in a high-pH buffer, allowing stable long-term storage. DNA storage DNA storage is an important aspect of DNA extraction projects, as it ensures the integrity and stability of the extracted DNA for downstream applications. One common method of DNA storage is ethanol precipitation, which involves adding ethanol and a salt, such as sodium chloride or potassium acetate, to the extracted DNA to precipitate it out of solution. The DNA is then pelleted by centrifugation and washed with 70% ethanol to remove any remaining contaminants. The DNA pellet is then air-dried and resuspended in a buffer, such as Tris-EDTA (TE) buffer, for storage. Another method is freezing the DNA in a buffer such as TE buffer, or in a cryoprotectant such as glycerol or DMSO, at −20 or −80 °C. This method preserves the integrity of the DNA and slows down the activity of any enzymes that may degrade it. The choice of storage buffer and conditions depends on the downstream application for which the DNA is intended. For example, if the DNA is to be used for PCR, it may be stored in TE buffer at 4 °C, while if it is to be used for long-term storage or shipping, it may be stored in ethanol at −20 °C. The extracted DNA should be regularly checked for quality and integrity, for example by gel electrophoresis or spectrophotometry (a worked example of the spectrophotometric check is sketched below). The storage conditions, such as temperature and humidity, should also be noted and controlled. It is also important to consider the long-term stability of the DNA and the potential for degradation over time. The extracted DNA should be stored for as short a time as possible, and the conditions for storage should be chosen to minimize the risk of degradation. 
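As an illustration of the spectrophotometric check mentioned above: a widely used rule of thumb is that an absorbance of 1.0 at 260 nm corresponds to roughly 50 µg/mL of double-stranded DNA for a 1 cm path length, and an A260/A280 ratio near 1.8 indicates DNA largely free of protein or phenol carryover. The following is a minimal sketch of that calculation; the function name and the example readings are illustrative, not taken from any specific protocol or kit.

```python
def dsdna_from_absorbance(a260, a280, dilution_factor=1.0):
    """Estimate dsDNA concentration and purity from UV absorbance readings.

    Assumes a 1 cm path length and the standard conversion that
    A260 = 1.0 corresponds to ~50 ug/mL of double-stranded DNA.
    """
    concentration = a260 * 50.0 * dilution_factor   # ug/mL
    purity = a260 / a280   # ~1.8 for clean DNA; lower suggests protein or phenol
    return concentration, purity

# Example: a 1:10 dilution reading A260 = 0.35 and A280 = 0.19
conc, ratio = dsdna_from_absorbance(0.35, 0.19, dilution_factor=10)
print(f"~{conc:.0f} ug/mL, A260/A280 = {ratio:.2f}")   # ~175 ug/mL, 1.84
```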
In general, the extracted DNA should be stored under the best possible conditions to ensure its stability and integrity for downstream applications. Quality control There are several quality control techniques used to ensure the quality of extracted DNA, including: Spectrophotometry: This is a widely used method for measuring the concentration and purity of a DNA sample. Spectrophotometry measures the absorbance of a sample at different wavelengths, typically at 260 nm and 280 nm. The ratio of absorbance at 260 nm to that at 280 nm is used to determine the purity of the DNA sample. Gel electrophoresis: This technique is used to visualize and compare the size and integrity of DNA samples. The DNA is loaded onto an agarose gel and then subjected to an electric field, which causes the DNA to migrate through the gel. The migration of the DNA can be visualized using ethidium bromide, which intercalates into the DNA and fluoresces under UV light. Fluorometry: Fluorometry is a method to determine the concentration of nucleic acids by measuring the fluorescence of the sample when excited by a specific wavelength of light. Fluorometry uses dyes that specifically bind to nucleic acids and have a high fluorescence intensity. PCR: The polymerase chain reaction (PCR) is a technique that amplifies a specific region of DNA. It is also used as a quality control method: a small fragment of the DNA is amplified, and successful amplification indicates that the extracted DNA is of good quality and not degraded. Qubit Fluorometer: The Qubit Fluorometer is an instrument that uses fluorescent dyes to measure the concentration of DNA and RNA in a sample. It is a quick and sensitive method that can be used to determine the concentration of DNA samples. Bioanalyzer: The bioanalyzer is an instrument that uses electrophoresis to separate and analyze DNA, RNA, and protein samples. It can provide detailed information about the size, integrity, and purity of a DNA sample. See also Boom method DNA fingerprinting DNA sequencing DNA structure Ethanol precipitation Plasmid preparation Polymerase chain reaction SCODA DNA purification References Further reading Li, Richard (2015). Forensic Biology. Boca Raton: CRC Press, Taylor & Francis Group. Green, Michael R.; Sambrook, Joseph (2012). Molecular Cloning: A Laboratory Manual (4th ed.). Cold Spring Harbor, N.Y.: Cold Spring Harbor Laboratory Press. External links How to extract DNA from anything living DNA Extraction Virtual Lab Biochemical separation processes Genetics techniques Molecular biology Laboratory techniques DNA Polymerase chain reaction Forensic genetics
DNA extraction
[ "Chemistry", "Engineering", "Biology" ]
3,499
[ "Biochemistry methods", "Genetics techniques", "Separation processes", "Polymerase chain reaction", "Genetic engineering", "Biochemical separation processes", "nan", "Molecular biology", "Biochemistry" ]
1,053,858
https://en.wikipedia.org/wiki/Functional%20genomics
Functional genomics is a field of molecular biology that attempts to describe gene (and protein) functions and interactions. Functional genomics makes use of the vast data generated by genomic and transcriptomic projects (such as genome sequencing projects and RNA sequencing). Functional genomics focuses on the dynamic aspects such as gene transcription, translation, regulation of gene expression and protein–protein interactions, as opposed to the static aspects of the genomic information such as DNA sequence or structures. A key characteristic of functional genomics studies is their genome-wide approach to these questions, generally involving high-throughput methods rather than a more traditional "candidate-gene" approach. Definition and goals In order to understand functional genomics it is important to first define function. In their paper, Graur et al. define function in two possible ways: "selected effect" and "causal role". The "selected effect" function refers to the function for which a trait (DNA, RNA, protein, etc.) was selected. The "causal role" function refers to the function that a trait is sufficient and necessary for. Functional genomics usually tests the "causal role" definition of function. The goal of functional genomics is to understand the function of genes or proteins, eventually all components of a genome. The term functional genomics is often used to refer to the many technical approaches to study an organism's genes and proteins, including the "biochemical, cellular, and/or physiological properties of each and every gene product", while some authors include the study of nongenic elements in their definition. Functional genomics may also include studies of natural genetic variation over time (such as an organism's development) or space (such as its body regions), as well as functional disruptions such as mutations. The promise of functional genomics is to generate and synthesize genomic and proteomic knowledge into an understanding of the dynamic properties of an organism. This could potentially provide a more complete picture of how the genome specifies function compared to studies of single genes. Integration of functional genomics data is often a part of systems biology approaches. Techniques and applications Functional genomics includes function-related aspects of the genome itself such as mutation and polymorphism (such as single nucleotide polymorphism (SNP) analysis), as well as the measurement of molecular activities. The latter comprise a number of "-omics" such as transcriptomics (gene expression), proteomics (protein production), and metabolomics. Functional genomics uses mostly multiplex techniques to measure the abundance of many or all gene products such as mRNAs or proteins within a biological sample. A more focused functional genomics approach might test the function of all variants of one gene and quantify the effects of mutants by using sequencing as a readout of activity. Together these measurement modalities endeavor to quantitate the various biological processes and improve our understanding of gene and protein functions and interactions. At the DNA level Genetic interaction mapping Systematic pairwise deletion of genes or inhibition of gene expression can be used to identify genes with related function, even if they do not interact physically. 
Epistasis refers to the fact that effects for two different gene knockouts may not be additive; that is, the phenotype that results when two genes are inhibited may be different from the sum of the effects of single knockouts. DNA/Protein interactions Proteins formed by the translation of mRNA (messenger RNA, which carries coded information from DNA for protein synthesis) play a major role in regulating gene expression. To understand how they regulate gene expression it is necessary to identify the DNA sequences that they interact with. Techniques have been developed to identify sites of DNA-protein interactions. These include ChIP-sequencing, CUT&RUN sequencing and Calling Cards. DNA accessibility assays Assays have been developed to identify regions of the genome that are accessible. These regions of accessible chromatin are candidate regulatory regions. These assays include ATAC-seq, DNase-Seq and FAIRE-Seq. At the RNA level Microarrays Microarrays measure the amount of mRNA in a sample that corresponds to a given gene or probe DNA sequence. Probe sequences are immobilized on a solid surface and allowed to hybridize with fluorescently labeled "target" mRNA. The intensity of fluorescence of a spot is proportional to the amount of target sequence that has hybridized to that spot and therefore to the abundance of that mRNA sequence in the sample. Microarrays allow for the identification of candidate genes involved in a given process based on variation between transcript levels for different conditions and shared expression patterns with genes of known function. SAGE Serial analysis of gene expression (SAGE) is an alternate method of analysis based on RNA sequencing rather than hybridization. SAGE relies on the sequencing of 10–17 base pair tags which are unique to each gene. These tags are produced from poly-A mRNA and ligated end-to-end before sequencing. SAGE gives an unbiased measurement of the number of transcripts per cell, since it does not depend on prior knowledge of what transcripts to study (as microarrays do). RNA sequencing RNA sequencing has taken over from microarray and SAGE technology in recent years, as noted in 2016, and has become the most efficient way to study transcription and gene expression. This is typically done by next-generation sequencing. A subset of sequenced RNAs are small RNAs, a class of non-coding RNA molecules that are key regulators of transcriptional and post-transcriptional gene silencing, or RNA silencing. Next-generation sequencing is the gold standard tool for non-coding RNA discovery, profiling and expression analysis. Massively Parallel Reporter Assays (MPRAs) Massively parallel reporter assays (MPRAs) are a technology to test the cis-regulatory activity of DNA sequences. MPRAs use a plasmid with a synthetic cis-regulatory element upstream of a promoter driving a synthetic gene such as Green Fluorescent Protein. A library of cis-regulatory elements is usually tested using MPRAs; a library can contain from hundreds to thousands of cis-regulatory elements. The cis-regulatory activity of the elements is assayed by using the downstream reporter activity. The activity of all the library members is assayed in parallel using barcodes for each cis-regulatory element. One limitation of MPRAs is that the activity is assayed on a plasmid and may not capture all aspects of gene regulation observed in the genome. STARR-seq STARR-seq is a technique similar to MPRAs to assay enhancer activity of randomly sheared genomic fragments. 
In the original publication, randomly sheared fragments of the Drosophila genome were placed downstream of a minimal promoter. Candidate enhancers amongst the randomly sheared fragments will transcribe themselves using the minimal promoter. By using sequencing as a readout and controlling for input amounts of each sequence, the strength of putative enhancers is assayed by this method. Perturb-seq Perturb-seq couples CRISPR-mediated gene knockdowns with single-cell gene expression. Linear models are used to calculate the effect of the knockdown of a single gene on the expression of multiple genes. At the protein level Yeast two-hybrid system A yeast two-hybrid screening (Y2H) tests a "bait" protein against many potential interacting proteins ("prey") to identify physical protein–protein interactions. This system is based on a transcription factor, originally GAL4, whose separate DNA-binding and transcription activation domains are both required in order for the protein to cause transcription of a reporter gene. In a Y2H screen, the "bait" protein is fused to the binding domain of GAL4, and a library of potential "prey" (interacting) proteins is recombinantly expressed in a vector with the activation domain. In vivo interaction of bait and prey proteins in a yeast cell brings the activation and binding domains of GAL4 close enough together to result in expression of a reporter gene. It is also possible to systematically test a library of bait proteins against a library of prey proteins to identify all possible interactions in a cell. MS and AP/MS Mass spectrometry (MS) can identify proteins and their relative levels, hence it can be used to study protein expression. When used in combination with affinity purification, mass spectrometry (AP/MS) can be used to study protein complexes, that is, which proteins interact with one another in complexes and in which ratios. In order to purify protein complexes, usually a "bait" protein is tagged with a specific protein or peptide that can be used to pull out the complex from a complex mix. The purification is usually done using an antibody or a compound that binds to the fusion part. The proteins are then digested into short peptide fragments and mass spectrometry is used to identify the proteins based on the mass-to-charge ratios of those fragments. Deep mutational scanning In deep mutational scanning, every possible amino acid change in a given protein is first synthesized. The activity of each of these protein variants is assayed in parallel using barcodes for each variant. By comparing the activity to the wild-type protein, the effect of each mutation is identified. While it is possible to assay every possible single amino-acid change, due to combinatorics two or more concurrent mutations are hard to test. Deep mutational scanning experiments have also been used to infer protein structure and protein-protein interactions. Deep mutational scanning is an example of a multiplexed assay of variant effect (MAVE), a family of methods that involve mutagenesis of a DNA-encoded protein or regulatory element followed by a multiplexed assay for some aspect of function. MAVEs enable the generation of 'variant effect maps' characterizing aspects of the function of every possible single nucleotide change in a gene or functional element of interest. Mutagenesis and phenotyping An important functional feature of genes is the phenotype caused by mutations. 
Mutants can be produced by random mutations or by directed mutagenesis, including site-directed mutagenesis, deleting complete genes, or other techniques. Knock-outs (gene deletions) Gene function can be investigated by systematically "knocking out" genes one by one. This is done by either deletion or disruption of function (such as by insertional mutagenesis), and the resulting organisms are screened for phenotypes that provide clues to the function of the disrupted gene. Knock-outs have been produced for whole genomes, i.e. by deleting all genes in a genome. For essential genes, this is not possible, so other techniques are used, e.g. deleting a gene while expressing the gene from a plasmid, using an inducible promoter, so that the level of gene product can be changed at will (and thus a "functional" deletion achieved). Site-directed mutagenesis Site-directed mutagenesis is used to mutate specific bases (and thus amino acids). This is critical to investigate the function of specific amino acids in a protein, e.g. in the active site of an enzyme. RNAi RNA interference (RNAi) methods can be used to transiently silence or knock down gene expression using ~20 base-pair double-stranded RNA, typically delivered by transfection of synthetic ~20-mer short-interfering RNA molecules (siRNAs) or by virally encoded short-hairpin RNAs (shRNAs). RNAi screens, typically performed in cell culture-based assays or experimental organisms (such as C. elegans), can be used to systematically disrupt nearly every gene in a genome or subsets of genes (sub-genomes); possible functions of disrupted genes can be assigned based on observed phenotypes. CRISPR screens CRISPR-Cas9 has been used to delete genes in a multiplexed manner in cell lines. Quantifying the amount of guide-RNAs for each gene before and after the experiment can point towards essential genes. If a guide-RNA disrupts an essential gene it will lead to the loss of that cell and hence there will be a depletion of that particular guide-RNA after the screen. In a recent CRISPR-Cas9 experiment in mammalian cell lines, around 2000 genes were found to be essential in multiple cell lines. Some of these genes were essential in only one cell line. Most of these genes are part of multi-protein complexes. This approach can be used to identify synthetic lethality by using the appropriate genetic background. CRISPRi and CRISPRa enable loss-of-function and gain-of-function screens in a similar manner. CRISPRi identified ~2100 essential genes in the K562 cell line. CRISPR deletion screens have also been used to identify potential regulatory elements of a gene. For example, a technique called ScanDel was published which attempted this approach. The authors deleted regions outside a gene of interest (HPRT1, involved in a Mendelian disorder) in an attempt to identify regulatory elements of this gene. Gasperini et al. did not identify any distal regulatory elements for HPRT1 using this approach; however, such approaches can be extended to other genes of interest. Functional annotations for genes Genome annotation Putative genes can be identified by scanning a genome for regions likely to encode proteins, based on characteristics such as long open reading frames, transcriptional initiation sequences, and polyadenylation sites. 
A sequence identified as a putative gene must be confirmed by further evidence, such as similarity to cDNA or EST sequences from the same organism, similarity of the predicted protein sequence to known proteins, association with promoter sequences, or evidence that mutating the sequence produces an observable phenotype. Rosetta stone approach The Rosetta stone approach is a computational method for de-novo protein function prediction. It is based on the hypothesis that some proteins involved in a given physiological process may exist as two separate genes in one organism and as a single gene in another. Genomes are scanned for sequences that are independent in one organism and in a single open reading frame in another. If two genes have fused, it is predicted that they have similar biological functions that make such co-regulation advantageous. Bioinformatics methods for functional genomics Because of the large quantity of data produced by these techniques and the desire to find biologically meaningful patterns, bioinformatics is crucial to analysis of functional genomics data. Examples of techniques in this class are data clustering or principal component analysis for unsupervised machine learning (class detection), as well as artificial neural networks or support vector machines for supervised machine learning (class prediction, classification). Functional enrichment analysis is used to determine the extent of over- or under-expression (positive or negative regulators in the case of RNAi screens) of functional categories relative to a background set. Gene Ontology-based enrichment analysis is provided by DAVID and gene set enrichment analysis (GSEA), pathway-based analysis by Ingenuity and Pathway Studio, and protein complex-based analysis by COMPLEAT; a minimal sketch of the statistical idea underlying such tests is shown below. New computational methods have been developed for understanding the results of a deep mutational scanning experiment. 'phydms' compares the result of a deep mutational scanning experiment to a phylogenetic tree. This allows the user to infer if the selection process in nature applies similar constraints on a protein as the results of the deep mutational scan indicate. This may allow an experimenter to choose between different experimental conditions based on how well they reflect nature. Deep mutational scanning has also been used to infer protein-protein interactions. The authors of one such study used a thermodynamic model to predict the effects of mutations in different parts of a dimer. Deep mutational scanning data can also be used to infer protein structure. Strong positive epistasis between two mutations in a deep mutational scan can be indicative of two parts of the protein that are close to each other in 3-D space. This information can then be used to infer protein structure. A proof of principle of this approach was shown by two groups using the protein GB1. Results from MPRA experiments have required machine learning approaches to interpret the data. A gapped k-mer SVM model has been used to infer the k-mers that are enriched within cis-regulatory sequences with high activity compared to sequences with lower activity. These models provide high predictive power. Deep learning and random forest approaches have also been used to interpret the results of these high-dimensional experiments. These models are beginning to help develop a better understanding of non-coding DNA function towards gene regulation. 
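As referenced above, the core of many functional enrichment analyses is an over-representation test: given a list of hits from a screen, is a functional category (for example, a Gene Ontology term) represented more often than expected by chance? Tools such as DAVID and GSEA use related but more elaborate statistics; a minimal version of the idea is a one-sided hypergeometric test. The function name and the example numbers below are illustrative only.

```python
from scipy.stats import hypergeom

def enrichment_pvalue(total_genes, category_genes, hits, category_hits):
    """One-sided hypergeometric test for over-representation of a category.

    Returns P(X >= category_hits) when drawing `hits` genes at random from
    `total_genes`, of which `category_genes` carry the annotation.
    """
    return hypergeom.sf(category_hits - 1, total_genes, category_genes, hits)

# Example: 40 of 300 screen hits fall in a category covering 500 of 20,000 genes
# (chance expectation is 300 * 500 / 20000 = 7.5 hits).
p = enrichment_pvalue(20000, 500, 300, 40)
print(f"enrichment p-value: {p:.2e}")
```

In practice a correction for multiple testing is applied on top of such raw p-values, since many categories are tested at once.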
Consortium projects The ENCODE project The ENCODE (Encyclopedia of DNA Elements) project is an in-depth analysis of the human genome whose goal is to identify all the functional elements of genomic DNA, in both coding and non-coding regions. Important results include evidence from genomic tiling arrays that most nucleotides are transcribed as coding transcripts, non-coding RNAs, or random transcripts; the discovery of additional transcriptional regulatory sites; and further elucidation of chromatin-modifying mechanisms. The Genotype-Tissue Expression (GTEx) project The GTEx project is a human genetics project aimed at understanding the role of genetic variation in shaping variation in the transcriptome across tissues. The project has collected a variety of tissue samples (>50 different tissues) from more than 700 post-mortem donors, resulting in the collection of >11,000 samples. GTEx has helped in understanding the tissue-sharing and tissue-specificity of eQTLs. The genomic resource was developed to "enrich our understanding of how differences in our DNA sequence contribute to health and disease." The Atlas of Variant Effects Alliance The Atlas of Variant Effects Alliance (AVE), founded in 2020, is an international consortium aiming to catalog the impact of all possible genetic variants for disease-related functional genomics by creating variant effect maps that reveal the function of every possible single nucleotide change in a gene or regulatory element. AVE is funded in part through the Brotman Baty Institute at the University of Washington and the National Human Genome Research Institute, via funding from the Center of Excellence in Genome Science grant (NHGRI RM1HG010461). See also Systems biology Structural genomics Comparative genomics Pharmacogenomics MGED Society Epigenetics Bioinformatics Epistasis and functional genomics Synthetic viability Protein function prediction References External links European Science Foundation Programme on Frontiers of Functional Genomics MUGEN NoE — Integrated Functional Genomics in Mutant Mouse Models Nature insights: functional genomics ENCODE Molecular biology Genomics
Functional genomics
[ "Chemistry", "Biology" ]
3,832
[ "Biochemistry", "Molecular biology" ]
1,054,394
https://en.wikipedia.org/wiki/Autothysis
Autothysis (from the Greek roots autos- "self" and thysia "sacrifice"), or suicidal altruism, is the process in which an animal destroys itself via an internal rupturing or explosion of an organ which ruptures the skin. The term was proposed by Ulrich Maschwitz and Eleonore Maschwitz in 1974 to describe the defensive mechanism of Colobopsis saundersi, a species of ant. It is caused by a contraction of muscles around a large gland that leads to the breaking of the gland wall. Some termites (such as the soldiers of Globitermes sulphureus) release a sticky secretion by rupturing a gland near the skin of their neck, producing a tar-like effect in defense against ants. Termites Groups of termites whose soldiers have been found to use autothysis to defend their colonies include: Serritermes serrifer, Dentispicotermes, Genuotermes, and Orthognathotermes. Several species of the soldierless Apicotermitinae, for example those of the Grigiotermes and Ruptitermes genera, have workers that can also use autothysis. This is thought to be one of the most effective forms of defense that termites possess, as the ruptured workers block the tunnels running into the nest and it causes a one-to-one exchange between attackers and defenders, meaning attacks have a high energy cost to predators. The soldiers of the Neotropical termite family Serritermitidae have a defense strategy which involves front gland autothysis, with the body rupturing between the head and abdomen. When outside the nest they try to run away from attackers, and only use autothysis when in the nest, to block tunnels and prevent attackers entering. Old workers of Neocapritermes taracua develop blue spots on their abdomens that are filled with copper-containing proteins (blue laccase). These react with a secretion from the labial gland upon autothysis to form a mixture which is toxic to other termites. Ants Some ants belonging to the genera Camponotus and Colobopsis have adapted autothysis as an altruistic defensive trait to better fight against arthropods, and possibly to deter vertebrate predators, for the benefit of the colony as a whole. These ants use autothysis as a self-destructive defense to protect their territory, but they use it differently from termites: their primary use is not to block the tunnels of their territory against attackers, but rather combat during territorial battles. Early ants used mechanical means of stinging to defend themselves, but stings proved to be more useful against large vertebrate predators and less successful against other arthropods, so selection favored autothysis in ants as a way to more effectively kill arthropod enemies. The products of autothysis in ants are sticky and corrosive substances, released by the ants' contraction of their gasters, leading to a burst at an intersegmental fold as well as at the mandibular glands. The ants use this self-sacrifice to kill one or more enemies which entangle themselves in this sticky substance. The worker ant has been observed to wrap itself around an opponent, placing its dorsal gaster onto the opponent's head before expelling sticky corrosive material from its mouth and gaster, permanently sticking to the opponent while killing itself and the enemy, as well as any other enemies that become stuck to the products. These ants mostly use autothysis against other arthropods, such as invading ant colonies or termite colonies, and it is rather ineffective towards larger vertebrate predators such as lizards or birds. 
This self-sacrifice is most useful against arthropods because the sticky adhesives in the products work best against the bodies of other arthropods. The compounds used in autothysis have, however, also been suggested to deter vertebrate predators from eating the ants, because the products are inedible. See also Animal suicide Anti-predator adaptation Apoptosis Autohaemorrhaging Exploding animal Self-destruct References Antipredator adaptations Exploding animals Insect ecology
Autothysis
[ "Chemistry", "Biology" ]
877
[ "Antipredator adaptations", "Biological defense mechanisms", "Exploding animals", "Explosions" ]
1,055,001
https://en.wikipedia.org/wiki/Contra-rotating%20propellers
Aircraft equipped with contra-rotating propellers (CRP), also called coaxial contra-rotating propellers or high-speed propellers, apply the maximum power of usually a single piston engine or turboprop engine to drive a pair of coaxial propellers in contra-rotation. Two propellers are arranged one behind the other, and power is transferred from the engine via a planetary gear or spur gear transmission. Although contra-rotating propellers are also known as counter-rotating propellers, the latter term is much more widely used when referring to airscrews on separate non-coaxial shafts turning in opposite directions. Operation When airspeed is low, the mass of the air flowing through the propeller disk (thrust) causes a significant amount of tangential or rotational air flow to be created by the spinning blades. The energy of this tangential air flow is wasted in a single-propeller design, and causes handling problems at low speed as the air strikes the vertical stabilizer, causing the aircraft to yaw left or right, depending on the direction of propeller rotation. To recover this wasted effort, a second propeller is placed behind the first to take advantage of the disturbed airflow. A well-designed contra-rotating propeller will have no rotational air flow, pushing a maximum amount of air uniformly through the propeller disk, resulting in high performance and low induced energy loss. It also serves to counter the asymmetrical torque effect of a conventional propeller (see P-factor). Some contra-rotating systems were designed to be used at takeoff for maximum power and efficiency under such conditions, while allowing one of the propellers to be disabled during cruise to extend flight time. Advantages and disadvantages The torque on the aircraft from a pair of contra-rotating propellers effectively cancels out. Contra-rotating propellers have been found to be between 6% and 16% more efficient than normal propellers. However, they can be very noisy, with increases in noise in the axial (forward and aft) direction of up to 30 dB, and 10 dB tangentially. Most of this extra noise can be found in the higher frequencies. These substantial noise problems limit commercial applications. One possibility is to enclose the contra-rotating propellers in a shroud. It is also helpful if the tip speed or the loading of the blades is reduced, if the aft propeller has fewer blades or a smaller diameter than the fore propeller, or if the spacing between the aft and fore propellers is increased. The efficiency of a contra-rotating propeller is somewhat offset by its mechanical complexity and the added weight of its gearing, which makes the aircraft heavier, so some performance is sacrificed to carry it. Nonetheless, coaxial contra-rotating propellers and rotors have been used in several military aircraft, such as the Tupolev Tu-95 "Bear". They are also being examined for use in airliners. Use in aircraft While several nations experimented with contra-rotating propellers in aircraft, only the United Kingdom and Soviet Union produced them in large numbers. The first flight of an aircraft fitted with a contra-rotating propeller took place in the US, when two inventors from Fort Worth, Texas, tested the concept on an aircraft. United Kingdom A contra-rotating propeller was patented by F. W. Lanchester in 1907. Some of the more successful British aircraft with contra-rotating propellers are the Avro Shackleton, powered by the Rolls-Royce Griffon engine, and the Fairey Gannet, which used the Double Mamba Mk.101 engine. 
In the Double Mamba two separate power sections drove one propeller each, allowing one power section (engine) to be shut down in flight, increasing endurance. Another naval aircraft, the Westland Wyvern, had contra-rotating propellers. The Martin-Baker MB 5 test aircraft also used this propeller type. Later variants of the Supermarine Spitfire and Seafire used the Griffon with contra-rotating props. In the Spitfire/Seafire and Shackleton's case, the primary reason for using contra-rotating propellers was to increase the propeller blade area, and hence absorb greater engine power, within a propeller diameter limited by the height of the aircraft's undercarriage. The Short Sturgeon used two Merlin 140s with contra-rotating propellers. The Bristol Brabazon prototype airliner used eight Bristol Centaurus engines driving four pairs of contra-rotating propellers, each engine driving a single propeller. The post-war SARO Princess prototype flying boat airliner also had eight of its ten engines driving contra-rotating propellers. USSR, Russia and Ukraine In the 1950s, the Soviet Union's Kuznetsov Design Bureau developed the NK-12 turboprop. It drives an eight-blade contra-rotating propeller and, at , it is the most powerful turboprop in service. Four NK-12 engines power the Tupolev Tu-95 Bear, the only turboprop bomber to enter service, as well as one of the fastest propeller-driven aircraft. The Tu-114, an airliner derivative of the Tu-95, holds the world speed record for propeller aircraft. The Tu-95 was also the first Soviet bomber to have intercontinental range. The Tu-126 AEW aircraft and Tu-142 maritime patrol aircraft are two more NK-12 powered designs derived from the Tu-95. The NK-12 engine powers another well-known Soviet aircraft, the Antonov An-22 Antheus, a heavy-lift cargo aircraft. At the time of its introduction, the An-22 was the largest aircraft in the world and is still by far the world's largest turboprop-powered aircraft. From the 1960s through the 1970s, it set several world records in the categories of maximum payload-to-height ratio and maximum payload lifted to altitude. Of lesser note is the use of the NK-12 engine in the A-90 Orlyonok, a mid-size Soviet ekranoplan. The A-90 uses one NK-12 engine mounted at the top of its T-tail, along with two turbofans installed in the nose. In the 1980s, Kuznetsov continued to develop powerful contra-rotating engines. The NK-110, which was tested in the late 1980s, had a contra-rotating propeller configuration with four blades in front and four in back, like the NK-12. Its diameter was smaller than the NK-12's, but it produced a power output of , delivering a takeoff thrust of . Even more powerful was the NK-62, which was in development throughout most of the decade. The NK-62 had an identical propeller diameter and blade configuration to the NK-110, but it offered a higher takeoff thrust of . The associated NK-62M had a takeoff thrust of , and it could deliver of emergency thrust. Unlike the NK-12, however, these later engines were not adopted by any of the aircraft design bureaus. In 1994, Antonov produced the An-70, a heavy transport aircraft. It is powered by four Progress D-27 propfan engines driving contra-rotating propellers. The characteristics of the D-27 engine and its propeller make it a propfan, a hybrid between a turbofan engine and a turboprop engine. 
United States The United States worked with several prototypes, including the Northrop XB-35, XB-42 Mixmaster, the Douglas XTB2D Skypirate, the Curtiss XBTC, the A2J Super Savage, the Boeing XF8B, the XP-56 Black Bullet, the Fisher P-75 Eagle and the tail-sitting Convair XFY "Pogo" and Lockheed XFV "Salmon" VTOL fighters and the Hughes XF-11 reconnaissance plane. The Convair R3Y Tradewind flying boat entered service with contra-rotating propellers. However, both piston-engined and turboprop-powered propeller-driven aircraft were reaching their zenith and new technological developments such as the advent of the pure turbojet and turbofan engines, both without propellers, meant that the designs were quickly eclipsed. The US propeller manufacturer, Hamilton Standard, bought a Fairey Gannet in 1983 to study the effects of counter rotation on propeller noise and blade vibratory stresses. The Gannet was particularly suitable because the independently-driven propellers provided a comparison between counter and single rotation. Ultralight applications An Austrian company, Sun Flightcraft, distributes a contra-rotating gearbox for use on Rotax 503 and 582 engines on ultralight and microlight aircraft. The Coax-P was developed by Hans Neudorfer of NeuraJet and allows powered hang-gliders and parachutes to develop 15 to 20 percent more power while reducing torque moments. The manufacturer also reports reduced noise levels from dual contra-rotating props using the Coax-P gearbox. See also Contra-rotating marine propellers Toroidal propeller ("looped propeller") References External links Luftfahrtmuseum.com – Further information and pictures of contra rotators for the Fairey Gannet and Shackleton A History of Aircraft Using Contra-Rotating Propellers (Part 1) – Aircraft Engine Historical Society A History of Aircraft Using Contra-Rotating Propellers (Part 2) – Aircraft Engine Historical Society A History of Aircraft Using Contra-Rotating Propellers (Part 3) – Aircraft Engine Historical Society A History of Aircraft Using Contra-Rotating Propellers (Part 4) – Aircraft Engine Historical Society Aircraft engines Aircraft configurations Propellers
Contra-rotating propellers
[ "Technology", "Engineering" ]
1,907
[ "Aerospace engineering", "Aircraft configurations", "Engines", "Aircraft engines" ]
1,055,890
https://en.wikipedia.org/wiki/Sustainable%20energy
Energy is sustainable if it "meets the needs of the present without compromising the ability of future generations to meet their own needs." Definitions of sustainable energy usually look at its effects on the environment, the economy, and society. These impacts range from greenhouse gas emissions and air pollution to energy poverty and toxic waste. Renewable energy sources such as wind, hydro, solar, and geothermal energy can cause environmental damage but are generally far more sustainable than fossil fuel sources. The role of non-renewable energy sources in sustainable energy is controversial. Nuclear power does not produce carbon pollution or air pollution, but has drawbacks that include radioactive waste, the risk of nuclear proliferation, and the risk of accidents. Switching from coal to natural gas has environmental benefits, including a lower climate impact, but may lead to a delay in switching to more sustainable options. Carbon capture and storage can be built into power plants to remove their carbon dioxide (CO2) emissions, but this technology is expensive and has rarely been implemented. Fossil fuels provide 85% of the world's energy consumption, and the energy system is responsible for 76% of global greenhouse gas emissions. Around 790 million people in developing countries lack access to electricity, and 2.6 billion rely on polluting fuels such as wood or charcoal to cook. Air pollution from burning fossil fuels and from cooking with polluting fuels causes an estimated 7 million deaths each year. Limiting global warming to the levels set out in the Paris Agreement will require transforming energy production, distribution, storage, and consumption. Universal access to clean electricity can have major benefits to the climate, human health, and the economies of developing countries. Climate change mitigation pathways have been proposed to limit global warming to these levels. Proposed measures include phasing out coal-fired power plants, conserving energy, producing more electricity from clean sources such as wind and solar, and switching from fossil fuels to electricity for transport and heating buildings. Power output from some renewable energy sources varies depending on when the wind blows and the sun shines. Switching to renewable energy can therefore require electrical grid upgrades, such as the addition of energy storage. Some processes that are difficult to electrify can use hydrogen fuel produced from low-emission energy sources. In the International Energy Agency's proposal for achieving net zero emissions by 2050, about 35% of the reduction in emissions depends on technologies that are still in development as of 2023. The combined market share of wind and solar power grew to 8.5% of worldwide electricity in 2019, and costs continue to fall. The Intergovernmental Panel on Climate Change (IPCC) estimates that 2.5% of world gross domestic product (GDP) would need to be invested in the energy system each year between 2016 and 2035 to limit global warming to 1.5 °C (2.7 °F). Governments can fund the research, development, and demonstration of new clean energy technologies. They can also build infrastructure for electrification and sustainable transport. Finally, governments can encourage clean energy deployment with policies such as carbon pricing, renewable portfolio standards, and phase-outs of fossil fuel subsidies. These policies may also increase energy security. Definitions and background Definitions The United Nations Brundtland Commission described the concept of sustainable development, for which energy is a key component, in its 1987 report Our Common Future. 
It defined sustainable development as meeting "the needs of the present without compromising the ability of future generations to meet their own needs". This description of sustainable development has since been referenced in many definitions and explanations of sustainable energy. There is no universally accepted interpretation of how the concept of sustainability applies to energy on a global scale. Working definitions of sustainable energy encompass multiple dimensions of sustainability such as environmental, economic, and social dimensions. Historically, the concept of sustainable energy development has focused on emissions and on energy security. Since the early 1990s, the concept has broadened to encompass wider social and economic issues. The environmental dimension of sustainability includes greenhouse gas emissions, impacts on biodiversity and ecosystems, hazardous waste and toxic emissions, water consumption, and depletion of non-renewable resources. Energy sources with low environmental impact are sometimes called green energy or clean energy. The economic dimension of sustainability covers economic development, efficient use of energy, and energy security to ensure that each country has constant access to sufficient energy. Social issues include access to affordable and reliable energy for all people, workers' rights, and land rights. Environmental impacts The current energy system contributes to many environmental problems, including climate change, air pollution, biodiversity loss, the release of toxins into the environment, and water scarcity. As of 2019, 85% of the world's energy needs are met by burning fossil fuels. Energy production and consumption are responsible for 76% of annual human-caused greenhouse gas emissions as of 2018. The 2015 international Paris Agreement on climate change aims to limit global warming to well below 2 °C (3.6 °F) and preferably to 1.5 °C (2.7 °F); achieving this goal will require that emissions be reduced as soon as possible and reach net-zero by mid-century. The burning of fossil fuels and biomass is a major source of air pollution, which causes an estimated 7 million deaths each year, with the greatest attributable disease burden seen in low and middle-income countries. Fossil-fuel burning in power plants, vehicles, and factories is the main source of the emissions, chiefly sulfur dioxide and nitrogen oxides, that react with oxygen and water in the atmosphere to cause acid rain (a simplified reaction scheme is sketched at the end of this section). Air pollution is the second-leading cause of death from non-infectious disease. An estimated 99% of the world's population lives with levels of air pollution that exceed the World Health Organization recommended limits. Cooking with polluting fuels such as wood, animal dung, coal, or kerosene is responsible for nearly all indoor air pollution, which causes an estimated 1.6 to 3.8 million deaths annually, and also contributes significantly to outdoor air pollution. Health effects are concentrated among women, who are likely to be responsible for cooking, and young children. Environmental impacts extend beyond the by-products of combustion. Oil spills at sea harm marine life and may cause fires which release toxic emissions. Around 10% of global water use goes to energy production, mainly for cooling in thermal energy plants. In dry regions, this contributes to water scarcity. Bioenergy production, coal mining and processing, and oil extraction also require large amounts of water. Excessive harvesting of wood and other combustible material for burning can cause serious local environmental damage, including desertification. 
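The acid-rain chemistry referenced above can be written compactly. In simplified form, omitting the catalysts and radical intermediates involved in real atmospheric chemistry, sulfur dioxide is oxidised and then hydrated to sulfuric acid, and nitrogen dioxide is converted to nitric acid:

2 SO2 + O2 → 2 SO3, followed by SO3 + H2O → H2SO4
4 NO2 + O2 + 2 H2O → 4 HNO3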
Sustainable development goals Meeting existing and future energy demands in a sustainable way is a critical challenge for the global goal of limiting climate change while maintaining economic growth and enabling living standards to rise. Reliable and affordable energy, particularly electricity, is essential for health care, education, and economic development. As of 2020, 790 million people in developing countries do not have access to electricity, and around 2.6 billion rely on burning polluting fuels for cooking. Improving energy access in the least-developed countries and making energy cleaner are key to achieving most of the United Nations 2030 Sustainable Development Goals, which cover issues ranging from climate action to gender equality. Sustainable Development Goal 7 calls for "access to affordable, reliable, sustainable and modern energy for all", including universal access to electricity and to clean cooking facilities by 2030. Energy conservation Energy efficiency—using less energy to deliver the same goods or services, or delivering comparable services with fewer goods—is a cornerstone of many sustainable energy strategies. The International Energy Agency (IEA) has estimated that increasing energy efficiency could achieve 40% of greenhouse gas emission reductions needed to fulfil the Paris Agreement's goals. Energy can be conserved by increasing the technical efficiency of appliances, vehicles, industrial processes, and buildings. Another approach is to use fewer materials whose production requires a lot of energy, for example through better building design and recycling. Behavioural changes such as using videoconferencing rather than business flights, or making urban trips by cycling, walking or public transport rather than by car, are another way to conserve energy. Government policies to improve efficiency can include building codes, performance standards, carbon pricing, and the development of energy-efficient infrastructure to encourage changes in transport modes. The energy intensity of the global economy (the amount of energy consumed per unit of gross domestic product (GDP)) is a rough indicator of the energy efficiency of economic production. In 2010, global energy intensity was 5.6 megajoules (1.6 kWh) per US dollar of GDP. United Nations goals call for energy intensity to decrease by 2.6% each year between 2010 and 2030 (a short calculation of what this target compounds to is sketched below). In recent years this target has not been met. For instance, between 2017 and 2018, energy intensity decreased by only 1.1%. Efficiency improvements often lead to a rebound effect in which consumers use the money they save to buy more energy-intensive goods and services. For example, recent technical efficiency improvements in transport and buildings have been largely offset by trends in consumer behaviour, such as selecting larger vehicles and homes. Sustainable energy sources Renewable energy sources Renewable energy sources are essential to sustainable energy, as they generally strengthen energy security and emit far fewer greenhouse gases than fossil fuels. Renewable energy projects sometimes raise significant sustainability concerns, such as risks to biodiversity when areas of high ecological value are converted to bioenergy production or wind or solar farms. Hydropower is the largest source of renewable electricity while solar and wind energy are growing rapidly. Photovoltaic solar and onshore wind are the cheapest forms of new power generation capacity in most countries. 
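To put the energy-intensity target quoted in the Energy conservation subsection above in perspective: a 2.6% annual decline compounds to roughly a 40% reduction over the twenty years from 2010 to 2030. A minimal back-of-the-envelope calculation using only the figures given in this article (the smooth trajectory is an illustrative assumption; real-world declines are uneven):

```python
intensity_2010 = 5.6    # MJ per US dollar of GDP (global figure for 2010)
target_rate = 0.026     # UN target: 2.6% decline per year, 2010-2030
observed_rate = 0.011   # decline actually observed between 2017 and 2018
years = 20

target_2030 = intensity_2010 * (1 - target_rate) ** years
at_observed_pace = intensity_2010 * (1 - observed_rate) ** years

print(f"implied 2030 target: {target_2030:.1f} MJ/$")      # ~3.3 MJ/$
print(f"at the 2017-18 pace: {at_observed_pace:.1f} MJ/$")  # ~4.5 MJ/$
```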
For more than half of the 770 million people who currently lack access to electricity, decentralised renewable energy such as solar-powered mini-grids is likely the cheapest method of providing it by 2030. United Nations targets for 2030 include substantially increasing the proportion of renewable energy in the world's energy supply. According to the International Energy Agency, renewable energy sources like wind and solar power have become commonplace sources of electricity, making up 70% of all new investment in the world's power generation. The Agency expects renewables to become the primary energy source for electricity generation globally in the next three years, overtaking coal. Solar The Sun is Earth's primary source of energy, a clean and abundantly available resource in many regions. In 2019, solar power provided around 3% of global electricity, mostly through solar panels based on photovoltaic cells (PV). Solar PV is expected to be the electricity source with the largest installed capacity worldwide by 2027. The panels are mounted on top of buildings or installed in utility-scale solar parks. Costs of solar photovoltaic cells have dropped rapidly, driving strong growth in worldwide capacity. The cost of electricity from new solar farms is competitive with, or in many places cheaper than, electricity from existing coal plants. Various projections of future energy use identify solar PV as one of the main sources of energy generation in a sustainable mix. Most components of solar panels can be easily recycled, but this is not always done in the absence of regulation. Panels typically contain heavy metals, so they pose environmental risks if put in landfills. It takes fewer than two years for a solar panel to produce as much energy as was used for its production. Less energy is needed if materials are recycled rather than mined. In concentrated solar power, solar rays are concentrated by a field of mirrors, heating a fluid. Electricity is produced from the resulting steam with a heat engine. Concentrated solar power can support dispatchable power generation, as some of the heat is typically stored to enable electricity to be generated when needed. In addition to electricity production, solar energy is used more directly; solar thermal heating systems are used for hot water production, heating buildings, drying, and desalination. Wind power Wind has been an important driver of development over millennia, providing mechanical energy for industrial processes, water pumps, and sailing ships. Modern wind turbines are used to generate electricity and provided approximately 6% of global electricity in 2019. Electricity from onshore wind farms is often cheaper than existing coal plants and competitive with natural gas and nuclear. Wind turbines can also be placed offshore, where winds are steadier and stronger than on land but construction and maintenance costs are higher. Onshore wind farms, often built in wild or rural areas, have a visual impact on the landscape. While collisions with wind turbines kill both bats and, to a lesser extent, birds, these impacts are lower than from other infrastructure such as windows and transmission lines. The noise and flickering light created by the turbines can cause annoyance and constrain construction near densely populated areas. Wind power, in contrast to nuclear and fossil fuel plants, does not consume water. Little energy is needed for wind turbine construction compared to the energy produced by the wind power plant itself. 
Turbine blades are not fully recyclable, and research into methods of manufacturing easier-to-recycle blades is ongoing. Hydropower Hydroelectric plants convert the energy of moving water into electricity. In 2020, hydropower supplied 17% of the world's electricity, down from a high of nearly 20% in the mid-to-late 20th century. In conventional hydropower, a reservoir is created behind a dam. Conventional hydropower plants provide a highly flexible, dispatchable electricity supply. They can be combined with wind and solar power to meet peaks in demand and to compensate when wind and sun are less available. Compared to reservoir-based facilities, run-of-the-river hydroelectricity generally has less environmental impact. However, its ability to generate power depends on river flow, which can vary with daily and seasonal weather. Reservoirs provide water quantity controls that are used for flood control and flexible electricity output while also providing security during drought for drinking water supply and irrigation. Hydropower ranks among the energy sources with the lowest levels of greenhouse gas emissions per unit of energy produced, but levels of emissions vary enormously between projects. The highest emissions tend to occur with large dams in tropical regions. These emissions are produced when the biological matter submerged by the reservoir's flooding decomposes, releasing carbon dioxide and methane. Deforestation and climate change can reduce energy generation from hydroelectric dams. Depending on location, large dams can displace residents and cause significant local environmental damage; potential dam failure could place the surrounding population at risk. Geothermal Geothermal energy is produced by tapping into deep underground heat and harnessing it to generate electricity or to heat water and buildings. The use of geothermal energy is concentrated in regions where heat extraction is economical: a combination of high temperatures, heat flow, and permeability (the ability of the rock to allow fluids to pass through) is needed. Power is produced from the steam created in underground reservoirs. Geothermal energy provided less than 1% of global energy consumption in 2020. Geothermal energy is a renewable resource because thermal energy is constantly replenished from neighbouring hotter regions and the radioactive decay of naturally occurring isotopes. On average, the greenhouse gas emissions of geothermal-based electricity are less than 5% of those of coal-based electricity. Geothermal energy carries a risk of inducing earthquakes, needs effective protection to avoid water pollution, and releases toxic emissions, which can be captured. Bioenergy Biomass is renewable organic material that comes from plants and animals. It can either be burned to produce heat and electricity or be converted into biofuels such as biodiesel and ethanol, which can be used to power vehicles. The climate impact of bioenergy varies considerably depending on where biomass feedstocks come from and how they are grown. For example, burning wood for energy releases carbon dioxide; those emissions can be significantly offset if the trees that were harvested are replaced by new trees in a well-managed forest, as the new trees will absorb carbon dioxide from the air as they grow. However, the establishment and cultivation of bioenergy crops can displace natural ecosystems, degrade soils, and consume water resources and synthetic fertilisers.
Approximately one-third of all wood used for traditional heating and cooking in tropical areas is harvested unsustainably. Bioenergy feedstocks typically require significant amounts of energy to harvest, dry, and transport; the energy usage for these processes may emit greenhouse gases. In some cases, the impacts of land-use change, cultivation, and processing can result in higher overall carbon emissions for bioenergy compared to using fossil fuels. Use of farmland for growing biomass can result in less land being available for growing food. In the United States, around 10% of motor gasoline has been replaced by corn-based ethanol, which requires a significant proportion of the harvest. In Malaysia and Indonesia, clearing forests to produce palm oil for biodiesel has led to serious social and environmental effects, as these forests are critical carbon sinks and habitats for diverse species. Since photosynthesis captures only a small fraction of the energy in sunlight, producing a given amount of bioenergy requires a large amount of land compared to other renewable energy sources. Second-generation biofuels, which are produced from non-food plants or waste, reduce competition with food production, but may have other negative effects including trade-offs with conservation areas and local air pollution. Relatively sustainable sources of biomass include algae, waste, and crops grown on soil unsuitable for food production. Carbon capture and storage technology can be used to capture emissions from bioenergy power plants. This process is known as bioenergy with carbon capture and storage (BECCS) and can result in net carbon dioxide removal from the atmosphere. However, BECCS can also result in net positive emissions depending on how the biomass material is grown, harvested, and transported. Deployment of BECCS at the scales described in some climate change mitigation pathways would require converting large amounts of cropland. Marine energy Marine energy has the smallest share of the energy market. It includes ocean thermal energy conversion (OTEC), tidal power, which is approaching maturity, and wave power, which is earlier in its development. Two tidal barrage systems, in France and in South Korea, make up 90% of global production. While single marine energy devices pose little risk to the environment, the impacts of larger devices are less well known. Non-renewable energy sources Fossil fuel switching and mitigation Switching from coal to natural gas has advantages in terms of sustainability. For a given unit of energy produced, the life-cycle greenhouse-gas emissions of natural gas are around 40 times the emissions of wind or nuclear energy, but are much less than those of coal. Burning natural gas produces around half the emissions of coal when used to generate electricity and around two-thirds the emissions of coal when used to produce heat. Natural gas combustion also produces less air pollution than coal. However, natural gas is a potent greenhouse gas in itself, and leaks during extraction and transportation can negate the advantages of switching away from coal. The technology to curb methane leaks is widely available, but it is not always used. Switching from coal to natural gas reduces emissions in the short term and thus contributes to climate change mitigation. However, in the long term it does not provide a path to net-zero emissions.
Developing natural gas infrastructure risks carbon lock-in and stranded assets, where new fossil infrastructure either commits the world to decades of carbon emissions or has to be written off before it makes a profit. The greenhouse gas emissions of fossil fuel and biomass power plants can be significantly reduced through carbon capture and storage (CCS). Most studies use a working assumption that CCS can capture 85–90% of the carbon dioxide (CO2) emissions from a power plant. Even if 90% of the emitted CO2 is captured from a coal-fired power plant, its uncaptured emissions are still many times greater than the emissions of nuclear, solar or wind energy per unit of electricity produced (a rough check of this claim is sketched below). Since coal plants using CCS are less efficient, they require more coal and thus increase the pollution associated with mining and transporting coal. CCS is one of the most expensive ways of reducing emissions in the energy sector. Deployment of this technology is very limited. As of 2024, CCS is used in only five power plants and in 39 other facilities. Nuclear power Nuclear power has been used since the 1950s as a low-carbon source of baseload electricity. Nuclear power plants in over 30 countries generate about 10% of global electricity. As of 2019, nuclear generated over a quarter of all low-carbon energy, making it the second largest source after hydropower. Nuclear power's lifecycle greenhouse gas emissions—including the mining and processing of uranium—are similar to the emissions from renewable energy sources. Nuclear power uses little land per unit of energy produced, compared to the major renewables. Additionally, nuclear power does not create local air pollution. Although the uranium ore used to fuel nuclear fission plants is a non-renewable resource, enough exists to provide a supply for hundreds to thousands of years. However, uranium resources that can be accessed in an economically feasible manner are at present limited, and uranium production might struggle to keep pace during a rapid expansion of nuclear power. Climate change mitigation pathways consistent with ambitious goals typically see an increase in power supply from nuclear. There is controversy over whether nuclear power is sustainable, in part due to concerns around nuclear waste, nuclear weapon proliferation, and accidents. Radioactive nuclear waste must be managed for thousands of years. For each unit of energy produced, nuclear energy has caused far fewer accidental and pollution-related deaths than fossil fuels, and the historic fatality rate of nuclear is comparable to renewable sources. Public opposition to nuclear energy often makes nuclear plants politically difficult to implement. Reducing the time and cost of building new nuclear plants has been a goal for decades, but costs remain high and timescales long. Various new forms of nuclear energy are in development, hoping to address the drawbacks of conventional plants. Fast breeder reactors are capable of recycling nuclear waste and therefore can significantly reduce the amount of waste that requires geological disposal, but have not yet been deployed on a large-scale commercial basis. Nuclear power based on thorium (rather than uranium) may be able to provide higher energy security for countries that do not have a large supply of uranium. Small modular reactors may have several advantages over current large reactors: it should be possible to build them faster, and their modularisation would allow for cost reductions via learning-by-doing.
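As flagged above, the residual-emissions claim for coal with CCS can be checked with rough numbers. A minimal sketch, assuming IPCC AR5 median life-cycle figures in gCO2-eq per kWh; these inputs are assumptions brought in for illustration, not values given in this article:

# Rough check of the CCS claim above (all emission figures assumed).
coal_g_per_kwh = 820      # unabated coal, life-cycle median (assumed)
nuclear_g_per_kwh = 12    # nuclear (assumed)
wind_g_per_kwh = 11       # onshore wind (assumed)

residual = coal_g_per_kwh * 0.10          # 90% capture leaves 10%
print(residual / nuclear_g_per_kwh)       # ~6.8x nuclear's emissions
print(residual / wind_g_per_kwh)          # ~7.5x wind's emissions

Note that CCS captures stack emissions rather than full life-cycle emissions, so this is only an order-of-magnitude check of the claim.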
Several countries are attempting to develop nuclear fusion reactors, which would generate small amounts of waste and pose no risk of explosions. Although fusion power has taken steps forward in the lab, the multi-decade timescale needed to bring it to commercialisation and then to scale means it will not contribute to a 2050 net zero goal for climate change mitigation. Energy system transformation Decarbonisation of the global energy system The emissions reductions necessary to keep global warming below 2 °C will require a system-wide transformation of the way energy is produced, distributed, stored, and consumed. For a society to replace one form of energy with another, multiple technologies and behaviours in the energy system must change. For example, transitioning from oil to solar power as the energy source for cars requires the generation of solar electricity, modifications to the electrical grid to accommodate fluctuations in solar panel output or the introduction of variable battery chargers and higher overall demand, adoption of electric cars, and networks of electric vehicle charging facilities and repair shops. Many climate change mitigation pathways envision three main aspects of a low-carbon energy system: the use of low-emission energy sources to produce electricity; electrification, that is, increased use of electricity instead of directly burning fossil fuels; and accelerated adoption of energy efficiency measures. Some energy-intensive technologies and processes are difficult to electrify, including aviation, shipping, and steelmaking. There are several options for reducing the emissions from these sectors: biofuels and synthetic carbon-neutral fuels can power many vehicles that are designed to burn fossil fuels; however, biofuels cannot be sustainably produced in the quantities needed, and synthetic fuels are currently very expensive. For some applications, the most prominent alternative to electrification is to develop a system based on sustainably produced hydrogen fuel. Full decarbonisation of the global energy system is expected to take several decades and can mostly be achieved with existing technologies. In the IEA's proposal for achieving net zero emissions by 2050, about 35% of the reduction in emissions depends on technologies that are still in development as of 2023. Technologies that are relatively immature include batteries and processes to create carbon-neutral fuels. Developing new technologies requires research and development, demonstration, and cost reductions via deployment. The transition to a zero-carbon energy system will bring strong co-benefits for human health: the World Health Organization estimates that efforts to limit global warming to 1.5 °C could save millions of lives each year from reductions to air pollution alone. With good planning and management, pathways exist to provide universal access to electricity and clean cooking by 2030 in ways that are consistent with climate goals. Historically, several countries have made rapid economic gains through coal usage. However, there remains a window of opportunity for many poor countries and regions to "leapfrog" fossil fuel dependency by developing their energy systems based on renewables, given adequate international investment and knowledge transfer. Integrating variable energy sources To deliver reliable electricity from variable renewable energy sources such as wind and solar, electrical power systems require flexibility.
Most electrical grids were constructed for non-intermittent energy sources such as coal-fired power plants. As larger amounts of solar and wind energy are integrated into the grid, changes have to be made to the energy system to ensure that the supply of electricity is matched to demand. In 2019, these sources generated 8.5% of worldwide electricity, a share that has grown rapidly. There are various ways to make the electricity system more flexible. In many places, wind and solar generation are complementary on a daily and a seasonal scale: there is more wind during the night and in winter, when solar energy production is low. Linking different geographical regions through long-distance transmission lines allows for further cancelling out of variability. Energy demand can be shifted in time through energy demand management and the use of smart grids, moving consumption to the times when variable energy production is highest. With grid energy storage, energy produced in excess can be released when needed; a toy dispatch loop illustrating this balancing is sketched below. Further flexibility could be provided by sector coupling, that is, coupling the electricity sector to the heat and mobility sectors via power-to-heat systems and electric vehicles. Building overcapacity for wind and solar generation can help ensure that enough electricity is produced even during poor weather. In optimal weather, energy generation may have to be curtailed if excess electricity cannot be used or stored. The final demand-supply mismatch may be covered by using dispatchable energy sources such as hydropower, bioenergy, or natural gas. Energy storage Energy storage helps overcome barriers to intermittent renewable energy and is an important aspect of a sustainable energy system. The most commonly used and available storage method is pumped-storage hydroelectricity, which requires locations with large differences in height and access to water. Batteries, especially lithium-ion batteries, are also deployed widely. Batteries typically store electricity for short periods; research is ongoing into technology with sufficient capacity to last through seasons. Costs of utility-scale batteries in the US have fallen by around 70% since 2015; however, the cost and low energy density of batteries make them impractical for the very large energy storage needed to balance inter-seasonal variations in energy production. Pumped hydro storage and power-to-gas (converting electricity to gas and back) with capacity for multi-month usage have been implemented in some locations. Electrification Compared to the rest of the energy system, emissions can be reduced much faster in the electricity sector. As of 2019, 37% of global electricity was produced from low-carbon sources (renewables and nuclear energy). Fossil fuels, primarily coal, produce the rest of the electricity supply. One of the easiest and fastest ways to reduce greenhouse gas emissions is to phase out coal-fired power plants and increase renewable electricity generation. Climate change mitigation pathways envision extensive electrification—the use of electricity as a substitute for the direct burning of fossil fuels for heating buildings and for transport. Ambitious climate policy would see a doubling of the share of energy consumed as electricity by 2050, from 20% in 2020. One of the challenges in providing universal access to electricity is distributing power to rural areas. Off-grid and mini-grid systems based on renewable energy, such as small solar PV installations that generate and store enough electricity for a village, are important solutions.
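The storage balancing flagged above can be made concrete with a toy model. A minimal sketch assuming a single idealised, lossless battery and hourly energy series; every number and name here is invented for illustration:

# Toy hour-by-hour dispatch: store surplus renewable output, draw it
# down during deficits, and fall back on dispatchable sources last.
def dispatch(renewable_mwh, demand_mwh, battery_capacity_mwh):
    charge = 0.0
    for supply, load in zip(renewable_mwh, demand_mwh):
        surplus = supply - load
        if surplus >= 0:
            # Store what fits; anything beyond capacity is curtailed.
            charge = min(battery_capacity_mwh, charge + surplus)
        else:
            draw = min(charge, -surplus)
            charge -= draw
            shortfall = -surplus - draw
            if shortfall > 0:
                # Covered by hydropower, bioenergy, or natural gas.
                print(f"dispatchable backup needed: {shortfall:.0f} MWh")
    return charge

dispatch([90, 120, 40, 10], [80, 70, 90, 100], battery_capacity_mwh=50)

In this toy run the battery absorbs the midday surplus (partly curtailed) and covers the first deficit hour; only the final hour needs dispatchable backup.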
Wider access to reliable electricity would lead to less use of kerosene lighting and diesel generators, which are currently common in the developing world. Infrastructure for generating and storing renewable electricity requires minerals and metals, such as cobalt and lithium for batteries and copper for solar panels. Recycling can meet some of this demand if product lifecycles are well designed; however, achieving net zero emissions would still require major increases in mining for 17 types of metals and minerals. A small group of countries or companies sometimes dominates the markets for these commodities, raising geopolitical concerns. Most of the world's cobalt, for instance, is mined in the Democratic Republic of the Congo, a politically unstable country where mining is often associated with human rights risks. More diverse geographical sourcing may ensure a more flexible and less brittle supply chain. Hydrogen Hydrogen gas is widely discussed in the context of energy, as an energy carrier with potential to reduce greenhouse gas emissions. This requires hydrogen to be produced cleanly and in sufficient quantities to supply sectors and applications where cheaper and more energy-efficient mitigation alternatives are limited. These applications include heavy industry and long-distance transport. Hydrogen can be deployed as an energy source in fuel cells to produce electricity, or via combustion to generate heat. When hydrogen is consumed in fuel cells, the only emission at the point of use is water vapour. Combustion of hydrogen can lead to the thermal formation of harmful nitrogen oxides. The overall lifecycle emissions of hydrogen depend on how it is produced. Nearly all of the world's current supply of hydrogen is created from fossil fuels. The main method is steam methane reforming, in which hydrogen is produced from a chemical reaction between steam and methane, the main component of natural gas. Producing one tonne of hydrogen through this process emits 6.6–9.3 tonnes of carbon dioxide (a stoichiometric check of this range is sketched below). While carbon capture and storage (CCS) could remove a large fraction of these emissions, the overall carbon footprint of hydrogen from natural gas is difficult to assess, in part because of emissions (including vented and fugitive methane) created in the production of the natural gas itself. Electricity can be used to split water molecules, producing sustainable hydrogen provided the electricity was generated sustainably. However, this electrolysis process is currently more expensive than creating hydrogen from methane without CCS, and the efficiency of energy conversion is inherently low. Hydrogen can be produced when there is a surplus of variable renewable electricity, then stored and used to generate heat or to re-generate electricity. It can be further transformed into liquid fuels such as green ammonia and green methanol. Innovation in hydrogen electrolysers could make large-scale production of hydrogen from electricity more cost-competitive. Hydrogen fuel can produce the intense heat required for industrial production of steel, cement, glass, and chemicals, thus contributing to the decarbonisation of industry alongside other technologies, such as electric arc furnaces for steelmaking. For steelmaking, hydrogen can function as a clean energy carrier and simultaneously as a low-carbon catalyst replacing coal-derived coke. Hydrogen used to decarbonise transportation is likely to find its largest applications in shipping, aviation and, to a lesser extent, heavy goods vehicles.
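The bottom of the emission range quoted above for steam methane reforming can be sanity-checked from stoichiometry alone. A back-of-envelope sketch in LaTeX; the 5.5 mass ratio is a theoretical floor derived here, and the quoted 6.6–9.3 tonne range sits above it because the process also burns additional fuel for heat:

% Steam methane reforming plus the water-gas shift:
%   CH4 + H2O -> CO + 3 H2,   then   CO + H2O -> CO2 + H2.
% Net reaction and the implied CO2-to-H2 mass ratio:
\begin{align}
\mathrm{CH_4} + 2\,\mathrm{H_2O} &\rightarrow \mathrm{CO_2} + 4\,\mathrm{H_2}, \\
\frac{m_{\mathrm{CO_2}}}{m_{\mathrm{H_2}}} &= \frac{44\ \mathrm{g/mol}}{4 \times 2\ \mathrm{g/mol}} = 5.5 .
\end{align}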
For light-duty vehicles, including passenger cars, hydrogen is far behind other alternative-fuel vehicles, especially compared with the rate of adoption of battery electric vehicles, and may not play a significant role in the future. Disadvantages of hydrogen as an energy carrier include high costs of storage and distribution due to hydrogen's explosivity, its large volume compared to other fuels, and its tendency to make pipes brittle. Energy usage technologies Transport Transport accounts for 14% of global greenhouse gas emissions, but there are multiple ways to make transport more sustainable. Public transport typically emits fewer greenhouse gases per passenger than personal vehicles, since trains and buses can carry many more passengers at once. Short-distance flights can be replaced by high-speed rail, which is more efficient, especially when electrified. Promoting non-motorised transport such as walking and cycling, particularly in cities, can make transport cleaner and healthier. The energy efficiency of cars has increased over time, but shifting to electric vehicles is an important further step towards decarbonising transport and reducing air pollution. A large proportion of traffic-related air pollution consists of particulate matter from road dust and the wearing-down of tyres and brake pads. Substantially reducing pollution from these non-tailpipe sources cannot be achieved by electrification; it requires measures such as making vehicles lighter and driving them less. Light-duty cars in particular are a prime candidate for decarbonisation using battery technology; around 25% of the world's energy-related emissions still originate from the transportation sector. Long-distance freight transport and aviation are difficult sectors to electrify with current technologies, mostly because of the weight of batteries needed for long-distance travel, battery recharging times, and limited battery lifespans. Where available, freight transport by ship and rail is generally more sustainable than by air and by road. Hydrogen vehicles may be an option for larger vehicles such as lorries. Many of the techniques needed to lower emissions from shipping and aviation are still early in their development, with ammonia (produced from hydrogen) a promising candidate for shipping fuel. Aviation biofuel may be one of the better uses of bioenergy if emissions are captured and stored during manufacture of the fuel. Buildings Over one-third of energy use is in buildings and their construction. To heat buildings, alternatives to burning fossil fuels and biomass include electrification through heat pumps or electric heaters, geothermal energy, central solar heating, reuse of waste heat, and seasonal thermal energy storage. Heat pumps provide both heat and air conditioning through a single appliance. The IEA estimates heat pumps could provide over 90% of space and water heating requirements globally. A highly efficient way to heat buildings is through district heating, in which heat is generated in a centralised location and then distributed to multiple buildings through insulated pipes. Traditionally, most district heating systems have used fossil fuels, but modern and cold district heating systems are designed to use high shares of renewable energy. Cooling of buildings can be made more efficient through passive building design, planning that minimises the urban heat island effect, and district cooling systems that cool multiple buildings with piped cold water.
Air conditioning requires large amounts of electricity and is not always affordable for poorer households. Some air conditioning units still use refrigerants that are greenhouse gases, as some countries have not ratified the Kigali Amendment, which mandates the phase-down of refrigerants that are potent greenhouse gases. Cooking In developing countries where populations suffer from energy poverty, polluting fuels such as wood or animal dung are often used for cooking. Cooking with these fuels is generally unsustainable, because they release harmful smoke and because harvesting wood can lead to forest degradation. The universal adoption of clean cooking facilities, which are already ubiquitous in rich countries, would dramatically improve health and have minimal negative effects on climate. Clean cooking facilities, i.e. cooking facilities that produce less indoor soot, typically use natural gas, liquefied petroleum gas (both of which consume oxygen and produce carbon dioxide) or electricity as the energy source; biogas systems are a promising alternative in some contexts. Improved cookstoves that burn biomass more efficiently than traditional stoves are an interim solution where transitioning to clean cooking systems is difficult. Industry Over one-third of energy use is by industry. Most of that energy is deployed in thermal processes: generating heat, drying, and refrigeration. The share of renewable energy in industry was 14.5% in 2017—mostly low-temperature heat supplied by bioenergy and electricity. The most energy-intensive activities in industry have the lowest shares of renewable energy, as they face limitations in generating heat at the high temperatures these activities require. For some industrial processes, commercialisation of technologies that have not yet been built or operated at full scale will be needed to eliminate greenhouse gas emissions. Steelmaking, for instance, is difficult to electrify because it traditionally uses coke, which is derived from coal, both to create very high-temperature heat and as an ingredient in the steel itself. The production of plastic, cement, and fertilisers also requires significant amounts of energy, with limited possibilities available to decarbonise. A switch to a circular economy would make industry more sustainable, as it involves recycling more and thereby using less energy than mining and refining new raw materials. Government policies Well-designed government policies that promote energy system transformation can lower greenhouse gas emissions and improve air quality simultaneously, and in many cases can also increase energy security and lessen the financial burden of using energy. Environmental regulations have been used since the 1970s to promote more sustainable use of energy. Some governments have committed to dates for phasing out coal-fired power plants and ending new fossil fuel exploration. Governments can require that new cars produce zero emissions, or that new buildings are heated by electricity instead of gas. Renewable portfolio standards in several countries require utilities to increase the percentage of electricity they generate from renewable sources. Governments can accelerate energy system transformation by leading the development of infrastructure such as long-distance electrical transmission lines, smart grids, and hydrogen pipelines. In transport, appropriate infrastructure and incentives can make travel more efficient and less car-dependent.
Urban planning that discourages sprawl can reduce energy use in local transport and buildings while enhancing quality of life. Government-funded research, procurement, and incentive policies have historically been critical to the development and maturation of clean energy technologies, such as solar and lithium batteries. In the IEA's scenario for a net zero-emission energy system by 2050, public funding is rapidly mobilised to bring a range of newer technologies to the demonstration phase and to encourage deployment. Carbon pricing (such as a tax on emissions) gives industries and consumers an incentive to reduce emissions while letting them choose how to do so. For example, they can shift to low-emission energy sources, improve energy efficiency, or reduce their use of energy-intensive products and services. Carbon pricing has encountered strong political pushback in some jurisdictions, whereas energy-specific policies tend to be politically safer. Most studies indicate that to limit global warming to 1.5 °C, carbon pricing would need to be complemented by stringent energy-specific policies. As of 2019, the price of carbon in most regions is too low to achieve the goals of the Paris Agreement. Carbon taxes provide a source of revenue that can be used to lower other taxes or help lower-income households afford higher energy costs. Some governments, such as the EU and the UK, are exploring the use of carbon border adjustments. These place tariffs on imports from countries with less stringent climate policies, to ensure that industries subject to internal carbon prices remain competitive. The scale and pace of policy reforms that have been initiated as of 2020 are far less than needed to fulfil the climate goals of the Paris Agreement. In addition to domestic policies, greater international cooperation is required to accelerate innovation and to assist poorer countries in establishing a sustainable path to full energy access. Countries may support renewables to create jobs. The International Labour Organization estimates that efforts to limit global warming to 2 °C would result in net job creation in most sectors of the economy. It predicts that 24 million new jobs would be created by 2030 in areas such as renewable electricity generation, improving energy efficiency in buildings, and the transition to electric vehicles. Six million jobs would be lost in sectors such as mining and fossil fuels. Governments can make the transition to sustainable energy more politically and socially feasible by ensuring a just transition for workers and regions that depend on the fossil fuel industry, so that they have alternative economic opportunities. Finance Raising enough money for innovation and investment is a prerequisite for the energy transition. The IPCC estimates that to limit global warming to 1.5 °C, US$2.4 trillion would need to be invested in the energy system each year between 2016 and 2035. Most studies project that these costs, equivalent to 2.5% of world GDP, would be small compared to the economic and health benefits. Average annual investment in low-carbon energy technologies and energy efficiency would need to be six times higher by 2050 compared to 2015. Underfunding is particularly acute in the least developed countries, which are not attractive to the private sector. The United Nations Framework Convention on Climate Change estimates that climate financing totalled $681 billion in 2016.
Most of this is private-sector investment in renewable energy deployment, public-sector investment in sustainable transport, and private-sector investment in energy efficiency. The Paris Agreement includes a pledge of an extra $100 billion per year from developed countries to poor countries, to fund climate change mitigation and adaptation. This goal has not been met, and measurement of progress has been hampered by unclear accounting rules. If energy-intensive businesses like chemicals, fertilisers, ceramics, steel, and non-ferrous metals invest significantly in R&D, hydrogen usage in industry might amount to between 5% and 20% of all energy used. Fossil fuel funding and subsidies are a significant barrier to the energy transition. Direct global fossil fuel subsidies were $319 billion in 2017. This figure rises to $5.2 trillion when indirect costs, such as the effects of air pollution, are priced in. Ending these subsidies could lead to a 28% reduction in global carbon emissions and a 46% reduction in air pollution deaths. Funding for clean energy has been largely unaffected by the COVID-19 pandemic, and pandemic-related economic stimulus packages offer possibilities for a green recovery. References Sources External links Climate change mitigation Climate change policy Emissions reduction Energy economics Environmental impact of the energy industry Sustainable development
Sustainable energy
[ "Physics", "Chemistry", "Environmental_science" ]
8,489
[ "Physical quantities", "Emissions reduction", "Energy economics", "Energy (physics)", "Energy", "Greenhouse gases", "Environmental social science" ]
1,056,003
https://en.wikipedia.org/wiki/Fundamental%20theorem%20of%20curves
In differential geometry, the fundamental theorem of space curves states that every regular curve in three-dimensional space, with non-zero curvature, has its shape (and size or scale) completely determined by its curvature and torsion. Use A curve can be described, and thereby defined, by a pair of scalar fields: curvature κ and torsion τ, both of which depend on some parameter which parametrizes the curve and which can ideally be the arc length of the curve. From just the curvature and torsion, the vector fields for the tangent, normal, and binormal vectors can be derived using the Frenet–Serret formulas. Then, integration of the tangent field (done numerically, if not analytically) yields the curve; a numerical sketch of this reconstruction is given below. Congruence If a pair of curves are in different positions but have the same curvature and torsion, then they are congruent to each other. See also Differential geometry of curves Gaussian curvature References Further reading Theorems about curves Theorems in differential geometry
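A minimal numerical sketch of the reconstruction just described, assuming curvature and torsion are supplied as Python functions of arc length; the explicit Euler stepping, step count, and initial frame are illustrative choices, not part of the theorem:

import numpy as np

def reconstruct_curve(kappa, tau, s_max=10.0, n=10000):
    # Integrate the Frenet-Serret system
    #   T' = kappa N,  N' = -kappa T + tau B,  B' = -tau N,
    # then integrate the tangent T to recover the curve itself.
    # (Plain Euler steps slowly lose orthonormality of the frame; a
    # production version would re-orthonormalise or use an ODE solver.)
    ds = s_max / n
    T = np.array([1.0, 0.0, 0.0])   # initial tangent
    N = np.array([0.0, 1.0, 0.0])   # initial normal
    B = np.cross(T, N)              # initial binormal
    r = np.zeros(3)                 # initial point on the curve
    points = [r.copy()]
    for i in range(n):
        s = i * ds
        T, N, B = (T + kappa(s) * N * ds,
                   N + (-kappa(s) * T + tau(s) * B) * ds,
                   B - tau(s) * N * ds)
        r = r + T * ds
        points.append(r.copy())
    return np.array(points)

# Constant curvature and torsion yield a helix.
helix = reconstruct_curve(lambda s: 1.0, lambda s: 0.5)

The initial point and frame fix the rigid motion, so any two curves built from the same κ and τ coincide up to congruence, which is exactly the statement above.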
Fundamental theorem of curves
[ "Mathematics" ]
206
[ "Theorems in differential geometry", "Theorems about curves", "Theorems in geometry" ]
1,056,460
https://en.wikipedia.org/wiki/Electronic%20brakeforce%20distribution
Electronic brakeforce distribution (EBD or EBFD) or electronic brakeforce limitation (EBL) is an automobile brake technology that automatically varies the amount of force applied to each of a vehicle's wheels, based on road conditions, speed, loading, etc., thus providing intelligent control of both brake balance and overall brake force. Always coupled with anti-lock braking systems (ABS), EBD can apply more or less braking pressure to each wheel in order to maximize stopping power whilst maintaining vehicular control. Typically, the front end carries more weight, so EBD distributes less braking pressure to the rear brakes to keep the rear brakes from locking up and causing a skid. In some systems, EBD distributes more braking pressure at the rear brakes during initial brake application, before the effects of weight transfer become apparent. ABS Vehicle wheels may lock up when wheel torque exceeds the available tire–road friction force, caused by too much hydraulic line pressure. The ABS monitors wheel speeds and releases pressure on individual wheel brake lines, rapidly pulsing individual brakes to prevent lock-up. During heavy braking, preventing wheel lock-up helps the driver maintain steering control. Four-channel ABS systems have an individual brake line for each of the four wheels, enabling different braking pressure on different road surfaces. Three-channel systems are equipped with a sensor for each wheel, but control the rear brakes as a single unit. For example, less braking pressure is needed to lock a wheel on ice than a wheel on bare asphalt. If the left wheels are on asphalt and the right wheels are on ice, during an emergency stop, ABS detects that the right wheels are about to lock and reduces braking force on the right front wheel. A four-channel system also reduces brake force on the right rear wheel individually, while a three-channel system, which controls the rear brakes together, reduces force on both rear wheels. Both systems help avoid lock-up and loss of vehicle control. EBD As per the technical paper published by Buschmann et al., "The job of the EBD as a subsystem of the ABS system is to control the effective adhesion utilization by the rear wheels. The pressure of the rear wheels are approximated to the ideal brake force distribution in a partial braking operation. To do so, the conventional brake design is modified in the direction of rear axle overbraking, and the components of the ABS are used. EBD reduces the strain on the hydraulic brake force proportioning valve in the vehicle. EBD optimizes the brake design with regard to: adhesion utilization; driving stability; wear; temperature stress; and pedal force." EBD may work in conjunction with ABS and electronic stability control (ESC) to minimize yaw accelerations during turns. ESC compares the steering wheel angle to the vehicle's turning rate using a yaw rate sensor. "Yaw" is the vehicle's rotation around its vertical center of gravity (turning left or right). If the yaw sensor detects less (or more) yaw than the steering wheel angle should create, the car is understeering (or oversteering), and ESC activates one of the front or rear brakes to rotate the car back onto its intended course. For example, if a car is making a left turn and begins to understeer (the car plows forward to the outside of the turn), ESC activates the left rear brake, which will help turn the car left. The sensors are so sensitive and the actuation is so quick that the system may correct direction before the driver reacts.
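A highly simplified sketch of the yaw-correction logic just described; this is illustrative pseudologic rather than any manufacturer's algorithm, and the threshold, sign convention (positive yaw means turning left), and names are all invented:

# Toy ESC decision rule based on the yaw comparison described above.
def esc_correction(commanded_yaw_rate, measured_yaw_rate, threshold=0.05):
    """Pick a brake to actuate from the yaw-rate error (rad/s)."""
    error = commanded_yaw_rate - measured_yaw_rate
    if abs(error) < threshold:
        return None  # vehicle is tracking the driver's steering input
    if error > 0:
        # Less yaw than commanded: understeer. Braking the inside rear
        # wheel (the left rear in a left turn) helps rotate the car
        # into the turn.
        return "inside rear brake"
    # More yaw than commanded: oversteer. Braking the outside front
    # wheel helps counter the excess rotation.
    return "outside front brake"

print(esc_correction(commanded_yaw_rate=0.30, measured_yaw_rate=0.10))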
ABS helps prevent wheel lock-up, and EBD helps apply the appropriate brake force, allowing ESC to work effectively. See also Brake assist Cornering brake control Automobile safety References Vehicle braking technologies Vehicle safety technologies Mechanical power control
Electronic brakeforce distribution
[ "Physics" ]
771
[ "Mechanics", "Mechanical power control" ]
178,702
https://en.wikipedia.org/wiki/Pound%20%28force%29
The pound of force or pound-force (symbol: lbf) is a unit of force used in some systems of measurement, including English Engineering units and the foot–pound–second system. Pound-force should not be confused with pound-mass (lb), often simply called "pound", which is a unit of mass; nor should these be confused with foot-pound (ft⋅lbf), a unit of energy, or pound-foot (lbf⋅ft), a unit of torque. Definitions The pound-force is equal to the gravitational force exerted on a mass of one avoirdupois pound on the surface of Earth. Since the 18th century, the unit has been used in low-precision measurements, for which small changes in Earth's gravity (which varies from equator to pole by up to half a percent) can safely be neglected. The 20th century, however, brought the need for a more precise definition, requiring a standardized value for acceleration due to gravity. Product of avoirdupois pound and standard gravity The pound-force is the product of one avoirdupois pound (exactly 0.45359237 kg) and the standard acceleration due to gravity, approximately 32.174049 ft/s2 (exactly 9.80665 m/s2). The standard values of acceleration of the standard gravitational field (gn) and the international avoirdupois pound (lb) result in a pound-force equal to 4.4482216152605 N exactly. This definition can be rephrased in terms of the slug. A slug has a mass of 32.174049 lb. A pound-force is the amount of force required to accelerate a slug at a rate of one foot per second squared, so 1 lbf = 1 slug ⋅ 1 ft/s2 (a worked restatement is given below). Conversion to other units Foot–pound–second (FPS) systems of units In some contexts, the term "pound" is used almost exclusively to refer to the unit of force and not the unit of mass. In those applications, the preferred unit of mass is the slug, i.e. lbf⋅s2/ft. In other contexts, the unit "pound" refers to a unit of mass. The international standard symbol for the pound as a unit of mass is lb. In the "engineering" systems, the weight of the mass unit (pound-mass) on Earth's surface is approximately equal to the force unit (pound-force). This is convenient because one pound mass exerts one pound force due to gravity. Note, however, that unlike in the other systems, the force unit is not equal to the mass unit multiplied by the acceleration unit—the use of Newton's second law, F = ma, requires another factor, gc, usually taken to be 32.174049 (lb⋅ft)/(lbf⋅s2). "Absolute" systems are coherent systems of units: by using the slug as the unit of mass, the "gravitational" FPS system avoids the need for such a constant. The SI is an "absolute" metric system with kilogram and meter as base units. Pound of thrust The term pound of thrust is an alternative name for pound-force in specific contexts. It is frequently seen in US sources on jet engines and rocketry, some of which continue to use the FPS notation. For example, the thrust produced by each of the Space Shuttle's two Solid Rocket Boosters was about 2,800,000 pounds-force (12.5 MN), together about 5,600,000 pounds-force (25 MN). See also Foot-pound (energy) Ton-force Kip (unit) Mass in general relativity Mass in special relativity Mass versus weight for the difference between the two physical properties Newton Poundal Pounds per square inch, a unit of pressure Notes and references General sources Obert, Edward F. (1948). Thermodynamics. New York: D. J. Leggett Book Company. Chapter I "Survey of Dimensions and Units", pp. 1-24. Customary units of measurement in the United States Imperial units Units of force
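A compact restatement of the two definitions above in LaTeX; the metric constants are the exact defined values quoted in this article, and the slug relation is rounded as in the text:

% Pound-force from the exact metric definitions, and via the slug:
\begin{align}
1\,\mathrm{lbf} &= 0.45359237\,\mathrm{kg} \times 9.80665\,\mathrm{m/s^2}
                 = 4.4482216152605\,\mathrm{N}, \\
1\,\mathrm{lbf} &= 1\,\mathrm{slug} \times 1\,\mathrm{ft/s^2},
\qquad 1\,\mathrm{slug} \approx 32.174049\,\mathrm{lb}.
\end{align}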
Pound (force)
[ "Physics", "Mathematics" ]
787
[ "Force", "Physical quantities", "Quantity", "Units of force", "Units of measurement" ]
179,260
https://en.wikipedia.org/wiki/No-hair%20theorem
The no-hair theorem (strictly speaking, a conjecture) states that all stationary black hole solutions of the Einstein–Maxwell equations of gravitation and electromagnetism in general relativity can be completely characterized by only three independent externally observable classical parameters: mass, angular momentum, and electric charge. Other characteristics (such as geometry and magnetic moment) are uniquely determined by these three parameters, and all other information (for which "hair" is a metaphor) about the matter that formed a black hole or is falling into it "disappears" behind the black-hole event horizon and is therefore permanently inaccessible to external observers after the black hole "settles down" (by emitting gravitational and electromagnetic waves). Physicist John Archibald Wheeler expressed this idea with the phrase "black holes have no hair", which was the origin of the name. In a later interview, Wheeler said that Jacob Bekenstein coined this phrase, recalling: "Richard Feynman objected to the phrase that seemed to me to best symbolize the finding of one of the graduate students: graduate student Jacob Bekenstein had shown that a black hole reveals nothing outside it of what went in, in the way of spinning electric particles. It might show electric charge, yes; mass, yes; but no other features; or as he put it, 'A black hole has no hair'. Richard Feynman thought that was an obscene phrase and he didn't want to use it. But that is a phrase now often used to state this feature of black holes, that they don't indicate any other properties other than a charge and angular momentum and mass." The first version of the no-hair theorem for the simplified case of the uniqueness of the Schwarzschild metric was shown by Werner Israel in 1967. The result was quickly generalized to the cases of charged or spinning black holes. There is still no rigorous mathematical proof of a general no-hair theorem, and mathematicians refer to it as the no-hair conjecture. Even in the case of gravity alone (i.e., zero electric fields), the conjecture has only been partially resolved by results of Stephen Hawking, Brandon Carter, and David C. Robinson, under the additional hypothesis of non-degenerate event horizons and the technical, restrictive and difficult-to-justify assumption of real analyticity of the space-time continuum. Example Suppose two black holes have the same masses, electrical charges, and angular momenta, but the first black hole was made by collapsing ordinary matter whereas the second was made out of antimatter; then the conjecture states that they will be completely indistinguishable to an observer outside the event horizon. None of the special particle physics pseudo-charges (i.e., the global charges baryonic number, leptonic number, etc., all of which would be different for the originating masses of matter that created the black holes) are conserved in the black hole, or if they are conserved somehow then their values would be unobservable from the outside. Changing the reference frame Every isolated unstable black hole decays rapidly to a stable black hole; and (excepting quantum fluctuations) stable black holes can be completely described (in a Cartesian coordinate system) at any moment in time by these eleven numbers: mass–energy M, electric charge Q, position X (three components), linear momentum P (three components), and angular momentum J (three components).
These numbers represent the conserved attributes of an object which can be determined from a distance by examining its gravitational and electromagnetic fields. All other variations in the black hole will either escape to infinity or be swallowed up by the black hole. By changing the reference frame, one can set the linear momentum and position to zero and orient the spin angular momentum along the positive z axis. This eliminates eight of the eleven numbers, leaving three which are independent of the reference frame: mass, angular momentum magnitude, and electric charge. Thus any black hole that has been isolated for a significant period of time can be described by the Kerr–Newman metric in an appropriately chosen reference frame; the form of that metric is sketched below. Extensions The no-hair theorem was originally formulated for black holes within the context of a four-dimensional spacetime, obeying the Einstein field equation of general relativity with zero cosmological constant, in the presence of electromagnetic fields, or optionally other fields such as scalar fields and massive vector fields (Proca fields, etc.). It has since been extended to include the case where the cosmological constant is positive (which recent observations are tending to support). Magnetic charge, if detected as predicted by some theories, would form the fourth parameter possessed by a classical black hole. Counterexamples Counterexamples in which the theorem fails are known in spacetime dimensions higher than four; in the presence of non-abelian Yang–Mills fields, non-abelian Proca fields, some non-minimally coupled scalar fields, or skyrmions; or in some theories of gravity other than Einstein's general relativity. However, these exceptions are often unstable solutions and/or do not lead to conserved quantum numbers, so that "The 'spirit' of the no-hair conjecture, however, seems to be maintained". It has been proposed that "hairy" black holes may be considered to be bound states of hairless black holes and solitons. In 2004, the exact analytical solution of a (3+1)-dimensional spherically symmetric black hole with a minimally coupled self-interacting scalar field was derived. This showed that, apart from mass, electrical charge and angular momentum, black holes can carry a finite scalar charge which might be a result of interaction with cosmological scalar fields such as the inflaton. The solution is stable and does not possess any unphysical properties; however, the existence of a scalar field with the desired properties is only speculative. Observational results The results from the first observation of gravitational waves in 2015 provide some experimental evidence consistent with the uniqueness of the no-hair theorem. This observation is consistent with Stephen Hawking's theoretical work on black holes in the 1970s. Soft hair A study by Sasha Haco, Stephen Hawking, Malcolm Perry and Andrew Strominger postulates that black holes might contain "soft hair", giving the black hole more degrees of freedom than previously thought. This hair exists at a very low-energy state, which is why it didn't come up in previous calculations that postulated the no-hair theorem. This was the subject of Hawking's final paper, which was published posthumously. See also Black hole information paradox Event Horizon Telescope References External links Stephen Hawking's purported solution to the black hole unitarity paradox, first reported in July 2004. Black holes Theorems in general relativity
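As context for the Kerr–Newman description above, a LaTeX sketch of that metric in Boyer–Lindquist coordinates (standard textbook form in geometrized units G = c = 1, quoted as an illustration rather than from this article's sources); note that it depends only on the mass M, the spin parameter a = J/M, and the charge Q:

% Kerr-Newman metric in Boyer-Lindquist coordinates:
\begin{align}
ds^2 &= -\frac{\Delta}{\rho^2}\left(dt - a\sin^2\theta\,d\phi\right)^2
        + \frac{\sin^2\theta}{\rho^2}\left[(r^2 + a^2)\,d\phi - a\,dt\right]^2
        + \frac{\rho^2}{\Delta}\,dr^2 + \rho^2\,d\theta^2, \\
\Delta &= r^2 - 2Mr + a^2 + Q^2,
\qquad \rho^2 = r^2 + a^2\cos^2\theta .
\end{align}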
No-hair theorem
[ "Physics", "Astronomy", "Mathematics" ]
1,374
[ "Physical phenomena", "Black holes", "Equations of physics", "Physical quantities", "Theorems in general relativity", "Unsolved problems in physics", "Astrophysics", "Theorems in mathematical physics", "Density", "Stellar phenomena", "Astronomical objects", "Physics theorems" ]
179,947
https://en.wikipedia.org/wiki/Doubly%20special%20relativity
Doubly special relativity (DSR) – also called deformed special relativity or, by some, extra-special relativity – is a modified theory of special relativity in which there is not only an observer-independent maximum velocity (the speed of light), but also an observer-independent maximum energy scale (the Planck energy) and/or a minimum length scale (the Planck length). This contrasts with other Lorentz-violating theories, such as the Standard-Model Extension, where Lorentz invariance is instead broken by the presence of a preferred frame. The main motivation for this theory is that the Planck energy should be the scale where as yet unknown quantum gravity effects become important and, due to invariance of physical laws, this scale should remain fixed in all inertial frames. History First attempts to modify special relativity by introducing an observer-independent length were made by Pavlopoulos (1967), who estimated this length at about 10^−15 metres. In the context of quantum gravity, Giovanni Amelino-Camelia (2000) introduced what is now called doubly special relativity, by proposing a specific realization of preserving invariance of the Planck length. This was reformulated by Kowalski-Glikman (2001) in terms of an observer-independent Planck mass. A different model, inspired by that of Amelino-Camelia, was proposed in 2001 by João Magueijo and Lee Smolin, who also focused on the invariance of Planck energy. It was realized that there are, indeed, three kinds of deformation of special relativity that allow one to achieve an invariance of the Planck energy; either as a maximum energy, as a maximal momentum, or both. DSR models are possibly related to loop quantum gravity in 2+1 dimensions (two space, one time), and it has been conjectured that a relation also exists in 3+1 dimensions. The motivation for these proposals is mainly theoretical, based on the following observation: the Planck energy is expected to play a fundamental role in a theory of quantum gravity, setting the scale at which quantum gravity effects cannot be neglected and new phenomena might become important. If special relativity is to hold exactly up to this scale, different observers would observe quantum gravity effects at different scales, due to the Lorentz–FitzGerald contraction, in contradiction to the principle that all inertial observers should be able to describe phenomena by the same physical laws. This motivation has been criticized, on the grounds that the result of a Lorentz transformation does not itself constitute an observable phenomenon. DSR also suffers from several inconsistencies in formulation that have yet to be resolved. Most notably, it is difficult to recover the standard transformation behavior for macroscopic bodies, known as the soccer ball problem. The other conceptual difficulty is that DSR is a priori formulated in momentum space. There is, as yet, no consistent formulation of the model in position space. Predictions Experiments to date have not observed contradictions to special relativity. It was initially speculated that ordinary special relativity and doubly special relativity would make distinct physical predictions in high-energy processes and, in particular, that the derivation of the GZK limit on energies of cosmic rays from distant sources would not be valid.
However, it is now established that standard doubly special relativity does not predict any suppression of the GZK cutoff, contrary to the models where an absolute local rest frame exists, such as effective field theories like the Standard-Model Extension. Since DSR generically (though not necessarily) implies an energy-dependence of the speed of light, it has further been predicted that, if there are modifications to first order in energy over the Planck mass, this energy-dependence would be observable in highly energetic photons reaching Earth from distant gamma-ray bursts. Depending on whether the now energy-dependent speed of light increases or decreases with energy (a model-dependent feature), highly energetic photons would be faster or slower than lower-energy ones; a generic first-order form of this effect is sketched below. However, the Fermi-LAT experiment in 2009 measured a 31 GeV photon which arrived nearly simultaneously with other photons from the same burst, which excluded such dispersion effects even above the Planck energy. Moreover, it has been argued that DSR with an energy-dependent speed of light is inconsistent, and that first-order effects are already ruled out because they would lead to non-local particle interactions that would long since have been observed in particle physics experiments. De Sitter relativity Since the de Sitter group naturally incorporates an invariant length parameter, de Sitter relativity can be interpreted as an example of doubly special relativity because de Sitter spacetime incorporates an invariant velocity as well as a length parameter. There is a fundamental difference, though: whereas in all doubly special relativity models the Lorentz symmetry is violated, in de Sitter relativity it remains as a physical symmetry. A drawback of the usual doubly special relativity models is that they are valid only at the energy scales where ordinary special relativity is supposed to break down, giving rise to a patchwork relativity. On the other hand, de Sitter relativity is found to be invariant under a simultaneous re-scaling of mass, energy and momentum, and is consequently valid at all energy scales. See also Planck scale Planck units Planck epoch Fock–Lorentz symmetry References Further reading Smolin writes for the layman a brief history of the development of DSR and how it ties in with string theory and cosmology. Special relativity Quantum gravity
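The gamma-ray-burst tests mentioned above probe a generic first-order modification of the photon dispersion relation. A LaTeX sketch of that phenomenological ansatz, common in the quantum-gravity literature; xi is a model-dependent parameter of order one and the sign is model-dependent, so this is illustrative rather than specific to any single DSR model:

% Generic first-order modified photon dispersion and the resulting
% energy-dependent group velocity (valid to first order in E/E_P):
\begin{align}
E^2 &\simeq p^2 c^2\left(1 \pm \xi\,\frac{E}{E_{\mathrm{P}}}\right), \\
v_{\mathrm{group}} = \frac{\partial E}{\partial p}
    &\simeq c\left(1 \pm \xi\,\frac{E}{E_{\mathrm{P}}}\right).
\end{align}

Over a propagation distance D this implies an arrival-time spread of order \xi (E/E_P)(D/c) between photons of different energies, which is the kind of effect the Fermi-LAT measurement constrained.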
Doubly special relativity
[ "Physics" ]
1,118
[ "Unsolved problems in physics", "Special relativity", "Quantum gravity", "Theory of relativity", "Physics beyond the Standard Model" ]
179,978
https://en.wikipedia.org/wiki/Antiandrogen
Antiandrogens, also known as androgen antagonists or testosterone blockers, are a class of drugs that prevent androgens like testosterone and dihydrotestosterone (DHT) from mediating their biological effects in the body. They act by blocking the androgen receptor (AR) and/or inhibiting or suppressing androgen production. They can be thought of as the functional opposites of AR agonists, for instance androgens and anabolic steroids (AAS) like testosterone, DHT, and nandrolone and selective androgen receptor modulators (SARMs) like enobosarm. Antiandrogens are one of three types of sex hormone antagonists, the others being antiestrogens and antiprogestogens. Antiandrogens are used to treat an assortment of androgen-dependent conditions. In men, antiandrogens are used in the treatment of prostate cancer, enlarged prostate, scalp hair loss, overly high sex drive, unusual and problematic sexual urges, and early puberty. In women, antiandrogens are used to treat acne, seborrhea, excessive hair growth, scalp hair loss, and high androgen levels, such as those that occur in polycystic ovary syndrome (PCOS). Antiandrogens are also used as a component of feminizing hormone therapy for transgender women and as puberty blockers in transgender girls. Side effects of antiandrogens depend on the type of antiandrogen and the specific antiandrogen in question. In any case, common side effects of antiandrogens in men include breast tenderness, breast enlargement, feminization, hot flashes, sexual dysfunction, infertility, and osteoporosis. In women, antiandrogens are much better tolerated, and antiandrogens that work only by directly blocking androgens are associated with minimal side effects. However, because estrogens are made from androgens in the body, antiandrogens that suppress androgen production can cause low estrogen levels and associated symptoms like hot flashes, menstrual irregularities, and osteoporosis in premenopausal women. There are a few different major types of antiandrogens. These include AR antagonists, androgen synthesis inhibitors, and antigonadotropins. AR antagonists work by directly blocking the effects of androgens, while androgen synthesis inhibitors and antigonadotropins work by lowering androgen levels. AR antagonists can be further divided into steroidal antiandrogens and nonsteroidal antiandrogens; androgen synthesis inhibitors can be further divided mostly into CYP17A1 inhibitors and 5α-reductase inhibitors; and antigonadotropins can be further divided into gonadotropin-releasing hormone modulators (GnRH modulators), progestogens, and estrogens. Medical uses Antiandrogens are used in the treatment of an assortment of androgen-dependent conditions in both males and females. They are used to treat men with prostate cancer, benign prostatic hyperplasia, pattern hair loss, hypersexuality, paraphilias, and priapism, as well as boys with precocious puberty. In women and girls, antiandrogens are used to treat acne, seborrhea, hidradenitis suppurativa, hirsutism, and hyperandrogenism. Antiandrogens are also used in transgender women as a component of feminizing hormone therapy and as puberty blockers in transgender girls. Men and boys Prostate cancer Androgens like testosterone and particularly DHT are importantly involved in the development and progression of prostate cancer. They act as growth factors in the prostate gland, stimulating cell division and tissue growth. 
In accordance, therapeutic modalities that reduce androgen signaling in the prostate gland, referred to collectively as androgen deprivation therapy, are able to significantly slow the course of prostate cancer and extend life in men with the disease. Although antiandrogens are effective in slowing the progression of prostate cancer, they are not generally curative, and with time, the disease adapts and androgen deprivation therapy eventually becomes ineffective. When this occurs, other treatment approaches, such as chemotherapy, may be considered. The most common methods of androgen deprivation therapy currently employed to treat prostate cancer are castration (with a GnRH modulator or orchiectomy), nonsteroidal antiandrogens, and the androgen synthesis inhibitor abiraterone acetate. Castration may be used alone or in combination with one of the other two treatments. When castration is combined with a nonsteroidal antiandrogen like bicalutamide, this strategy is referred to as combined androgen blockade (also known as complete or maximal androgen blockade). Enzalutamide, apalutamide, and abiraterone acetate are specifically approved for use in combination with castration to treat castration-resistant prostate cancer. Monotherapy with the nonsteroidal antiandrogen bicalutamide is also used in the treatment of prostate cancer as an alternative to castration, with comparable effectiveness but a different and potentially advantageous side effect profile. High-dose estrogen was the first functional antiandrogen used to treat prostate cancer. It was widely used, but has largely been abandoned for this indication in favor of newer agents with improved safety profiles and fewer feminizing side effects. Cyproterone acetate was developed after high-dose estrogen and is the only steroidal antiandrogen that has been widely used in the treatment of prostate cancer, but it has largely been replaced by nonsteroidal antiandrogens, which are newer and have greater effectiveness, tolerability, and safety. Bicalutamide and enzalutamide have largely replaced the earlier nonsteroidal antiandrogens flutamide and nilutamide, which are now little used. The earlier androgen synthesis inhibitors aminoglutethimide and ketoconazole have seen only limited use in the treatment of prostate cancer due to toxicity concerns and have been replaced by abiraterone acetate. In addition to active treatment of prostate cancer, antiandrogens are effective as prophylaxis (preventatives) in reducing the risk of ever developing prostate cancer. Antiandrogens have undergone only limited assessment for this purpose, but the 5α-reductase inhibitors finasteride and dutasteride and the steroidal AR antagonist spironolactone have been associated with significantly reduced risk of prostate cancer. In addition, it is notable that prostate cancer is extremely rare in transgender women who have been on feminizing hormone therapy for an extended period of time. Enlarged prostate The 5α-reductase inhibitors finasteride and dutasteride are used to treat benign prostatic hyperplasia, a condition in which the prostate becomes enlarged and results in urinary obstruction and discomfort. They are effective because androgens act as growth factors in the prostate gland. The antiandrogens chlormadinone acetate and oxendolone and the functional antiandrogens allylestrenol and gestonorone caproate are also approved in some countries for the treatment of benign prostatic hyperplasia. 
Scalp hair loss 5α-Reductase inhibitors like finasteride, dutasteride, and alfatradiol and the topical nonsteroidal AR antagonist topilutamide (fluridil) are approved for the treatment of pattern hair loss, also known as scalp hair loss or baldness. This condition is generally caused by androgens, so antiandrogens can slow or halt its progression. Systemic antiandrogens besides 5α-reductase inhibitors are not generally used to treat scalp hair loss in males due to risks like feminization (e.g., gynecomastia) and sexual dysfunction. However, they have been assessed and reported to be effective for this indication. Acne Systemic antiandrogens are generally not used to treat acne in males due to their high risk of feminization (e.g., gynecomastia) and sexual dysfunction. However, they have been studied for acne in males and found to be effective. Clascoterone, a topical antiandrogen, is effective for acne in males and was approved by the FDA in August 2020. Paraphilia Androgens increase sex drive, and for this reason, antiandrogens are able to reduce sex drive in men. In accordance, antiandrogens are used in the treatment of conditions such as hypersexuality (excessively high sex drive) and paraphilias (atypical and sometimes societally unacceptable sexual interests) like pedophilia (sexual attraction to children). They have been used to decrease sex drive in sex offenders so as to reduce the likelihood of recidivism (repeat offenses). Antiandrogens used for these indications include cyproterone acetate, medroxyprogesterone acetate, and GnRH modulators. Early puberty Antiandrogens are used to treat precocious puberty in boys. They work by opposing the effects of androgens and delaying the development of secondary sexual characteristics and the onset of changes in sex drive and function until a more appropriate age. Antiandrogens that have been used for this purpose include cyproterone acetate, medroxyprogesterone acetate, GnRH modulators, spironolactone, bicalutamide, and ketoconazole. Spironolactone and bicalutamide require combination with an aromatase inhibitor to prevent the effects of unopposed estrogens, while the others can be used alone. Long-lasting erections Antiandrogens are effective in the treatment of recurrent priapism (potentially painful penile erections that last more than four hours). Women and girls Skin and hair conditions Antiandrogens are used in the treatment of androgen-dependent skin and hair conditions including acne, seborrhea, hidradenitis suppurativa, hirsutism, and pattern hair loss in women. All of these conditions are dependent on androgens, and for this reason, antiandrogens are effective in treating them. The most commonly used antiandrogens for these indications are cyproterone acetate and spironolactone. Flutamide has also been studied extensively for such uses, but has fallen out of favor due to its association with hepatotoxicity. Bicalutamide, which has a relatively minimal risk of hepatotoxicity, has been evaluated for the treatment of hirsutism, found similarly effective to flutamide, and may be used in its place. In addition to AR antagonists, oral contraceptives containing ethinylestradiol are effective in treating these conditions, and may be combined with AR antagonists. High androgen levels Hyperandrogenism is a condition in women in which androgen levels are excessively and abnormally high. It is commonly seen in women with PCOS, and also occurs in women with intersex conditions like congenital adrenal hyperplasia. 
Hyperandrogenism is associated with virilization – that is, the development of masculine secondary sexual characteristics like male-pattern facial and body hair growth (or hirsutism), voice deepening, increased muscle mass and strength, and broadening of the shoulders, among others. Androgen-dependent skin and hair conditions like acne and pattern hair loss may also occur in hyperandrogenism, and menstrual disturbances, like amenorrhea, are commonly seen. Although antiandrogens do not treat the underlying cause of hyperandrogenism (e.g., PCOS), they are able to prevent and reverse its manifestations and effects. As with androgen-dependent skin and hair conditions, the most commonly used antiandrogens in the treatment of hyperandrogenism in women are cyproterone acetate and spironolactone. Other antiandrogens, like bicalutamide, may be used alternatively. Gender-affirming hormone therapy Antiandrogens are used to prevent or reverse masculinization and to facilitate feminization in transgender women and some nonbinary individuals who are undergoing hormone therapy and who have not undergone sex reassignment surgery or orchiectomy. Besides estrogens, the main antiandrogens that have been used for this purpose are cyproterone acetate, spironolactone, and GnRH modulators. Nonsteroidal antiandrogens like bicalutamide are also used for this indication. In addition to use in transgender women, antiandrogens, mainly GnRH modulators, are used as puberty blockers to prevent the onset of puberty in transgender girls until they are older and ready to begin hormone therapy. Available forms There are several different types of antiandrogens, including the following: Androgen receptor antagonists: Drugs that bind directly to and block the AR. These drugs include the steroidal antiandrogens cyproterone acetate, megestrol acetate, chlormadinone acetate, spironolactone, oxendolone, and osaterone acetate (veterinary) and the nonsteroidal antiandrogens flutamide, bicalutamide, nilutamide, topilutamide, enzalutamide, and apalutamide. Aside from cyproterone acetate and chlormadinone acetate, a few other progestins used in oral contraceptives and/or in menopausal HRT, including dienogest, drospirenone, medrogestone, nomegestrol acetate, promegestone, and trimegestone, also have varying degrees of AR antagonistic activity. Androgen synthesis inhibitors: Drugs that directly inhibit the enzymatic biosynthesis of androgens like testosterone and/or DHT. Examples include the CYP17A1 inhibitors ketoconazole, abiraterone acetate, and seviteronel, the CYP11A1 (P450scc) inhibitor aminoglutethimide, and the 5α-reductase inhibitors finasteride, dutasteride, epristeride, alfatradiol, and saw palmetto extract (Serenoa repens). A number of other antiandrogens, including cyproterone acetate, spironolactone, medrogestone, flutamide, nilutamide, and bifluranol, are also known to weakly inhibit androgen synthesis. Antigonadotropins: Drugs that suppress the gonadotropin-releasing hormone (GnRH)-induced release of gonadotropins and consequent activation of gonadal androgen production. Examples include GnRH modulators like leuprorelin (a GnRH agonist) and cetrorelix (a GnRH antagonist), progestogens like allylestrenol, chlormadinone acetate, cyproterone acetate, gestonorone caproate, hydroxyprogesterone caproate, medroxyprogesterone acetate, megestrol acetate, osaterone acetate (veterinary), and oxendolone, and estrogens like estradiol, estradiol esters, ethinylestradiol, conjugated estrogens, and diethylstilbestrol. 
Miscellaneous: Drugs that oppose the effects of androgens by means other than the above. Examples include estrogens, especially oral and synthetic ones (e.g., ethinylestradiol, diethylstilbestrol), which stimulate sex hormone-binding globulin (SHBG) production in the liver and thereby decrease free and hence bioactive levels of testosterone and DHT; anticorticotropins such as glucocorticoids, which suppress the adrenocorticotropic hormone (ACTH)-induced production of adrenal androgens; and immunogens and vaccines against androstenedione like ovandrotone albumin and androstenedione albumin, which decrease levels of androgens via the generation of antibodies against the androgen and androgen precursor androstenedione (used only in veterinary medicine). Certain antiandrogens combine several of the above mechanisms. An example is the steroidal antiandrogen cyproterone acetate, which is a potent AR antagonist, a potent progestogen and hence antigonadotropin, a weak glucocorticoid and hence anticorticotropin, and a weak androgen synthesis inhibitor. Side effects The side effects of antiandrogens vary depending on the type of antiandrogen – namely whether it is a selective AR antagonist or lowers androgen levels – as well as the presence of off-target activity in the antiandrogen in question. For instance, whereas antigonadotropic antiandrogens like GnRH modulators and cyproterone acetate are associated with pronounced sexual dysfunction and osteoporosis in men, selective AR antagonists like bicalutamide are not associated with osteoporosis and have been associated with only minimal sexual dysfunction. These differences are thought to be related to the fact that antigonadotropins suppress androgen levels and by extension levels of bioactive metabolites of androgens like estrogens and neurosteroids, whereas selective AR antagonists similarly neutralize the effects of androgens but leave levels of androgens and hence their metabolites intact (and in fact can even increase them as a result of their progonadotropic effects). As another example, the steroidal antiandrogens cyproterone acetate and spironolactone possess off-target actions including progestogenic, antimineralocorticoid, and/or glucocorticoid activity in addition to their antiandrogen activity, and these off-target activities can result in additional side effects. In males, the major side effects of antiandrogens are demasculinization and feminization. These side effects include breast pain/tenderness and gynecomastia (breast development/enlargement), reduced body hair growth/density, decreased muscle mass and strength, feminine changes in fat mass and distribution, and reduced penile length and testicular size. The rates of gynecomastia in men with selective AR antagonist monotherapy have been found to range from 30 to 85%. In addition, antiandrogens can cause infertility, osteoporosis, hot flashes, sexual dysfunction (including loss of libido and erectile dysfunction), depression, fatigue, anemia, and decreased semen/ejaculate volume in males. Conversely, the side effects of selective AR antagonists in women are minimal. However, antigonadotropic antiandrogens like cyproterone acetate can produce hypoestrogenism, amenorrhea, and osteoporosis in premenopausal women, among other side effects. In addition, androgen receptor antagonists can produce unfavorable effects on cholesterol levels, which in the long term may increase the risk of cardiovascular disease. A number of antiandrogens have been associated with hepatotoxicity. 
These include, to varying extents, cyproterone acetate, flutamide, nilutamide, bicalutamide, aminoglutethimide, and ketoconazole. In contrast, spironolactone, enzalutamide, and other antiandrogens are not associated with significant rates of hepatotoxicity. However, while they do not pose a risk of hepatotoxicity, spironolactone carries a risk of hyperkalemia and enzalutamide a risk of seizures. In women who are pregnant, antiandrogens can interfere with the androgen-mediated sexual differentiation of the genitalia and brain of male fetuses. This manifests primarily as ambiguous genitalia – that is, undervirilized or feminized genitalia, which, anatomically, are a cross between a penis and a vagina – and theoretically also as feminization. As such, antiandrogens are teratogens, and women who are pregnant should not be treated with an antiandrogen. Moreover, women who may become pregnant are strongly advised to take an antiandrogen only in combination with reliable contraception. Overdose Antiandrogens are relatively safe in acute overdose. Interactions Inhibitors and inducers of cytochrome P450 enzymes may interact with various antiandrogens. Mechanism of action Androgen receptor antagonists AR antagonists act by directly binding to and competitively displacing androgens like testosterone and DHT from the AR, thereby preventing them from activating the receptor and mediating their biological effects. AR antagonists are classified into two types, based on chemical structure: steroidal and nonsteroidal. Steroidal AR antagonists are structurally related to steroid hormones like testosterone and progesterone, whereas nonsteroidal AR antagonists are not steroids and are structurally distinct. Steroidal AR antagonists tend to have off-target hormonal actions due to their structural similarity to other steroid hormones. In contrast, nonsteroidal AR antagonists are selective for the AR and have no off-target hormonal activity. For this reason, they are sometimes described as "pure" antiandrogens. Although they are described as antiandrogens and indeed generally show only such effects, most or all steroidal AR antagonists are actually not silent antagonists of the AR but rather are weak partial agonists and are able to activate the receptor in the absence of more potent AR agonists like testosterone and DHT. This may have clinical implications in the specific context of prostate cancer treatment. As an example, steroidal AR antagonists are able to increase prostate weight and accelerate prostate cancer cell growth in the absence of more potent AR agonists, and spironolactone has been found to accelerate progression of prostate cancer in case reports. In addition, whereas cyproterone acetate produces ambiguous genitalia via feminization in male fetuses when administered to pregnant animals, it has been found to produce masculinization of the genitalia of female fetuses of pregnant animals. In contrast to steroidal AR antagonists, nonsteroidal AR antagonists are silent antagonists of the AR and do not activate the receptor. This may be why they have greater efficacy than steroidal AR antagonists in the treatment of prostate cancer and is an important reason why they have largely replaced them for this indication. Nonsteroidal antiandrogens have relatively low affinity for the AR compared to steroidal AR ligands. For example, bicalutamide has around 2% of the affinity of DHT for the AR and around 20% of the affinity of cyproterone acetate (CPA) for the AR. 
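Affinity figures like these are only half of the picture; receptor occupancy also depends on concentration, a point the next paragraph elaborates. A minimal sketch of the arithmetic, assuming a simple one-site competitive binding model (the standard Gaddum relation) with purely illustrative concentrations and Ki values rather than measured pharmacological constants:

```python
# Fractional receptor occupancy by an agonist in the presence of a
# competitive antagonist (one-site equilibrium, no cooperativity).
# All numbers below are illustrative assumptions, not measured values.

def agonist_occupancy(a_conc, a_ki, i_conc, i_ki):
    """Gaddum relation: fraction of receptors bound by the agonist."""
    a = a_conc / a_ki  # normalized agonist concentration
    i = i_conc / i_ki  # normalized antagonist concentration
    return a / (1.0 + a + i)

# A hypothetical antagonist with 50-fold weaker affinity (higher Ki) than
# the agonist, but circulating at 1,000 times the agonist's concentration.
dht_conc, dht_ki = 1.0, 1.0        # arbitrary units
anti_conc, anti_ki = 1000.0, 50.0

print(agonist_occupancy(dht_conc, dht_ki, 0.0, anti_ki))        # 0.50 (no antagonist)
print(agonist_occupancy(dht_conc, dht_ki, anti_conc, anti_ki))  # ~0.045
```

With these assumed numbers, agonist occupancy falls from 50% to roughly 5%: a low-affinity antagonist can dominate the receptor simply by outnumbering the agonist, which is the quantitative logic behind the high circulating concentrations discussed below.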
Despite their low affinity for the AR, however, the lack of weak partial agonist activity of nonsteroidal antiandrogens (NSAAs) appears to improve their potency relative to steroidal antiandrogens. For example, although flutamide has about 10-fold lower affinity for the AR than CPA, it shows potency as an antiandrogen equal to or slightly greater than that of CPA in bioassays. In addition, circulating therapeutic concentrations of nonsteroidal antiandrogens are very high, on the order of thousands of times higher than those of testosterone and DHT, and this allows them to compete effectively with androgens and block AR signaling. AR antagonists may not bind to or block membrane androgen receptors (mARs), which are distinct from the classical nuclear AR. However, the mARs do not appear to be involved in masculinization. This is evidenced by the perfectly female phenotype of women with complete androgen insensitivity syndrome. These women have a 46,XY karyotype (i.e., are genetically "male") and high levels of androgens but possess a defective AR and for this reason never masculinize. They are described as highly feminine, both physically as well as mentally and behaviorally. N-Terminal domain antagonists N-Terminal domain AR antagonists are a new type of AR antagonist that, unlike all currently marketed AR antagonists, bind to the N-terminal domain (NTD) of the AR rather than the ligand-binding domain (LBD). Whereas conventional AR antagonists bind to the LBD of the AR and competitively displace androgens, thereby preventing them from activating the receptor, AR NTD antagonists bind covalently to the NTD of the AR and prevent the protein–protein interactions subsequent to activation that are required for transcriptional activity. As such, they are non-competitive and irreversible antagonists of the AR. Examples of AR NTD antagonists include bisphenol A diglycidyl ether (BADGE) and its derivatives EPI-001, ralaniten (EPI-002), and ralaniten acetate (EPI-506). AR NTD antagonists are under investigation for the potential treatment of prostate cancer, and it is thought that they may have greater efficacy as antiandrogens relative to conventional AR antagonists. In accordance with this notion, AR NTD antagonists are active against splice variants of the AR, which conventional AR antagonists are not, and they are immune to gain-of-function mutations in the AR LBD that convert AR antagonists into AR agonists and that commonly occur in prostate cancer. Androgen receptor degraders Selective androgen receptor degraders (SARDs) are another new type of antiandrogen that has recently been developed. They work by enhancing the degradation of the AR, and are analogous to selective estrogen receptor degraders (SERDs) like fulvestrant (a drug used to treat estrogen receptor-positive breast cancer). Similarly to AR NTD antagonists, it is thought that SARDs may have greater efficacy than conventional AR antagonists, and for this reason, they are under investigation for the treatment of prostate cancer. An example of a SARD is dimethylcurcumin (ASC-J9), which is under development as a topical medication for the potential treatment of acne. SARDs like dimethylcurcumin differ from conventional AR antagonists and AR NTD antagonists in that they may not necessarily bind directly to the AR. Androgen synthesis inhibitors Androgen synthesis inhibitors are enzyme inhibitors that prevent the biosynthesis of androgens. This process occurs mainly in the gonads and adrenal glands, but also occurs in other tissues like the prostate gland, skin, and hair follicles. 
These drugs include aminoglutethimide, ketoconazole, and abiraterone acetate. Aminoglutethimide inhibits cholesterol side-chain cleavage enzyme, also known as P450scc or CYP11A1, which is responsible for the conversion of cholesterol into pregnenolone and by extension the production of all steroid hormones, including the androgens. Ketoconazole and abiraterone acetate are inhibitors of the enzyme CYP17A1, also known as 17α-hydroxylase/17,20-lyase, which is responsible for the conversion of pregnane steroids into androgens, as well as the conversion of mineralocorticoids into glucocorticoids. Because these drugs all prevent the formation of glucocorticoids in addition to androgens, they must be combined with a glucocorticoid like prednisone to avoid adrenal insufficiency. A newer drug currently under development for treatment of prostate cancer, seviteronel, is selective for inhibition of the 17,20-lyase functionality of CYP17A1, and for this reason, unlike earlier drugs, does not require concomitant treatment with a glucocorticoid. 5α-Reductase inhibitors 5α-Reductase inhibitors such as finasteride and dutasteride are inhibitors of 5α-reductase, an enzyme that is responsible for the formation of DHT from testosterone. DHT is between 2.5- and 10-fold more potent than testosterone as an androgen and is produced in a tissue-selective manner based on expression of 5α-reductase. Tissues in which DHT forms at a high rate include the prostate gland, skin, and hair follicles. In accordance, DHT is involved in the pathophysiology of benign prostatic hyperplasia, pattern hair loss, and hirsutism, and 5α-reductase inhibitors are used to treat these conditions. Antigonadotropins Antigonadotropins are drugs that suppress the GnRH-mediated secretion of gonadotropins from the pituitary gland. Gonadotropins include luteinizing hormone (LH) and follicle-stimulating hormone (FSH) and are peptide hormones that signal the gonads to produce sex hormones. By suppressing gonadotropin secretion, antigonadotropins suppress gonadal sex hormone production and by extension circulating androgen levels. GnRH modulators, including both GnRH agonists and GnRH antagonists, are powerful antigonadotropins that are able to suppress androgen levels by 95% in men. In addition, estrogens and progestogens are antigonadotropins via exertion of negative feedback on the hypothalamic–pituitary–gonadal axis (HPG axis). High-dose estrogens are able to suppress androgen levels to castrate levels in men similarly to GnRH modulators, while high-dose progestogens are able to suppress androgen levels by up to approximately 70 to 80% in men. Examples of GnRH agonists include leuprorelin (leuprolide) and goserelin, while an example of a GnRH antagonist is cetrorelix. Estrogens that are or that have been used as antigonadotropins include estradiol, estradiol esters like estradiol valerate, estradiol undecylate, and polyestradiol phosphate, conjugated estrogens, ethinylestradiol, diethylstilbestrol (no longer widely used), and bifluranol. Progestogens that are used as antigonadotropins include chlormadinone acetate, cyproterone acetate, gestonorone caproate, hydroxyprogesterone caproate, medroxyprogesterone acetate, megestrol acetate, and oxendolone. 
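To make the arithmetic behind these suppression figures concrete, the toy calculation below chains the two mechanisms just described: antigonadotropins lower circulating testosterone, and 5α-reductase inhibitors reduce the fraction of the remaining testosterone converted to DHT. The suppression percentages are the rough figures quoted above; the baseline testosterone level and the 5α-reductase inhibition fraction are illustrative assumptions, not clinical data.

```python
# Toy serial model: gonadal testosterone (T) -> 5a-reductase -> DHT.
# Suppression fractions for the drug classes are the approximate figures
# quoted in the text; everything else is an illustrative assumption.

BASELINE_T = 600.0  # ng/dL; a plausible adult male level (assumption)

def residual_t(baseline, antigonadotropin_suppression):
    """Circulating T remaining under antigonadotropin therapy."""
    return baseline * (1.0 - antigonadotropin_suppression)

def relative_dht(testosterone, reductase_inhibition):
    """Relative DHT formation, taken as proportional to substrate T and to
    the uninhibited fraction of 5a-reductase activity (toy linear model)."""
    return testosterone * (1.0 - reductase_inhibition)

t_gnrh = residual_t(BASELINE_T, 0.95)  # GnRH modulator: ~95% suppression
t_prog = residual_t(BASELINE_T, 0.75)  # high-dose progestogen: ~70-80%

print(f"T after GnRH modulator:        {t_gnrh:.0f} ng/dL")  # 30
print(f"T after high-dose progestogen: {t_prog:.0f} ng/dL")  # 150
# Adding an assumed 90%-effective 5a-reductase inhibitor on top:
print(f"Relative DHT, progestogen + 5a-RI: {relative_dht(t_prog, 0.90):.0f}")
```

The sketch only illustrates that the two mechanisms compound: lowering testosterone reduces the substrate available for DHT formation, while 5α-reductase inhibition reduces the conversion itself.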
Miscellaneous Sex hormone-binding globulin modulators In addition to their antigonadotropic effects, estrogens are also functional antiandrogens by decreasing free concentrations of androgens via increasing the hepatic production of sex hormone-binding globulin (SHBG) and by extension circulating SHBG levels. Combined oral contraceptives containing ethinylestradiol have been found to increase circulating SHBG levels by 2- to 4-fold in women and to reduce free testosterone concentrations by 40 to 80%. However, combined oral contraceptives that contain the particularly androgenic progestin levonorgestrel have been found to increase SHBG levels by only 50 to 100%, which is likely because activation of the AR in the liver has the opposite effect of estrogen and suppresses production of SHBG. Levonorgestrel and certain other 19-nortestosterone progestins used in combined oral contraceptives, like norethisterone, also directly bind to and displace androgens from SHBG, which may additionally antagonize the functional antiandrogenic effects of ethinylestradiol. In men, a study found that treatment with a relatively low dosage of 20 μg/day ethinylestradiol for 5 weeks increased circulating SHBG levels by 150% and, due to the accompanying decrease in free testosterone levels, increased total circulating levels of testosterone by 50% (via reduced negative feedback by androgens on the HPG axis). Corticosteroid-binding globulin modulators Estrogens at high doses can partially suppress adrenal androgen production. A study found that treatment with high-dose ethinylestradiol (100 μg/day) reduced levels of major circulating adrenal androgens by 27 to 48% in transgender women. The decrease in adrenal androgens with estrogens is apparent with oral and synthetic estrogens like ethinylestradiol and estramustine phosphate but is minimal with parenteral bioidentical estradiol forms like polyestradiol phosphate. It is thought to be mediated via a hepatic mechanism, probably increased corticosteroid-binding globulin (CBG) production and levels and compensatory changes in adrenal steroid production (e.g., shunting of adrenal androgen synthesis to cortisol production). It is notable in this regard that oral and synthetic estrogens, due to the oral first pass and resistance to hepatic metabolism, have much stronger influences on liver protein synthesis than parenteral estradiol. The decrease in adrenal androgen levels with high-dose estrogen therapy may be beneficial in the treatment of prostate cancer. Anticorticotropins Anticorticotropins such as glucocorticoids and mineralocorticoids work by exerting negative feedback on the hypothalamic–pituitary–adrenal axis (HPA axis), thereby inhibiting the secretion of corticotropin-releasing hormone (CRH) and hence adrenocorticotropic hormone (ACTH; corticotropin) and consequently suppressing the production of androgen prohormones like dehydroepiandrosterone (DHEA), dehydroepiandrosterone sulfate (DHEA-S), and androstenedione in the adrenal gland. They are rarely used clinically as functional antiandrogens, but are used as such in the case of congenital adrenal hyperplasia in girls and women, in which there is excessive production of adrenal androgens due to glucocorticoid deficiency and hence HPA axis overactivity. Insulin sensitizers In women with insulin resistance, such as those with polycystic ovary syndrome, androgen levels are often elevated. 
Metformin, an insulin-sensitizing medication, has indirect antiandrogenic effects in such women, decreasing testosterone levels by as much as 50% secondary to its beneficial effects on insulin sensitivity. Immunogens and vaccines Ovandrotone albumin (Fecundin, Ovastim) and Androvax (androstenedione albumin) are immunogens and vaccines against androstenedione that are used in veterinary medicine to improve fecundity (reproductive rate) in ewes (adult female sheep). The generation of antibodies against androstenedione by these agents is thought to decrease circulating levels of androstenedione and its metabolites (e.g., testosterone and estrogens), which in turn increases the activity of the HPG axis via reduced negative feedback and increases the rate of ovulation, resulting in greater fertility and fecundity. Chemistry Antiandrogens can be divided into several different types based on chemical structure, including steroidal antiandrogens, nonsteroidal antiandrogens, and peptides. Steroidal antiandrogens include compounds like cyproterone acetate, spironolactone, estradiol, abiraterone acetate, and finasteride; nonsteroidal antiandrogens include compounds like bicalutamide, elagolix, diethylstilbestrol, aminoglutethimide, and ketoconazole; and peptides include GnRH analogues like leuprorelin and cetrorelix. History Antigonadotropins like estrogens and progestogens were first introduced in the 1930s. The beneficial effects of androgen deprivation via surgical castration or high-dose estrogen therapy on prostate cancer were discovered in 1941. AR antagonists were first discovered in the early 1960s. The steroidal antiandrogen cyproterone acetate was discovered in 1961 and introduced in 1973, and is often described as the first antiandrogen to have been marketed. However, spironolactone was introduced in 1959, although its antiandrogen effects were not recognized or taken advantage of until later and were originally an unintended off-target action of the drug. In addition to spironolactone, chlormadinone acetate and megestrol acetate are steroidal antiandrogens that are weaker than cyproterone acetate but were also introduced earlier, in the 1960s. Other early steroidal antiandrogens that were developed around this time but were never marketed include benorterone (SKF-7690; 17α-methyl-B-nortestosterone), BOMT (Ro 7–2340), cyproterone (SH-80881), and trimethyltrienolone (R-2956). The nonsteroidal antiandrogen flutamide was first reported in 1967. It was introduced in 1983 and was the first nonsteroidal antiandrogen marketed. Another early nonsteroidal antiandrogen, DIMP (Ro 7–8117), which is structurally related to thalidomide and is a relatively weak antiandrogen, was first described in 1973 and was never marketed. Flutamide was followed by nilutamide in 1989 and bicalutamide in 1995. In addition to these three drugs, which have been regarded as first-generation nonsteroidal antiandrogens, the second-generation nonsteroidal antiandrogens enzalutamide and apalutamide were introduced in 2012 and 2018, respectively. They differ from the earlier nonsteroidal antiandrogens chiefly in being much more efficacious. The androgen synthesis inhibitors aminoglutethimide and ketoconazole were first marketed in 1960 and 1977, respectively, and the newer drug abiraterone acetate was introduced in 2011. GnRH modulators were first introduced in the 1980s. The 5α-reductase inhibitors finasteride and dutasteride were introduced in 1992 and 2002, respectively. 
Elagolix, the first orally active GnRH antagonist to be marketed, was introduced in 2018. Timeline The following is a timeline of events in the history of antiandrogens: 1941: Hudgins and Hodges show that androgen deprivation via high-dose estrogen therapy or surgical castration treats prostate cancer 1957: The steroidal antiandrogen spironolactone is first synthesized 1960: Spironolactone is first introduced for medical use, as an antimineralocorticoid 1961: The steroidal antiandrogen cyproterone acetate is first synthesized 1962: Spironolactone is first reported to produce gynecomastia in men 1963: The antiandrogenic activity of cyproterone acetate is discovered 1966: Benorterone is the first known antiandrogen to be studied clinically, to treat acne and hirsutism in women 1967: A known antiandrogen, benorterone, is first reported to induce gynecomastia in males 1967: The first-generation nonsteroidal antiandrogen flutamide is first synthesized 1967: Cyproterone acetate is first studied clinically, to treat sexual deviance in men 1969: Cyproterone acetate is first studied in the treatment of acne, hirsutism, seborrhea, and scalp hair loss in women 1969: The antiandrogenic activity of spironolactone is discovered 1972: The antiandrogenic activity of flutamide is first reported 1973: Cyproterone acetate is first introduced for medical use, to treat sexual deviance 1977: The first-generation nonsteroidal antiandrogen nilutamide is first described 1978: Spironolactone is first studied in the treatment of hirsutism in women 1979: Combined androgen blockade is first studied 1980: Medical castration via a GnRH analogue is first achieved 1982: The first-generation nonsteroidal antiandrogen bicalutamide is first described 1982: Combined androgen blockade for prostate cancer is developed 1983: Flutamide is first introduced, in Chile, for medical use, to treat prostate cancer 1987: Nilutamide is first introduced, in France, for medical use, to treat prostate cancer 1989: Combined androgen blockade via flutamide and a GnRH analogue is found to be superior to a GnRH analogue alone for prostate cancer 1989: Flutamide is first introduced for medical use in the United States, to treat prostate cancer 1989: Flutamide is first studied in the treatment of hirsutism in women 1992: The androgen synthesis inhibitor abiraterone acetate is first described 1995: Bicalutamide is first introduced for medical use, to treat prostate cancer 1996: Nilutamide is first introduced for medical use in the United States, to treat prostate cancer 2006: The second-generation nonsteroidal antiandrogen enzalutamide is first described 2007: The second-generation nonsteroidal antiandrogen apalutamide is first described 2011: Abiraterone acetate is first introduced for medical use, to treat prostate cancer 2012: Enzalutamide is first introduced for medical use, to treat prostate cancer 2018: Apalutamide is first introduced for medical use, to treat prostate cancer 2018: Elagolix is the first orally active GnRH antagonist to be introduced for medical use 2019: Relugolix is the second orally active GnRH antagonist to be introduced for medical use Society and culture Etymology The term antiandrogen is generally used to refer specifically to AR antagonists, as described by Dorfman (1970). However, in spite of this, the term may also be used to describe functional antiandrogens like androgen synthesis inhibitors and antigonadotropins, including even estrogens and progestogens. 
For example, the progestogen and hence antigonadotropin medroxyprogesterone acetate is sometimes described as a steroidal antiandrogen, even though it is not an antagonist of the AR. Research Topical administration There has been much interest and effort in the development of topical AR antagonists to treat androgen-dependent conditions like acne and pattern hair loss in males. However, whereas systemic administration of antiandrogens is very effective in treating these conditions, topical administration has generally been found to possess only limited and modest effectiveness, even when high-affinity steroidal AR antagonists like cyproterone acetate and spironolactone have been employed. Moreover, in the specific case of acne treatment, topical AR antagonists have been found much less effective than established treatments like benzoyl peroxide and antibiotics. A variety of AR antagonists have been developed for topical use but have not completed development and hence have never been marketed. These include the steroidal AR antagonists cyproterone, rosterolone, and topterone and the nonsteroidal AR antagonists cioteronel, inocoterone acetate, RU-22930, RU-58642, and RU-58841; clascoterone, formerly in this group, completed development and was approved for acne in 2020 (see above). However, one topical AR antagonist, topilutamide (fluridil), has been introduced in a few European countries for the treatment of pattern hair loss in men. In addition, a topical 5α-reductase inhibitor and weak estrogen, alfatradiol, has also been introduced in some European countries for the same indication, although its effectiveness is controversial. Spironolactone has been marketed in Italy in the form of a topical cream under the brand name Spiroderm for the treatment of acne and hirsutism, but this formulation was discontinued and hence is no longer available. Male contraception Antiandrogens, such as cyproterone acetate, have been studied for potential use as male hormonal contraceptives. While effective in suppressing male fertility, their use as monotherapies is precluded by side effects, such as androgen deficiency (e.g., demasculinization, sexual dysfunction, hot flashes, osteoporosis) and feminization (e.g., gynecomastia). The combination of a primary antigonadotropin such as cyproterone acetate to suppress fertility with an androgen like testosterone to prevent systemic androgen deficiency, resulting in a selective antiandrogenic action locally in the testes, has been extensively studied and has shown promising results, but has not been approved for clinical use at this time. Dimethandrolone undecanoate (developmental code name CDB-4521), an orally active dual AAS and progestogen, is under investigation as a potential male contraceptive and as the first male birth control pill. Breast cancer Antiandrogens such as bicalutamide, enzalutamide, and abiraterone acetate are under investigation for the potential treatment of breast cancer, including AR-expressing triple-negative breast cancer and other types of AR-expressing breast cancer. Miscellaneous Antiandrogens may be effective in the treatment of obsessive–compulsive disorder. See also Androgen insensitivity syndrome Antiandrogens in the environment Androgen replacement therapy
Antiandrogen
[ "Chemistry", "Biology" ]
9,843
[ "Behavior", "Sex hormones", "Psychoactive drugs", "Neurochemistry", "Sexuality" ]
180,121
https://en.wikipedia.org/wiki/Medication
A medication (also called medicament, medicine, pharmaceutical drug, medicinal product, medicinal drug or simply drug) is a drug used to diagnose, cure, treat, or prevent disease. Drug therapy (pharmacotherapy) is an important part of the medical field and relies on the science of pharmacology for continual advancement and on pharmacy for appropriate management. Drugs are classified in many ways. One of the key divisions is by level of control, which distinguishes prescription drugs (those that a pharmacist dispenses only on the order of a medical prescription) from over-the-counter drugs (those that consumers can order for themselves). Medicines may be classified by mode of action, route of administration, biological system affected, or therapeutic effects. The World Health Organization keeps a list of essential medicines. Drug discovery and drug development are complex and expensive endeavors undertaken by pharmaceutical companies, academic scientists, and governments. As a result of this complex path from discovery to commercialization, partnering has become a standard practice for advancing drug candidates through development pipelines. Governments generally regulate what drugs can be marketed, how drugs are marketed, and in some jurisdictions, drug pricing. Controversies have arisen over drug pricing and the disposal of used medications. Definition Medication is a medicine or a chemical compound used to treat or cure illness. According to Encyclopædia Britannica, medication is "a substance used in treating a disease or relieving pain". As defined by the National Cancer Institute, dosage forms of medication can include tablets, capsules, liquids, creams, and patches. Medications can be administered in different ways, such as by mouth, by infusion into a vein, or by drops put into the ear or eye. A medication that does not contain an active ingredient and is used in research studies is called a placebo. In Europe, the term is "medicinal product", and it is defined by EU law as: "Any substance or combination of substances presented as having properties for treating or preventing disease in human beings; or" "Any substance or combination of substances which may be used in or administered to human beings either with a view to restoring, correcting, or modifying physiological functions by exerting a pharmacological, immunological or metabolic action or to making a medical diagnosis." In the US, a "drug" is: A substance (other than food) intended to affect the structure or any function of the body. A substance intended for use as a component of a medicine but not a device or a component, part, or accessory of a device. A substance intended for use in the diagnosis, cure, mitigation, treatment, or prevention of disease. A substance recognized by an official pharmacopeia or formulary. Biological products are included within this definition and are generally covered by the same laws and regulations, but differences exist regarding their manufacturing processes (chemical process versus biological process). Usage Drug use among elderly Americans has been studied: in a group of 2,377 people with an average age of 71 surveyed between 2005 and 2006, 84% took at least one prescription drug, 44% took at least one over-the-counter (OTC) drug, and 52% took at least one dietary supplement; in a group of 2,245 elderly Americans (average age of 71) surveyed over the period 2010–2011, those percentages were 88%, 38%, and 64%. 
Classification One of the key classifications is between traditional small-molecule drugs, usually derived from chemical synthesis, and biological medicinal products, which include recombinant proteins, vaccines, blood products used therapeutically (such as IVIG), gene therapy, and cell therapy (for instance, stem cell therapies). Beyond their origin, pharmaceuticals are classified into various other groups on the basis of pharmacological properties, such as mode of action, chemical properties, mode or route of administration, biological system affected, or therapeutic effects. An elaborate and widely used classification system is the Anatomical Therapeutic Chemical Classification System (ATC system). The World Health Organization keeps a list of essential medicines. A sampling of classes of medicine includes: Antipyretics: reducing fever (pyrexia/pyresis) Analgesics: reducing pain (painkillers) Antimalarial drugs: treating malaria Antibiotics: inhibiting germ growth Antiseptics: prevention of germ growth near burns, cuts, and wounds Mood stabilizers: lithium and valproate Hormone replacements: Premarin Oral contraceptives: Enovid, "biphasic" pill, and "triphasic" pill Stimulants: methylphenidate, amphetamine Tranquilizers: meprobamate, chlorpromazine, reserpine, chlordiazepoxide, diazepam, and alprazolam Statins: lovastatin, pravastatin, and simvastatin Pharmaceuticals may also be described as "specialty", independent of other classifications, which is an ill-defined class of drugs that might be difficult to administer, require special handling during administration, require patient monitoring during and immediately after administration, have particular regulatory requirements restricting their use, and are generally expensive relative to other drugs. Types of medicines For the digestive system Lower digestive tract: laxatives, antispasmodics, antidiarrhoeals, bile acid sequestrants, opioids. Upper digestive tract: antacids, reflux suppressants, antiflatulents, antidopaminergics, proton pump inhibitors (PPIs), H2-receptor antagonists, cytoprotectants, prostaglandin analogues. For the cardiovascular system Affecting blood pressure (antihypertensive drugs): ACE inhibitors, angiotensin receptor blockers, beta-blockers, α blockers, calcium channel blockers, thiazide diuretics, loop diuretics, aldosterone inhibitors. Coagulation: anticoagulants, heparin, antiplatelet drugs, fibrinolytics, anti-hemophilic factors, haemostatic drugs. General: β-receptor blockers ("beta blockers"), calcium channel blockers, diuretics, cardiac glycosides, antiarrhythmics, nitrates, antianginals, vasoconstrictors, vasodilators. HMG-CoA reductase inhibitors (statins) for lowering LDL cholesterol: hypolipidaemic agents. For the central nervous system Drugs affecting the central nervous system include psychedelics, hypnotics, anaesthetics, antipsychotics, eugeroics, antidepressants (including tricyclic antidepressants, monoamine oxidase inhibitors, lithium salts, and selective serotonin reuptake inhibitors (SSRIs)), antiemetics, anticonvulsants/antiepileptics, anxiolytics, barbiturates, movement disorder (e.g., Parkinson's disease) drugs, nootropics, stimulants (including amphetamines), benzodiazepines, cyclopyrrolones, dopamine antagonists, antihistamines, cholinergics, anticholinergics, emetics, cannabinoids, and 5-HT (serotonin) antagonists. For pain The main classes of painkillers are NSAIDs, opioids, and local anesthetics. 
For consciousness (anesthetic drugs) Some anesthetics include benzodiazepines and barbiturates. For musculoskeletal disorders The main categories of drugs for musculoskeletal disorders are: NSAIDs (including COX-2 selective inhibitors), muscle relaxants, neuromuscular drugs, and anticholinesterases. For the eye Anti-allergy: mast cell inhibitors. Anti-fungal: imidazoles, polyenes. Anti-glaucoma: adrenergic agonists, beta-blockers, carbonic anhydrase inhibitors/hyperosmotics, cholinergics, miotics, parasympathomimetics, prostaglandin agonists/prostaglandin inhibitors, nitroglycerin. Anti-inflammatory: NSAIDs, corticosteroids. Antibacterial: antibiotics, topical antibiotics, sulfa drugs, aminoglycosides, fluoroquinolones. Antiviral drugs. Diagnostic: topical anesthetics, sympathomimetics, parasympatholytics, mydriatics, cycloplegics. General: adrenergic neurone blockers, astringents. For the ear, nose, and oropharynx Antibiotics, sympathomimetics, antihistamines, anticholinergics, NSAIDs, corticosteroids, antiseptics, local anesthetics, antifungals, and cerumenolytics. For the respiratory system Bronchodilators, antitussives, mucolytics, decongestants, inhaled and systemic corticosteroids, beta2-adrenergic agonists, anticholinergics, mast cell stabilizers, leukotriene antagonists. For endocrine problems Androgens, antiandrogens, estrogens, gonadotropin, corticosteroids, human growth hormone, insulin, antidiabetics (sulfonylureas, biguanides/metformin, thiazolidinediones, insulin), thyroid hormones, antithyroid drugs, calcitonin, diphosphonate, vasopressin analogues. For the reproductive system or urinary system Antifungals, alkalinizing agents, quinolones, antibiotics, cholinergics, anticholinergics, antispasmodics, 5α-reductase inhibitors, selective α1-blockers, sildenafil, fertility medications. For contraception Hormonal contraception. Ormeloxifene. Spermicide. For obstetrics and gynecology NSAIDs, anticholinergics, haemostatic drugs, antifibrinolytics, hormone replacement therapy (HRT), bone regulators, beta-receptor agonists, follicle stimulating hormone, luteinising hormone, LHRH, gamolenic acid, gonadotropin release inhibitor, progestogen, dopamine agonists, oestrogen, prostaglandins, gonadorelin, clomiphene, tamoxifen, diethylstilbestrol. For the skin Emollients, anti-pruritics, antifungals, antiseptics, scabicides, pediculicides, tar products, vitamin A derivatives, vitamin D analogues, keratolytics, abrasives, systemic antibiotics, topical antibiotics, hormones, desloughing agents, exudate absorbents, fibrinolytics, proteolytics, sunscreens, antiperspirants, corticosteroids, immune modulators. For infections and infestations Antibiotics, antifungals, antileprotics, antituberculous drugs, antimalarials, anthelmintics, amoebicides, antivirals, antiprotozoals, probiotics, prebiotics, antitoxins, and antivenoms. For the immune system Vaccines, immunoglobulins, immunosuppressants, interferons, and monoclonal antibodies. For allergic disorders Anti-allergics, antihistamines, NSAIDs, corticosteroids. For nutrition Tonics, electrolytes and mineral preparations (including iron preparations and magnesium preparations), parenteral nutrition, vitamins, anti-obesity drugs, anabolic drugs, haematopoietic drugs, food product drugs. For neoplastic disorders Cytotoxic drugs, therapeutic antibodies, sex hormones, aromatase inhibitors, somatostatin inhibitors, recombinant interleukins, G-CSF, erythropoietin. For diagnostics Contrast media. 
For euthanasia A euthanaticum is used for euthanasia and physician-assisted suicide. Euthanasia is not permitted by law in many countries, and consequently, medicines will not be licensed for this use in those countries. Administration A single drug product may contain one or multiple active ingredients. Administration is the process by which a patient takes a medicine. There are three major categories of drug administration: enteral (via the human gastrointestinal tract), injection into the body, and by other routes (dermal, nasal, ophthalmic, otologic, and urogenital). Oral administration, the most common form of enteral administration, can be performed using various dosage forms including tablets or capsules and liquids such as syrups or suspensions. Other ways to take medication include buccally (placed inside the cheek), sublingually (placed underneath the tongue), eye and ear drops (dropped into the eye or ear), and transdermally (applied to the skin). Medications can also be administered in one dose, as a bolus. Administration frequencies are often abbreviated from Latin, such as every 8 hours being written Q8H, from quaque VIII hora. Frequencies are often expressed as the number of times a drug is used per day (e.g., four times a day). They may include event-related information (e.g., 1 hour before meals, in the morning, at bedtime) or be complementary to an interval, although seemingly equivalent expressions may have different implications (e.g., every 8 hours versus 3 times a day). Drug discovery In the fields of medicine, biotechnology, and pharmacology, drug discovery is the process by which new drugs are discovered. Historically, drugs were discovered by identifying the active ingredient from traditional remedies or by serendipitous discovery. Later, chemical libraries of synthetic small molecules, natural products, or extracts were screened in intact cells or whole organisms to identify substances that have a desirable therapeutic effect, in a process known as classical pharmacology. Since the sequencing of the human genome, which allowed rapid cloning and synthesis of large quantities of purified proteins, it has become common practice to use high-throughput screening of large compound libraries against isolated biological targets which are hypothesized to be disease-modifying, in a process known as reverse pharmacology. Hits from these screens are then tested in cells and then in animals for efficacy. Even more recently, scientists have been able to understand the shape of biological molecules at the atomic level and to use that knowledge to design (see drug design) drug candidates. Modern drug discovery involves the identification of screening hits, medicinal chemistry, and optimization of those hits to increase the affinity, selectivity (to reduce the potential of side effects), efficacy/potency, metabolic stability (to increase the half-life), and oral bioavailability. Once a compound that fulfills all of these requirements has been identified, it will begin the process of drug development prior to clinical trials. One or more of these steps may, but not necessarily, involve computer-aided drug design. Despite advances in technology and understanding of biological systems, drug discovery is still a lengthy, "expensive, difficult, and inefficient process" with a low rate of new therapeutic discovery. In 2010, the research and development cost of each new molecular entity (NME) was approximately US$1.8 billion. 
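One way to see how a per-NME figure of this magnitude can arise is to fold trial attrition into the accounting: every approved drug must also pay for the many candidates that fail along the way. The sketch below uses invented per-phase success probabilities and costs purely for illustration; it omits capital costs, post-approval spending, and discovery-stage failures, so it understates real totals.

```python
# Toy attrition model: why the expected cost per *approved* drug far
# exceeds the cost of running any single candidate through the pipeline.
# Phase success probabilities and costs are invented for illustration.

phases = [
    # (name, probability of advancing, cost in US$ millions)
    ("Preclinical", 0.35, 5),
    ("Phase I",     0.60, 25),
    ("Phase II",    0.35, 60),
    ("Phase III",   0.60, 250),
]

survival = 1.0       # fraction of starting candidates still alive
expected_cost = 0.0  # expected spend per starting candidate
for name, p_advance, cost in phases:
    expected_cost += survival * cost  # a phase is paid for only if reached
    survival *= p_advance             # attrition after the phase

p_approval = survival
print(f"P(candidate is approved)     = {p_approval:.4f}")   # 0.0441
print(f"Expected spend per candidate = ${expected_cost:.1f}M")  # ~57.9M
print(f"Expected cost per approval   = ${expected_cost / p_approval:,.0f}M")
```

With these invented numbers, only about 4% of starting candidates reach approval, and the expected cost per approval comes out around $1.3 billion, the same order of magnitude as the figure quoted above even before the omitted cost categories are counted.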
Drug discovery is done by pharmaceutical companies, sometimes with research assistance from universities. The "final product" of drug discovery is a patent on the potential drug. The drug requires very expensive Phase I, II, and III clinical trials, and most of them fail. Small companies have a critical role, often then selling the rights to larger companies that have the resources to run the clinical trials. Drug discovery is distinct from drug development: drug discovery is the process of identifying a new medicine, whereas drug development is the process of bringing a new drug molecule into clinical practice. In its broad definition, drug development encompasses all steps from the basic research process of finding a suitable molecular target to supporting the drug's commercial launch. Development Drug development is the process of bringing a new drug to the market once a lead compound has been identified through the process of drug discovery. It includes pre-clinical research (microorganisms/animals) and clinical trials (on humans) and may include the step of obtaining regulatory approval to market the drug. Drug Development Process Discovery: The drug development process starts with discovery, the identification of a new medicine. Development: Chemicals extracted from natural products are used to make pills, capsules, or syrups for oral use; injections for direct infusion into the blood; and drops for the eyes or ears. Preclinical research: Drugs undergo laboratory or animal testing to ensure that they are safe enough to be tested in humans. Clinical testing: The drug is tested in people to confirm that it is safe to use. FDA Review: The drug application is submitted to the FDA for review before the drug is launched onto the market. FDA Post-Market Review: The drug is reviewed and monitored by the FDA for safety once it is available to the public. Regulation The regulation of drugs varies by jurisdiction. In some countries, such as the United States, they are regulated at the national level by a single agency. In other jurisdictions, they are regulated at the state level, or at both state and national levels by various bodies, as is the case in Australia. The role of therapeutic goods regulation is designed mainly to protect the health and safety of the population. Regulation is aimed at ensuring the safety, quality, and efficacy of the therapeutic goods which are covered under the scope of the regulation. In most jurisdictions, therapeutic goods must be registered before they are allowed to be marketed. There is usually some degree of restriction on the availability of certain therapeutic goods depending on their risk to consumers. Depending upon the jurisdiction, drugs may be divided into over-the-counter drugs (OTC), which may be available without special restrictions, and prescription drugs, which must be prescribed by a licensed medical practitioner in accordance with medical guidelines due to the risk of adverse effects and contraindications. The precise distinction between OTC and prescription depends on the legal jurisdiction. A third category, "behind-the-counter" drugs, is implemented in some jurisdictions. These do not require a prescription, but must be kept in the dispensary, not visible to the public, and be sold only by a pharmacist or pharmacy technician. Doctors may also prescribe prescription drugs for off-label use – purposes which the drugs were not originally approved for by the regulatory agency. The Classification of Pharmaco-Therapeutic Referrals helps guide the referral process between pharmacists and doctors. 
The International Narcotics Control Board of the United Nations oversees the worldwide prohibition of certain drugs. It publishes a lengthy list of chemicals and plants whose trade and consumption (where applicable) are forbidden. OTC drugs are sold without restriction as they are considered safe enough that most people will not hurt themselves accidentally by taking them as instructed. Many countries, such as the United Kingdom, have a third category of "pharmacy medicines", which can be sold only in registered pharmacies by or under the supervision of a pharmacist. Medical errors include over-prescription and polypharmacy, mis-prescription, contraindication, and lack of detail in dosage and administration instructions. In 2000, the definition of a prescription error was studied using a Delphi-method conference; the conference was motivated by ambiguity in what constitutes a prescription error and the need for a uniform definition in studies. Drug pricing In many jurisdictions, drug prices are regulated. United Kingdom In the UK, the Pharmaceutical Price Regulation Scheme (PPRS) is intended to ensure that the National Health Service is able to purchase drugs at reasonable prices. The prices are negotiated between the Department of Health, acting with the authority of Northern Ireland and the UK Government, and the representatives of the pharmaceutical industry, the Association of the British Pharmaceutical Industry (ABPI). For 2017, the payment percentage set by the PPRS was 4.75%. Canada In Canada, the Patented Medicine Prices Review Board examines drug pricing and determines if a price is excessive or not. In these circumstances, drug manufacturers must submit a proposed price to the appropriate regulatory agency. Furthermore, "the International Therapeutic Class Comparison Test is responsible for comparing the National Average Transaction Price of the patented drug product under review"; the countries against which prices are compared are France, Germany, Italy, Sweden, Switzerland, the United Kingdom, and the United States. Brazil In Brazil, prices have been regulated since 1999 through legislation under the name Medicamento Genérico (generic drugs). India In India, drug prices are regulated by the National Pharmaceutical Pricing Authority. United States In the United States, drug costs are partially unregulated, but instead are the result of negotiations between drug companies and insurance companies. High prices have been attributed to monopolies given to manufacturers by the government. New drug development costs continue to rise as well. Despite the enormous advances in science and technology, the number of new drugs approved per billion dollars spent has halved every 9 years since 1950. Blockbuster drug A blockbuster drug is a drug that generates more than $1 billion in revenue for a pharmaceutical company in a single year. Cimetidine was the first drug ever to reach more than $1 billion a year in sales, thus making it the first blockbuster drug. History Prescription drug history Antibiotics first arrived on the medical scene in 1932, thanks to Gerhard Domagk, and were dubbed "wonder drugs". The introduction of the sulfa drugs caused the annual mortality rate from pneumonia in the U.S. to drop from 0.2% to 0.05% (a quarter as much) by 1939. Antibiotics inhibit the growth or the metabolic activities of bacteria and other microorganisms by a chemical substance of microbial origin. 
Penicillin, introduced a few years later, provided a broader spectrum of activity compared to sulfa drugs and reduced side effects. Streptomycin, found in 1942, proved to be the first drug effective against the cause of tuberculosis and also came to be the best known of a long series of important antibiotics. A second generation of antibiotics was introduced in the 1940s: aureomycin and chloramphenicol. Aureomycin was the best known of the second generation.

Lithium's use for nervous disorders, and its possible mood-stabilizing or prophylactic effect, was discovered in the 19th century; it was cheap and easily produced. As lithium fell out of favor in France, valpromide came into play. This anticonvulsant was the origin of the drug that eventually created the mood stabilizer category. Valpromide had distinct psychotropic effects that were of benefit both in the treatment of acute manic states and in the maintenance treatment of manic-depressive illness. Psychotropics can be either sedative or stimulant; sedatives aim at damping down the extremes of behavior, while stimulants aim at restoring normality by increasing tone. Soon arose the notion of a tranquilizer, which was quite different from any sedative or stimulant. The term tranquilizer took over the notions of sedatives and became the dominant term in the West through the 1980s. In Japan, during this time, the term tranquilizer produced the notion of a psyche-stabilizer, and the term mood stabilizer vanished.

Premarin (conjugated estrogens, introduced in 1942) and Prempro (a combination estrogen-progestin pill, introduced in 1995) dominated hormone replacement therapy (HRT) during the 1990s. HRT is not a life-saving drug, nor does it cure any disease; it has been prescribed to improve quality of life. Doctors prescribe estrogen for their older female patients both to treat short-term menopausal symptoms and to prevent long-term diseases. In the 1960s and early 1970s, more and more physicians began to prescribe estrogen for their female patients. Between 1991 and 1999, Premarin was listed as the most popular prescription and best-selling drug in America.

The first oral contraceptive, Enovid, was approved by the FDA in 1960. Oral contraceptives inhibit ovulation and so prevent conception. Enovid was known to be much more effective than alternatives including the condom and the diaphragm. As early as 1960, oral contraceptives were available in several different strengths from every manufacturer. In the 1980s and 1990s, an increasing number of options arose including, most recently, a new delivery system for the oral contraceptive via a transdermal patch. In 1982, a new version of "the pill" was introduced, known as the biphasic pill. By 1985, a new triphasic pill was approved. Physicians began to think of "the pill" as an excellent means of birth control for young women.

Stimulants such as Ritalin (methylphenidate) came to be pervasive tools for behavior management and modification in young children. Ritalin was first marketed in 1955 for narcolepsy; its potential users were the middle-aged and the elderly. It was not until the 1980s, when it became associated with hyperactivity in children, that Ritalin took off on the market. Medical use of methylphenidate is predominantly for symptoms of attention deficit hyperactivity disorder (ADHD). Consumption of methylphenidate in the U.S. outpaced all other countries between 1991 and 1999. Significant growth in consumption was also evident in Canada, New Zealand, Australia, and Norway.
Currently, 85% of the world's methylphenidate is consumed in America. The first minor tranquilizer was meprobamate. Only fourteen months after it was made available, meprobamate had become the country's largest-selling prescription drug. By 1957, meprobamate had become the fastest-growing drug in history. The popularity of meprobamate paved the way for Librium and Valium, two minor tranquilizers that belonged to a new chemical class of drugs called the benzodiazepines. These were drugs that worked chiefly as anti-anxiety agents and muscle relaxants. The first benzodiazepine was Librium. Three months after it was approved, Librium had become the most prescribed tranquilizer in the nation. Three years later, Valium hit the shelves and was ten times more effective as a muscle relaxant and anti-convulsant. Valium was the most versatile of the minor tranquilizers. Later came the widespread adoption of major tranquilizers such as chlorpromazine and the drug reserpine. In 1970, sales began to decline for Valium and Librium, but sales of new and improved tranquilizers, such as Xanax, introduced in 1981 for the newly created diagnosis of panic disorder, soared. Mevacor (lovastatin) was the first and most influential statin in the American market. The 1991 launch of Pravachol (pravastatin), the second statin available in the United States, and the release of Zocor (simvastatin) meant that Mevacor was no longer the only statin on the market. In 1998, Viagra was released as a treatment for erectile dysfunction.

Ancient pharmacology

Using plants and plant substances to treat all kinds of diseases and medical conditions is believed to date back to prehistoric medicine. The Kahun Gynaecological Papyrus, the oldest known medical text of any kind, dates to about 1800 BC and represents the first documented use of any kind of drug. It and other medical papyri describe Ancient Egyptian medical practices, such as using honey to treat infections and the legs of bee-eaters to treat neck pains. Ancient Babylonian medicine demonstrated the use of medication in the first half of the 2nd millennium BC. Medicinal creams and pills were employed as treatments. On the Indian subcontinent, the Atharvaveda, a sacred text of Hinduism whose core dates from the second millennium BC, although the hymns recorded in it are believed to be older, is the first Indic text dealing with medicine. It describes plant-based drugs to counter diseases. The earliest foundations of ayurveda were built on a synthesis of selected ancient herbal practices, together with a massive addition of theoretical conceptualizations, new nosologies, and new therapies dating from about 400 BC onwards. The student of Āyurveda was expected to know ten arts that were indispensable in the preparation and application of his medicines: distillation, operative skills, cooking, horticulture, metallurgy, sugar manufacture, pharmacy, analysis and separation of minerals, compounding of metals, and preparation of alkalis. The Hippocratic Oath for physicians, attributed to fifth century BC Greece, refers to the existence of "deadly drugs", and ancient Greek physicians imported drugs from Egypt and elsewhere. The pharmacopoeia De materia medica, written between 50 and 70 CE by the Greek physician Pedanius Dioscorides, was widely read for more than 1,500 years.

Medieval pharmacology

Al-Kindi's ninth century AD book De Gradibus and Ibn Sina (Avicenna)'s The Canon of Medicine cover a range of drugs known to the practice of medicine in the medieval Islamic world.
Medieval medicine of Western Europe saw advances in surgery compared to earlier periods, but few truly effective drugs existed beyond opium (found in such extremely popular drugs as the "Great Rest" of the Antidotarium Nicolai at the time) and quinine. Folklore cures and potentially poisonous metal-based compounds were popular treatments. Theodoric Borgognoni (1205–1296), one of the most significant surgeons of the medieval period, was responsible for introducing and promoting important surgical advances, including basic antiseptic practice and the use of anaesthetics. Garcia de Orta described some herbal treatments that were used.

Modern pharmacology

For most of the 19th century, drugs were not highly effective, leading Oliver Wendell Holmes Sr. to famously comment in 1842 that "if all medicines in the world were thrown into the sea, it would be all the better for mankind and all the worse for the fishes". During the First World War, Alexis Carrel and Henry Dakin developed the Carrel-Dakin method of treating wounds with irrigation using Dakin's solution, a germicide which helped prevent gangrene. In the inter-war period, the first anti-bacterial agents such as the sulpha antibiotics were developed. The Second World War saw the introduction of widespread and effective antimicrobial therapy with the development and mass production of penicillin antibiotics, made possible by the pressures of the war and the collaboration of British scientists with the American pharmaceutical industry.

Medicines commonly used by the late 1920s included aspirin, codeine, and morphine for pain; digitalis, nitroglycerin, and quinine for heart disorders; and insulin for diabetes. Other drugs included antitoxins, a few biological vaccines, and a few synthetic drugs. In the 1930s, antibiotics emerged: first sulfa drugs, then penicillin and other antibiotics. Drugs increasingly became "the center of medical practice". In the 1950s, other drugs emerged, including corticosteroids for inflammation, rauvolfia alkaloids as tranquilizers and antihypertensives, antihistamines for nasal allergies, xanthines for asthma, and typical antipsychotics for psychosis. As of 2007, thousands of approved drugs had been developed. Increasingly, biotechnology is used to discover biopharmaceuticals. Recently, multi-disciplinary approaches have yielded a wealth of new data on the development of novel antibiotics and antibacterials and on the use of biological agents for antibacterial therapy.

In the 1950s, new psychiatric drugs, notably the antipsychotic chlorpromazine, were designed in laboratories and slowly came into preferred use. Although often accepted as an advance in some ways, there was some opposition, due to serious adverse effects such as tardive dyskinesia. Patients often opposed psychiatry and refused or stopped taking the drugs when not subject to psychiatric control. Governments have been heavily involved in the regulation of drug development and drug sales. In the U.S., the Elixir Sulfanilamide disaster led to the 1938 Federal Food, Drug, and Cosmetic Act, which required manufacturers to file new drugs with the Food and Drug Administration (FDA). The 1951 Durham-Humphrey Amendment required certain drugs to be sold by prescription. In 1962, a subsequent amendment required new drugs to be tested for efficacy and safety in clinical trials. Until the 1970s, drug prices were not a major concern for doctors and patients.
As more drugs became prescribed for chronic illnesses, however, costs became burdensome, and by the 1970s nearly every U.S. state required or encouraged the substitution of generic drugs for higher-priced brand names. This also led to Medicare Part D, the U.S. drug benefit that took effect in 2006 and offers Medicare coverage for drugs. As of 2008, the United States was the leader in medical research, including pharmaceutical development. U.S. drug prices are among the highest in the world, and drug innovation is correspondingly high. In 2000, U.S.-based firms developed 29 of the 75 top-selling drugs; firms from the second-largest market, Japan, developed eight, and the United Kingdom contributed 10. France, which imposes price controls, developed three. Throughout the 1990s, outcomes were similar.

Controversies

Controversies concerning pharmaceutical drugs include patient access to drugs under development and not yet approved, pricing, and environmental issues.

Access to unapproved drugs

Governments worldwide have created provisions for granting access to drugs prior to approval for patients who have exhausted all alternative treatment options and do not match clinical trial entry criteria. Often grouped under the labels of compassionate use, expanded access, or named patient supply, these programs are governed by rules which vary by country, defining access criteria, data collection, promotion, and control of drug distribution. Within the United States, pre-approval demand is generally met through treatment investigational new drug (IND) applications or single-patient INDs. These mechanisms, which fall under the label of expanded access programs, provide access to drugs for groups of patients or individuals residing in the US. Outside the US, Named Patient Programs provide controlled, pre-approval access to drugs in response to requests by physicians on behalf of specific, or "named", patients before those medicines are licensed in the patient's home country. Through these programs, patients are able to access drugs in late-stage clinical trials or approved in other countries for a genuine, unmet medical need, before those drugs have been licensed in the patient's home country. Patients who have not been able to get access to drugs in development have organized and advocated for greater access. In the United States, ACT UP formed in the 1980s and eventually formed its Treatment Action Group, in part to pressure the US government to put more resources into discovering treatments for AIDS and then to speed release of drugs that were under development. The Abigail Alliance was established in November 2001 by Frank Burroughs in memory of his daughter, Abigail. The Alliance seeks broader availability of investigational drugs on behalf of terminally ill patients. In 2013, BioMarin Pharmaceutical was at the center of a high-profile debate regarding expanded access of cancer patients to experimental drugs.

Access to medicines and drug pricing

Essential medicines, as defined by the World Health Organization (WHO), are "those drugs that satisfy the health care needs of the majority of the population; they should therefore be available at all times in adequate amounts and in appropriate dosage forms, at a price the community can afford."
Recent studies have found that most of the medicines on the WHO essential medicines list, outside of the field of HIV drugs, are not patented in the developing world, and that lack of widespread access to these medicines arises from issues fundamental to economic development – lack of infrastructure and poverty. Médecins Sans Frontières also runs the Campaign for Access to Essential Medicines, which includes advocacy for greater resources to be devoted to currently untreatable diseases that primarily occur in the developing world. The Access to Medicine Index tracks how well pharmaceutical companies make their products available in the developing world. World Trade Organization negotiations in the 1990s, including the TRIPS Agreement and the Doha Declaration, have centered on issues at the intersection of international trade in pharmaceuticals and intellectual property rights, with developed world nations seeking strong intellectual property rights to protect investments made to develop new drugs, and developing world nations seeking to promote their generic pharmaceuticals industries and their ability to make medicine available to their people via compulsory licenses.

Some have raised ethical objections specifically with respect to pharmaceutical patents and the high prices for drugs that they enable their proprietors to charge, which poor people around the world cannot afford. Critics also question the rationale that exclusive patent rights and the resulting high prices are required for pharmaceutical companies to recoup the large investments needed for research and development. One study concluded that marketing expenditures for new drugs often doubled the amount that was allocated for research and development. Other critics claim that patent settlements would be costly for consumers, the health care system, and state and federal governments because they would result in delaying access to lower-cost generic medicines. Novartis fought a protracted battle with the government of India over the patenting of its drug Gleevec in India, which ended up in the Supreme Court of India in a case known as Novartis v. Union of India & Others. The Supreme Court ruled narrowly against Novartis, but opponents of patenting drugs claimed it as a major victory.

Environmental issues

Pharmaceutical medications are commonly described as "ubiquitous" in nearly every type of environmental medium (i.e. lakes, rivers, streams, estuaries, seawater, and soil) worldwide. Their chemical components are typically present at relatively low concentrations, in the ng/L to μg/L range. The primary avenue for medications reaching the environment is through the effluent of wastewater treatment plants, both from industrial plants during production and from municipal plants after consumption. Agricultural pollution is another significant source, derived from the prevalence of antibiotic use in livestock. Scientists generally divide the environmental impacts of a chemical into three primary categories: persistence, bioaccumulation, and toxicity. Since medications are inherently bio-active, most are naturally degradable in the environment; however, they are classified as "pseudopersistent" because they are constantly being replenished from their sources. These Environmentally Persistent Pharmaceutical Pollutants (EPPPs) rarely reach toxic concentrations in the environment, but they have been known to bioaccumulate in some species.
Their effects have been observed to compound gradually across food webs, rather than becoming acute, leading to their classification by the US Geological Survey as "Ecological Disrupting Compounds".

See also

Adherence
Deprescribing
Drug nomenclature
List of drugs
List of pharmaceutical companies
Orphan drug
Overmedication
Pharmaceutical code
Pharmacy

External links

Drug Reference Site Directory – OpenMD
Drugs & Medications Directory – Curlie
European Medicines Agency
NHS Medicines A–Z
U.S. Food & Drug Administration: Drugs
WHO Model List of Essential Medicines
Medication
[ "Chemistry", "Engineering", "Biology" ]
8,234
[ "Pharmacology", "Life sciences industry", "Products of chemical industry", "Pharmaceutical industry", "Chemical engineering", "Medicinal chemistry", "Chemicals in medicine", "Drugs" ]
180,234
https://en.wikipedia.org/wiki/Kilowatt-hour
A kilowatt-hour (unit symbol: kW⋅h or kW h; commonly written as kWh) is a non-SI unit of energy equal to 3.6 megajoules (MJ) in SI units, which is the energy delivered by one kilowatt of power for one hour. Kilowatt-hours are a common billing unit for electrical energy supplied by electric utilities. Metric prefixes are used for multiples and submultiples of the basic unit, the watt-hour (3.6 kJ).

Definition

The kilowatt-hour is a composite unit of energy equal to one kilowatt (kW) sustained for (multiplied by) one hour. The International System of Units (SI) unit of energy, meanwhile, is the joule (symbol J). Because a watt is by definition one joule per second, and because there are 3,600 seconds in an hour, one kWh equals 3,600 kilojoules or 3.6 MJ.

Unit representations

A widely used representation of the kilowatt-hour is kWh, derived from its component units, kilowatt and hour. It is commonly used in billing for energy delivered to consumers by electric utility companies, in commercial, educational, and scientific publications, and in the media. It is also the usual unit representation in electrical power engineering. This common representation, however, does not comply with the style guide of the International System of Units (SI). Other representations of the unit may be encountered: kW⋅h and kW h are less commonly used, but they are consistent with the SI. The SI brochure states that in forming a compound unit symbol, "Multiplication must be indicated by a space or a half-high (centred) dot (⋅), since otherwise some prefixes could be misinterpreted as a unit symbol." This is supported by a standard issued jointly by an international (IEEE) and national (ASTM) organization, and by a major style guide. However, the IEEE/ASTM standard allows kWh (but does not mention other multiples of the watt-hour). One guide published by NIST specifically recommends against kWh "to avoid possible confusion". In 2014, the United States official fuel-economy window sticker for electric vehicles used the abbreviation kW-hrs. Variations in capitalization are sometimes encountered: KWh, KWH, kwh, etc., which are inconsistent with the International System of Units. The notation kW/h for the kilowatt-hour is incorrect, as it denotes kilowatt per hour. The hour is a unit of time listed among the non-SI units accepted by the International Bureau of Weights and Measures for use with the SI.

An electric heater consuming 1,000 watts (1 kilowatt) operating for one hour uses one kilowatt-hour of energy. A television consuming 100 watts operating continuously for 10 hours uses one kilowatt-hour. A 40-watt electric appliance operating continuously for 25 hours uses one kilowatt-hour.

Electricity sales

Electrical energy is typically sold to consumers in kilowatt-hours. The cost of running an electrical device is calculated by multiplying the device's power consumption in kilowatts by the operating time in hours, and by the price per kilowatt-hour. The unit price of electricity charged by utility companies may depend on the customer's consumption profile over time. Prices vary considerably by locality. In the United States, prices in different states can vary by a factor of three. While smaller customer loads are usually billed only for energy, transmission services, and the rated capacity, larger consumers also pay for peak power consumption, the greatest power recorded in a fairly short time, such as 15 minutes. This compensates the power company for maintaining the infrastructure needed to provide peak power.
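The cost calculation described above is simple enough to state as code. A minimal sketch in Python; the tariff and appliance figures are illustrative values, not quotes from any real utility:

    def running_cost(power_watts: float, hours: float, price_per_kwh: float) -> float:
        """Return the cost of running a device, given its power draw in watts,
        the operating time in hours, and the electricity price per kWh."""
        energy_kwh = (power_watts / 1000.0) * hours  # W -> kW, then multiply by hours
        return energy_kwh * price_per_kwh

    # A 1,000 W heater run for one hour uses exactly 1 kWh, as in the heater
    # example in the text, so at a unit price of 1.0 the cost is 1.0.
    assert running_cost(1000, 1, price_per_kwh=1.0) == 1.0

    # A 100 W television run for 10 hours also uses 1 kWh; at an assumed
    # tariff of 0.15 per kWh this prints 0.15 (currency units).
    print(running_cost(100, 10, price_per_kwh=0.15))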
These charges are billed as demand charges. Industrial users may also have extra charges according to the power factor of their load. Major energy production or consumption is often expressed as terawatt-hours (TWh) for a given period, often a calendar year or financial year. A 365-day year equals 8,760 hours, so over a period of one year, a power of one gigawatt equates to 8.76 terawatt-hours of energy. Conversely, one terawatt-hour is equal to a sustained power of about 114 megawatts for a period of one year.

Examples

In 2020, the average household in the United States consumed 893 kWh per month. Raising the temperature of 1 litre of water from room temperature to the boiling point with an electric kettle takes about 0.1 kWh. A 12-watt LED lamp lit constantly uses about 0.3 kWh per 24 hours and about 9 kWh per month. In terms of human power, a healthy adult male manual laborer performs work equal to about half a kilowatt-hour over an eight-hour day.

Conversions

To convert a quantity measured in a unit in the left column to the units in the top row, multiply by the factor in the cell where the row and column intersect. For example, 1 kWh = 3.6 MJ = 3,600,000 J, while 1 J ≈ 2.78 × 10⁻⁷ kWh.

Watt-hour multiples

All the SI prefixes are commonly applied to the watt-hour: a kilowatt-hour (kWh) is 1,000 Wh; a megawatt-hour (MWh) is 1 million Wh; a milliwatt-hour (mWh) is one thousandth of a Wh; and so on. The kilowatt-hour is commonly used by electrical energy providers for purposes of billing, since the monthly energy consumption of a typical residential customer ranges from a few hundred to a few thousand kilowatt-hours. Megawatt-hours (MWh), gigawatt-hours (GWh), and terawatt-hours (TWh) are often used for metering larger amounts of electrical energy to industrial customers and in power generation. The terawatt-hour and petawatt-hour (PWh) units are large enough to conveniently express the annual electricity generation for whole countries and the world energy consumption.

Distinction between kWh (energy) and kW (power)

A kilowatt is a unit of power (rate of flow of energy per unit of time). A kilowatt-hour is a unit of energy. Kilowatt per hour would be a rate of change of power flow with time. Work is the amount of energy transferred to a system; power is the rate of delivery of energy. Energy is measured in joules, or watt-seconds. Power is measured in watts, or joules per second. For example, a battery stores energy. When the battery delivers its energy, it does so at a certain power, that is, the rate of delivery of the energy. The higher the power, the quicker the battery's stored energy is delivered. A higher power output will cause the battery's stored energy to be depleted in a shorter time period.

Annualized power

Electric energy production and consumption are sometimes reported on a yearly basis, in units such as megawatt-hours per year (MWh/yr), gigawatt-hours per year (GWh/yr), or terawatt-hours per year (TWh/yr). These units have dimensions of energy divided by time and thus are units of power. They can be converted to SI power units by dividing by the number of hours in a year, about 8,766. Thus, 1 GWh/yr = 1,000,000 kWh / 8,766 h ≈ 114 kW.

Misuse of watts per hour

Many compound units for various kinds of rates explicitly mention units of time to indicate a change over time. For example: miles per hour, kilometres per hour, dollars per hour. Power units, such as kW, already measure the rate of energy per unit time (kW = kJ/s). Kilowatt-hours are a product of power and time, not a rate of change of power with time. Watts per hour (W/h) is a unit of a change of power per hour, i.e.
an acceleration in the delivery of energy. It is used to measure the daily variation of demand (e.g. the slope of the duck curve) or the ramp-up behavior of power plants. For example, a power plant that reaches a power output of 1 MW from 0 MW in 15 minutes has a ramp-up rate of 4 MW/h. Other uses of terms such as watts per hour are likely to be errors.

Other related energy units

Several other units related to the kilowatt-hour are commonly used to indicate power or energy capacity or use in specific application areas. Average annual energy production or consumption can be expressed in kilowatt-hours per year. This is used with loads or output that vary during the year but whose annual totals are similar from one year to the next. For example, it is useful for comparing the energy efficiency of household appliances whose power consumption varies with time or the season of the year. Another use is to measure the energy produced by a distributed power source. One kilowatt-hour per year equals about 114.08 milliwatts applied constantly during one year.

The energy content of a battery is usually expressed indirectly by its capacity in ampere-hours; to convert ampere-hours (Ah) to watt-hours (Wh), the ampere-hour value must be multiplied by the voltage of the power source. This value is approximate, since the battery voltage is not constant during its discharge, and because higher discharge rates reduce the total amount of energy that the battery can provide. In the case of devices that output a different voltage than the battery, it is the battery voltage (typically 3.7 V for Li-ion) that must be used for the calculation, rather than the device output (for example, usually 5.0 V for USB portable chargers). This results in a 500 mA USB device running for about 3.7 hours on a 2,500 mAh battery, not five hours.

The Board of Trade unit (B.T.U.) is an obsolete UK synonym for kilowatt-hour. The term derives from the name of the Board of Trade, which regulated the electricity industry until 1942, when the Ministry of Power took over. It is distinct from the British thermal unit (BTU), which is 1055 J. In India, the kilowatt-hour is often simply called a unit of energy. A million units, designated MU, is a gigawatt-hour, and a BU (billion units) is a terawatt-hour.

See also

Ampere-hour
Electric vehicle battery
Electric energy consumption
IEEE Std 260.1-2004
Orders of magnitude (energy)
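As a numerical check of the ampere-hour conversion described under "Other related energy units" above, here is a minimal sketch in Python using the text's own figures (a 2,500 mAh battery at 3.7 V feeding a 500 mA device at 5.0 V); the function names are illustrative, and conversion losses are ignored just as they are in the text:

    def battery_energy_wh(capacity_mah: float, battery_voltage: float) -> float:
        # Energy (Wh) = capacity (Ah) x voltage (V); mAh must be divided by 1,000.
        return (capacity_mah / 1000.0) * battery_voltage

    def runtime_hours(capacity_mah: float, battery_voltage: float,
                      device_current_a: float, device_voltage: float) -> float:
        # The device's power draw is computed at its own voltage, but the stored
        # energy is computed at the battery voltage, as described above.
        energy_wh = battery_energy_wh(capacity_mah, battery_voltage)
        device_power_w = device_current_a * device_voltage
        return energy_wh / device_power_w

    print(battery_energy_wh(2500, 3.7))        # 9.25 Wh stored in the battery
    print(runtime_hours(2500, 3.7, 0.5, 5.0))  # 3.7 hours, matching the text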
Kilowatt-hour
[ "Physics", "Mathematics", "Engineering" ]
2,193
[ "Physical quantities", "Non-SI metric units", "Quantity", "Units of energy", "Power (physics)", "Electric power", "Electrical engineering", "Units of measurement" ]
180,236
https://en.wikipedia.org/wiki/Greisen%E2%80%93Zatsepin%E2%80%93Kuzmin%20limit
The Greisen–Zatsepin–Kuzmin limit (GZK limit or GZK cutoff) is a theoretical upper limit on the energy of cosmic ray protons traveling from other galaxies through the intergalactic medium to our galaxy. The limit is 5×10¹⁹ eV (50 EeV), or about 8 joules (the energy of a proton travelling at very nearly the speed of light). The limit is set by the slowing effect of interactions of the protons with the microwave background radiation over long distances (≈ 160 million light-years). The limit is at the same order of magnitude as the upper limit for energy at which cosmic rays have experimentally been detected, although some detections appear to have exceeded the limit, as noted below. For example, one extreme-energy cosmic ray, the Oh-My-God Particle, was found to possess a record-breaking 3.2×10²⁰ eV (about 50 joules) of energy (about the same as the kinetic energy of a 95 km/h baseball). In the past, the apparent violation of the GZK limit has inspired cosmologists and theoretical physicists to suggest other ways of circumventing the limit. These theories propose that ultra-high energy cosmic rays are produced near our galaxy, or that Lorentz covariance is violated in such a way that protons do not lose energy on their way to our galaxy.

Computation

The limit was independently computed in 1966 by Kenneth Greisen, Georgy Zatsepin, and Vadim Kuzmin based on interactions between cosmic rays and the photons of the cosmic microwave background radiation (CMB). They predicted that cosmic rays with energies over the threshold energy of 5×10¹⁹ eV would interact with cosmic microwave background photons, relatively blueshifted by the speed of the cosmic rays, to produce pions through the Δ⁺ resonance: γ + p → Δ⁺ → p + π⁰, or γ + p → Δ⁺ → n + π⁺. Pions produced in this manner proceed to decay in the standard pion channels – ultimately to photons for neutral pions, and photons, positrons, and various neutrinos for positive pions. Neutrons also decay to similar products, so that ultimately the energy of any cosmic ray proton is drained off by production of high-energy photons plus (in some cases) high-energy electron–positron pairs and neutrino pairs.

The pion production process begins at a higher energy than ordinary electron-positron pair production (lepton production) from protons impacting the CMB, which starts at considerably lower cosmic-ray proton energies. However, pion production events drain 20% of the energy of a cosmic-ray proton, as compared with only 0.1% of its energy for electron–positron pair production. This factor comes from two causes: the pion has a mass only about ~130 times that of the lepton pair, and the extra energy appears as different kinetic energies of the pion or leptons, resulting in relatively more kinetic energy transferred to a heavier product pion, in order to conserve momentum. The much larger total energy losses from pion production result in pion production becoming the process limiting high-energy cosmic-ray travel, rather than the lower-energy process of light-lepton production.

The pion production process continues until the cosmic ray energy falls below the threshold for pion production. Due to the mean free path associated with this interaction, extragalactic cosmic ray protons traveling over distances larger than about 50 megaparsecs (160 million light-years) and with energies greater than the threshold should never be observed on Earth. This distance is also known as the GZK horizon. The precise GZK limit is derived under the assumption that ultra-high energy cosmic rays, those with energies above the limit, are protons.
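The scale of the limit can be estimated to order of magnitude from the kinematics alone. A minimal sketch in Python, assuming a head-on collision and a typical energetic CMB photon of about 10⁻³ eV (an illustrative value toward the upper tail of the thermal spectrum; the published limit folds in the full photon spectrum and interaction rates):

    # Head-on photopion threshold for p + gamma -> Delta(1232): in natural
    # units, E_p ~ (m_Delta^2 - m_p^2) / (4 * E_gamma).
    M_DELTA_EV = 1.232e9   # Delta(1232) baryon rest mass, eV
    M_PROTON_EV = 0.938e9  # proton rest mass, eV
    E_PHOTON_EV = 1.0e-3   # assumed energetic CMB photon, eV (illustrative)

    e_threshold = (M_DELTA_EV**2 - M_PROTON_EV**2) / (4 * E_PHOTON_EV)
    print(f"{e_threshold:.1e} eV")  # ~1.6e20 eV, the same order as the 5e19 eV limit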
Measurements by the largest cosmic-ray observatory, the Pierre Auger Observatory, suggest that most ultra-high energy cosmic rays are heavier elements known as HZE ions. In this case, the argument behind the GZK limit does not apply in the originally simple form; however, as Greisen noted, the giant dipole resonance also occurs roughly in this energy range (at 10 EeV/nucleon) and similarly restricts very long-distance propagation.

GZK paradox

A number of observations have been made by the largest cosmic-ray experiments – the Akeno Giant Air Shower Array (AGASA), the High Resolution Fly's Eye Cosmic Ray Detector, the Pierre Auger Observatory, and the Telescope Array Project – that appeared to show cosmic rays with energies above the GZK limit. These observations appear to contradict the predictions of special relativity and particle physics as they are presently understood. However, there are a number of possible explanations for these observations that may resolve this inconsistency:

The observed EECR particles can be heavier nuclei instead of protons.
The observations could be due to an instrument error or an incorrect interpretation of the experiment, especially wrong energy assignment.
The cosmic rays could have local sources within the GZK horizon (although it is unclear what these sources could be).

Weakly interacting particles

Another suggestion involves ultra-high-energy weakly interacting particles (for instance, neutrinos), which might be created at great distances and later react locally to give rise to the particles observed. In the proposed Z-burst model, an ultra-high-energy cosmic neutrino collides with a relic anti-neutrino in our galaxy and annihilates to hadrons. This process proceeds through a (virtual) Z-boson: ν + ν̄ → Z → hadrons. The cross-section for this process becomes large if the center-of-mass energy of the neutrino–antineutrino pair is equal to the Z-boson mass (such a peak in the cross-section is called a "resonance"). Assuming that the relic anti-neutrino is at rest, the energy of the incident cosmic neutrino has to be E = m_Z² / (2 m_ν), where m_Z is the mass of the Z-boson and m_ν the mass of the neutrino.

Controversy about cosmic rays above the GZK limit

A suppression of the cosmic-ray flux that can be explained by the GZK limit has been confirmed by the latest generation of cosmic-ray observatories. A former claim by the AGASA experiment that there is no suppression was overruled. It remains controversial whether the suppression is due to the GZK effect. The GZK limit only applies if ultra-high-energy cosmic rays are mostly protons. In July 2007, during the 30th International Cosmic Ray Conference in Mérida, Yucatán, México, the High Resolution Fly's Eye Experiment (HiRes) and the Pierre Auger Observatory (Auger) presented their results on ultra-high-energy cosmic rays (UHECR). HiRes observed a suppression in the UHECR spectrum at just the right energy, observing only 13 events with an energy above the threshold, while expecting 43 with no suppression. This was interpreted as the first observation of the GZK limit. Auger confirmed the flux suppression, but did not claim it to be the GZK limit: instead of the 30 events necessary to confirm the AGASA results, Auger saw only two, which are believed to be heavy-nuclei events. The flux suppression was previously brought into question when the AGASA experiment found no suppression in their spectrum. According to Alan Watson, former spokesperson for the Auger Collaboration, AGASA results have been shown to be incorrect, possibly due to a systematic shift in energy assignment.
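As a numerical check of the Z-burst resonance condition quoted above, here is a minimal sketch in Python; the relic neutrino mass of 0.1 eV is an assumed illustrative value, since neutrino masses are not precisely known:

    M_Z_EV = 91.19e9   # Z-boson mass in eV (91.19 GeV)
    M_NU_EV = 0.1      # assumed relic neutrino mass in eV (illustrative)

    # Resonance condition for a relic neutrino at rest (natural units):
    # 2 * E_nu * m_nu = m_Z**2, so E_nu = m_Z**2 / (2 * m_nu).
    e_nu = M_Z_EV**2 / (2 * M_NU_EV)
    print(f"{e_nu:.1e} eV")  # ~4.2e22 eV, well above the 5e19 eV GZK limit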
In 2010 and the following years, both the Pierre Auger Observatory and HiRes again confirmed a flux suppression; in the case of the Pierre Auger Observatory, the effect is statistically significant at the level of 20 standard deviations. After the flux suppression was established, a heated debate ensued over whether cosmic rays that violate the GZK limit are protons. The Pierre Auger Observatory, the world's largest observatory, found with high statistical significance that ultra-high-energy cosmic rays are not purely protons, but a mixture of elements, which gets heavier with increasing energy. The Telescope Array Project, a joint effort from members of the HiRes and AGASA collaborations, agrees with the former HiRes result that these cosmic rays look like protons. The claim is based on data with lower statistical significance, however: the area covered by Telescope Array is about one third of the area covered by the Pierre Auger Observatory, and the latter has been running for a longer time. The controversy was partially resolved in 2017, when a joint working group formed by members of both experiments presented a report at the 35th International Cosmic Ray Conference. According to the report, the raw experimental results are not in contradiction with each other. The different interpretations are mainly based on the use of different theoretical models and the fact that Telescope Array has not yet collected enough events to distinguish the pure-proton hypothesis from the mixed-nuclei hypothesis.

Extreme Universe Space Observatory on Japanese Experiment Module (JEM-EUSO)

EUSO, which was scheduled to fly on the International Space Station (ISS) in 2009, was designed to use the atmospheric-fluorescence technique to monitor a huge area and boost the statistics of UHECRs considerably. EUSO is to make a deep survey of UHECR-induced extensive air showers (EASs) from space, extending the measured energy spectrum well beyond the GZK cutoff. It is to search for the origin of UHECRs, determine the nature of the origin of UHECRs, make an all-sky survey of the arrival direction of UHECRs, and seek to open the astronomical window on the extreme-energy universe with neutrinos. The fate of the EUSO Observatory is still unclear, since NASA is considering early retirement of the ISS.

The Fermi Gamma-ray Space Telescope

Launched in June 2008, the Fermi Gamma-ray Space Telescope (formerly GLAST) will also provide data that will help resolve these inconsistencies. With the Fermi Gamma-ray Space Telescope, one has the possibility of detecting gamma rays from freshly accelerated cosmic-ray nuclei at their acceleration site (the source of the UHECRs). UHECR protons accelerated (see also Centrifugal mechanism of acceleration) in astrophysical objects produce secondary electromagnetic cascades during propagation in the cosmic microwave and infrared backgrounds, of which the GZK process of pion production is one of the contributors. Such cascades can contribute between about 1% and 50% of the GeV–TeV diffuse photon flux measured by the EGRET experiment. The Fermi Gamma-ray Space Telescope may discover this flux.

External links

Rutgers University experimental high energy physics
HIRES research page
Pierre Auger Observatory page
Cosmic-ray.org
History of Cosmic Ray Research
Greisen–Zatsepin–Kuzmin limit
[ "Physics", "Astronomy" ]
2,192
[ "Physical phenomena", "Unsolved problems in astronomy", "Physical quantities", "Concepts in astronomy", "Astroparticle physics", "Unsolved problems in physics", "Astrophysics", "Special relativity", "Energy (physics)", "Energy", "Radiation", "Particle physics", "Astronomical controversies", ...
180,607
https://en.wikipedia.org/wiki/Boeing%202707
The Boeing 2707 was an American supersonic passenger airliner project during the 1960s. After winning a competition for a government-funded contract to build an American supersonic airliner, Boeing began development at its facilities in Seattle, Washington. The design emerged as a large aircraft with seating for 250 to 300 passengers and cruise speeds of approximately Mach 3. It was intended to be much larger and faster than competing supersonic transport (SST) designs such as the Concorde.

The SST was the topic of considerable concern within and outside the aviation industry. From the start, the airline industry noted that the economics of the design were questionable, concerns that were only partially addressed during development. Outside the field, the entire SST concept was the subject of considerable negative press, centered on the issues of sonic booms and effects on the ozone layer. A key design feature of the 2707 was its use of a swing-wing configuration. During development, the required weight and size of this mechanism continued to grow, forcing the team to switch to a conventional delta wing. Rising costs, environmental concerns, noise, and the lack of a clear market led to its cancellation in 1971, before the two prototypes were completed.

Development

Early studies

Boeing had worked on a number of small-scale SST studies since 1952. In 1958, it established a permanent research committee, which grew to a $1 million effort by 1960. The committee proposed a variety of alternative designs, all under the name Model 733. Most of the designs featured a large delta wing, but in 1959 another design was offered as an offshoot of Boeing's efforts in the swing-wing TFX program (which led to the purchase of the General Dynamics F-111 instead of the Boeing offering). In 1960, an internal competition was run on a baseline 150-seat aircraft for trans-Atlantic routes, and the swing-wing version won.

Shortly after taking office, President John F. Kennedy tasked the Federal Aviation Administration with preparing a report on "national aviation goals for the period between now and 1970". The study was prompted in the wake of several accidents, which led to the belief that the industry was becoming moribund. Two projects were started: Project Beacon, on new navigational systems and air traffic control, and Project Horizon, on advanced civil aviation developments. Only one month later the FAA's new director, Najeeb Halaby, produced the Commission on National Aviation Goals, better known as Project Horizon. Among other suggestions, the report was used as a platform to promote the SST. Halaby argued that a failure to enter this market would be a "stunning setback". The report was met with skepticism by most others. Kennedy had put Lyndon Johnson on the SST file, and Johnson turned to Robert McNamara for guidance. McNamara was highly skeptical of the SST project and savaged Halaby's predictions; he was also afraid the project might be turned over to the DoD and was careful to press for further studies.

The basic concept behind the SST was that its fast flight would allow such aircraft to fly more trips than a subsonic aircraft, leading to higher utilization. However, it did this at the cost of greatly increased fuel use; if fuel costs were to change dramatically, SSTs would not be competitive. These problems were well understood within the industry. The IATA released a set of "design imperatives" for an SST that were essentially impossible to meet—the release was a warning to promoters of the SST within the industry.
Concorde

By mid-1962, it was becoming clear that tentative talks earlier that year between the British Aircraft Corporation and Sud Aviation (later Aérospatiale) on a merger of their SST projects were more serious than originally thought. In November 1962, still to the surprise of many, the Concorde project was announced. In spite of marginal economics, nationalistic and political arguments had led to wide support for the project, especially from Charles de Gaulle. This set off something of a wave of panic in other countries, as it was widely believed that almost all future commercial aircraft would be supersonic, and it looked like the Europeans would start off with a huge lead. As if this were not enough, it soon became known that the Soviets were also working on a similar design.

Three days after the Concorde announcement, Halaby wrote a letter to Kennedy suggesting that if they did not immediately start their own SST effort, the US would lose 50,000 jobs, $4 billion in income, and $3 billion in capital as local carriers turned to foreign suppliers. A report from the Supersonic Transport Advisory Group (STAG) followed, noting that the European team was in the lead in basic development, and suggesting competing by developing a more advanced design with better economics. At the time, more advanced generally meant higher speed. The baseline design in the report called for an aircraft with Mach 3 performance and sufficient range to serve the domestic market. They felt that there was no way to build a transatlantic design with that performance in time to catch the Concorde's introduction, abandoning the trans-Atlantic market to the Europeans.

In spite of vocal opponents, questions about the technical requirements, and extremely negative reports about its economic viability, the SST project gathered strong backing from industry and the FAA. Johnson sent a report to the president asking for $100 million in funding for FY 1964. This might have been delayed, but in May, Pan Am announced it had placed six options on the Concorde. Juan Trippe had leaked the information earlier that month, stating that the airline would not ignore the SST market and would buy from Europe if need be. Pan Am's interest in Concorde angered Kennedy, who called on his administration to get Pan Am to redirect its potential funding back to the US SST program. Kennedy introduced the National Supersonic Transport program on June 5, 1963, in a speech at the US Air Force Academy.

Design competition

Requests for proposals were sent out to airframe manufacturers Boeing, Lockheed, and North American for the airframes, and to Curtiss-Wright, General Electric, and Pratt & Whitney for engines. The FAA estimated that there would be a market for 500 SSTs by 1990. Despite not having a selected design, orders from air carriers started flowing in immediately. Preliminary designs were submitted to the FAA on January 15, 1964. Boeing's entry was essentially identical to the swing-wing Model 733 studied in 1960; it was known officially under a 733 model number, but was also referred to both as the 1966 Model and the Model 2707. The latter name became the best known in public, while Boeing continued to use 733 model numbers internally. The design resembled the future B-1 Lancer bomber, with the exception that the four engines were mounted in individual nacelles instead of the paired pods used on the Lancer. The blended wing root spanned almost all of the cabin area, and this early version had a much more stubby look than the models that would ultimately evolve.
The wing featured extensive high-lift devices on both the leading and trailing edges, minimizing the thrust required, and thus the noise created, during climb-out. The proposal also included optional fuselage stretches that increased capacity from the normal 150 seats to 227.

Lockheed's entry, designated CL-823, was essentially an enlarged Concorde. Like the Concorde, it featured a long and skinny fuselage, engines under the wing, and a compound delta planform. The only major design difference was the use of individual pods for the engines, rather than pairs. The CL-823 lacked any form of high-lift devices on the wings, relying on engine power and long runways for liftoff, ensuring a huge noise footprint. The CL-823 was the largest of the first-round entries, with typical seating for 218.

The North American NAC-60 was essentially a scaled-up B-70 with a less tapered fuselage and a new compound-delta wing. The design retained the high-mounted canard above the cockpit area and the box-like engine area under the fuselage. The use of high-lift devices on the leading edge of the wing lowered the landing angles to the point where a "drooping nose" was not required, and a more conventional rounded design was used. Compared to the other designs, the rounded nose profile and more cylindrical cross-section gave the NAC-60 a decidedly more conventional look than the other entries. This also meant it would fly slower, at Mach 2.65.

A "downselect" of the proposed models resulted in the NAC-60 and Curtiss-Wright efforts being dropped from the program, with both Boeing and Lockheed asked to offer SST models meeting the more demanding FAA requirements and able to use either of the remaining engine designs from GE or P&W. In November, another design review was held, and by this time Boeing had scaled up the original design into a 250-seat model, the Model 733-290. Due to concerns about jet blast, the four engines were moved to a position underneath an enlarged tailplane. When the wings were in their swept-back position, they merged with the tailplane to produce a delta-wing planform. Both companies were now asked for considerably more detailed proposals, to be presented for final selection in 1966. When this occurred, Boeing's design was the 300-seat Model 733-390. Both the Boeing and Lockheed L-2000 designs were presented in September 1966, along with full-scale mock-ups. After a lengthy review, the Boeing design was announced as the winner on January 1, 1967. The design would be powered by General Electric GE4/J5 engines. Lockheed's L-2000 was judged simpler to produce and less risky, but its performance was slightly lower and its noise levels slightly higher.

Refining the design

The 733-390 would have been an advanced aircraft even if it had been only subsonic. It was one of the earliest wide-body aircraft designs, with a 2-3-2 row seating arrangement at its widest section, in a fuselage that was considerably wider than aircraft then in service. The SST mock-up included both overhead storage for smaller items, with restraining nets, and large drop-in bins between sections of the aircraft. In the main 247-seat tourist-class cabin, the entertainment system consisted of retractable televisions placed between every sixth row in the overhead storage. In the 30-seat first-class area, every pair of seats included smaller televisions in a console between the seats.
The windows were kept small because the high altitudes the aircraft flew at maximized the pressure differential across them, but the internal pane was larger, to give an illusion of size. Boeing predicted that if the go-ahead were given, construction of the SST prototypes would begin in early 1967 and the first flight could be made in early 1970. Production aircraft could start being built in early 1969, with flight testing in late 1972 and certification by mid-1974.

A major change in the design came when Boeing added canards behind the nose—which added weight. Boeing also faced insurmountable weight problems due to the swing-wing mechanism: the titanium pivot section that had been fabricated was both heavy and bulky, and the design could not achieve sufficient range. Flexing of the fuselage (it would have been the longest ever built) threatened to make control difficult. In October 1968, the company was finally forced to abandon the variable-geometry wing. The Boeing team fell back on a tailed delta fixed wing. The new design was also smaller, seating 234, and known as the Model 2707-300. Work began on a full-sized mock-up and two prototypes in September 1969, now two years behind schedule. A promotional film claimed that airlines would soon pay back the federal investment in the project, and it was projected that SSTs would dominate the skies, with subsonic jumbo jets (such as Boeing's 747) being only a passing intermediate fad. By October 1969, there were delivery positions reserved for 122 Boeing SSTs by 26 airlines, including Alitalia, Canadian Pacific Airlines, Delta Air Lines, Iberia, KLM, Northwest Airlines, and World Airways.

Environmental concerns

By this point, the opposition to the project was becoming increasingly vocal. Environmentalists were the most influential group, voicing concerns about possible depletion of the ozone layer due to the high-altitude flights, and about noise at airports, as well as from sonic booms. The latter became the most significant rallying point, especially after the publication of the anti-SST paperback SST and Sonic Boom Handbook, edited by William Shurcliff, which claimed that a single flight would leave a wide "bang-zone" along the entire length of its route, along with a host of associated problems. During tests in 1964 with the XB-70 near Oklahoma City, the boom carpet, though limited in width, still resulted in 9,594 complaints of damage to buildings, 4,629 formal damage claims, and 229 settled claims for a total of $12,845.32, mostly for broken glass and cracked plaster. As the opposition widened, the claimed negative effects increased, including upsetting people who do delicate work (e.g., brain surgeons) and harming persons with nervous ailments.

One concern was that the water vapor released by the engines into the stratosphere would envelop the earth in a "global gloom". Presidential adviser Russell Train warned that a fleet of 500 SSTs flying at stratospheric altitudes for a period of years could raise stratospheric water content by as much as 50% to 100%. According to Train, this could lead to greater ground-level heat and hamper the formation of ozone. Later, an additional threat to the ozone was found in the exhaust's nitrogen oxides, a threat that was later validated by MIT. More recent analysis in 1995 by David W. Fahey, an atmospheric scientist at the National Oceanic and Atmospheric Administration, and others found that the drop in ozone would be from 1 to 2% if a fleet of 500 supersonic aircraft were operated.
Fahey expressed the opinion that this would not be a fatal obstacle for an advanced SST development. During the 1970s, the alleged potential for serious ozone damage and the sonic boom worries were picked up by the Sierra Club, the National Wildlife Federation, and the Wilderness Society. Supersonic flight over land in the United States was eventually banned, and several states added additional restrictions or banned Concorde outright. Senator William Proxmire (D-Wisconsin) criticized the SST program as frivolous federal spending. Halaby attempted to dismiss these concerns, stating, "The supersonics are coming−as surely as tomorrow. You will be flying one version or another by 1980 and be trying to remember what the great debate was all about."

Government funding cut

In March 1971, despite the project's strong support by the administration of President Richard Nixon, the U.S. Senate rejected further funding. A counterattack was organized under the banner of the "National Committee for an American SST", which urged supporters to send in $1 to keep the program alive. Afterward, letters of support from aviation buffs, containing nearly $1 million worth of contributions, poured in. Labor unions also supported the SST project, worried that the winding down of both the Vietnam War and the Apollo program would lead to mass unemployment in the aerospace sector. AFL–CIO President George Meany suggested that the race to develop a first-generation SST was already lost, but that the US should "enter the competition for the second generation—the SSTs of the 1980s and 1990s". Despite this newfound support, the House of Representatives also voted to end SST funding on May 20, 1971. The vote was highly contentious. Gerald Ford, then Republican Leader, echoed Meany's claim that "If you vote for the SST, you are ensuring 13,000 jobs today plus 50,000 jobs in the second tier and 150,000 jobs each year over the next ten years." Sidney Yates, leading the "no" camp, offered a then-uncommon motion to instruct conferees and eventually won the vote against further funding, 215 to 204. At the time, there were 115 unfilled orders by 25 airlines, while Concorde had 74 orders from 16 customers.

The two prototypes were never completed. Due to the loss of several government contracts and a downturn in the civilian aviation market, Boeing reduced its number of employees by more than 60,000. The SST became known as "the airplane that almost ate Seattle". As a result of the mass layoffs and so many people moving away from the city in search of work, a billboard was erected near Seattle–Tacoma International Airport in 1971 that read, "Will the last person leaving Seattle – turn out the lights".

Aftermath

The SST race has had several lasting effects on the industry as a whole. The supercritical wing was originally developed as part of the SST efforts in the U.S., but is now widely used on most jet aircraft. In Europe, the cooperation that enabled Concorde led to the formation of Airbus, Boeing's foremost competitor, with Aérospatiale becoming a main component of Airbus. When Concorde was launched, sales were predicted to be 150 aircraft, but only 14 aircraft were built for commercial service. Service entry was only secured through a large government funding subsidy.
These few aircraft went on to have a very long in-service flight life and were claimed to be ultimately commercially successful for their operators, until finally removed from service in the aftermath of the type's only crash in 2000 and the 9/11 terrorist attacks, when Airbus decided to end servicing arrangements. Concorde's Soviet counterpart, the Tupolev Tu-144, was less successful, operating for only 55 passenger flights before being permanently grounded for various reasons.

With the ending of the 2707 project, the entire SST field in the U.S. was moribund for some time. By the mid-1970s, minor advances, combined, appeared to offer greatly improved performance. Through the second half of the 1970s, NASA provided funding for the Advanced Supersonic Transport (AST) project at several companies, including McDonnell Douglas, Boeing, and Lockheed. Considerable wind tunnel testing of the various models was carried out at NASA's Langley Research Center. Ultimately, supersonic passenger service was not economically competitive, and it ceased with the retirement of Concorde in 2003; since then, no commercial supersonic aircraft have operated anywhere in the world, due largely to poor fuel economy and high maintenance costs.

Legacy

The Museum of Flight in Seattle parks a British Airways Concorde a few blocks from the building where the original 2707 mockup was housed in Seattle. While the Soviet Tu-144 had a short service life, Concorde was successful enough to fly as a small luxury fleet from 1976 until 2003, with British Airways' lifetime costs of £1bn producing £1.75bn in revenues in the niche transatlantic market. As the most advanced supersonic transports became some of the oldest airframes in the fleet, profits eventually fell, due to rising maintenance costs.

The final-configuration Boeing 2707 mockup was sold to a museum and displayed at the SST Aviation Exhibit Center in Kissimmee, Florida, from 1973 to 1981. In 1983, the building, complete with the SST, was purchased by the Faith World church. For years the Osceola New Life Assembly of God held services there with the airplane still standing above. In 1990, the mock-up was sold to aircraft restorer Charles Bell, who moved it, in pieces, to Merritt Island, in order to preserve it while it waited for a new home, as the church now wanted the space for expansion. The forward fuselage was on display at the Hiller Aviation Museum of San Carlos, California, for many years, but in early 2013 it was moved back to Seattle, where it is undergoing restoration at the Museum of Flight.

Seattle's NBA basketball team, formed in 1967, was named the Seattle SuperSonics (shortened to "Sonics"). The name was inspired by the newly won SST contract.

Variants

2707-100: Variable-sweep wing
2707-200: Same as the -100, but with canards
2707-300: Stationary wing
Boeing 2707
[ "Physics" ]
4,164
[ "Physical systems", "Transport", "Supersonic transports" ]
180,624
https://en.wikipedia.org/wiki/Vehicle%20dynamics
Vehicle dynamics is the study of vehicle motion, e.g., how a vehicle's forward movement changes in response to driver inputs, propulsion system outputs, ambient conditions, air/surface/water conditions, etc. Vehicle dynamics is a part of engineering primarily based on classical mechanics. It may be applied for motorized vehicles (such as automobiles), bicycles and motorcycles, aircraft, and watercraft. Factors affecting vehicle dynamics The aspects of a vehicle's design which affect the dynamics can be grouped into drivetrain and braking, suspension and steering, distribution of mass, aerodynamics and tires. Drivetrain and braking Automobile layout (i.e. location of engine and driven wheels) Powertrain Braking system Suspension and steering Some attributes relate to the geometry of the suspension, steering and chassis. These include: Ackermann steering geometry Axle track Camber angle Caster angle Ride height Roll center Scrub radius Steering ratio Toe Wheel alignment Wheelbase Distribution of mass Some attributes or aspects of vehicle dynamics are purely due to mass and its distribution. These include: Center of mass Moment of inertia Roll moment Sprung mass Unsprung mass Weight distribution Aerodynamics Some attributes or aspects of vehicle dynamics are purely aerodynamic. These include: Automobile drag coefficient Automotive aerodynamics Center of pressure Downforce Ground effect in cars Tires Some attributes or aspects of vehicle dynamics can be attributed directly to the tires. These include: Camber thrust Circle of forces Contact patch Cornering force Ground pressure Pacejka's Magic Formula Pneumatic trail Radial Force Variation Relaxation length Rolling resistance Self aligning torque Skid Slip angle Slip (vehicle dynamics) Spinout Steering ratio Tire load sensitivity Vehicle behaviours Some attributes or aspects of vehicle dynamics are purely dynamic. These include: Body flex Body roll Bump Steer Bundorf analysis Directional stability Critical speed Noise, vibration, and harshness Pitch Ride quality Roll Speed wobble Understeer, oversteer, lift-off oversteer, and fishtailing Weight transfer and load transfer Yaw Analysis and simulation The dynamic behavior of vehicles can be analysed in several different ways. This can be as straightforward as a simple spring mass system, through a three-degree of freedom (DoF) bicycle model, to a large degree of complexity using a multibody system simulation package such as MSC ADAMS or Modelica. As computers have gotten faster, and software user interfaces have improved, commercial packages such as CarSim have become widely used in industry for rapidly evaluating hundreds of test conditions much faster than real time. Vehicle models are often simulated with advanced controller designs provided as software in the loop (SIL) with controller design software such as Simulink, or with physical hardware in the loop (HIL). Vehicle motions are largely due to the shear forces generated between the tires and road, and therefore the tire model is an essential part of the math model. In current vehicle simulator models, the tire model is the weakest and most difficult part to simulate. The tire model must produce realistic shear forces during braking, acceleration, cornering, and combinations, on a range of surface conditions. Many models are in use. Most are semi-empirical, such as the Pacejka Magic Formula model. Racing car games or simulators are also a form of vehicle dynamics simulation. 
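To make the semi-empirical approach concrete, the following is a minimal sketch of a Pacejka-style "Magic Formula" tire curve in Python; the coefficient values are illustrative placeholders rather than fitted data, and a production tire model would add load dependence, combined-slip effects, and many more terms.

import numpy as np

def magic_formula(slip, B=10.0, C=1.9, D=1.0, E=0.97):
    """Pacejka 'Magic Formula' curve for tire shear force.

    slip       : slip ratio (longitudinal) or slip angle in radians (lateral)
    B, C, D, E : stiffness, shape, peak, and curvature factors (illustrative)

    Returns a normalized force; scaling by the vertical load gives newtons.
    """
    Bs = B * np.asarray(slip)
    return D * np.sin(C * np.arctan(Bs - E * (Bs - np.arctan(Bs))))

# Example: sweep slip angles from -17 to +17 degrees.
alpha = np.radians(np.linspace(-17, 17, 9))
print(np.round(magic_formula(alpha), 3))

The characteristic rise, peak, and gentle fall-off of tire force with slip comes entirely from this one fitted expression, which is why semi-empirical models of this family remain the workhorse of vehicle dynamics simulation.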
In early racing games and simulators, many simplifications were necessary in order to get real-time performance with reasonable graphics. However, improvements in computer speed, combined with interest in realistic physics, have led to driving simulators that are used for vehicle engineering using detailed models such as CarSim. It is important that the models agree with real-world test results, hence many of the following tests are correlated against results from instrumented test vehicles. Techniques include: Linear range constant radius understeer Fishhook Frequency response Lane change Moose test Sinusoidal steering Skidpad Swept path analysis See also Automotive suspension design Automobile handling Hunting oscillation Multi-axis shaker table Vehicular metrics 4-poster 7 post shaker References Automotive engineering Automotive technologies Dynamics (mechanics) Vehicle technology
Vehicle dynamics
[ "Physics", "Engineering" ]
916
[ "Physical phenomena", "Classical mechanics", "Automotive engineering", "Motion (physics)", "Vehicle technology", "Mechanical engineering by discipline", "Dynamics (mechanics)" ]
180,855
https://en.wikipedia.org/wiki/Kalman%20filter
In statistics and control theory, Kalman filtering (also known as linear quadratic estimation) is an algorithm that uses a series of measurements observed over time, including statistical noise and other inaccuracies, to produce estimates of unknown variables that tend to be more accurate than those based on a single measurement, by estimating a joint probability distribution over the variables for each time-step. The filter is constructed as a mean squared error minimiser, but an alternative derivation of the filter is also provided showing how the filter relates to maximum likelihood statistics. The filter is named after Rudolf E. Kálmán. Kalman filtering has numerous technological applications. A common application is for guidance, navigation, and control of vehicles, particularly aircraft, spacecraft and ships positioned dynamically. Furthermore, Kalman filtering is much applied in time series analysis tasks such as signal processing and econometrics. Kalman filtering is also important for robotic motion planning and control, and can be used for trajectory optimization. Kalman filtering also works for modeling the central nervous system's control of movement. Due to the time delay between issuing motor commands and receiving sensory feedback, the use of Kalman filters provides a realistic model for making estimates of the current state of a motor system and issuing updated commands. The algorithm works via a two-phase process: a prediction phase and an update phase. In the prediction phase, the Kalman filter produces estimates of the current state variables, including their uncertainties. Once the outcome of the next measurement (necessarily corrupted with some error, including random noise) is observed, these estimates are updated using a weighted average, with more weight given to estimates with greater certainty. The algorithm is recursive. It can operate in real time, using only the present input measurements and the state calculated previously and its uncertainty matrix; no additional past information is required. Optimality of Kalman filtering assumes that errors have a normal (Gaussian) distribution. In the words of Rudolf E. Kálmán: "The following assumptions are made about random processes: Physical random phenomena may be thought of as due to primary random sources exciting dynamic systems. The primary sources are assumed to be independent gaussian random processes with zero mean; the dynamic systems will be linear." Regardless of Gaussianity, however, if the process and measurement covariances are known, then the Kalman filter is the best possible linear estimator in the minimum mean-square-error sense, although there may be better nonlinear estimators. It is a common misconception (perpetuated in the literature) that the Kalman filter cannot be rigorously applied unless all noise processes are assumed to be Gaussian. Extensions and generalizations of the method have also been developed, such as the extended Kalman filter and the unscented Kalman filter which work on nonlinear systems. The basis is a hidden Markov model such that the state space of the latent variables is continuous and all latent and observed variables have Gaussian distributions. Kalman filtering has been used successfully in multi-sensor fusion, and distributed sensor networks to develop distributed or consensus Kalman filtering. History The filtering method is named for Hungarian émigré Rudolf E. Kálmán, although Thorvald Nicolai Thiele and Peter Swerling developed a similar algorithm earlier. Richard S. 
Bucy of the Johns Hopkins Applied Physics Laboratory contributed to the theory, causing it to be known sometimes as Kalman–Bucy filtering. Kalman was inspired to derive the Kalman filter by applying state variables to the Wiener filtering problem. Stanley F. Schmidt is generally credited with developing the first implementation of a Kalman filter. He realized that the filter could be divided into two distinct parts, with one part for time periods between sensor outputs and another part for incorporating measurements. It was during a visit by Kálmán to the NASA Ames Research Center that Schmidt saw the applicability of Kálmán's ideas to the nonlinear problem of trajectory estimation for the Apollo program resulting in its incorporation in the Apollo navigation computer. This digital filter is sometimes termed the Stratonovich–Kalman–Bucy filter because it is a special case of a more general, nonlinear filter developed by the Soviet mathematician Ruslan Stratonovich. In fact, some of the special case linear filter's equations appeared in papers by Stratonovich that were published before the summer of 1961, when Kalman met with Stratonovich during a conference in Moscow. This Kalman filtering was first described and developed partially in technical papers by Swerling (1958), Kalman (1960) and Kalman and Bucy (1961). Kalman filters have been vital in the implementation of the navigation systems of U.S. Navy nuclear ballistic missile submarines, and in the guidance and navigation systems of cruise missiles such as the U.S. Navy's Tomahawk missile and the U.S. Air Force's Air Launched Cruise Missile. They are also used in the guidance and navigation systems of reusable launch vehicles and the attitude control and navigation systems of spacecraft which dock at the International Space Station. Overview of the calculation Kalman filtering uses a system's dynamic model (e.g., physical laws of motion), known control inputs to that system, and multiple sequential measurements (such as from sensors) to form an estimate of the system's varying quantities (its state) that is better than the estimate obtained by using only one measurement alone. As such, it is a common sensor fusion and data fusion algorithm. Noisy sensor data, approximations in the equations that describe the system evolution, and external factors that are not accounted for, all limit how well it is possible to determine the system's state. The Kalman filter deals effectively with the uncertainty due to noisy sensor data and, to some extent, with random external factors. The Kalman filter produces an estimate of the state of the system as an average of the system's predicted state and of the new measurement using a weighted average. The purpose of the weights is that values with better (i.e., smaller) estimated uncertainty are "trusted" more. The weights are calculated from the covariance, a measure of the estimated uncertainty of the prediction of the system's state. The result of the weighted average is a new state estimate that lies between the predicted and measured state, and has a better estimated uncertainty than either alone. This process is repeated at every time step, with the new estimate and its covariance informing the prediction used in the following iteration. This means that Kalman filter works recursively and requires only the last "best guess", rather than the entire history, of a system's state to calculate a new state. The measurements' certainty-grading and current-state estimate are important considerations. 
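As a minimal illustration of this weighted-average recursion, consider a one-dimensional random-walk state observed with noise; all numbers below are purely illustrative. Note that each step needs only the previous estimate, its variance, and the new measurement.

import numpy as np

rng = np.random.default_rng(0)

# 1-D random walk observed with noise: x_k = x_{k-1} + w,  z_k = x_k + v
q, r = 0.01, 1.0          # process and measurement noise variances (illustrative)
x_hat, p = 0.0, 1.0       # initial state estimate and its variance

true_x = 0.0
for k in range(50):
    true_x += rng.normal(0.0, np.sqrt(q))
    z = true_x + rng.normal(0.0, np.sqrt(r))
    # Predict: a random walk leaves the mean unchanged while uncertainty grows.
    p += q
    # Update: the gain is prior uncertainty / (prior uncertainty + measurement
    # noise), so the new estimate is a weighted average of prediction and
    # measurement, weighted by their respective certainties.
    gain = p / (p + r)
    x_hat += gain * (z - x_hat)
    p *= (1.0 - gain)

print(f"estimate {x_hat:+.3f}, truth {true_x:+.3f}, posterior variance {p:.3f}")

Because the posterior variance shrinks whenever a measurement is folded in, the filter automatically trusts its own prediction more as evidence accumulates, which is the weighted-average behavior described above.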
It is common to discuss the filter's response in terms of the Kalman filter's gain. The Kalman gain is the weight given to the measurements and current-state estimate, and can be "tuned" to achieve a particular performance. With a high gain, the filter places more weight on the most recent measurements, and thus conforms to them more responsively. With a low gain, the filter conforms to the model predictions more closely. At the extremes, a high gain (close to one) will result in a more jumpy estimated trajectory, while a low gain (close to zero) will smooth out noise but decrease the responsiveness. When performing the actual calculations for the filter (as discussed below), the state estimate and covariances are coded into matrices because of the multiple dimensions involved in a single set of calculations. This allows for a representation of linear relationships between different state variables (such as position, velocity, and acceleration) in any of the transition models or covariances. Example application As an example application, consider the problem of determining the precise location of a truck. The truck can be equipped with a GPS unit that provides an estimate of the position within a few meters. The GPS estimate is likely to be noisy; readings 'jump around' rapidly, though remaining within a few meters of the real position. In addition, since the truck is expected to follow the laws of physics, its position can also be estimated by integrating its velocity over time, determined by keeping track of wheel revolutions and the angle of the steering wheel. This is a technique known as dead reckoning. Typically, the dead reckoning will provide a very smooth estimate of the truck's position, but it will drift over time as small errors accumulate. For this example, the Kalman filter can be thought of as operating in two distinct phases: predict and update. In the prediction phase, the truck's old position will be modified according to the physical laws of motion (the dynamic or "state transition" model). Not only will a new position estimate be calculated, but also a new covariance will be calculated as well. Perhaps the covariance is proportional to the speed of the truck because we are more uncertain about the accuracy of the dead reckoning position estimate at high speeds but very certain about the position estimate at low speeds. Next, in the update phase, a measurement of the truck's position is taken from the GPS unit. Along with this measurement comes some amount of uncertainty, and its covariance relative to that of the prediction from the previous phase determines how much the new measurement will affect the updated prediction. Ideally, as the dead reckoning estimates tend to drift away from the real position, the GPS measurement should pull the position estimate back toward the real position but not disturb it to the point of becoming noisy and rapidly jumping. Technical description and context The Kalman filter is an efficient recursive filter estimating the internal state of a linear dynamic system from a series of noisy measurements. It is used in a wide range of engineering and econometric applications from radar and computer vision to estimation of structural macroeconomic models, and is an important topic in control theory and control systems engineering. Together with the linear-quadratic regulator (LQR), the Kalman filter solves the linear–quadratic–Gaussian control problem (LQG). 
The Kalman filter, the linear-quadratic regulator, and the linear–quadratic–Gaussian controller are solutions to what arguably are the most fundamental problems of control theory. In most applications, the internal state is much larger (has more degrees of freedom) than the few "observable" parameters which are measured. However, by combining a series of measurements, the Kalman filter can estimate the entire internal state. In Dempster–Shafer theory, each state equation or observation is considered a special case of a linear belief function, and Kalman filtering is a special case of combining linear belief functions on a join-tree or Markov tree. Additional methods include belief filtering, which uses Bayes or evidential updates to the state equations. A wide variety of Kalman filters now exists: Kalman's original formulation – now termed the "simple" Kalman filter – the Kalman–Bucy filter, Schmidt's "extended" filter, the information filter, and a variety of "square-root" filters developed by Bierman, Thornton, and many others. Perhaps the most commonly used type of very simple Kalman filter is the phase-locked loop, which is now ubiquitous in radios, especially frequency modulation (FM) radios, television sets, satellite communications receivers, outer space communications systems, and nearly any other electronic communications equipment. Underlying dynamic system model Kalman filtering is based on linear dynamic systems discretized in the time domain. They are modeled on a Markov chain built on linear operators perturbed by errors that may include Gaussian noise. The state of the target system refers to the ground-truth (yet hidden) system configuration of interest, which is represented as a vector of real numbers. At each discrete time increment, a linear operator is applied to the state to generate the new state, with some noise mixed in, and optionally some information from the controls on the system if they are known. Then, another linear operator mixed with more noise generates the measurable outputs (i.e., the observation) from the true ("hidden") state. The Kalman filter may be regarded as analogous to the hidden Markov model, with the difference that the hidden state variables have values in a continuous space, as opposed to the discrete state space of the hidden Markov model. There is a strong analogy between the equations of a Kalman filter and those of the hidden Markov model. A review of this and other models is given in Roweis and Ghahramani (1999) and Hamilton (1994), Chapter 13. In order to use the Kalman filter to estimate the internal state of a process given only a sequence of noisy observations, one must model the process in accordance with the following framework. This means specifying the following matrices for each time-step k: F_k, the state-transition model; H_k, the observation model; Q_k, the covariance of the process noise; R_k, the covariance of the observation noise; and sometimes B_k, the control-input model, as described below; if B_k is included, then there is also u_k, the control vector, representing the controlling input into the control-input model. As seen below, it is common in many applications that the matrices F_k, H_k, Q_k, R_k, and B_k are constant across time, in which case their index k may be dropped.
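As a concrete illustration of this specification, the matrices for a one-dimensional constant-velocity tracker might be set up as below; the sampling interval and noise strengths are assumed values for the example, not anything prescribed by the theory. The roles these matrices play in the state evolution are spelled out in the equations that follow.

import numpy as np

dt = 0.1                              # assumed sampling interval (s)

# State is [position, velocity]; only position is observed.
F = np.array([[1.0, dt],
              [0.0, 1.0]])            # state-transition model F
B = np.array([[0.5 * dt**2],
              [dt]])                  # control-input model B (acceleration input)
H = np.array([[1.0, 0.0]])            # observation model H

sigma_a = 0.5                         # assumed acceleration noise (m/s^2)
sigma_z = 1.0                         # assumed measurement noise (m)
G = np.array([[0.5 * dt**2],
              [dt]])                  # maps scalar acceleration noise into the state
Q = (sigma_a**2) * (G @ G.T)          # process-noise covariance Q
R = np.array([[sigma_z**2]])          # observation-noise covariance R

Since all of these matrices are constant, the time index is dropped, matching the convention noted above.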
The Kalman filter model assumes the true state at time k is evolved from the state at time k − 1 according to

x_k = F_k x_{k−1} + B_k u_k + w_k

where F_k is the state transition model which is applied to the previous state x_{k−1}; B_k is the control-input model which is applied to the control vector u_k; w_k is the process noise, which is assumed to be drawn from a zero-mean multivariate normal distribution with covariance Q_k: w_k ∼ N(0, Q_k). If Q is independent of time, one may, following Roweis and Ghahramani (op. cit.), write w instead of w_k to emphasize that the noise has no explicit knowledge of time. At time k an observation (or measurement) z_k of the true state x_k is made according to

z_k = H_k x_k + v_k

where H_k is the observation model, which maps the true state space into the observed space, and v_k is the observation noise, which is assumed to be zero-mean Gaussian white noise with covariance R_k: v_k ∼ N(0, R_k). Analogously to the situation for w_k, one may write v instead of v_k if R is independent of time. The initial state and the noise vectors at each step {x_0, w_1, ..., w_k, v_1, ..., v_k} are all assumed to be mutually independent. Many real-time dynamic systems do not exactly conform to this model. In fact, unmodeled dynamics can seriously degrade the filter performance, even when it was supposed to work with unknown stochastic signals as inputs. The reason for this is that the effect of unmodeled dynamics depends on the input and, therefore, can bring the estimation algorithm to instability (it diverges). On the other hand, independent white noise signals will not make the algorithm diverge. The problem of distinguishing between measurement noise and unmodeled dynamics is a difficult one and is treated as a problem of control theory using robust control. Details The Kalman filter is a recursive estimator. This means that only the estimated state from the previous time step and the current measurement are needed to compute the estimate for the current state. In contrast to batch estimation techniques, no history of observations and/or estimates is required. In what follows, the notation x̂_{n|m} represents the estimate of x at time n given observations up to and including time m ≤ n. The state of the filter is represented by two variables: x̂_{k|k}, the a posteriori state estimate mean at time k given observations up to and including time k; P_{k|k}, the a posteriori estimate covariance matrix (a measure of the estimated accuracy of the state estimate). The algorithm structure of the Kalman filter resembles that of the alpha beta filter. The Kalman filter can be written as a single equation; however, it is most often conceptualized as two distinct phases: "Predict" and "Update". The predict phase uses the state estimate from the previous timestep to produce an estimate of the state at the current timestep. This predicted state estimate is also known as the a priori state estimate because, although it is an estimate of the state at the current timestep, it does not include observation information from the current timestep. In the update phase, the innovation (the pre-fit residual), i.e. the difference between the current a priori prediction and the current observation information, is multiplied by the optimal Kalman gain and combined with the previous state estimate to refine the state estimate. This improved estimate based on the current observation is termed the a posteriori state estimate. Typically, the two phases alternate, with the prediction advancing the state until the next scheduled observation, and the update incorporating the observation.
However, this is not necessary; if an observation is unavailable for some reason, the update may be skipped and multiple prediction procedures performed. Likewise, if multiple independent observations are available at the same time, multiple update procedures may be performed (typically with different observation matrices H_k). Predict

Predicted (a priori) state estimate: x̂_{k|k−1} = F_k x̂_{k−1|k−1} + B_k u_k
Predicted (a priori) estimate covariance: P_{k|k−1} = F_k P_{k−1|k−1} F_kᵀ + Q_k

Update

Innovation (pre-fit residual): ỹ_k = z_k − H_k x̂_{k|k−1}
Innovation covariance: S_k = H_k P_{k|k−1} H_kᵀ + R_k
Optimal Kalman gain: K_k = P_{k|k−1} H_kᵀ S_k⁻¹
Updated (a posteriori) state estimate: x̂_{k|k} = x̂_{k|k−1} + K_k ỹ_k
Updated (a posteriori) estimate covariance: P_{k|k} = (I − K_k H_k) P_{k|k−1}

The formula for the updated (a posteriori) estimate covariance above is valid for the optimal gain K_k that minimizes the residual error, in which form it is most widely used in applications. Proof of the formulae is found in the derivations section, where the formula valid for any K_k is also shown. A more intuitive way to express the updated state estimate (x̂_{k|k}) is:

x̂_{k|k} = (I − K_k H_k) x̂_{k|k−1} + (K_k H_k)(H_k⁻¹ z_k)

This expression reminds us of a linear interpolation x = (1 − t) a + t b for t between [0, 1]. In our case: K_k H_k is the matrix that takes values from 0 (high error in the sensor) to I or a projection (low error); x̂_{k|k−1} is the internal state estimated from the model; H_k⁻¹ z_k is the internal state estimated from the measurement, assuming H_k is nonsingular. This expression also resembles the alpha beta filter update step. Invariants If the model is accurate, and the values for x̂_{0|0} and P_{0|0} accurately reflect the distribution of the initial state values, then the following invariants are preserved:

E[x_k − x̂_{k|k}] = E[x_k − x̂_{k|k−1}] = 0,    E[ỹ_k] = 0

where E[ξ] is the expected value of ξ. That is, all estimates have a mean error of zero. Also:

cov(x_k − x̂_{k|k}) = P_{k|k},    cov(x_k − x̂_{k|k−1}) = P_{k|k−1},    cov(ỹ_k) = S_k

so covariance matrices accurately reflect the covariance of estimates. Estimation of the noise covariances Q_k and R_k Practical implementation of a Kalman filter is often difficult due to the difficulty of getting a good estimate of the noise covariance matrices Q_k and R_k. Extensive research has been done to estimate these covariances from data. One practical method of doing this is the autocovariance least-squares (ALS) technique, which uses the time-lagged autocovariances of routine operating data to estimate the covariances. The GNU Octave and Matlab code used to calculate the noise covariance matrices using the ALS technique is available online under the GNU General Public License. The Field Kalman Filter (FKF), a Bayesian algorithm that allows simultaneous estimation of the state, parameters, and noise covariance, has been proposed. The FKF algorithm has a recursive formulation, good observed convergence, and relatively low complexity, suggesting that it may be a worthwhile alternative to the autocovariance least-squares methods. Another approach is the Optimized Kalman Filter (OKF), which considers the covariance matrices not as representatives of the noise, but rather as parameters aimed to achieve the most accurate state estimation. These two views coincide under the KF assumptions, but often contradict each other in real systems. Thus, OKF's state estimation is more robust to modeling inaccuracies. Optimality and performance It follows from theory that the Kalman filter provides an optimal state estimation in cases where a) the model matches the real system perfectly, b) the entering noise is "white" (uncorrelated), and c) the covariances of the noise are known exactly. Correlated noise can also be treated using Kalman filters. Several methods for the noise covariance estimation have been proposed during past decades, including ALS, mentioned in the section above. More generally, if the model assumptions do not match the real system perfectly, then optimal state estimation is not necessarily obtained by setting Q_k and R_k to the covariances of the noise.
Instead, in that case, the parameters Qk and Rk may be set to explicitly optimize the state estimation, e.g., using standard supervised learning. After the covariances are set, it is useful to evaluate the performance of the filter; i.e., whether it is possible to improve the state estimation quality. If the Kalman filter works optimally, the innovation sequence (the output prediction error) is a white noise, therefore the whiteness property of the innovations measures filter performance. Several different methods can be used for this purpose. If the noise terms are distributed in a non-Gaussian manner, methods for assessing performance of the filter estimate, which use probability inequalities or large-sample theory, are known in the literature. Example application, technical Consider a truck on frictionless, straight rails. Initially, the truck is stationary at position 0, but it is buffeted this way and that by random uncontrolled forces. We measure the position of the truck every Δt seconds, but these measurements are imprecise; we want to maintain a model of the truck's position and velocity. We show here how we derive the model from which we create our Kalman filter. Since are constant, their time indices are dropped. The position and velocity of the truck are described by the linear state space where is the velocity, that is, the derivative of position with respect to time. We assume that between the (k − 1) and k timestep, uncontrolled forces cause a constant acceleration of ak that is normally distributed with mean 0 and standard deviation σa. From Newton's laws of motion we conclude that (there is no term since there are no known control inputs. Instead, ak is the effect of an unknown input and applies that effect to the state vector) where so that where The matrix is not full rank (it is of rank one if ). Hence, the distribution is not absolutely continuous and has no probability density function. Another way to express this, avoiding explicit degenerate distributions is given by At each time phase, a noisy measurement of the true position of the truck is made. Let us suppose the measurement noise vk is also distributed normally, with mean 0 and standard deviation σz. where and We know the initial starting state of the truck with perfect precision, so we initialize and to tell the filter that we know the exact position and velocity, we give it a zero covariance matrix: If the initial position and velocity are not known perfectly, the covariance matrix should be initialized with suitable variances on its diagonal: The filter will then prefer the information from the first measurements over the information already in the model. Asymptotic form For simplicity, assume that the control input . Then the Kalman filter may be written: A similar equation holds if we include a non-zero control input. Gain matrices evolve independently of the measurements . From above, the four equations needed for updating the Kalman gain are as follows: Since the gain matrices depend only on the model, and not the measurements, they may be computed offline. Convergence of the gain matrices to an asymptotic matrix applies for conditions established in Walrand and Dimakis. Simulations establish the number of steps to convergence. For the moving truck example described above, with . and , simulation shows convergence in iterations. 
Using the asymptotic gain, and assuming and are independent of , the Kalman filter becomes a linear time-invariant filter: The asymptotic gain , if it exists, can be computed by first solving the following discrete Riccati equation for the asymptotic state covariance : The asymptotic gain is then computed as before. Additionally, a form of the asymptotic Kalman filter more commonly used in control theory is given by where This leads to an estimator of the form Derivations The Kalman filter can be derived as a generalized least squares method operating on previous data. Deriving the posteriori estimate covariance matrix Starting with our invariant on the error covariance Pk | k as above substitute in the definition of and substitute and and by collecting the error vectors we get Since the measurement error vk is uncorrelated with the other terms, this becomes by the properties of vector covariance this becomes which, using our invariant on Pk | k−1 and the definition of Rk becomes This formula (sometimes known as the Joseph form of the covariance update equation) is valid for any value of Kk. It turns out that if Kk is the optimal Kalman gain, this can be simplified further as shown below. Kalman gain derivation The Kalman filter is a minimum mean-square error (MMSE) estimator. The error in the a posteriori state estimation is We seek to minimize the expected value of the square of the magnitude of this vector, . This is equivalent to minimizing the trace of the a posteriori estimate covariance matrix . By expanding out the terms in the equation above and collecting, we get: The trace is minimized when its matrix derivative with respect to the gain matrix is zero. Using the gradient matrix rules and the symmetry of the matrices involved we find that Solving this for Kk yields the Kalman gain: This gain, which is known as the optimal Kalman gain, is the one that yields MMSE estimates when used. Simplification of the posteriori error covariance formula The formula used to calculate the a posteriori error covariance can be simplified when the Kalman gain equals the optimal value derived above. Multiplying both sides of our Kalman gain formula on the right by SkKkT, it follows that Referring back to our expanded formula for the a posteriori error covariance, we find the last two terms cancel out, giving This formula is computationally cheaper and thus nearly always used in practice, but is only correct for the optimal gain. If arithmetic precision is unusually low causing problems with numerical stability, or if a non-optimal Kalman gain is deliberately used, this simplification cannot be applied; the a posteriori error covariance formula as derived above (Joseph form) must be used. Sensitivity analysis The Kalman filtering equations provide an estimate of the state and its error covariance recursively. The estimate and its quality depend on the system parameters and the noise statistics fed as inputs to the estimator. This section analyzes the effect of uncertainties in the statistical inputs to the filter. In the absence of reliable statistics or the true values of noise covariance matrices and , the expression no longer provides the actual error covariance. In other words, . In most real-time applications, the covariance matrices that are used in designing the Kalman filter are different from the actual (true) noise covariances matrices. 
This sensitivity analysis describes the behavior of the estimation error covariance when the noise covariances as well as the system matrices F_k and H_k that are fed as inputs to the filter are incorrect. Thus, the sensitivity analysis describes the robustness (or sensitivity) of the estimator to misspecified statistical and parametric inputs. This discussion is limited to the error sensitivity analysis for the case of statistical uncertainties. Here the actual noise covariances are denoted by Q_k^a and R_k^a respectively, whereas the design values used in the estimator are Q_k and R_k respectively. The actual error covariance is denoted by P_{k|k}^a, and the covariance as computed by the Kalman filter is referred to as the Riccati variable. When Q_k = Q_k^a and R_k = R_k^a, this means that P_{k|k} = P_{k|k}^a. While computing the actual error covariance, substituting for the state estimate and using the fact that E[w_k w_kᵀ] = Q_k^a and E[v_k v_kᵀ] = R_k^a results in the following recursive equations for P_{k|k−1}^a and P_{k|k}^a:

P_{k|k−1}^a = F_k P_{k−1|k−1}^a F_kᵀ + Q_k^a
P_{k|k}^a = (I − K_k H_k) P_{k|k−1}^a (I − K_k H_k)ᵀ + K_k R_k^a K_kᵀ

While computing P_{k|k}, by design the filter implicitly assumes that E[w_k w_kᵀ] = Q_k and E[v_k v_kᵀ] = R_k. The recursive expressions for P_{k|k}^a and P_{k|k} are identical except for the presence of Q_k^a and R_k^a in place of the design values Q_k and R_k respectively. Research has been done to analyze the robustness of Kalman filter systems. Factored form One problem with the Kalman filter is its numerical stability. If the process noise covariance Q_k is small, round-off error often causes a small positive eigenvalue of the state covariance matrix P to be computed as a negative number. This renders the numerical representation of P indefinite, while its true form is positive-definite. Positive-definite matrices have the property that they can be factored into the product of a non-singular, lower-triangular matrix S and its transpose: P = S·Sᵀ. The factor S can be computed efficiently using the Cholesky factorization algorithm. This product form of the covariance matrix P is guaranteed to be symmetric, and for all 1 ≤ k ≤ n, the k-th diagonal element P_kk is equal to the squared Euclidean norm of the k-th row of S, which is necessarily positive. An equivalent form, which avoids many of the square-root operations required by the Cholesky factorization algorithm, yet preserves the desirable numerical properties, is the U-D decomposition form, P = U·D·Uᵀ, where U is a unit triangular matrix (with unit diagonal) and D is a diagonal matrix. Between the two, the U-D factorization uses the same amount of storage and somewhat less computation, and is the most commonly used triangular factorization. (Early literature on the relative efficiency is somewhat misleading, as it assumed that square roots were much more time-consuming than divisions, while on 21st-century computers they are only slightly more expensive.) Efficient algorithms for the Kalman prediction and update steps in the factored form were developed by G. J. Bierman and C. L. Thornton. The L·D·Lᵀ decomposition of the innovation covariance matrix S_k is the basis for another type of numerically efficient and robust square-root filter. The algorithm starts with the LU decomposition as implemented in the Linear Algebra PACKage (LAPACK). These results are further factored into the L·D·Lᵀ structure with methods given by Golub and Van Loan (algorithm 4.1.2) for a symmetric nonsingular matrix. Any singular covariance matrix is pivoted so that the first diagonal partition is nonsingular and well-conditioned. The pivoting algorithm must retain any portion of the innovation covariance matrix directly corresponding to observed state variables H_k·x̂_{k|k−1} that are associated with auxiliary observations in y_k.
The L·D·Lᵀ square-root filter requires orthogonalization of the observation vector. This may be done with the inverse square root of the covariance matrix for the auxiliary variables using Method 2 in Higham (2002, p. 263). Parallel form The Kalman filter is efficient for sequential data processing on central processing units (CPUs), but in its original form it is inefficient on parallel architectures such as graphics processing units (GPUs). It is, however, possible to express the filter-update routine in terms of an associative operator using the formulation in Särkkä and García-Fernández (2021). The filter solution can then be retrieved by the use of a prefix sum algorithm, which can be efficiently implemented on GPUs. This reduces the computational complexity from O(N) in the number of time steps to O(log N). Relationship to recursive Bayesian estimation The Kalman filter can be presented as one of the simplest dynamic Bayesian networks. The Kalman filter calculates estimates of the true values of states recursively over time using incoming measurements and a mathematical process model. Similarly, recursive Bayesian estimation calculates estimates of an unknown probability density function (PDF) recursively over time using incoming measurements and a mathematical process model. In recursive Bayesian estimation, the true state is assumed to be an unobserved Markov process, and the measurements are the observed states of a hidden Markov model (HMM). Because of the Markov assumption, the true state is conditionally independent of all earlier states given the immediately previous state. Similarly, the measurement at the k-th timestep is dependent only upon the current state and is conditionally independent of all other states given the current state. Using these assumptions, the probability distribution over all states of the hidden Markov model can be written simply as:

p(x_0, ..., x_k, z_1, ..., z_k) = p(x_0) ∏_{i=1}^{k} p(z_i | x_i) p(x_i | x_{i−1})

However, when a Kalman filter is used to estimate the state x, the probability distribution of interest is that associated with the current states conditioned on the measurements up to the current timestep. This is achieved by marginalizing out the previous states and dividing by the probability of the measurement set. This results in the predict and update phases of the Kalman filter written probabilistically. The probability distribution associated with the predicted state is the sum (integral) of the products of the probability distribution associated with the transition from the (k − 1)-th timestep to the k-th and the probability distribution associated with the previous state, over all possible x_{k−1}:

p(x_k | Z_{k−1}) = ∫ p(x_k | x_{k−1}) p(x_{k−1} | Z_{k−1}) dx_{k−1}

The measurement set up to time t is Z_t = {z_1, ..., z_t}. The probability distribution of the update is proportional to the product of the measurement likelihood and the predicted state:

p(x_k | Z_k) = p(z_k | x_k) p(x_k | Z_{k−1}) / p(z_k | Z_{k−1})

The denominator p(z_k | Z_{k−1}) is a normalization term. The remaining probability density functions are

p(x_k | x_{k−1}) = N(F_k x_{k−1}, Q_k),    p(z_k | x_k) = N(H_k x_k, R_k),    p(x_{k−1} | Z_{k−1}) = N(x̂_{k−1|k−1}, P_{k−1|k−1})

The PDF at the previous timestep is assumed inductively to be the estimated state and covariance. This is justified because, as an optimal estimator, the Kalman filter makes best use of the measurements, therefore the PDF for x_k given the measurements Z_k is the Kalman filter estimate. Marginal likelihood Related to the recursive Bayesian interpretation described above, the Kalman filter can be viewed as a generative model, i.e., a process for generating a stream of random observations z = (z_0, z_1, z_2, ...). Specifically, the process is: Sample a hidden state x_0 from the Gaussian prior distribution p(x_0). Sample an observation z_0 from the observation model p(z_0 | x_0).
For , do Sample the next hidden state from the transition model Sample an observation from the observation model This process has identical structure to the hidden Markov model, except that the discrete state and observations are replaced with continuous variables sampled from Gaussian distributions. In some applications, it is useful to compute the probability that a Kalman filter with a given set of parameters (prior distribution, transition and observation models, and control inputs) would generate a particular observed signal. This probability is known as the marginal likelihood because it integrates over ("marginalizes out") the values of the hidden state variables, so it can be computed using only the observed signal. The marginal likelihood can be useful to evaluate different parameter choices, or to compare the Kalman filter against other models using Bayesian model comparison. It is straightforward to compute the marginal likelihood as a side effect of the recursive filtering computation. By the chain rule, the likelihood can be factored as the product of the probability of each observation given previous observations, , and because the Kalman filter describes a Markov process, all relevant information from previous observations is contained in the current state estimate Thus the marginal likelihood is given by i.e., a product of Gaussian densities, each corresponding to the density of one observation zk under the current filtering distribution . This can easily be computed as a simple recursive update; however, to avoid numeric underflow, in a practical implementation it is usually desirable to compute the log marginal likelihood instead. Adopting the convention , this can be done via the recursive update rule where is the dimension of the measurement vector. An important application where such a (log) likelihood of the observations (given the filter parameters) is used is multi-target tracking. For example, consider an object tracking scenario where a stream of observations is the input, however, it is unknown how many objects are in the scene (or, the number of objects is known but is greater than one). For such a scenario, it can be unknown apriori which observations/measurements were generated by which object. A multiple hypothesis tracker (MHT) typically will form different track association hypotheses, where each hypothesis can be considered as a Kalman filter (for the linear Gaussian case) with a specific set of parameters associated with the hypothesized object. Thus, it is important to compute the likelihood of the observations for the different hypotheses under consideration, such that the most-likely one can be found. Information filter In cases where the dimension of the observation vector y is bigger than the dimension of the state space vector x, the information filter can avoid the inversion of a bigger matrix in the Kalman gain calculation at the price of inverting a smaller matrix in the prediction step, thus saving computing time. Additionally, the information filter allows for system information initialization according to , which would not be possible for the regular Kalman filter. In the information filter, or inverse covariance filter, the estimated covariance and estimated state are replaced by the information matrix and information vector respectively. 
These are defined as: Similarly the predicted covariance and state have equivalent information forms, defined as: and the measurement covariance and measurement vector, which are defined as: The information update now becomes a trivial sum. The main advantage of the information filter is that N measurements can be filtered at each time step simply by summing their information matrices and vectors. To predict the information filter the information matrix and vector can be converted back to their state space equivalents, or alternatively the information space prediction can be used. Fixed-lag smoother The optimal fixed-lag smoother provides the optimal estimate of for a given fixed-lag using the measurements from to . It can be derived using the previous theory via an augmented state, and the main equation of the filter is the following: where: is estimated via a standard Kalman filter; is the innovation produced considering the estimate of the standard Kalman filter; the various with are new variables; i.e., they do not appear in the standard Kalman filter; the gains are computed via the following scheme: and where and are the prediction error covariance and the gains of the standard Kalman filter (i.e., ). If the estimation error covariance is defined so that then we have that the improvement on the estimation of is given by: Fixed-interval smoothers The optimal fixed-interval smoother provides the optimal estimate of () using the measurements from a fixed interval to . This is also called "Kalman Smoothing". There are several smoothing algorithms in common use. Rauch–Tung–Striebel The Rauch–Tung–Striebel (RTS) smoother is an efficient two-pass algorithm for fixed interval smoothing. The forward pass is the same as the regular Kalman filter algorithm. These filtered a-priori and a-posteriori state estimates , and covariances , are saved for use in the backward pass (for retrodiction). In the backward pass, we compute the smoothed state estimates and covariances . We start at the last time step and proceed backward in time using the following recursive equations: where is the a-posteriori state estimate of timestep and is the a-priori state estimate of timestep . The same notation applies to the covariance. Modified Bryson–Frazier smoother An alternative to the RTS algorithm is the modified Bryson–Frazier (MBF) fixed interval smoother developed by Bierman. This also uses a backward pass that processes data saved from the Kalman filter forward pass. The equations for the backward pass involve the recursive computation of data which are used at each observation time to compute the smoothed state and covariance. The recursive equations are where is the residual covariance and . The smoothed state and covariance can then be found by substitution in the equations or An important advantage of the MBF is that it does not require finding the inverse of the covariance matrix. Bierman's derivation is based on the RTS smoother, which assumes that the underlying distributions are Gaussian. However, a derivation of the MBF based on the concept of the fixed point smoother, which does not require the Gaussian assumption, is given by Gibbs. The MBF can also be used to perform consistency checks on the filter residuals and the difference between the value of a filter state after an update and the smoothed value of the state, that is . 
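As a sketch of the Rauch–Tung–Striebel recursions described above, assuming the a priori and a posteriori means and covariances were stored during a forward Kalman pass; a constant transition matrix F is assumed for brevity, and the list alignment documented below is a convention of this example.

import numpy as np

def rts_smoother(F, x_filt, P_filt, x_pred, P_pred):
    """Rauch–Tung–Striebel backward pass.

    F      : constant state-transition matrix
    x_filt : list of a-posteriori means        x_{k|k}   from the forward pass
    P_filt : list of a-posteriori covariances  P_{k|k}
    x_pred : list of a-priori means, aligned so x_pred[k] is the one-step
             prediction x_{k+1|k} made from step k
    P_pred : list of a-priori covariances, aligned the same way (P_{k+1|k})
    """
    n = len(x_filt)
    x_smooth = [None] * n
    P_smooth = [None] * n
    # The last smoothed estimate equals the last filtered estimate.
    x_smooth[-1], P_smooth[-1] = x_filt[-1], P_filt[-1]
    for k in range(n - 2, -1, -1):
        # Smoother gain C_k = P_{k|k} F^T P_{k+1|k}^{-1}
        C = P_filt[k] @ F.T @ np.linalg.inv(P_pred[k])
        x_smooth[k] = x_filt[k] + C @ (x_smooth[k + 1] - x_pred[k])
        P_smooth[k] = P_filt[k] + C @ (P_smooth[k + 1] - P_pred[k]) @ C.T
    return x_smooth, P_smooth

Because the backward pass reuses quantities the forward filter already computed, the smoother costs little more than a second sweep over the stored estimates.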
Minimum-variance smoother The minimum-variance smoother can attain the best-possible error performance, provided that the models are linear, their parameters and the noise statistics are known precisely. This smoother is a time-varying state-space generalization of the optimal non-causal Wiener filter. The smoother calculations are done in two passes. The forward calculations involve a one-step-ahead predictor and are given by The above system is known as the inverse Wiener-Hopf factor. The backward recursion is the adjoint of the above forward system. The result of the backward pass may be calculated by operating the forward equations on the time-reversed and time reversing the result. In the case of output estimation, the smoothed estimate is given by Taking the causal part of this minimum-variance smoother yields which is identical to the minimum-variance Kalman filter. The above solutions minimize the variance of the output estimation error. Note that the Rauch–Tung–Striebel smoother derivation assumes that the underlying distributions are Gaussian, whereas the minimum-variance solutions do not. Optimal smoothers for state estimation and input estimation can be constructed similarly. A continuous-time version of the above smoother is described in. Expectation–maximization algorithms may be employed to calculate approximate maximum likelihood estimates of unknown state-space parameters within minimum-variance filters and smoothers. Often uncertainties remain within problem assumptions. A smoother that accommodates uncertainties can be designed by adding a positive definite term to the Riccati equation. In cases where the models are nonlinear, step-wise linearizations may be within the minimum-variance filter and smoother recursions (extended Kalman filtering). Frequency-weighted Kalman filters Pioneering research on the perception of sounds at different frequencies was conducted by Fletcher and Munson in the 1930s. Their work led to a standard way of weighting measured sound levels within investigations of industrial noise and hearing loss. Frequency weightings have since been used within filter and controller designs to manage performance within bands of interest. Typically, a frequency shaping function is used to weight the average power of the error spectral density in a specified frequency band. Let denote the output estimation error exhibited by a conventional Kalman filter. Also, let denote a causal frequency weighting transfer function. The optimum solution which minimizes the variance of arises by simply constructing . The design of remains an open question. One way of proceeding is to identify a system which generates the estimation error and setting equal to the inverse of that system. This procedure may be iterated to obtain mean-square error improvement at the cost of increased filter order. The same technique can be applied to smoothers. Nonlinear filters The basic Kalman filter is limited to a linear assumption. More complex systems, however, can be nonlinear. The nonlinearity can be associated either with the process model or with the observation model or with both. The most common variants of Kalman filters for non-linear systems are the Extended Kalman Filter and Unscented Kalman filter. The suitability of which filter to use depends on the non-linearity indices of the process and observation model. 
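As a preview of the extended variant described next, the following sketch propagates the mean through the nonlinear functions while propagating the covariance through numerically approximated Jacobians; the finite-difference step size and the use of numerical rather than analytic Jacobians are choices made for this example only.

import numpy as np

def numerical_jacobian(func, x, eps=1e-6):
    """Finite-difference Jacobian of func at x (one column per state)."""
    fx = func(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (func(x + dx) - fx) / eps
    return J

def ekf_step(f, h, x, P, z, Q, R):
    """One predict+update cycle of an extended Kalman filter."""
    # Predict: propagate the mean through f, the covariance through its Jacobian.
    F = numerical_jacobian(f, x)
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update: linearize h around the predicted state.
    H = numerical_jacobian(h, x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(x.size) - K @ H) @ P_pred
    return x_new, P_new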
Extended Kalman filter In the extended Kalman filter (EKF), the state transition and observation models need not be linear functions of the state but may instead be nonlinear functions. These functions are of differentiable type. The function f can be used to compute the predicted state from the previous estimate and similarly the function h can be used to compute the predicted measurement from the predicted state. However, f and h cannot be applied to the covariance directly. Instead a matrix of partial derivatives (the Jacobian) is computed. At each timestep the Jacobian is evaluated with current predicted states. These matrices can be used in the Kalman filter equations. This process essentially linearizes the nonlinear function around the current estimate. Unscented Kalman filter When the state transition and observation models—that is, the predict and update functions and —are highly nonlinear, the extended Kalman filter can give particularly poor performance. This is because the covariance is propagated through linearization of the underlying nonlinear model. The unscented Kalman filter (UKF)  uses a deterministic sampling technique known as the unscented transformation (UT) to pick a minimal set of sample points (called sigma points) around the mean. The sigma points are then propagated through the nonlinear functions, from which a new mean and covariance estimate are then formed. The resulting filter depends on how the transformed statistics of the UT are calculated and which set of sigma points are used. It should be remarked that it is always possible to construct new UKFs in a consistent way. For certain systems, the resulting UKF more accurately estimates the true mean and covariance. This can be verified with Monte Carlo sampling or Taylor series expansion of the posterior statistics. In addition, this technique removes the requirement to explicitly calculate Jacobians, which for complex functions can be a difficult task in itself (i.e., requiring complicated derivatives if done analytically or being computationally costly if done numerically), if not impossible (if those functions are not differentiable). Sigma points For a random vector , sigma points are any set of vectors attributed with first-order weights that fulfill for all : second-order weights that fulfill for all pairs . A simple choice of sigma points and weights for in the UKF algorithm is where is the mean estimate of . The vector is the jth column of where . Typically, is obtained via Cholesky decomposition of . With some care the filter equations can be expressed in such a way that is evaluated directly without intermediate calculations of . This is referred to as the square-root unscented Kalman filter. The weight of the mean value, , can be chosen arbitrarily. Another popular parameterization (which generalizes the above) is and control the spread of the sigma points. is related to the distribution of . Note that this is an overparameterization in the sense that any one of , and can be chosen arbitrarily. Appropriate values depend on the problem at hand, but a typical recommendation is , , and . If the true distribution of is Gaussian, is optimal. Predict As with the EKF, the UKF prediction can be used independently from the UKF update, in combination with a linear (or indeed EKF) update, or vice versa. Given estimates of the mean and covariance, and , one obtains sigma points as described in the section above. The sigma points are propagated through the transition function f. . 
The propagated sigma points are weighted to produce the predicted mean and covariance,

x̂_{k|k−1} = Σ_j W_j^a X_j,    P_{k|k−1} = Σ_j W_j^c (X_j − x̂_{k|k−1})(X_j − x̂_{k|k−1})ᵀ + Q_k

where W_j^a are the first-order weights of the original sigma points, W_j^c are the second-order weights, and the matrix Q_k is the covariance of the transition noise w_k. Update Given prediction estimates x̂_{k|k−1} and P_{k|k−1}, a new set of sigma points X_j with corresponding first-order weights W_j^a and second-order weights W_j^c is calculated. These sigma points are transformed through the measurement function h, Z_j = h(X_j). Then the empirical mean and covariance of the transformed points are calculated,

ẑ = Σ_j W_j^a Z_j,    Ŝ_k = Σ_j W_j^c (Z_j − ẑ)(Z_j − ẑ)ᵀ + R_k

where R_k is the covariance matrix of the observation noise v_k. Additionally, the cross-covariance matrix

C_{xz} = Σ_j W_j^c (X_j − x̂_{k|k−1})(Z_j − ẑ)ᵀ

is also needed. The Kalman gain is K_k = C_{xz} Ŝ_k⁻¹. The updated mean and covariance estimates are

x̂_{k|k} = x̂_{k|k−1} + K_k (z_k − ẑ),    P_{k|k} = P_{k|k−1} − K_k Ŝ_k K_kᵀ

Discriminative Kalman filter When the observation model is highly non-linear and/or non-Gaussian, it may prove advantageous to apply Bayes' rule and estimate where for nonlinear functions . This replaces the generative specification of the standard Kalman filter with a discriminative model for the latent states given observations. Under a stationary state model where , if then given a new observation , it follows that where Note that this approximation requires to be positive-definite; in the case that it is not, is used instead. Such an approach proves particularly useful when the dimensionality of the observations is much greater than that of the latent states and can be used to build filters that are particularly robust to nonstationarities in the observation model. Adaptive Kalman filter Adaptive Kalman filters allow adaptation to process dynamics that are not modeled in the process model, which happens, for example, in the context of a maneuvering target when a constant-velocity (reduced-order) Kalman filter is employed for tracking. Kalman–Bucy filter Kalman–Bucy filtering (named for Richard Snowden Bucy) is a continuous-time version of Kalman filtering. It is based on the state space model

d/dt x(t) = F(t) x(t) + B(t) u(t) + w(t),    z(t) = H(t) x(t) + v(t)

where Q(t) and R(t) represent the intensities of the two white noise terms w(t) and v(t), respectively. The filter consists of two differential equations, one for the state estimate and one for the covariance:

d/dt x̂(t) = F(t) x̂(t) + B(t) u(t) + K(t) (z(t) − H(t) x̂(t))
d/dt P(t) = F(t) P(t) + P(t) F(t)ᵀ + Q(t) − K(t) R(t) K(t)ᵀ

where the Kalman gain is given by

K(t) = P(t) H(t)ᵀ R(t)⁻¹

Note that in this expression for K(t) the covariance of the observation noise R(t) represents at the same time the covariance of the prediction error (or innovation) ỹ(t) = z(t) − H(t) x̂(t); these covariances are equal only in the case of continuous time. The distinction between the prediction and update steps of discrete-time Kalman filtering does not exist in continuous time. The second differential equation, for the covariance, is an example of a Riccati equation. Nonlinear generalizations to Kalman–Bucy filters include the continuous-time extended Kalman filter. Hybrid Kalman filter Most physical systems are represented as continuous-time models, while discrete-time measurements are made frequently for state estimation via a digital processor. Therefore, the system model and measurement model are given by

d/dt x(t) = F(t) x(t) + B(t) u(t) + w(t),    w(t) ∼ N(0, Q(t))
z_k = H_k x_k + v_k,    v_k ∼ N(0, R_k)

where x_k = x(t_k). Initialize Predict The prediction equations are derived from those of the continuous-time Kalman filter without update from measurements, i.e., with K(t) = 0. The predicted state and covariance are calculated respectively by solving a set of differential equations with the initial value equal to the estimate at the previous step. For the case of linear time-invariant systems, the continuous-time dynamics can be exactly discretized into a discrete-time system using matrix exponentials. Update The update equations are identical to those of the discrete-time Kalman filter.
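Returning to the unscented construction described earlier, the sigma-point generation and the unscented transform can be sketched as follows; the α, β, κ defaults shown are common textbook recommendations assumed for this example (recommended values vary across the literature), and the function names are illustrative.

import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Generate 2n+1 sigma points plus mean (Wm) and covariance (Wc) weights."""
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)   # matrix square root of (n+lam)P
    pts = np.vstack([x, x + S.T, x - S.T])  # rows are the sigma points
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Wm.copy()
    # With a small alpha, the center weight is large and negative; the
    # weights still sum to one, which is what the transform requires.
    Wm[0] = lam / (n + lam)
    Wc[0] = Wm[0] + (1.0 - alpha**2 + beta)
    return pts, Wm, Wc

def unscented_transform(pts, Wm, Wc, noise_cov):
    """Weighted empirical mean/covariance of (already propagated) sigma points."""
    mean = Wm @ pts
    diff = pts - mean
    cov = diff.T @ (Wc[:, None] * diff) + noise_cov
    return mean, cov

Propagating the rows of pts through the transition function and passing the results to unscented_transform together with Q yields the predicted mean and covariance; doing the same with the measurement function and R gives the empirical measurement mean and the innovation covariance needed for the gain.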
Variants for the recovery of sparse signals The traditional Kalman filter has also been employed for the recovery of sparse, possibly dynamic, signals from noisy observations. Recent works utilize notions from the theory of compressed sensing/sampling, such as the restricted isometry property and related probabilistic recovery arguments, for sequentially estimating the sparse state in intrinsically low-dimensional systems. Relation to Gaussian processes Since linear Gaussian state-space models lead to Gaussian processes, Kalman filters can be viewed as sequential solvers for Gaussian process regression. Applications Attitude and heading reference systems Autopilot Electric battery state of charge (SoC) estimation Brain–computer interfaces Tracking and vertex fitting of charged particles in particle detectors Tracking of objects in computer vision Dynamic positioning in shipping Economics, in particular macroeconomics, time series analysis, and econometrics Inertial guidance system Nuclear medicine – single photon emission computed tomography image restoration Orbit determination Power system state estimation Radar tracker Satellite navigation systems Seismology Sensorless control of AC motor variable-frequency drives Simultaneous localization and mapping Speech enhancement Visual odometry Weather forecasting Navigation system 3D modeling Structural health monitoring Human sensorimotor processing See also Alpha beta filter Inverse-variance weighting Covariance intersection Data assimilation Ensemble Kalman filter Extended Kalman filter Fast Kalman filter Filtering problem (stochastic processes) Generalized filtering Invariant extended Kalman filter Kernel adaptive filter Masreliez's theorem Moving horizon estimation Particle filter estimator PID controller Predictor–corrector method Recursive least squares filter Schmidt–Kalman filter Separation principle Sliding mode control State-transition matrix Stochastic differential equations Switching Kalman filter References Further reading External links A New Approach to Linear Filtering and Prediction Problems, by R. E. Kalman, 1960 Kalman and Bayesian Filters in Python. Open source Kalman filtering textbook. How a Kalman filter works, in pictures. Illuminates the Kalman filter with pictures and colors Kalman–Bucy Filter, a derivation of the Kalman–Bucy Filter Kalman filter in Javascript. Open source Kalman filter library for node.js and the web browser. An Introduction to the Kalman Filter , SIGGRAPH 2001 Course, Greg Welch and Gary Bishop Kalman Filter webpage, with many links Kalman Filter Explained Simply, Step-by-Step Tutorial of the Kalman Filter with Equations Gerald J. Bierman's Estimation Subroutine Library: Corresponds to the code in the research monograph "Factorization Methods for Discrete Sequential Estimation" originally published by Academic Press in 1977. Republished by Dover. Matlab Toolbox implementing parts of Gerald J. Bierman's Estimation Subroutine Library: UD / UDU' and LD / LDL' factorization with associated time and measurement updates making up the Kalman filter. Matlab Toolbox of Kalman Filtering applied to Simultaneous Localization and Mapping: Vehicle moving in 1D, 2D and 3D The Kalman Filter in Reproducing Kernel Hilbert Spaces A comprehensive introduction. Matlab code to estimate Cox–Ingersoll–Ross interest rate model with Kalman Filter : Corresponds to the paper "estimating and testing exponential-affine term structure models by kalman filter" published by Review of Quantitative Finance and Accounting in 1999. 
Online demo of the Kalman Filter. Demonstration of Kalman Filter (and other data assimilation methods) using twin experiments. kalman-filter.com. Insights into the use of Kalman Filters in different domains. Examples and how-to on using Kalman Filters with MATLAB A Tutorial on Filtering and Estimation Explaining Filtering (Estimation) in One Hour, Ten Minutes, One Minute, and One Sentence by Yu-Chi Ho Simo Särkkä (2013). "Bayesian Filtering and Smoothing". Cambridge University Press. Full text available on author's webpage https://users.aalto.fi/~ssarkka/. Control theory Linear filters Markov models Nonlinear filters Robot control Signal estimation Stochastic differential equations Hungarian inventions
Kalman filter
[ "Mathematics", "Engineering" ]
11,193
[ "Robotics engineering", "Applied mathematics", "Control theory", "Robot control", "Dynamical systems" ]
11,096,735
https://en.wikipedia.org/wiki/Static%20light%20scattering
Static light scattering is a technique in physical chemistry that measures the intensity of the scattered light to obtain the average molecular weight Mw of a macromolecule like a polymer or a protein in solution. Measurement of the scattering intensity at many angles allows calculation of the root mean square radius, also called the radius of gyration Rg. By measuring the scattering intensity for many samples of various concentrations, the second virial coefficient, A2, can be calculated. Static light scattering is also commonly utilized to determine the size of particle suspensions in the sub-μm and supra-μm ranges, via the Lorenz-Mie (see Mie scattering) and Fraunhofer diffraction formalisms, respectively. For static light scattering experiments, high-intensity monochromatic light, usually a laser, is launched into a solution containing the macromolecules. One or many detectors are used to measure the scattering intensity at one or many angles. The angular dependence is required to obtain accurate measurements of both molar mass and size for all macromolecules of radius above 1–2% of the incident wavelength. Hence simultaneous measurements at several angles relative to the direction of the incident light, known as multi-angle light scattering (MALS) or multi-angle laser light scattering (MALLS), are generally regarded as the standard implementation of static light scattering. Additional details on the history and theory of MALS may be found in multi-angle light scattering. To measure the average molecular weight directly without calibration from the light scattering intensity, the laser intensity, the quantum efficiency of the detector, and the full scattering volume and solid angle of the detector need to be known. Since this is impractical, all commercial instruments are calibrated using a strong, known scatterer like toluene, as the Rayleigh ratios of toluene and a few other solvents have been measured using an absolute light scattering instrument. Theory For a light scattering instrument composed of many detectors placed at various angles, all the detectors need to respond the same way. Usually, detectors will have slightly different quantum efficiencies and different gains, and will observe different geometrical scattering volumes. In this case, a normalization of the detectors is necessary. To normalize the detectors, a measurement of a pure solvent is made first. Then an isotropic scatterer is added to the solvent. Since isotropic scatterers scatter the same intensity at any angle, the detector efficiency and gain can be normalized with this procedure. It is convenient to normalize all the detectors to the 90° angle detector, using a normalization factor N(θ) = IR(90)/IR(θ), where IR(90) is the scattering intensity measured for the Rayleigh scatterer by the 90° angle detector. The most common equation to measure the weight-average molecular weight, Mw, is the Zimm equation (the right-hand side of the Zimm equation is provided incorrectly in some texts, as noted by Hiemenz and Lodge): Kc/ΔR(θ) = (1/Mw)(1 + q²Rg²/3) + 2A2c, where K = 4π²n0²(dn/dc)²/(NAλ⁴) is an optical constant, ΔR(θ) is the excess Rayleigh ratio of the solution obtained from the analyte and toluene intensities, and the scattering vector for vertically polarized light is q = 4πn0sin(θ/2)/λ, with n0 the refractive index of the solvent, λ the wavelength of the light source, NA the Avogadro constant, c the solution concentration, and dn/dc the change in the refractive index of the solution with change in concentration. The intensity of the analyte measured at an angle is IA(θ). In these equations, the subscript A is for analyte (the solution) and T is for the toluene, with the Rayleigh ratio of toluene, RT, being 1.35×10−5 cm−1 for a HeNe laser.
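The optical constant K and the scattering vector q reconstructed above can be evaluated directly. A minimal sketch follows; the instrument parameters (toluene as solvent, a HeNe laser, the dn/dc value) are illustrative assumptions, not values from the article.

```python
import numpy as np

# Illustrative (assumed) instrument parameters
n0 = 1.496                # refractive index of toluene (the solvent)
lam_cm = 632.8e-7         # HeNe vacuum wavelength, in cm
dn_dc = 0.108             # refractive index increment dn/dc, mL/g
NA = 6.02214076e23        # Avogadro constant, 1/mol

# Optical constant K of the Zimm equation, in cm^2 mol / g^2
K = 4.0 * np.pi**2 * n0**2 * dn_dc**2 / (NA * lam_cm**4)

def q(theta_deg):
    """Scattering vector magnitude for vertically polarized light, 1/cm."""
    theta = np.radians(theta_deg)
    return 4.0 * np.pi * n0 * np.sin(theta / 2.0) / lam_cm

print(K, q(90.0))
```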
As described above, the radius of gyration, Rg, and the second virial coefficient, A2, are also calculated from this equation. The refractive index increment dn/dc characterizes the change of the refractive index n with the concentration c and can be measured with a differential refractometer. A Zimm plot is built from a double extrapolation to zero angle and zero concentration from many angles and many concentration measurements. In its simplest form, the Zimm equation reduces to Kc/ΔR(θ) = 1/Mw for measurements made at low angle and infinite dilution, since P(0) = 1. Several analyses have been developed to interpret the scattering of particles in solution and derive the above-named physical characteristics of particles. In a simple static light scattering experiment, the average intensity of the sample, corrected for the scattering of the solvent, yields the Rayleigh ratio R as a function of the angle or the wave vector q, as follows: Data analyses Guinier plot The scattered intensity can be plotted as a function of the angle to give information on Rg, which can simply be calculated using the Guinier approximation (developed by André Guinier): ln(ΔR(θ)) ≈ ln(ΔR(0)) − (Rg²/3)q², where the angular dependence enters through the form factor P(θ) and q = 4πn0sin(θ/2)/λ. Hence a plot of the logarithm of the corrected Rayleigh ratio, ln(ΔR(θ)), vs sin2(θ/2) or q2 will yield a slope of −Rg2/3. However, this approximation is only true for qRg < 1. Note that for a Guinier plot, the value of dn/dc and the concentration is not needed. Kratky plot The Kratky plot is typically used to analyze the conformation of proteins but can be used to analyze the random walk model of polymers. A Kratky plot can be made by plotting sin2(θ/2)ΔR(θ) vs sin(θ/2) or q2ΔR(θ) vs q. Zimm plot For polymers and polymer complexes that are monodisperse, as determined by static light scattering, a Zimm plot is a conventional means of deriving parameters such as Rg, the molecular mass Mw and the second virial coefficient A2. One must note that if the material constant K is not implemented, a Zimm plot will only yield Rg. Hence implementing K yields the Zimm equation given above: Kc/ΔR(θ) = (1/Mw)(1 + q²Rg²/3) + 2A2c. The analysis performed with the Zimm plot uses a double extrapolation to zero concentration and zero scattering angle, resulting in a characteristic rhomboid plot. As the angular information is available, it is also possible to obtain the radius of gyration (Rg). Experiments are performed at several angles, which satisfy the condition qRg < 1, and at least 4 concentrations. Performing a Zimm analysis on a single concentration is known as a partial Zimm analysis and is only valid for dilute solutions of strong point scatterers. The partial Zimm analysis, however, does not yield the second virial coefficient, due to the absence of the variable concentration of the sample. More specifically, the value of the second virial coefficient is either assumed to equal zero or is input as a known value in order to perform the partial Zimm analysis. Debye plot If the measured particles are smaller than λ/20, the form factor P(θ) can be neglected (P(θ)→1). Therefore, the Zimm equation simplifies to the Debye equation: Kc/ΔR = 1/Mw + 2A2c. Note that this is also the result of an extrapolation to zero scattering angle. By acquiring data on concentration and scattering intensity, the Debye plot is constructed by plotting Kc/ΔR(θ) vs. concentration. The intercept of the fitted line gives the reciprocal of the molecular mass, while the slope corresponds to twice the second virial coefficient.
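Both the Guinier and Debye analyses amount to straight-line fits. The sketch below illustrates them on synthetic data generated from assumed parameters (Rg, Mw, A2); in a real experiment ΔR would come from the solvent-corrected intensities.

```python
import numpy as np

# Guinier plot: ln(dR) vs q^2 has slope -Rg^2/3, valid for q*Rg < 1
Rg_true = 20e-7                        # radius of gyration: 20 nm, in cm (assumed)
qv = np.linspace(1e4, 4e4, 8)          # scattering vectors in 1/cm, so q*Rg < 1
dR = 1e-4 * np.exp(-(Rg_true**2 / 3.0) * qv**2)
g_slope, _ = np.polyfit(qv**2, np.log(dR), 1)
print("Rg =", np.sqrt(-3.0 * g_slope) * 1e7, "nm")

# Debye plot: Kc/dR = 1/Mw + 2*A2*c, for small particles (P(theta) ~ 1)
Mw_true, A2_true = 1.0e5, 4.0e-4       # g/mol and mol*mL/g^2 (assumed)
c = np.array([1.0, 2.0, 3.0, 4.0]) * 1e-3     # concentrations, g/mL
Kc_over_dR = 1.0 / Mw_true + 2.0 * A2_true * c
d_slope, d_intercept = np.polyfit(c, Kc_over_dR, 1)
print("Mw =", 1.0 / d_intercept, "g/mol")     # from the intercept
print("A2 =", d_slope / 2.0, "mol mL / g^2")  # from half the slope
```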
As the Debye plot is a simplification of the Zimm equation, the same limitations of the latter apply, i.e., samples should be monodisperse. For polydisperse samples, the resulting molecular mass from a static light-scattering measurement will represent an average value. An advantage of the Debye plot is the possibility to determine the second virial coefficient. This parameter describes the interaction between particles and the solvent. In macromolecule solutions, for instance, it can assume negative (particle-particle interactions are favored), zero, or positive values (particle-solvent interactions are favored). Multiple scattering Static light scattering assumes that each detected photon has only been scattered exactly once. Therefore, analysis according to the calculations stated above will only be correct if the sample has been diluted sufficiently to ensure that photons are not scattered multiple times by the sample before being detected. Accurate interpretation becomes exceedingly difficult for systems with non-negligible contributions from multiple scattering. In many commercial instruments where analysis of the scattering signal is automatically performed, the error may never be noticed by the user. Particularly for larger particles and those with high refractive index contrast, this limits the application of standard static light scattering to very low particle concentrations. On the other hand, for soluble macromolecules that exhibit a relatively low refractive index contrast versus the solvent, including most polymers and biomolecules in their respective solvents, multiple scattering is rarely a limiting factor even at concentrations that approach the limits of solubility. However, as shown by Schaetzel, it is possible to suppress multiple scattering in static light scattering experiments via a cross-correlation approach. The general idea is to isolate singly scattered light and suppress undesired contributions from multiple scattering in a static light scattering experiment. Different implementations of cross-correlation light scattering have been developed and applied. Currently, the most widely used scheme is the so-called 3D-dynamic light scattering method. The same method can also be used to correct dynamic light scattering data for multiple scattering contributions. Composition-gradient static light scattering Samples that change their properties after dilution may not be analyzed via static light scattering in terms of the simple model presented here, the Zimm equation. A more sophisticated analysis known as 'composition-gradient static (or multi-angle) light scattering' (CG-SLS or CG-MALS) is an important class of methods to investigate protein–protein interactions, colligative properties, and other macromolecular interactions, as it yields, in addition to size and molecular weight, information on the affinity and stoichiometry of molecular complexes formed by one or more associating macromolecular/biomolecular species. In particular, static light scattering from a dilution series may be analyzed to quantify self-association, reversible oligomerization, and non-specific attraction or repulsion, while static light scattering from mixtures of species may be analyzed to quantify hetero-association.
Applications One of the main applications of static light scattering for molecular mass determination is in the field of macromolecules, such as proteins and polymers, as it is possible to measure the molecular mass of proteins without any assumption about their shape. Static light scattering is usually combined with other particle characterization techniques, such as size-exclusion chromatography (SEC), dynamic light scattering (DLS), and electrophoretic light scattering (ELS). See also Differential static light scatter (DSLS) Dynamic light scattering Light scattering Protein–protein interactions References External links Application of static light scattering Litesizer Scattering, absorption and radiative transfer (optics) Scattering Polymer chemistry Polymer physics Physical chemistry
Static light scattering
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,303
[ "Polymer physics", " absorption and radiative transfer (optics)", "Applied and interdisciplinary physics", "Materials science", "Scattering", "Particle physics", "Condensed matter physics", "Nuclear physics", "nan", "Polymer chemistry", "Physical chemistry" ]
11,101,049
https://en.wikipedia.org/wiki/Obstetrical%20forceps
Obstetrical forceps are a medical instrument used in childbirth. Their use can serve as an alternative to the ventouse (vacuum extraction) method. Medical uses Forceps births, like all assisted births, should only be undertaken to help promote the health of the mother or baby. In general, a forceps birth is likely to be safer for both the mother and baby than the alternatives – either a ventouse birth or a caesarean section – although caveats such as operator skill apply. Advantages of forceps use include avoidance of caesarean section (and the short- and long-term complications that accompany this), reduction of delivery time, and general applicability with cephalic presentation (head presentation). Common complications include the possibility of bruising the baby and causing more severe vaginal tears (perineal laceration) than would otherwise be the case. Severe and rare complications (occurring less frequently than 1 in 200) include nerve damage, Descemet's membrane rupture, skull fractures, and cervical cord injury. Maternal factors for use of forceps: Maternal exhaustion. Prolonged second stage of labour. Maternal illness such as heart disease, hypertension, glaucoma, aneurysm, or other conditions that make pushing difficult or dangerous. Hemorrhaging. Analgesic drug-related inhibition of maternal effort (especially with epidural/spinal anaesthesia). Fetal factors for use of forceps: Non-reassuring fetal heart tracing. Fetal distress. After-coming head in breech delivery. Complications Baby Cuts and bruises. Increased risk of facial nerve injury (usually temporary). Increased risk of clavicle fracture (rare). Increased risk of intracranial hemorrhage – sometimes leading to death: 4/10,000. Increased risk of damage to cranial nerve VI, resulting in strabismus. Mother Increased risk of perineal lacerations, pelvic organ prolapse, and incontinence. Increased risk of injury to vagina and cervix. Increased postnatal recovery time and pain. Increased difficulty evacuating during recovery time. Structure Obstetric forceps consist of two branches (blades) that are positioned around the head of the fetus. These branches are defined as left and right depending on the side of the mother's pelvis to which they will be applied. The branches usually, but not always, cross at a midpoint, which is called the articulation. Most forceps have a locking mechanism at the articulation, but a few have a sliding mechanism instead that allows the two branches to slide along each other. Forceps with a fixed lock mechanism are used for deliveries where little or no rotation is required, as when the fetal head is in line with the mother's pelvis. Forceps with a sliding lock mechanism are used for deliveries requiring more rotation. The blade of each forceps branch is the curved portion that is used to grasp the fetal head. The forceps should surround the fetal head firmly, but not tightly. The blade characteristically has two curves, the cephalic and the pelvic curves. The cephalic curve is shaped to conform to the fetal head. The cephalic curve can be rounded or rather elongated depending on the shape of the fetal head. The pelvic curve is shaped to conform to the birth canal and helps direct the force of the traction under the pubic bone. Forceps used for rotation of the fetal head should have almost no pelvic curve. The handles are connected to the blades by shanks of variable lengths. Forceps with longer shanks are used if rotation is being considered.
Anglo-American types All American forceps are derived from French forceps (long forceps) or English forceps (short forceps). Short forceps are applied on a fetal head already descended significantly in the maternal pelvis (i.e., proximal to the vagina). Long forceps are able to reach a fetal head still in the middle or even in the upper part of the maternal pelvis. In present practice, it is uncommon to use forceps to access a fetal head in the upper pelvis. So, short forceps are preferred in the UK and USA. Long forceps are still in use elsewhere. Simpson forceps (1848) are the most commonly used among the types of forceps and have an elongated cephalic curve. These are used when there is substantial molding, that is, temporary elongation of the fetal head as it moves through the birth canal. Elliot forceps (1860) are similar to Simpson forceps but with an adjustable pin in the end of the handles which can be drawn out as a means of regulating the lateral pressure on the handles when the instrument is positioned for use. They are used most often with women who have had at least one previous vaginal delivery because the muscles and ligaments of the birth canal provide less resistance during second and subsequent deliveries. In these cases, the fetal head may remain rounder. Kielland forceps (1915, Norwegian) are distinguished by having no angle between the shanks and the blades and a sliding lock. The pelvic curve of the blades is identical to all other forceps. The common misperception that there is no pelvic curve has become so entrenched in the obstetric literature that it may never be able to be overcome, but it can be proved by holding a blade of Kielland's against any other forceps of one's choice. Kielland forceps are probably the most common forceps used for rotation. The sliding mechanism at the articulation can be helpful in asynclitic births (when the fetal head is tilted to the side), since the head is then no longer in line with the birth canal. Because the handles, shanks, and blades are all in the same plane, the forceps can be applied in any position to effect rotation. Because the shanks and handles are not angled, the forceps cannot be applied to a high station as readily as those with the angle, since the shanks impinge on the perineum. Wrigley's forceps, named after Arthur Joseph Wrigley, are used in low or outlet deliveries (see explanations below), when the maximum diameter is about above the vulva. Wrigley's forceps were designed for use by general practitioner obstetricians, having the safety feature of an inability to reach high into the pelvis. Obstetricians now use these forceps most commonly in cesarean section delivery where manual traction proves difficult. The short length results in a lower chance of uterine rupture. Piper's forceps have a perineal curve to allow application to the after-coming head in breech delivery. Technique The cervix must be fully dilated and retracted and the membranes ruptured. The urinary bladder should be empty, perhaps with the use of a catheter. High forceps are never indicated in the modern era. Mid forceps can occasionally be indicated but require operator skill and caution. The station of the head must be at the level of the ischial spines. The woman is placed on her back, usually with the aid of stirrups or assistants to support her legs. A regional anaesthetic (usually either a spinal, epidural or pudendal block) is used to help the mother remain comfortable during the birth.
Ascertaining the precise position of the fetal head is paramount, and though historically this was accomplished by feeling the fetal skull suture lines and fontanelles, in the modern era, confirmation with ultrasound is essentially mandatory. At this point, the two blades of the forceps are individually inserted, the left blade first for the commonest occipito-anterior position, or the posterior blade first for a transverse position; the blades are then locked. The position on the baby's head is checked. The fetal head is then rotated to the occiput anterior position if it is not already in that position. An episiotomy may be performed if necessary. The baby is then delivered with gentle (maximum 30 lbf or 130 newtons) traction in the axis of the pelvis. Outlet, low, mid or high The accepted clinical standard classification system for forceps deliveries according to station and rotation was developed by the American College of Obstetricians and Gynecologists and consists of: Outlet forceps delivery, where the forceps are applied when the fetal head has reached the perineal floor and its scalp is visible between contractions. This type of assisted delivery is performed only when the fetal head is in a straight forward or backward vertex position or in slight rotation (less than 45 degrees to the right or left) from one of these positions. Low forceps delivery, when the baby's head is at +2 station or lower. There is no restriction on rotation for this type of delivery. Midforceps delivery, when the baby's head is above +2 station. There must be head engagement before it can be carried out. High forceps delivery is not performed in modern obstetrics practice. It would be a forceps-assisted vaginal delivery performed when the baby's head is not yet engaged. History The obstetric forceps were invented by the eldest son of the Chamberlen family of surgeons. The Chamberlens were French Huguenots from Normandy who worked in Paris before they migrated to England in 1569 to escape the religious violence in France. William Chamberlen, the patriarch of the family, was most likely a surgeon; he had two sons, both named Pierre, who became maverick surgeons and specialists in midwifery. William and the eldest son practiced in Southampton and then settled in London. The inventor was probably the eldest son, Peter Chamberlen the elder, who became obstetrician-surgeon of Queen Henriette, wife of King Charles I of England and daughter of Henry IV, King of France. He was succeeded by his nephew, Dr. Peter Chamberlen (barber-surgeons were not doctors in the sense of physicians), as royal obstetrician. The success of this dynasty of obstetricians with the royal family and high nobles was related in part to the use of this "secret" instrument allowing delivery of a live child in difficult cases. In fact, the instrument was kept secret for 150 years by the Chamberlen family, although there is evidence for its presence as far back as 1634. Hugh Chamberlen the elder, grandnephew of Peter the eldest, tried to sell the instrument in Paris in 1670, but the demonstration he performed in front of François Mauriceau, responsible for the Paris Hôtel-Dieu maternity, was a failure which resulted in the death of mother and child. The secret may have been sold by Hugh Chamberlen to Dutch obstetricians at the start of the 18th century in Amsterdam, but there are doubts about the authenticity of what was actually provided to buyers. The forceps were used most notably in difficult childbirths.
The forceps could avoid some infant deaths when previous approaches (involving hooks and other instruments) extracted them in parts. In the interest of secrecy, the forceps were carried into the birthing room in a lined box and would only be used once everyone was out of the room and the mother blindfolded. Models derived from the Chamberlen instrument finally appeared gradually in England and Scotland in 1735. About 100 years after the invention of the forceps by Peter Chamberlen Sr., a surgeon by the name of Jan Palfijn presented his obstetric forceps to the Paris Academy of Sciences in 1723. They contained parallel blades and were called the Hands of Palfijn. These "hands" were possibly the instruments described and used in Paris by Gregoire father and son, Dussée, and Jacques Mesnard. In 1813, Peter Chamberlen's midwifery tools were discovered at Woodham Mortimer Hall near Maldon (UK) in the attic of the house. The instruments were found along with gloves, old coins and trinkets. The tools discovered also contained a pair of forceps that were assumed to have been invented by the father of Peter Chamberlen because of the nature of the design. The Chamberlen family's forceps were based on the idea of separating the two branches of a "sugar clamp" (like those used to remove "stones" from the bladder), which were put in place one after another in the birth canal. This was not possible with the conventional tweezers previously tested. However, they could only succeed in a maternal pelvis of normal dimensions and on fetal heads already well engaged (i.e. well lowered into the maternal pelvis). Abnormalities of the pelvis were much more common in the past than today, which complicated the use of Chamberlen forceps. The absence of pelvic curvature of the branches (vertical curvature to accommodate the anatomical curvature of the maternal sacrum) prevented the blades from reaching the upper part of the pelvis and exerting traction in the natural axis of the pelvic excavation. In 1747, French obstetrician André Levret published Observations on the Causes and Accidents of Several Difficult Deliveries, in which he described his modification of the instrument to follow the curvature of the maternal pelvis, this "pelvic curve" allowing a grip on a fetal head still high in the pelvic excavation, which could assist in more difficult cases. This improvement was published in 1751 in England by William Smellie in the book A Treatise on the Theory and Practice of Midwifery. After this fundamental improvement, the forceps would become a common obstetrical instrument for more than two centuries. The last improvement of the instrument was added in 1877 by a French obstetrician, Stephan Tarnier, in "descriptions of two new forceps." This instrument featured a traction system deliberately offset from the instrument itself, sometimes called the "third curvature of the forceps". This particularly ingenious traction system allowed the forceps to exert traction on the head of the child following the axis of the maternal pelvic excavation, which had never been possible before. Tarnier's idea was to mechanically separate the grasping of the fetal head between the forceps blades, on which the operator does not intervene after their correct positioning, from a mechanical accessory set on the forceps itself, the "tractor", on which the operator exerts the traction needed to pull down the fetal head in the correct axis of the pelvic excavation.
Tarnier forceps (and their multiple derivatives under other names) remained the most widely used system in the world until the development of the cesarean section. Forceps had a profound influence on obstetrics, as they allowed for the speedy delivery of the baby in cases of difficult or obstructed labour. Over the course of the 19th century, many practitioners attempted to redesign the forceps, so much so that the Royal College of Obstetricians and Gynaecologists' collection has several hundred examples. In the last decades, however, with the ability to perform a cesarean section relatively safely, and the introduction of the ventouse or vacuum extractor, the use of forceps and training in the technique of their use have sharply declined. Historical role in the medicalisation of childbirth The introduction of the obstetrical forceps provided huge advances in the medicalisation of childbirth. Before the 18th century, childbirth was thought of as a medical phase that could be overseen by a female relative. Usually, if a doctor had to get involved, that meant something had gone wrong. Around this era in the 18th century, there were no female doctors. Since male doctors were called in exclusively under extreme circumstances, the act of childbirth was thought to be better known to a midwife or female relative than to a male doctor. Usually the male doctor's job was to save the mother's life if, for example, the baby had become stuck on his or her way exiting the mother. Before the obstetrical forceps, this had to be done by cutting the baby out piece by piece. In other cases, if the baby was deemed undeliverable, then the doctor would use a tool called a crochet. This was used to crush the baby's skull, allowing the baby to be pulled out of the mother's womb. Still in other cases, a caesarean section (c section) could be performed, but this would almost always result in the mother's death. "In addition, women who had forceps deliveries had shorter after-childbirth complications than those who had caesarean sections performed." These procedures came with various risks to the mother's health, along with the death of the baby. However, with the introduction of the obstetrical forceps, the male doctor had a more important role. In many cases, they could actually save the baby's life if called early enough. Although the use of the forceps in childbirth came with its own set of risks, the positives included a significant decrease in risk to the mother, a decrease in child morbidity, and a decreased risk to the baby. Since the forceps in childbirth were made public around 1720, they gave male doctors a way to assist and even oversee childbirths. Around this time, in large cities such as London and Paris, some men would become devoted to obstetrical practices. It became stylish among wealthy women of the era to have their childbirth overseen by male midwives. A notable male midwife was William Hunter. He popularised obstetrics. "In 1762, he was appointed as obstetrician to Queen Charlotte." In addition, with the use of forceps, male doctors founded lying-in hospitals to provide safe, somewhat advanced obstetrical care built around the use of the obstetrical forceps. Historical complications Childbirth was not considered a medical practice before the 18th century. It was mostly overseen by a midwife, mother, stepmother, neighbor, or any female relative. Around the 19th and 20th centuries, childbirth was considered dangerous for women.
The introduction of obstetrical forceps allowed non-medical practitioners, such as the aforementioned individuals, to continue to oversee childbirths. In addition, this gave some of the public more comfort in trusting childbirth oversight to common people. However, the introduction of obstetrical forceps also had a negative effect: because there was no oversight of childbirth by any kind of medical professional, the practice was exposed to unnecessary risks and complications for the fetus and mother. These risks could range from minimal effects to lifetime consequences for both individuals. The baby could develop cuts and bruises in various body parts due to the forcible squeezing of his or her body through the mother's vagina. In addition, there could be bruising on the baby's face if the forceps' handler were to squeeze too tightly. In some extreme cases, this could cause temporary or permanent facial nerve injury. Furthermore, if the forceps' handler were to twist his or her wrist while the grip was on the baby's head, this would twist the baby's neck and cause damage to a cranial nerve, resulting in strabismus. In rare cases, a clavicle fracture to the baby could occur. The use of obstetrical forceps also came with complications to the mother during and after childbirth. The use of the forceps gave rise to an increased risk of cuts and lacerations along the vaginal wall. This, in turn, would cause an increase in post-operative recovery time and increase the pain experienced by the mother. In addition, the use of forceps could cause more difficulty evacuating during the recovery time as compared to a mother who did not have a forceps delivery. While some of these risks and complications were very common, in general, many people overlooked them and continued to use the instrument. See also Instruments used in general surgery References External links GLOWM video demonstrating forceps delivery technique Equipment used in childbirth Obstetrical procedures Medical equipment Surgical instruments
Obstetrical forceps
[ "Biology" ]
4,096
[ "Medical equipment", "Medical technology" ]
11,101,129
https://en.wikipedia.org/wiki/Film%20temperature
In fluid thermodynamics, the film temperature (Tf) is an approximation of the temperature of a fluid inside a convection boundary layer. It is calculated as the arithmetic mean of the temperature at the surface of the solid boundary wall (Ts) and the free-stream temperature (T∞): Tf = (Ts + T∞)/2. The film temperature is often used as the temperature at which fluid properties are calculated when using the Prandtl number, Nusselt number, Reynolds number or Grashof number to calculate a heat transfer coefficient, because it is a reasonable first approximation to the temperature within the convection boundary layer. Somewhat confusing terminology may be encountered in relation to boilers and heat exchangers, where the same term is used to refer to the limit (hot) temperature of a fluid in contact with a hot surface. References Fluid dynamics Heat transfer
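The definition is a one-line computation; the sketch below is illustrative (the function name and example temperatures are assumptions, not from the article).

```python
def film_temperature(t_surface, t_freestream):
    """Arithmetic mean of the wall and free-stream temperatures."""
    return 0.5 * (t_surface + t_freestream)

# Example: a 350 K wall in a 300 K free stream -> evaluate fluid
# properties (for Pr, Nu, Re, Gr correlations) at 325 K.
print(film_temperature(350.0, 300.0))  # 325.0
```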
Film temperature
[ "Physics", "Chemistry", "Engineering" ]
162
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Chemical engineering", "Thermodynamics", "Piping", "Fluid dynamics" ]
4,265,892
https://en.wikipedia.org/wiki/Method%20of%20matched%20asymptotic%20expansions
In mathematics, the method of matched asymptotic expansions is a common approach to finding an accurate approximation to the solution to an equation, or system of equations. It is particularly used when solving singularly perturbed differential equations. It involves finding several different approximate solutions, each of which is valid (i.e. accurate) for part of the range of the independent variable, and then combining these different solutions together to give a single approximate solution that is valid for the whole range of values of the independent variable. In the Russian literature, these methods were known under the name of "intermediate asymptotics" and were introduced in the work of Yakov Zeldovich and Grigory Barenblatt. Method overview In a large class of singularly perturbed problems, the domain may be divided into two or more subdomains. In one of these, often the largest, the solution is accurately approximated by an asymptotic series found by treating the problem as a regular perturbation (i.e. by setting a relatively small parameter to zero). The other subdomains consist of one or more small regions in which that approximation is inaccurate, generally because the perturbation terms in the problem are not negligible there. These areas are referred to as transition layers in general, and specifically as boundary layers or interior layers depending on whether they occur at the domain boundary (as is the usual case in applications) or inside the domain, respectively. An approximation in the form of an asymptotic series is obtained in the transition layer(s) by treating that part of the domain as a separate perturbation problem. This approximation is called the inner solution, and the other is the outer solution, named for their relationship to the transition layer(s). The outer and inner solutions are then combined through a process called "matching" in such a way that an approximate solution for the whole domain is obtained. A simple example Consider the boundary value problem εy″ + (1 + ε)y′ + y = 0, where y is a function of the independent time variable t, which ranges from 0 to 1, the boundary conditions are y(0) = 0 and y(1) = 1, and ε is a small parameter, such that 0 < ε ≪ 1. Outer solution, valid for t = O(1) Since ε is very small, our first approach is to treat the equation as a regular perturbation problem, i.e. make the approximation ε = 0, and hence find the solution to the problem y′ + y = 0. Alternatively, consider that when t and y are both of size O(1), the four terms on the left hand side of the original equation are respectively of sizes O(ε), O(1), O(ε) and O(1). The leading-order balance on this timescale, valid in the distinguished limit ε → 0, is therefore given by the second and fourth terms, i.e., y′ + y = 0. This has solution y = Ae^(−t) for some constant A. Applying the boundary condition y(0) = 0, we would have A = 0; applying the boundary condition y(1) = 1, we would have A = e. It is therefore impossible to satisfy both boundary conditions, so setting ε = 0 is not a valid approximation to make across the whole of the domain (i.e. this is a singular perturbation problem). From this we infer that there must be a boundary layer at one of the endpoints of the domain where the ε terms need to be included. This region will be where ε is no longer negligible compared to the independent variable t, i.e. t and ε are of comparable size, i.e. the boundary layer is adjacent to t = 0. Therefore, the other boundary condition y(1) = 1 applies in this outer region, so A = e, i.e. y_O(t) = e^(1−t) is an accurate approximate solution to the original boundary value problem in this outer region. It is the leading-order solution.
Inner solution, valid for t = O(ε) In the inner region, t and ε are both tiny, but of comparable size, so define the new O(1) time variable τ = t/ε. Rescale the original boundary value problem by replacing t with ετ, and the problem becomes y″(τ)/ε + (1 + ε)y′(τ)/ε + y(τ) = 0, which, after multiplying by ε and taking ε = 0, is y″ + y′ = 0. Alternatively, consider that when t has reduced to size O(ε), then y is still of size O(1) (using the expression for y_O), and so the four terms on the left hand side of the original equation are respectively of sizes O(1/ε), O(1/ε), O(1) and O(1). The leading-order balance on this timescale, valid in the distinguished limit ε → 0, is therefore given by the first and second terms, i.e. y″ + y′ = 0. This has solution y = B − Ce^(−τ) for some constants B and C. Since y(0) = 0 applies in this inner region, C = B, so an accurate approximate solution to the original boundary value problem in this inner region (it is the leading-order solution) is y_I(τ) = B(1 − e^(−τ)). Matching We use matching to find the value of the constant B. The idea of matching is that the inner and outer solutions should agree for values of t in an intermediate (or overlap) region, i.e. where ε ≪ t ≪ 1. We need the outer limit of the inner solution to match the inner limit of the outer solution, i.e., lim as τ → ∞ of y_I(τ) equals lim as t → 0 of y_O(t), which gives B = e. The above problem is the simplest of the simple problems dealing with matched asymptotic expansions. One can immediately calculate that e^(1−t) is the entire asymptotic series for the outer region, whereas the inner solution acquires a correction at the next order in ε, whose constant of integration must be obtained from inner-outer matching. Notice that the intuitive idea of matching by simply taking limits does not apply at this higher order, because the relevant term in the higher-order inner solution does not converge to a limit. The methods to follow in such cases are either (a) the method of an intermediate variable or (b) the Van Dyke matching rule. The former method is cumbersome but always works, whereas the Van Dyke matching rule is easy to implement but of limited applicability. A concrete boundary value problem having all the essential ingredients is one with boundary layers both on the left and on the right, of different thicknesses. The conventional outer expansion gives an outer solution containing a constant that must be obtained from matching. The solution in the left boundary layer is calculated first, by rescaling the independent variable on the left: the rescaled differential equation admits an expansion whose leading order is fixed by the inhomogeneous condition on the left, and Van Dyke matching then determines the outer constant. The solution in the right boundary layer is calculated in the same way: rescaling on the right gives a differential equation and an expansion whose leading order is fixed by the inhomogeneous condition on the right, and Van Dyke matching again connects it to the outer solution. Proceeding in a similar fashion, the higher-order corrections can be calculated. Composite solution To obtain our final, matched, composite solution, valid on the whole domain, one popular method is the uniform method. In this method, we add the inner and outer approximations and subtract their overlapping value, which would otherwise be counted twice. The overlapping value is the outer limit of the inner boundary layer solution, and the inner limit of the outer solution; these limits were above found to equal e.
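Collecting the leading-order pieces of the worked example in display form (assuming, as the structure of the argument indicates, the classic model problem reconstructed above):

$$\varepsilon y'' + (1+\varepsilon)\,y' + y = 0, \qquad y(0)=0,\quad y(1)=1, \qquad 0<\varepsilon\ll 1,$$

$$y_O(t) = e^{1-t} \ \text{(outer)}, \qquad y_I(\tau) = B\left(1-e^{-\tau}\right),\quad \tau = t/\varepsilon \ \text{(inner)},$$

$$\lim_{\tau\to\infty} y_I(\tau) = \lim_{t\to 0^{+}} y_O(t) = e \quad\Longrightarrow\quad B = e.$$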
Therefore, the final approximate solution to this boundary value problem is y(t) ≈ e^(1−t) − e^(1−t/ε). Note that this expression correctly reduces to the expressions for y_I and y_O when t is O(ε) and O(1), respectively. Accuracy This final solution satisfies the problem's original differential equation (shown by substituting it and its derivatives into the original equation). Also, the boundary conditions produced by this final solution match the values given in the problem, up to a constant multiple. This implies, due to the uniqueness of the solution, that the matched asymptotic solution is identical to the exact solution up to a constant multiple. This is not necessarily always the case; any remaining terms should go to zero uniformly as ε → 0. Not only does our solution successfully approximately solve the problem at hand, it closely approximates the problem's exact solution. It happens that this particular problem is easily found to have the exact solution y(t) = (e^(−t) − e^(−t/ε))/(e^(−1) − e^(−1/ε)), which has the same form as the approximate solution, up to a multiplicative constant. The approximate solution is the first term in a binomial expansion of the exact solution in powers of the exponentially small quantity e^(1−1/ε). Location of boundary layer Conveniently, we can see that the boundary layer, where y′ and y″ are large, is near t = 0, as we supposed earlier. If we had supposed it to be at the other endpoint and proceeded by making the rescaling τ = (1 − t)/ε, we would have found it impossible to satisfy the resulting matching condition. For many problems, this kind of trial and error is the only way to determine the true location of the boundary layer. Harder problems The problem above is a simple example because it is a single equation with only one dependent variable, and there is one boundary layer in the solution. Harder problems may contain several co-dependent variables in a system of several equations, and/or with several boundary and/or interior layers in the solution. It is often desirable to find more terms in the asymptotic expansions of both the outer and the inner solutions. The appropriate form of these expansions is not always clear: while a power-series expansion in ε may work, sometimes the appropriate form involves fractional powers of ε, functions such as ε log ε, et cetera. As in the above example, we will obtain outer and inner expansions with some coefficients which must be determined by matching. Second-order differential equations Schrödinger-like second-order differential equations A method of matched asymptotic expansions - with matching of solutions in the common domain of validity - has been developed and used extensively by Dingle and Müller-Kirsten for the derivation of asymptotic expansions of the solutions and characteristic numbers (band boundaries) of Schrödinger-like second-order differential equations with periodic potentials - in particular for the Mathieu equation (best example), Lamé and ellipsoidal wave equations, oblate and prolate spheroidal wave equations, and equations with anharmonic potentials. Convection–diffusion equations Methods of matched asymptotic expansions have been developed to find approximate solutions to the Smoluchowski convection–diffusion equation, which is a singularly perturbed second-order differential equation. The problem has been studied particularly in the context of colloid particles in linear flow fields, where the relevant variable is the pair distribution function around a test particle.
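The accuracy claims above can be checked numerically. A minimal sketch, assuming the classic model problem reconstructed in this section (the exact closed-form solution makes an ODE solver unnecessary):

```python
import numpy as np

eps = 0.05
t = np.linspace(0.0, 1.0, 101)

# Exact solution of eps*y'' + (1+eps)*y' + y = 0, y(0)=0, y(1)=1
exact = (np.exp(-t) - np.exp(-t / eps)) / (np.exp(-1.0) - np.exp(-1.0 / eps))

# Matched composite: outer + inner - overlap = e^(1-t) - e^(1-t/eps)
composite = np.exp(1.0 - t) - np.exp(1.0 - t / eps)

# Maximum discrepancy is exponentially small and shrinks as eps -> 0
print(np.max(np.abs(exact - composite)))
```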
In the limit of low Péclet number, the convection–diffusion equation also presents a singularity at infinite distance (where normally the far-field boundary condition should be placed) due to the flow field being linear in the interparticle separation. This problem can be circumvented with a spatial Fourier transform as shown by Jan Dhont. A different approach to solving this problem was developed by Alessio Zaccone and coworkers and consists in placing the boundary condition right at the boundary layer distance, upon assuming (in a first-order approximation) a constant value of the pair distribution function in the outer layer due to convection being dominant there. This leads to an approximate theory for the encounter rate of two interacting colloid particles in a linear flow field in good agreement with the full numerical solution. When the Péclet number is significantly larger than one, the singularity at infinite separation no longer occurs and the method of matched asymptotics can be applied to construct the full solution for the pair distribution function across the entire domain. See also Asymptotic analysis Multiple-scale analysis Activation energy asymptotics References Differential equations Asymptotic analysis
Method of matched asymptotic expansions
[ "Mathematics" ]
2,349
[ "Mathematical analysis", "Mathematical objects", "Differential equations", "Equations", "Asymptotic analysis" ]
4,267,984
https://en.wikipedia.org/wiki/Rydberg%20state
The Rydberg states of an atom or molecule are electronically excited states with energies that follow the Rydberg formula as they converge on an ionic state with an ionization energy. Although the Rydberg formula was developed to describe atomic energy levels, it has been used to describe many other systems that have electronic structure roughly similar to atomic hydrogen. In general, at sufficiently high principal quantum numbers, an excited electron-ionic core system will have the general character of a hydrogenic system and the energy levels will follow the Rydberg formula. Rydberg states have energies converging on the energy of the ion. The ionization energy threshold is the energy required to completely liberate an electron from the ionic core of an atom or molecule. In practice, a Rydberg wave packet is created by a laser pulse on a hydrogenic atom and thus populates a superposition of Rydberg states. Modern investigations using pump-probe experiments show molecular pathways – e.g. dissociation of (NO)2 – via these special states. Rydberg series Rydberg series describe the energy levels associated with partially removing an electron from the ionic core. Each Rydberg series converges on an ionization energy threshold associated with a particular ionic core configuration. These quantized Rydberg energy levels can be associated with the quasiclassical Bohr atomic picture. The closer the energy comes to the ionization threshold, the higher the principal quantum number and the smaller the energy difference between near-threshold Rydberg states. As the electron is promoted to higher energy levels, the spatial excursion of the electron from the ionic core increases and the system is more like the Bohr quasiclassical picture. Energy of Rydberg states The energy of Rydberg states can be refined by including a correction called the quantum defect in the Rydberg formula. The "quantum defect" correction is associated with the presence of a distributed ionic core. Even for many electronically excited molecular systems, the ionic core interaction with an excited electron can take on the general aspects of the interaction between the proton and the electron in the hydrogen atom. The spectroscopic assignment of these states follows the Rydberg formula and they are called Rydberg states of molecules. Molecular Rydberg states Although the energy formula of Rydberg series is a result of hydrogen-like atom structure, Rydberg states are also present in molecules. Wave functions of high Rydberg states are very diffuse and span diameters that approach infinity. As a result, any isolated neutral molecule behaves like a hydrogen-like atom at the Rydberg limit. For molecules with multiple stable monovalent cations, multiple Rydberg series may exist. Because of the complexity of molecular spectra, low-lying Rydberg states of molecules are often mixed with valence states with similar energy and are thus not pure Rydberg states. See also Rydberg atom Rydberg matter Orbital state References Atomic Spectra and Atomic Structure, Gerhard Herzberg, Prentice-Hall, 1937. Atoms and Molecules, Martin Karplus and Richard N. Porter, Benjamin & Company, Inc., 1970. External links Army Creates Quantum Sensor That Detects Entire Radio-Frequency Spectrum; Defense One. Rydberg Atoms and the Quantum Defect; Physics Department, Davidson College. Rydberg Transitions; Chemistry and Biochemistiry, Georgia Tech. Atomic physics Atomic, molecular, and optical physics
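The quantum-defect-corrected Rydberg formula described above gives energies E(n) = −Ry/(n − δ)² below the ionization threshold. A minimal sketch, where the defect value δ = 1.35 (roughly that of sodium s states) is an illustrative assumption, not a value from the article:

```python
RYDBERG_EV = 13.605693  # Rydberg unit of energy, in eV

def rydberg_level(n, delta=0.0):
    """Energy (eV) relative to the ionization threshold for principal
    quantum number n, corrected by the quantum defect delta."""
    return -RYDBERG_EV / (n - delta) ** 2

for n in (10, 20, 40, 80):
    print(n, rydberg_level(n, delta=1.35))
# The levels converge on 0 (the ionization threshold) as n grows,
# with ever-smaller spacing between adjacent members of the series.
```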
Rydberg state
[ "Physics", "Chemistry" ]
669
[ "Quantum mechanics", "Atomic physics", " molecular", "Atomic", " and optical physics" ]
4,268,240
https://en.wikipedia.org/wiki/Resonance-enhanced%20multiphoton%20ionization
Resonance-enhanced multiphoton ionization (REMPI) is a technique applied to the spectroscopy of atoms and small molecules. In practice, a tunable laser can be used to access an excited intermediate state. The selection rules associated with a two-photon or other multiphoton photoabsorption are different from the selection rules for a single photon transition. The REMPI technique typically involves a resonant single or multiple photon absorption to an electronically excited intermediate state followed by another photon which ionizes the atom or molecule. The light intensity to achieve a typical multiphoton transition is generally significantly larger than the light intensity to achieve a single photon photoabsorption. Because of this, subsequent photoabsorption is often very likely. An ion and a free electron will result if the photons have imparted enough energy to exceed the ionization threshold energy of the system. In many cases, REMPI provides spectroscopic information that can be unavailable to single photon spectroscopic methods; for example, rotational structure in molecules is easily seen with this technique. REMPI is usually implemented with a focused, frequency-tunable laser beam that forms a small-volume plasma. In REMPI, first m photons are simultaneously absorbed by an atom or molecule in the sample to bring it to an excited state. A further n photons are absorbed afterwards to generate an electron and ion pair. The so-called m+n REMPI is a nonlinear optical process, which can only occur within the focus of the laser beam. A small-volume plasma is formed near the laser focal region. If the energy of m photons does not match any state, an off-resonant transition can occur with an energy defect ΔE; however, the electron is very unlikely to remain in that state. For large detuning, it resides there only during the time Δt. The uncertainty principle is satisfied for ΔE·Δt ~ ћ, where ћ = h/2π and h is the Planck constant (6.6261×10^-34 J∙s). Such transitions and states are called virtual, unlike real transitions to states with long lifetimes. The real transition probability is many orders of magnitude higher than that of the virtual transition, which is called the resonance enhancement effect. Rydberg states High photon intensity experiments can involve multiphoton processes with the absorption of integer multiples of the photon energy. In experiments that involve a multiphoton resonance, the intermediate is often a low-lying Rydberg state, and the final state is often an ion. The initial state of the system, photon energy, angular momentum and other selection rules can help in determining the nature of the intermediate state. This approach is exploited in resonance-enhanced multiphoton ionization spectroscopy (REMPI). The technique is in wide use in both atomic and molecular spectroscopy. An advantage of the REMPI technique is that the ions can be detected with almost complete efficiency and even time-resolved for their mass. It is also possible to gain additional information by performing experiments to look at the energy of the liberated photoelectron in these experiments. Microwave detection Coherent microwave scattering from electrons in REMPI-induced plasma filaments adds the capability to measure selectively-ionized species with a high spatial and temporal resolution, allowing for nonintrusive determinations of concentration profiles without the use of physical probes or electrodes.
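The uncertainty relation above gives an order-of-magnitude residence time for a virtual state, Δt ~ ћ/ΔE. A minimal sketch (the 1 eV defect in the example is an assumed illustrative value):

```python
HBAR_JS = 1.054571817e-34   # reduced Planck constant, J*s
EV_TO_J = 1.602176634e-19   # electronvolt in joules

def virtual_lifetime(defect_ev):
    """Approximate residence time (s) in a virtual state with
    energy defect defect_ev, from dt ~ hbar / dE."""
    return HBAR_JS / (defect_ev * EV_TO_J)

print(virtual_lifetime(1.0))   # ~6.6e-16 s for a 1 eV energy defect
```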
It has been applied for the detection of species such as argon, xenon, nitric oxide, carbon monoxide, atomic oxygen, and methyl radicals within enclosed cells, in open air, and in atmospheric flames. Microwave detection is based on homodyne or heterodyne technologies, which can significantly increase the detection sensitivity by suppressing noise and can follow sub-nanosecond plasma generation and evolution. The homodyne detection method mixes the detected microwave electric field with its own source to produce a signal proportional to the product of the two. The signal frequency is converted down from tens of gigahertz to below one gigahertz so that the signal can be amplified and observed with standard electronic devices. Because of the high sensitivity associated with the homodyne detection method, the lack of background noise in the microwave regime, and the capability of time gating of the detection electronics synchronous with the laser pulse, very high SNRs are possible even with milliwatt microwave sources. These high SNRs allow the temporal behavior of the microwave signal to be followed on a sub-nanosecond time scale. Thus the lifetime of electrons within the plasma can be recorded. By utilizing a microwave circulator, a single microwave horn transceiver has been built, which significantly simplifies the experimental setup. Detection in the microwave region has numerous advantages over optical detection. Using homodyne or heterodyne technologies, the electric field rather than the power can be detected, so much better noise rejection can be achieved. In contrast to optical heterodyne techniques, no alignment or mode matching of the reference is necessary. The long wavelength of the microwaves leads to effective point coherent scattering from the plasma in the laser focal volume, so phase matching is unimportant and scattering in the backward direction is strong. Many microwave photons can be scattered from a single electron, so the amplitude of the scattering can be increased by increasing the power of the microwave transmitter. The low energy of the microwave photons corresponds to thousands of times more photons per unit energy than in the visible region, so shot noise is drastically reduced. For weak ionization characteristic of trace species diagnostics, the measured electric field is a linear function of the number of electrons, which is directly proportional to the trace species concentration. Furthermore, there is very little solar or other natural background radiation in the microwave spectral region. See also Rydberg ionization spectroscopy Compare with laser-induced fluorescence (LIF) References Spectroscopy Ionization
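The homodyne mixing described above can be illustrated with a toy calculation: multiplying the scattered field by the source oscillator and low-pass filtering yields a baseband output proportional to the field amplitude (not the power). All numbers below are illustrative assumptions.

```python
import numpy as np

f = 10e9                                   # assumed 10 GHz microwave source
t = np.linspace(0.0, 100.0 / f, 20000)     # 100 carrier cycles
amp, phi = 2.0e-3, 0.3                     # scattered-field amplitude and phase
scattered = amp * np.cos(2 * np.pi * f * t + phi)
lo = np.cos(2 * np.pi * f * t)             # local oscillator = the source itself

mixed = scattered * lo                     # mixer output
baseband = mixed.mean()                    # crude low-pass filter
print(baseband, 0.5 * amp * np.cos(phi))   # approximately equal:
# the output scales linearly with amp, hence with the electron number
```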
Resonance-enhanced multiphoton ionization
[ "Physics", "Chemistry" ]
1,198
[ "Ionization", "Physical phenomena", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Spectroscopy" ]
4,268,488
https://en.wikipedia.org/wiki/Sand%20bath
A sand bath is a common piece of laboratory equipment made from a container filled with heated sand. It is used to evenly heat another container, most often during a chemical reaction.

A sand bath is most commonly used in conjunction with a hot plate or heating mantle. A beaker is filled with sand or metal pellets (called shot) and placed on the plate or mantle, and the reaction vessel is partially buried in the sand or pellets. The sand or shot then conducts heat from the plate to all sides of the reaction vessel. This technique allows a reaction vessel to be heated throughout with minimal stirring, as opposed to heating the bottom of the vessel and waiting for convection to heat the remainder; it cuts down on both the duration of the reaction and the possibility of side reactions that may occur at higher temperatures.

A variation on this theme is the water bath, in which the sand is replaced with water. It can be used to keep a reaction vessel at the temperature of boiling water until all the water has evaporated (see Standard enthalpy change of vaporization).

Sand baths are one of the oldest known pieces of laboratory equipment, having been used by the alchemists. In Arabic alchemy, a sand bath was known as a qadr. In Latin alchemy, a sand bath was called balneum siccum, balneum cineritium, or balneum arenosum.

See also

Heat bath
Water bath
Oil bath

External links

https://web.archive.org/web/20110604144037/http://digicoll.library.wisc.edu/cgi-bin/HistSciTech/HistSciTech-idx?type=turn&entity=HistSciTech000900240229&isize=L
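As a rough illustration of the even-heating behaviour described above, the sketch below uses a lumped-capacitance (first-order relaxation) assumption: every part of a vessel buried in the sand relaxes toward the bath temperature with a single time constant. The temperatures and the time constant are invented for the example, not measured sand-bath parameters.

```python
# Toy lumped-capacitance model of a vessel warming in a sand bath:
# dT/dt = (T_bath - T) / tau, integrated with a simple Euler step.
# All numbers are illustrative, not measured sand-bath parameters.
def heat(t_vessel_c, t_bath_c, tau_s, dt_s, steps):
    for _ in range(steps):
        t_vessel_c += (t_bath_c - t_vessel_c) / tau_s * dt_s
    return t_vessel_c

# A vessel at 20 C in a 150 C bath with a 5-minute time constant
# reaches roughly 132 C after 10 minutes:
print(heat(20.0, 150.0, tau_s=300.0, dt_s=1.0, steps=600))
```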
Sand bath
[ "Physics", "Chemistry", "Mathematics" ]
389
[ "Thermodynamics", "Dynamical systems" ]
4,269,567
https://en.wikipedia.org/wiki/Dog
The dog (Canis familiaris or Canis lupus familiaris) is a domesticated descendant of the wolf. Also called the domestic dog, it was selectively bred from an extinct population of wolves during the Late Pleistocene by hunter-gatherers. The dog was the first species to be domesticated by humans, over 14,000 years ago and before the development of agriculture. Experts estimate that due to their long association with humans, dogs have gained the ability to thrive on a starch-rich diet that would be inadequate for other canids.

Dogs have been bred for desired behaviors, sensory capabilities, and physical attributes. Dog breeds vary widely in shape, size, and color. They have the same number of bones (with the exception of the tail), powerful jaws that house around 42 teeth, and well-developed senses of smell, hearing, and sight. Compared to humans, dogs have inferior visual acuity, a superior sense of smell, and a relatively large olfactory cortex. They perform many roles for humans, such as hunting, herding, pulling loads, protection, companionship, therapy, aiding disabled people, and assisting police and the military.

Communication in dogs includes eye gaze, facial expression, vocalization, body posture (including movements of bodies and limbs), and gustatory communication (scents, pheromones, and taste). They mark their territories by urinating on them, which is more likely when entering a new environment. Over the millennia, dogs became uniquely adapted to human behavior; this adaptation includes being able to understand and communicate with humans. As such, the human–canine bond has been a topic of frequent study, and dogs' influence on human society has given them the sobriquet of "man's best friend".

The global dog population is estimated at 700 million to 1 billion, distributed around the world. The dog is the most popular pet in the United States, present in 34–40% of households. Developed countries make up approximately 20% of the global dog population, while around 75% of dogs are estimated to be from developing countries, mainly in the form of feral and community dogs.

Taxonomy

Dogs are domesticated members of the family Canidae. They are classified as a subspecies of Canis lupus, along with wolves and dingoes. Dogs were domesticated from wolves over 14,000 years ago by hunter-gatherers, before the development of agriculture. The remains of the Bonn–Oberkassel dog, buried alongside humans between 14,000 and 15,000 years ago, are the earliest to be conclusively identified as a domesticated dog. Genetic studies show that dogs likely diverged from wolves between 27,000 and 40,000 years ago. The dingo and the related New Guinea singing dog resulted from the geographic isolation and feralization of dogs in Oceania over 8,000 years ago.

Dogs, wolves, and dingoes have sometimes been classified as separate species. In 1758, the Swedish botanist and zoologist Carl Linnaeus assigned the genus name Canis (the Latin word for "dog") to the domestic dog, the wolf, and the golden jackal in his book Systema Naturae. He classified the domestic dog as Canis familiaris and, on the next page, classified the grey wolf as Canis lupus. Linnaeus considered the dog to be a separate species from the wolf because of its upturning tail (Latin: cauda recurvata), which is not found in any other canid.
In the 2005 edition of Mammal Species of the World, mammalogist W. Christopher Wozencraft listed the wild subspecies of Canis lupus and proposed two additional subspecies: familiaris, as named by Linnaeus in 1758, and dingo, named by Meyer in 1793. Wozencraft included hallstromi (the New Guinea singing dog) as another name (junior synonym) for the dingo. This classification was informed by a 1999 mitochondrial DNA study.

The classification of dingoes is disputed and a political issue in Australia. Classifying dingoes as wild dogs simplifies reducing or controlling dingo populations that threaten livestock, while treating dingoes as a separate species allows conservation programs to protect the dingo population. Dingo classification affects wildlife management policies, legislation, and societal attitudes. In 2019, a workshop hosted by the IUCN/Species Survival Commission's Canid Specialist Group considered the dingo and the New Guinea singing dog to be feral Canis familiaris and therefore did not assess them for the IUCN Red List of threatened species.

Domestication

The earliest remains generally accepted to be those of a domesticated dog were discovered in Bonn-Oberkassel, Germany. Contextual, isotopic, genetic, and morphological evidence shows that this dog was not a local wolf. The dog was dated to 14,223 years ago and was found buried along with a man and a woman, all three having been sprayed with red hematite powder and buried under large, thick basalt blocks. The dog had died of canine distemper. This timing indicates that the dog was the first species to be domesticated, in the time of hunter-gatherers, predating agriculture. Earlier remains dating back to 30,000 years ago have been described as Paleolithic dogs, but their status as dogs or wolves remains debated because considerable morphological diversity existed among wolves during the Late Pleistocene.

DNA sequences show that all ancient and modern dogs share a common ancestry and descended from an ancient, extinct wolf population that was distinct from any modern wolf lineage. Some studies have posited that all living wolves are more closely related to each other than to dogs, while others have suggested that dogs are more closely related to modern Eurasian wolves than to American wolves.

The dog is a domestic animal that likely travelled a commensal pathway into domestication (i.e., humans initially neither benefitted nor were harmed by wild dogs eating refuse from their camps). The questions of when and where dogs were first domesticated remain uncertain. Genetic studies suggest a domestication process commencing over 25,000 years ago, in one or several wolf populations in either Europe, the high Arctic, or eastern Asia. In 2021, a literature review of the current evidence inferred that the dog was domesticated in Siberia 23,000 years ago by ancient North Siberians, then later dispersed eastward into the Americas and westward across Eurasia, with dogs likely accompanying the first humans to inhabit the Americas. Some studies have suggested that the extinct Japanese wolf is closely related to the ancestor of domestic dogs.

In 2018, a study identified 429 genes that differed between modern dogs and modern wolves. As the differences in these genes could also be found in ancient dog fossils, they were regarded as the result of the initial domestication rather than of recent breed formation. These genes are linked to neural crest and central nervous system development.
These genes affect embryogenesis and can confer tameness, smaller jaws, floppy ears, and diminished craniofacial development, which distinguish domesticated dogs from wolves and are considered to reflect domestication syndrome. The study concluded that during early dog domestication, the initial selection was for behavior; this trait is influenced by genes that act in the neural crest, which led to the phenotypes observed in modern dogs.

Breeds

There are around 450 official dog breeds, the most of any mammal. Dogs began diversifying in the Victorian era, when humans took control of their natural selection. Most breeds were derived from small numbers of founders within the last 200 years. Since then, dogs have undergone rapid phenotypic change and have been subjected to artificial selection by humans. The skull, body, and limb proportions between breeds display more phenotypic diversity than can be found within the entire order of carnivores. These breeds possess distinct traits related to morphology, which include body size, skull shape, tail phenotype, fur type, and colour. As such, humans have long used dogs for their desirable traits to complete or fulfill certain work or roles. Their behavioural traits include guarding, herding, hunting, retrieving, and scent detection; their personality traits include hypersocial behavior, boldness, and aggression. Present-day dogs are dispersed around the world; an example of this dispersal is the spread of the numerous modern breeds of European lineage during the Victorian era.

Anatomy and physiology

Size and skeleton

Dogs are extremely variable in size, ranging from one of the largest breeds, the Great Dane, to one of the smallest, the Chihuahua. All healthy dogs, regardless of their size and type, have the same number of bones (with the exception of the tail), although there is significant skeletal variation between dogs of different types. The dog's skeleton is well adapted for running; the vertebrae on the neck and back have extensions for back muscles, consisting of epaxial and hypaxial muscles, to connect to; the long ribs provide room for the heart and lungs; and the shoulders are unattached to the skeleton, allowing for flexibility.

Compared to the dog's wolf-like ancestors, selective breeding since domestication has seen the dog's skeleton increase in size for larger types such as mastiffs and become miniaturised for smaller types such as terriers; dwarfism has been selectively bred for in some types where short legs are preferred, such as dachshunds and corgis. Most dogs naturally have 26 vertebrae in their tails, but some with naturally short tails have as few as three.

The dog's skull has identical components regardless of breed type, but there is significant divergence in skull shape between types. The three basic skull shapes are the elongated dolichocephalic type as seen in sighthounds, the intermediate mesocephalic or mesaticephalic type, and the very short and broad brachycephalic type exemplified by mastiff-type skulls. The jaw contains around 42 teeth and has evolved for the consumption of flesh; dogs use their carnassial teeth to cut food, especially meat, into bite-sized chunks.

Senses

Dogs' senses include vision, hearing, smell, taste, touch, and magnetoreception. One study suggests that dogs can feel small variations in Earth's magnetic field; dogs prefer to defecate with their spines aligned in a north–south position in calm magnetic field conditions.
Dogs' vision is dichromatic; their visual world consists of yellows, blues, and grays. They have difficulty differentiating between red and green, and much like other mammals, the dog's eye is composed of two types of cone cells, compared to the human's three. The divergence of the eye axis of dogs ranges from 12 to 25°, depending on the breed, and different breeds can have different retina configurations. The fovea centralis area of the eye is attached to a nerve fiber and is the most sensitive to photons. Additionally, a study found that dogs' visual acuity was up to eight times less effective than a human's, and their ability to discriminate levels of brightness was about two times worse than a human's.

While the human brain is dominated by a large visual cortex, the dog brain is dominated by a large olfactory cortex. Dogs have roughly forty times more smell-sensitive receptors than humans, ranging from about 125 million to nearly 300 million in some dog breeds, such as bloodhounds. This sense of smell is the most prominent sense of the species; it detects chemical changes in the environment, allowing dogs to pinpoint the location of mating partners, potential stressors, resources, and so on. Dogs also have an acute sense of hearing, up to four times greater than that of humans; they can pick up the faintest sounds from roughly four times the distance at which humans can. Dogs have stiff, deeply embedded hairs known as whiskers that sense atmospheric changes, vibrations, and objects not visible in low-light conditions. The lowermost part of the whiskers holds more receptor cells than other hair types, which helps alert dogs to objects that could collide with the nose, ears, or jaw. Whiskers likely also facilitate the movement of food towards the mouth.

Coat

The coats of domestic dogs are of two varieties: "double", common in dogs (as well as wolves) originating from colder climates and made up of a coarse guard hair and a soft down hair, or "single", with the topcoat only. Breeds may have an occasional "blaze", stripe, or "star" of white fur on their chest or underside. Premature graying can occur in dogs as early as one year of age; this is associated with impulsive behaviors, anxiety behaviors, and fear of unfamiliar noise, people, or animals. Some dog breeds are hairless, while others have a very thick corded coat. The coats of certain breeds are often groomed to a characteristic style, for example, the Yorkshire Terrier's "show cut".

Dewclaw

A dog's dewclaw is the fifth digit on its forelimbs and hind legs. Dewclaws on the forelimbs are attached by bone and ligament, while those on the hind legs are attached only by skin. Most dogs are not born with dewclaws on their hind legs, and some are without them on their forelimbs. Dogs' dewclaws consist of the proximal and distal phalanges. Some publications theorize that dewclaws in wolves, which usually lack them, are a sign of hybridization with dogs.

Tail

A dog's tail is the terminal appendage of the vertebral column, made up of a string of 5 to 23 vertebrae enclosed in muscles and skin that support the dog's back extensor muscles. One of the primary functions of a dog's tail is to communicate its emotional state. The tail also helps the dog maintain balance by putting its weight on the opposite side of the dog's tilt, and it can help the dog spread the scent of its anal glands through the tail's position and movement.
Dogs can have a violet gland (or supracaudal gland) characterized by sebaceous glands on the dorsal surface of their tails; in some breeds, it may be vestigial or absent. Enlargement of the violet gland, which can create a bald spot from hair loss, can be caused by Cushing's disease or an excess of sebum from androgens in the sebaceous glands. A study suggests that dogs show asymmetric tail-wagging responses to different emotive stimuli: "stimuli that could be expected to elicit approach tendencies seem to be associated with [a] higher amplitude of tail-wagging movements to the right side". Dogs can injure themselves by wagging their tails forcefully; this condition is called kennel tail, happy tail, bleeding tail, or splitting tail.

In some hunting dogs, the tail is traditionally docked to avoid injuries. Some dogs can be born without tails because of a DNA variant in the T gene, which can also result in a congenitally short (bobtail) tail. Tail docking is opposed by many veterinary and animal welfare organisations, such as the American Veterinary Medical Association and the British Veterinary Association. Evidence from veterinary practices and questionnaires suggests that around 500 dogs would need to have their tails docked to prevent one injury.

Health

Numerous disorders have been known to affect dogs; some are congenital and others are acquired. Dogs can acquire upper respiratory tract diseases, including diseases that affect the nasal cavity, the larynx, and the trachea; lower respiratory tract diseases, which include pulmonary disease and acute respiratory diseases; heart diseases, which include any cardiovascular inflammation or dysfunction of the heart; haemopoietic diseases, including anaemia and clotting disorders; gastrointestinal disease, such as diarrhoea and gastric dilatation volvulus; hepatic disease, such as portosystemic shunts and liver failure; pancreatic disease, such as pancreatitis; renal disease; lower urinary tract disease, such as cystitis and urolithiasis; endocrine disorders, such as diabetes mellitus, Cushing's syndrome, hypoadrenocorticism, and hypothyroidism; nervous system diseases, such as seizures and spinal injury; musculoskeletal disease, such as arthritis and myopathies; dermatological disorders, such as alopecia and pyoderma; ophthalmological diseases, such as conjunctivitis, glaucoma, entropion, and progressive retinal atrophy; and neoplasia.

Common dog parasites are lice, fleas, fly larvae, ticks, mites, cestodes, nematodes, and coccidia. Taenia is a notable genus, with five species for which dogs are the definitive host. Additionally, dogs are a source of zoonoses for humans. They are responsible for 99% of rabies cases worldwide; however, in some developed countries such as the UK, rabies is absent from dogs and is instead only transmitted by bats. Other common zoonoses are hydatid disease, leptospirosis, pasteurellosis, ringworm, and toxocariasis. Common infections in dogs include canine adenovirus, canine distemper virus, canine parvovirus, leptospirosis, canine influenza, and canine coronavirus; all of these conditions have vaccines available.

Dogs are the companion animal most frequently reported for exposure to toxins. Most poisonings are accidental, and over 80% of reports of exposure to the ASPCA animal poisoning hotline are due to oral exposure. The most common substances people report exposure to are pharmaceuticals, toxic foods, and rodenticides.
Data from the Pet Poison Helpline show that human drugs are the most frequent cause of toxicosis death. The most common household products ingested are cleaning products. Most food-related poisonings involve theobromine poisoning (chocolate); other common food poisonings include xylitol, Vitis (grapes, raisins, etc.), and Allium (garlic, onions, etc.). Pyrethrin insecticides are the most common cause of pesticide poisoning, while metaldehyde, a common pesticide for snails and slugs, typically causes severe outcomes when ingested by dogs.

Neoplasia is the most common cause of death for dogs; other common causes of death are heart and renal failure. Dogs' pathology is similar to that of humans, as is their response to treatment and their outcomes. Genes found in humans to be responsible for disorders are investigated in dogs as the cause, and vice versa.

Lifespan

The typical lifespan of dogs varies widely among breeds, but the median longevity (the age at which half the dogs in a population have died and half are still alive) is approximately 12.7 years. Obesity correlates negatively with longevity, with one study finding obese dogs to have a life expectancy approximately a year and a half shorter than that of dogs with a healthy weight. In a 2024 UK study analyzing 584,734 dogs, it was concluded that purebred dogs lived longer than crossbred dogs, challenging the previous notion that crossbred dogs have higher life expectancies. The authors noted that their study counted "designer dogs" as crossbred and that purebred dogs were typically given better care than their crossbred counterparts, which likely influenced the outcome. Other studies show that fully mongrel dogs live about a year longer on average than dogs with pedigrees. Furthermore, small dogs with longer muzzles have been shown to have longer lifespans than larger, medium-sized dogs with much more depressed muzzles. For free-ranging dogs, less than 1 in 5 reach sexual maturity, and the median life expectancy of feral dogs is less than half that of dogs living with humans.

Reproduction

In domestic dogs, sexual maturity happens around six months to one year for both males and females, although this can be delayed until up to two years of age for some large breeds. This is the time at which female dogs will have their first estrous cycle, characterized by their vulvas swelling and producing discharges, usually lasting between 4 and 20 days. They will experience subsequent estrous cycles semiannually, during which the body prepares for pregnancy. At the peak of the cycle, females come into estrus, becoming mentally and physically receptive to copulation. Because the ova survive and can be fertilized for a week after ovulation, more than one male can sire the same litter.

Fertilization typically occurs two to five days after ovulation. After ejaculation, the dogs are coitally tied for around 5–30 minutes because of the male's bulbus glandis swelling and the female's constrictor vestibuli contracting; the male will continue ejaculating until they untie naturally through muscle relaxation. About 14–16 days after ovulation, the embryo attaches to the uterus, and after seven to eight more days, a heartbeat is detectable. Dogs bear their litters roughly 58 to 68 days after fertilization, with an average of 63 days, although the length of gestation can vary. An average litter consists of about six puppies.
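The gestation figures above translate directly into a date-window calculation. The sketch below is a toy estimator built only on the numbers quoted in this section (fertilization 2–5 days after ovulation; whelping 58–68 days after fertilization, 63 on average); the ovulation date is an arbitrary example.

```python
# Toy whelping-window estimator from the gestation figures quoted above.
from datetime import date, timedelta

def whelping_window(ovulation):
    earliest_fert = ovulation + timedelta(days=2)   # fertilization 2-5 days
    latest_fert = ovulation + timedelta(days=5)     # after ovulation
    return (earliest_fert + timedelta(days=58),     # earliest typical whelping
            earliest_fert + timedelta(days=63),     # around the 63-day average
            latest_fert + timedelta(days=68))       # latest typical whelping

print(whelping_window(date(2024, 3, 1)))
# (datetime.date(2024, 4, 30), datetime.date(2024, 5, 5), datetime.date(2024, 5, 13))
```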
Neutering

Neutering is the sterilization of animals via gonadectomy, which is an orchidectomy (castration) in dogs and an ovariohysterectomy (spaying) in bitches. Neutering reduces problems caused by hypersexuality, especially in male dogs, and spayed females are less likely to develop cancers affecting the mammary glands, ovaries, and other reproductive organs. However, neutering increases the risk of urinary incontinence in bitches, prostate cancer in dogs, and osteosarcoma, hemangiosarcoma, cruciate ligament rupture, obesity, and diabetes mellitus in either sex.

Neutering is the most common surgical procedure in dogs less than a year old in the US and is seen as a control method for overpopulation; in US shelters it often occurs as early as 6–14 weeks of age. The American Society for the Prevention of Cruelty to Animals (ASPCA) advises that dogs not intended for further breeding should be neutered so that they do not have undesired puppies that may later be euthanized. However, the Society for Theriogenology and the American College of Theriogenologists issued a joint statement opposing mandatory neutering, arguing that the cause of overpopulation in the US is cultural.

Neutering is less common in most European countries, especially in the Nordic countries, except for the UK, where it is common. In Norway, neutering is illegal unless it is for the benefit of the animal's health (e.g., ovariohysterectomy in case of ovarian or uterine neoplasia). Some European countries have laws similar to Norway's, but their wording either explicitly allows neutering for controlling reproduction or permits it in practice or by contradiction through other laws. Italy and Portugal have recently passed laws that promote neutering. Germany forbids early-age neutering, but neutering is still allowed at the usual age. In Romania, neutering is mandatory except where a pedigree for selected breeds can be shown.

Inbreeding depression

A common breeding practice for pet dogs is mating between close relatives (e.g., between half- and full-siblings). In a study of seven dog breeds (the Bernese Mountain Dog, Basset Hound, Cairn Terrier, Brittany, German Shepherd Dog, Leonberger, and West Highland White Terrier), it was found that inbreeding decreases litter size and survival. Another analysis of data on 42,855 Dachshund litters found that as the inbreeding coefficient increased, litter size decreased and the percentage of stillborn puppies increased, indicating inbreeding depression. In a study of Boxer litters, 22% of puppies died before reaching 7 weeks of age; stillbirth was the most frequent cause of death, followed by infection, and mortality due to infection increased significantly with increases in inbreeding.

Behavior

Dog behavior has been shaped by millennia of contact with humans. Dogs have acquired the ability to understand and communicate with humans and are uniquely attuned to human behaviors. Behavioral scientists suggest that a set of social-cognitive abilities in domestic dogs that are not possessed by the dog's canine relatives or other highly intelligent mammals, such as great apes, parallel children's social-cognitive skills. Most domestic animals were initially bred for the production of goods; dogs, by contrast, were selectively bred for desirable behavioral traits. In 2016, a study found that only 11 fixed genes showed variation between wolves and dogs.
These gene variations indicate the occurrence of artificial selection and the subsequent divergence of behavior and anatomical features. The genes have been shown to affect the catecholamine synthesis pathway, with the majority affecting the fight-or-flight response (i.e., selection for tameness) and emotional processing. Compared to their wolf counterparts, dogs tend to be less timid and less aggressive, though some of these genes have been associated with aggression in certain dog breeds. Traits of high sociability and lack of fear in dogs may include genetic modifications related to Williams-Beuren syndrome in humans, which cause hypersociability at the expense of problem-solving ability. In a 2023 study of 58 dogs, some dogs classified as attention deficit hyperactivity disorder-like showed lower serotonin and dopamine concentrations; a similar study claims that hyperactivity is more common in male and young dogs.

A dog can become aggressive because of trauma or abuse, fear or anxiety, territorial protection, or protecting an item it considers valuable. Acute stress reactions from post-traumatic stress disorder (PTSD) seen in dogs can evolve into chronic stress, and police dogs with PTSD often refuse to work.

Dogs have a natural instinct called prey drive (a term chiefly used in dog training), which can be influenced by breeding. These instincts can drive dogs to treat objects or other animals as prey or to behave possessively, and they have been enhanced in some breeds so that the dogs may be used to hunt and kill vermin or other pests. Puppies and adult dogs sometimes bury food underground; one study found that wolves outperformed dogs in finding food caches, likely due to a "difference in motivation" between wolves and dogs. Some puppies and dogs engage in coprophagy out of habit, stress, desire for attention, or boredom; most will not do it later in life. One study hypothesizes that the behavior was inherited from wolves, in which it likely evolved to lessen the presence of intestinal parasites in dens.

Most dogs can swim. In a study of 412 dogs, around 36.5% of the dogs could not swim; the other 63.5% were able to swim without a trainer in a swimming pool. A study of 55 dogs found a correlation between swimming and improvement of the hip osteoarthritis joint.

Nursing

The female dog may produce colostrum, a type of milk high in nutrients and antibodies, 1–7 days before giving birth. Milk production lasts for around three months and increases with litter size. The dog may vomit and refuse food during labor contractions, and in the later stages of pregnancy, nesting behaviour may occur. Puppies are born with a protective fetal membrane that the mother usually removes shortly after birth. Dogs can have maternal instincts to groom their puppies, consume their puppies' feces, and protect their puppies, likely due to their hormonal state. While male parent dogs can show more disinterest toward their own puppies, most can play with the young pups as they would with other dogs or humans. A female dog may abandon or attack her puppies, or her male partner dog, if she is stressed or in pain.

Intelligence

Researchers have tested dogs' ability to perceive information, retain it as knowledge, and apply it to solve problems. Studies of two dogs suggest that dogs can learn by inference. A study with Rico, a Border Collie, showed that he knew the labels of over 200 different items.
He inferred the names of novel things by exclusion learning and correctly retrieved those new items four weeks after the initial exposure. A study of another Border Collie, Chaser, documented that she had learned the names of over 1,000 words and could associate them with objects by verbal command. One study of canine cognitive abilities found that dogs' capabilities are similar to those of horses, chimpanzees, or cats. One study of 18 household dogs found that the dogs could not distinguish food bowls at specific locations without distinguishing cues; the study stated that this indicates a lack of spatial memory. Another study indicated that dogs have a visual sense of number, showing ratio-dependent activation for numerical values both in the 1–3 range and larger than four. Dogs demonstrate a theory of mind by engaging in deception. An experimental study showed evidence that Australian dingoes can outperform domestic dogs in non-social problem-solving, indicating that domestic dogs may have lost much of their original problem-solving ability once they joined humans. Another study showed that dogs stared at humans after failing to complete an impossible version of a task they had been trained to solve, whereas wolves in the same situation avoided staring at humans altogether.

Communication

Dog communication is the transfer of information between dogs, as well as between dogs and humans. Communication behaviors of dogs include eye gaze, facial expression, vocalization, body posture (including movements of bodies and limbs), and gustatory communication (scents, pheromones, and taste). Dogs mark their territories by urinating on them, which is more likely when entering a new environment. Both sexes may also urinate to communicate anxiety or frustration, submissiveness, or excitement or relaxation. Arousal in dogs can result from higher cortisol levels. Dogs begin socializing with other dogs by the time they reach 3 to 8 weeks of age, and at about 5 to 12 weeks they shift their focus from dogs to humans. Belly exposure in dogs can be a defensive behavior that may precede a bite, or a way of seeking comfort.

Humans communicate with dogs using vocalization, hand signals, and body posture. With their acute sense of hearing, dogs rely on the auditory aspect of communication for understanding and responding to various cues, including the distinctive barking patterns that convey different messages. A study using functional magnetic resonance imaging (fMRI) has shown that dogs process voices in a brain region towards the temporal pole, similar to humans. Most dogs also looked significantly longer at the face whose expression matched the valence of a vocalization. A study of caudate responses shows that dogs tend to respond more positively to social rewards than to food rewards.

Ecology

Population

The dog is the most widely abundant large carnivoran living in the human environment. In 2020, the estimated global dog population was between 700 million and 1 billion. In the same year, a study found the dog to be the most popular pet in the United States, present in 34 out of every 100 homes. About 20% of the dog population lives in developed countries; it is estimated that three-quarters of the world's dog population lives in the developing world as feral, village, or community dogs.
Most of these dogs live as scavengers and have never been owned by humans; one study showed that village dogs' most common responses when approached by strangers are to run away (52%) or respond aggressively (11%).

Competitors

Feral and free-ranging dogs' potential to compete with other large carnivores is limited by their strong association with humans. Although wolves are known to kill dogs, wolves tend to live in pairs in areas where they are highly persecuted, giving them a disadvantage when facing large dog groups. In some instances, wolves have displayed an uncharacteristic fearlessness of humans and buildings when attacking dogs, to the extent that they have to be beaten off or killed. Although the number of dogs killed each year is relatively low, there is still a fear among humans of wolves entering villages and farmyards to take dogs, and losses of dogs to wolves have led to demands for more liberal wolf-hunting regulations.

Coyotes and big cats have also been known to attack dogs. In particular, leopards are known to have a preference for dogs and have been recorded killing and consuming them regardless of their size. Siberian tigers in the Amur River region have killed dogs in the middle of villages; tigers will not tolerate wolves as competitors within their territories and may regard dogs in the same way. Striped hyenas are known to kill dogs in their range.

Dogs as introduced predators have affected the ecology of New Zealand, which lacked indigenous land-based mammals before human settlement. Dogs have made 11 vertebrate species extinct and are identified as a potential threat to at least 188 threatened species worldwide; they have also been linked to the extinction of 156 animal species. Dogs have been documented killing individuals of the endangered kagu in New Caledonia.

Diet

Dogs are typically described as omnivores. Compared to wolves, dogs from agricultural societies have extra copies of amylase and other genes involved in starch digestion, which contribute to an increased ability to thrive on a starch-rich diet. Similar to humans, some dog breeds produce amylase in their saliva and are classified as having a high-starch diet. Despite being omnivorous, dogs can conjugate bile acids only with taurine, and they must get vitamin D from their diet. Of the twenty-one amino acids common to all life forms (including selenocysteine), dogs cannot synthesize ten: arginine, histidine, isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan, and valine. Like cats, dogs require arginine to maintain nitrogen balance. These nutritional requirements place dogs halfway between carnivores and omnivores.
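The dietary constraint in the previous paragraph can be stated as a simple set check. In the sketch below, the list of ten essential amino acids comes from the text above; the example diet profile is hypothetical.

```python
# The ten amino acids dogs cannot synthesize (per the text above), expressed
# as a completeness check against a hypothetical dietary profile.
ESSENTIAL_FOR_DOGS = {
    "arginine", "histidine", "isoleucine", "leucine", "lysine",
    "methionine", "phenylalanine", "threonine", "tryptophan", "valine",
}

def missing_essentials(diet):
    """Return the essential amino acids absent from a diet profile."""
    return ESSENTIAL_FOR_DOGS - set(diet)

example_diet = {"arginine", "leucine", "lysine", "methionine", "valine"}
print(sorted(missing_essentials(example_diet)))
# ['histidine', 'isoleucine', 'phenylalanine', 'threonine', 'tryptophan']
```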
Range

As a domesticated or semi-domesticated animal, the dog is present nearly worldwide, with notable exceptions among:

The Aboriginal Tasmanians, who were separated from Australia before the arrival of dingos on that continent
The Andamanese peoples, who were isolated when rising sea levels covered the land bridge to Myanmar
The Fuegians, who instead domesticated the Fuegian dog, a different and now-extinct canid species
Individual Pacific islands whose maritime settlers did not bring dogs or where the dogs died out after original settlement, notably the Mariana Islands, Palau and most of the Caroline Islands (with exceptions such as Fais Island and Nukuoro), the Marshall Islands, the Gilbert Islands, New Caledonia, Vanuatu, Tonga, the Marquesas, Mangaia in the Cook Islands, Rapa Iti in French Polynesia, Easter Island, the Chatham Islands, and Pitcairn Island (settled by the Bounty mutineers, who killed off their dogs to escape discovery by passing ships)

Dogs were introduced to Antarctica as sled dogs; beginning in December 1993, they were outlawed under the Protocol on Environmental Protection to the Antarctic Treaty because of the possible risk of spreading infections.

Roles with humans

The domesticated dog originated as a predator and scavenger. Dogs inherited complex behaviors, such as bite inhibition, from their wolf ancestors, which would have been pack hunters with complex body language. These sophisticated forms of social cognition and communication may account for dogs' trainability, playfulness, and ability to fit into human households and social situations, and probably also for their co-existence with early human hunter-gatherers.

Dogs perform many roles for people, such as hunting, herding, pulling loads, protection, assisting police and the military, companionship, and aiding disabled individuals. These roles in human society have earned them the nickname "man's best friend" in the Western world. In some cultures, however, dogs are also a source of meat.

Pets

The keeping of dogs as companions, particularly by elites, has a long history. Pet-dog populations grew significantly after World War II as suburbanization increased. Since the 1980s, there have been changes in the pet dog's functions, such as the increased role of dogs in the emotional support of their human guardians. Within the second half of the 20th century, more and more dog owners came to consider their animal part of the family; this major shift in social status allowed the dog to conform to social expectations of personality and behavior. Another change has been the broadening of the concepts of family and the home to include dogs in everyday routines and practices.

Products such as dog-training books, classes, and television programs target dog owners. Some dog trainers have promoted a dominance model of dog-human relationships; however, the idea of the "alpha dog" trying to be dominant is based on a controversial theory about wolf packs, and it has been disputed that "trying to achieve status" is characteristic of dog-human interactions. Human family members have increased their participation in activities in which the dog is an integral partner, such as dog dancing and dog yoga.

According to statistics published by the American Pet Products Manufacturers Association in the National Pet Owner Survey in 2009–2010, an estimated 77.5 million people in the United States have pet dogs.
The same source shows that nearly 40% of American households own at least one dog, of which 67% own just one dog, 25% own two dogs, and nearly 9% own more than two dogs. The data also show an equal number of male and female pet dogs, and that less than one-fifth of owned dogs come from shelters.

Workers

In addition to their role as companion animals, dogs have been bred for herding livestock (such as collies and sheepdogs); for hunting; for rodent control (such as terriers); as search and rescue dogs; as detection dogs (such as those trained to detect illicit drugs or chemical weapons); as home guard dogs; as police dogs (sometimes nicknamed "K-9"); as welfare-purpose dogs; as dogs that assist fishermen in retrieving their nets; and as dogs that pull loads (such as sled dogs). In 1957, the dog Laika became one of the first animals to be launched into Earth orbit, aboard the Soviet Sputnik 2; Laika died during the flight from overheating. Various kinds of service dogs and assistance dogs, including guide dogs, hearing dogs, mobility assistance dogs, and psychiatric service dogs, assist individuals with disabilities. A study of 29 dogs found that 9 dogs owned by people with epilepsy were reported to exhibit attention-getting behavior toward their handler 30 seconds to 45 minutes prior to an impending seizure; there was no significant correlation with the patients' demographics, health, or attitude towards their pets.

Shows and sports

Dogs compete in breed-conformation shows and dog sports (including racing, sledding, and agility competitions). In dog shows, also referred to as breed shows, a judge familiar with the specific dog breed evaluates individual purebred dogs for conformity with their established breed type as described in the breed standard. Weight pulling, a dog sport involving pulling weight, has been criticized for promoting doping and for its risk of injury.

Dogs as food

Humans have consumed dog meat for at least 14,000 years, though it is unknown to what extent prehistoric dogs were consumed and bred for meat. For centuries, the practice was prevalent in Southeast Asia, East Asia, Africa, and Oceania before cultural changes triggered by the spread of religions led dog meat consumption to decline and become more taboo. Dog meat was also historically consumed in Switzerland, Polynesia, and pre-Columbian Mexico; some Native American dogs, like the Peruvian Hairless Dog and the Xoloitzcuintle, were raised to be sacrificed and eaten. Han Chinese traditionally ate dogs; consumption declined but did not end during the Sui dynasty (581–618) and Tang dynasty (618–907), due in part to the spread of Buddhism and the upper class rejecting the practice. Dog consumption was rare in India, Iran, and Europe.

Eating dog meat is a social taboo in most parts of the world, though some still consume it in modern times. It is still eaten in some East Asian countries, including China, Vietnam, Korea, Indonesia, and the Philippines, and an estimated 30 million dogs are killed and consumed in Asia every year. China is the world's largest consumer of dogs, with an estimated 10 to 20 million dogs killed every year for human consumption, and in Vietnam about 5 million dogs are slaughtered annually. In 2024, China, Singapore, and Thailand placed bans on the consumption of dogs within their borders. In some parts of Poland and Central Asia, dog fat is reportedly believed to be beneficial for the lungs.
Proponents of eating dog meat have argued that placing a distinction between livestock and dogs is Western hypocrisy and that there is no difference in eating the meat of different animals. There is a long history of dog meat consumption in South Korea, but the practice has fallen out of favor. A 2017 survey found that under 40% of participants supported a ban on the distribution and consumption of dog meat; this increased to over 50% in 2020, suggesting changing attitudes, particularly among younger individuals. In 2018, the South Korean government passed a bill banning restaurants from selling dog meat during that year's Winter Olympics. On 9 January 2024, the South Korean parliament passed a law banning the distribution and sale of dog meat; it will take effect in 2027, with plans to assist dog farmers in transitioning to other products. The primary type of dog raised for meat in South Korea has been the Nureongi. In North Korea, where meat is scarce, eating dog is a common and accepted practice, officially promoted by the government.

Health risks

In 2018, the World Health Organization (WHO) reported that 59,000 people died globally from rabies, with 59.6% of the deaths in Asia and 36.4% in Africa. Rabies is a disease for which dogs are the most significant vector. Dog bites affect tens of millions of people globally each year; the primary victims of dog-bite incidents are children, who are more likely to sustain serious injuries from bites, which can lead to death. Sharp claws can lacerate flesh and cause serious infections.

In the United States, cats and dogs are a factor in more than 86,000 falls each year. It has been estimated that around 2% of dog-related injuries treated in UK hospitals are domestic accidents; the same study concluded that dog-associated road accidents involving injuries more commonly involve two-wheeled vehicles. Some countries and cities have banned or restricted certain dog breeds, usually over safety concerns.

Toxocara canis (dog roundworm) eggs in dog feces can cause toxocariasis. It is estimated that nearly 14% of people in the United States are infected with Toxocara, and about 10,000 cases are reported each year. Untreated toxocariasis can cause retinal damage and decreased vision. Dog feces can also contain hookworms that cause cutaneous larva migrans in humans.

Health benefits

The scientific evidence is mixed as to whether a dog's companionship can enhance human physical and psychological well-being. Studies suggest that there are benefits to physical health and psychological well-being, but they have been criticized for being "poorly controlled". One study states that "the health of elderly people is related to their health habits and social supports but not to their ownership of, or attachment to, a companion animal". Earlier studies showed that people who keep pet dogs or cats make fewer hospital visits and are less likely to be on medication for heart problems and sleeping difficulties than those without pets. People with pet dogs took considerably more physical exercise than those with cats or those without pets, and these effects are relatively long-term. Pet guardianship has also been associated with increased survival in cases of coronary artery disease, with dog guardians significantly less likely to die within one year of an acute myocardial infarction than those who do not own dogs. Studies have found a small to moderate correlation between dog ownership and increased adult physical-activity levels.
A 2005 paper in the British Medical Journal states: "Recent research has failed to support earlier findings that pet ownership is associated with a reduced risk of cardiovascular disease, a reduced use of general practitioner services, or any psychological or physical benefits on health for community dwelling older people. Research has, however, pointed to significantly less absenteeism from school through sickness among children who live with pets."

Health benefits of dogs can result from contact with dogs in general, not solely from having dogs as pets. For example, when in a pet dog's presence, people show reductions in cardiovascular, behavioral, and psychological indicators of anxiety and are exposed to immune-stimulating microorganisms, which can protect against allergies and autoimmune diseases (according to the hygiene hypothesis). Other benefits include dogs' role as social support: one study indicated that wheelchair users experience more positive social interactions with strangers when accompanied by a dog than when they are not, and a 2015 study found that having a pet made people more inclined to foster positive relationships with their neighbors. In one study, new guardians reported a significant reduction in minor health problems during the first month following pet acquisition, which was sustained through the 10-month study.

Using dogs and other animals as a part of therapy dates back to the late 18th century, when animals were introduced into mental institutions to help socialize patients with mental disorders. Animal-assisted intervention research has shown that animal-assisted therapy with a dog can increase smiling and laughing among people with Alzheimer's disease. One study demonstrated that children with ADHD and conduct disorders who participated in an education program with dogs and other animals showed increased attendance, knowledge, and skill objectives and decreased antisocial and violent behavior compared with those not in an animal-assisted program.

Cultural importance

Artworks have depicted dogs as symbols of guidance, protection, loyalty, fidelity, faithfulness, alertness, and love. In ancient Mesopotamia, from the Old Babylonian period until the Neo-Babylonian period, dogs were the symbol of Ninisina, the goddess of healing and medicine, and her worshippers frequently dedicated small models of seated dogs to her. In the Neo-Assyrian and Neo-Babylonian periods, dogs served as emblems of magical protection. In China, Korea, and Japan, dogs are viewed as kind protectors.

In mythology, dogs often appear as pets or as watchdogs. Stories of dogs guarding the gates of the underworld recur throughout Indo-European mythologies and may originate from Proto-Indo-European traditions. In Greek mythology, Cerberus is a three-headed, dragon-tailed watchdog who guards the gates of Hades, and dogs also feature in association with the Greek goddess Hecate. In Norse mythology, a dog called Garmr guards Hel, a realm of the dead. In Persian mythology, two four-eyed dogs guard the Chinvat Bridge. In Welsh mythology, the Cŵn Annwn guard Annwn. In Hindu mythology, Yama, the god of death, owns two watchdogs named Shyama and Sharvara, each with four eyes, said to watch over the gates of Naraka; a black dog is also considered the vahana (vehicle) of Bhairava (an incarnation of Shiva). In Christianity, dogs represent faithfulness.
Within the Roman Catholic denomination specifically, the iconography of Saint Dominic includes a dog, after the saint's mother dreamt of a dog springing from her womb and became pregnant shortly afterwards. The name of the Dominican Order (Ecclesiastical Latin: Domini canis) has accordingly been interpreted as "dog of the Lord" or "hound of the Lord". In Christian folklore, a church grim often takes the form of a black dog to guard Christian churches and their churchyards from sacrilege. Jewish law does not prohibit keeping dogs and other pets, but it requires Jews to feed dogs (and other animals that they own) before themselves and to make arrangements for feeding them before obtaining them. The view on dogs in Islam is mixed, with some schools of thought viewing them as unclean, although Khaled Abou El Fadl states that this view is based on "pre-Islamic Arab mythology" and "a tradition [...] falsely attributed to the Prophet". The Sunni Maliki school jurists disagree with the idea that dogs are unclean.

Terminology

Dog – the species (or subspecies) as a whole; also any male member of the same.
Bitch – any female member of the species (or subspecies).
Puppy or pup – a young member of the species (or subspecies) under 12 months old.
Sire – the male parent of a litter.
Dam – the female parent of a litter.
Litter – all of the puppies resulting from a single whelping.
Whelping – the act of a bitch giving birth.
Whelps – puppies still dependent upon their dam.

See also

Saint Guinefort

External links

Biodiversity Heritage Library bibliography for Canis lupus familiaris
Fédération Cynologique Internationale (FCI) – World Canine Organisation
Dogs in the Ancient World, an article on the history of dogs
View the dog genome on Ensembl
Genome of Canis lupus familiaris (version UU_Cfam_GSD_1.0/canFam4), via UCSC Genome Browser
Data of the genome of Canis lupus familiaris, via NCBI
Data of the genome assembly of Canis lupus familiaris (version UU_Cfam_GSD_1.0/canFam4), via NCBI
Dog
[ "Biology" ]
10,386
[ "Model organisms", "Animal models" ]
4,271,664
https://en.wikipedia.org/wiki/List%20of%20atmospheric%20dispersion%20models
Atmospheric dispersion models are computer programs that use mathematical algorithms to simulate how pollutants in the ambient atmosphere disperse and, in some cases, how they react in the atmosphere.

US Environmental Protection Agency models

Many of the dispersion models developed by or accepted for use by the U.S. Environmental Protection Agency (U.S. EPA) are accepted for use in many other countries as well. Those EPA models are grouped below into four categories.

Preferred and recommended models

AERMOD – An atmospheric dispersion model based on atmospheric boundary layer turbulence structure and scaling concepts, including treatment of multiple ground-level and elevated point, area and volume sources. It handles flat or complex, rural or urban terrain and includes algorithms for building effects and plume penetration of inversions aloft. It uses Gaussian dispersion for stable atmospheric conditions (i.e., low turbulence) and non-Gaussian dispersion for unstable conditions (high turbulence). Algorithms for plume depletion by wet and dry deposition are also included in the model. This model was in development for approximately 14 years before being officially accepted by the U.S. EPA.

CALPUFF – A non-steady-state puff dispersion model that simulates the effects of time- and space-varying meteorological conditions on pollution transport, transformation, and removal. CALPUFF can be applied for long-range transport and for complex terrain.

BLP – A Gaussian plume dispersion model designed to handle unique modelling problems associated with industrial sources where plume rise and downwash effects from stationary line sources are important.

CALINE3 – A steady-state Gaussian dispersion model designed to determine pollution concentrations at receptor locations downwind of highways located in relatively uncomplicated terrain.

CAL3QHC and CAL3QHCR – CAL3QHC is a CALINE3-based model with queuing calculations and a traffic model to calculate delays and queues that occur at signalized intersections. CAL3QHCR is a more refined version based on CAL3QHC that requires local meteorological data.

CTDMPLUS – A complex terrain dispersion model (CTDM) plus algorithms for unstable situations (i.e., highly turbulent atmospheric conditions). It is a refined point source Gaussian air quality model for use in all stability conditions (i.e., all conditions of atmospheric turbulence) for complex terrain.

OCD – Offshore and coastal dispersion model (OCD) is a Gaussian model developed to determine the impact of offshore emissions from point, area or line sources on the air quality of coastal regions. It incorporates overwater plume transport and dispersion as well as changes that occur as the plume crosses the shoreline.

Alternative models

ADAM – Air force dispersion assessment model (ADAM) is a modified box and Gaussian dispersion model which incorporates thermodynamics, chemistry, heat transfer, aerosol loading, and dense gas effects.

ADMS 5 – Atmospheric Dispersion Modelling System (ADMS 5) is an advanced dispersion model developed in the United Kingdom for calculating concentrations of pollutants emitted both continuously from point, line, volume and area sources, or discretely from point sources.

AFTOX – A Gaussian dispersion model that handles continuous or puff, liquid or gas, elevated or surface releases from point or area sources.
DEGADIS – Dense gas dispersion (DEGADIS) is a model that simulates the dispersion at ground level of area-source clouds of denser-than-air gases or aerosols released with zero momentum into the atmosphere over flat, level terrain.

HGSYSTEM – A collection of computer programs developed by Shell Research Ltd. and designed to predict the source-term and subsequent dispersion of accidental chemical releases, with an emphasis on dense gas behavior.

HOTMAC and RAPTAD – HOTMAC is a model for weather forecasting, used in conjunction with RAPTAD, a puff model for pollutant transport and dispersion. These models are used for complex terrain, coastal regions, urban areas, and around buildings where other models fail.

HYROAD – The hybrid roadway model integrates three individual modules simulating the pollutant emissions from vehicular traffic and the dispersion of those emissions. The dispersion module is a puff model that determines concentrations of carbon monoxide (CO) or other gaseous pollutants and particulate matter (PM) from vehicle emissions at receptors within 500 meters of roadway intersections.

ISC3 – A Gaussian model used to assess pollutant concentrations from a wide variety of sources associated with an industrial complex. This model accounts for: settling and dry deposition of particles; downwash; point, area, line, and volume sources; plume rise as a function of downwind distance; separation of point sources; and limited terrain adjustment. ISC3 operates in both long-term and short-term modes.

OBODM – A model for evaluating the air quality impacts of the open burning and detonation (OB/OD) of obsolete munitions and solid propellants. It uses dispersion and deposition algorithms taken from existing models for instantaneous and quasi-continuous sources to predict the transport and dispersion of pollutants released by open burning and detonation operations.

PANACHE – Fluidyn-PANACHE is an Eulerian (and Lagrangian for particulate matter), 3-dimensional finite volume fluid mechanics model designed to simulate continuous and short-term pollutant dispersion in the atmosphere, in simple or complex terrain.

PLUVUEII – A model that estimates atmospheric visibility degradation and atmospheric discoloration caused by plumes resulting from the emissions of particles, nitrogen oxides, and sulfur oxides. The model predicts the transport, dispersion, chemical reactions, optical effects and surface deposition of such emissions from a single point or area source.

SCIPUFF – A puff dispersion model that uses a collection of Gaussian puffs to predict three-dimensional, time-dependent pollutant concentrations. In addition to the average concentration value, SCIPUFF predicts the statistical variance in the concentrations resulting from the random fluctuations of the wind.

SDM – Shoreline dispersion model (SDM) is a Gaussian dispersion model used to determine ground-level concentrations from tall stationary point source emissions near a shoreline.

SLAB – A model for denser-than-air gaseous plume releases that utilizes the one-dimensional equations of momentum, conservation of mass and energy, and the equation of state. SLAB handles point source ground-level releases, elevated jet releases, releases from volume sources and releases from the evaporation of volatile liquid spill pools.
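Several of the models above (e.g., BLP, CALINE3, ISC3, SDM) are built on the same steady-state Gaussian plume kernel. The sketch below shows that kernel with a ground-reflection (image-source) term; the σy and σz power-law fits are illustrative stand-ins loosely patterned on neutral-stability coefficients, not the parameterization of any specific EPA model.

```python
# Steady-state Gaussian plume kernel with ground reflection. The sigma fits
# are illustrative stand-ins for a real model's stability-class curves.
import math

def plume_concentration(q_g_s, u_m_s, x, y, z, h_eff,
                        a=0.08, b=1e-4, c=0.06, d=1.5e-3):
    """C(x, y, z) in g/m^3 for emission rate q (g/s), wind speed u (m/s),
    receptor x m downwind, y m crosswind, z m above ground, stack height h_eff."""
    sigma_y = a * x / math.sqrt(1 + b * x)   # horizontal spread (assumed fit)
    sigma_z = c * x / math.sqrt(1 + d * x)   # vertical spread (assumed fit)
    crosswind = math.exp(-y * y / (2 * sigma_y ** 2))
    vertical = (math.exp(-(z - h_eff) ** 2 / (2 * sigma_z ** 2)) +
                math.exp(-(z + h_eff) ** 2 / (2 * sigma_z ** 2)))  # image source
    return q_g_s / (2 * math.pi * u_m_s * sigma_y * sigma_z) * crosswind * vertical

# 10 g/s from a 50 m stack in a 5 m/s wind, at ground level 1 km downwind:
print(plume_concentration(10, 5, x=1000, y=0, z=0, h_eff=50))  # ~9e-5 g/m^3
```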
Screening models

These are models that are often used before applying a refined air quality model, to determine whether refined modelling is needed.

AERSCREEN – The screening version of AERMOD. It produces estimates of concentrations, without the need for meteorological data, that are equal to or greater than the estimates produced by AERMOD with a full set of meteorological data. The U.S. EPA released version 11060 of AERSCREEN on 11 March 2010, with a subsequent update, version 11076, on 17 March 2010. The U.S. EPA published the "Clarification memorandum on AERSCREEN as the recommended screening model" on 11 April 2011.

CTSCREEN – The screening version of CTDMPLUS.

SCREEN3 – The screening version of ISC3.

TSCREEN – Toxics screening model (TSCREEN) is a Gaussian model for screening toxic air pollutant emissions and their subsequent dispersion from possible releases at Superfund sites. It contains three modules: SCREEN3, PUFF, and RVD (Relief Valve Discharge).

VALLEY – A screening, complex terrain, Gaussian dispersion model for estimating 24-hour or annual concentrations resulting from up to 50 point and area emission sources.

COMPLEX1 – A multiple point source screening model with terrain adjustment that uses the plume impaction algorithm of the VALLEY model.

RTDM3.2 – Rough terrain diffusion model (RTDM3.2) is a Gaussian model for estimating ground-level concentrations of one or more co-located point sources in rough (or flat) terrain.

VISCREEN – A model that calculates the impact of specified emissions for specific transport and dispersion conditions.

Photochemical models

Photochemical air quality models have become widely utilized tools for assessing the effectiveness of control strategies adopted by regulatory agencies. These are large-scale air quality models that simulate the changes of pollutant concentrations in the atmosphere by characterizing the chemical and physical processes in the atmosphere. They are applied at multiple geographical scales, ranging from local and regional to national and global.

Models-3/CMAQ – The latest version of the community multi-scale air quality (CMAQ) model has state-of-the-science capabilities for conducting urban to regional scale simulations of multiple air quality issues, including tropospheric ozone, fine particles, toxics, acid deposition, and visibility degradation.

CAMx – The comprehensive air quality model with extensions (CAMx) simulates air quality over many geographic scales. It handles a variety of inert and chemically active pollutants, including ozone, particulate matter, inorganic and organic PM2.5/PM10, and mercury and other toxics.

REMSAD – The regional modeling system for aerosols and deposition (REMSAD) calculates the concentrations of both inert and chemically reactive pollutants by simulating the atmospheric processes that affect pollutant concentrations over regional scales. It includes processes relevant to regional haze, particulate matter and other airborne pollutants, including soluble acidic components and mercury.

UAM-V – The urban airshed model was a pioneering effort in photochemical air quality modelling in the early 1970s and has been used widely for air quality studies focusing on ozone.

Other models developed in the United States

CHARM – A model capable of simulating dispersion of toxics and particles. It can calculate impacts of thermal radiation from fires, overpressures from mechanical failures and explosions, and nuclear radiation from radionuclide releases. CHARM is capable of handling effects of complex terrain and buildings. A Lagrangian puff screening version and an Eulerian full-function version are available.
HYSPLIT – Hybrid single particle Lagrangian integrated trajectory model (HYSPLIT), developed at NOAA's Air Resources Laboratory, is a complete system for computing everything from simple air parcel trajectories to complex dispersion and deposition simulations.

PUFF-PLUME – A Gaussian chemical/radionuclide dispersion model that includes wet and dry deposition, real-time input of meteorological observations and forecasts, dose estimates from inhalation and gamma shine, and puff or plume dispersion modes. It is the primary model for emergency response use for atmospheric releases of radioactive materials at the Savannah River Site of the United States Department of Energy. It was first developed by the Pacific Northwest National Laboratory (PNNL) in the 1970s.

Puff model – Puff is a volcanic ash tracking model developed at the University of Alaska Fairbanks. It requires NWP wind field data on a geographic grid covering the area over which ash may be dispersed. Representative ash particles are initiated at the volcano's location and then allowed to advect, diffuse, and settle within the atmosphere. The location of the particles at any time after the eruption can be viewed using the post-processing software included with the model. Output data are in netCDF format and can also be viewed with a variety of software.

Models developed in the United Kingdom

ADMS-5 – See the description of this model in the alternative models section of the models accepted by the U.S. EPA.

ADMS-URBAN – A model for simulating dispersion on scales ranging from a street scale to a citywide or county-wide scale, handling most relevant emission sources such as traffic, industrial, commercial, and domestic sources. It is also used for air quality management and assessments of current and future air quality vis-à-vis national and regional standards in Europe and elsewhere.

ADMS-Roads – A model for simulating dispersion of vehicular pollutant emissions from small road networks in combination with emissions from industrial plants. It handles multiple road sources as well as multiple point, line or area emission sources, and the model operation is similar to that of the other ADMS models.

ADMS-Screen – A screening model for rapid assessment of the air quality impact of a single industrial stack, used to determine whether more detailed modelling is needed. It combines the dispersion modelling algorithms of the ADMS models with a user interface requiring minimal input data.

GASTAR – A model for simulating accidental releases of denser-than-air flammable and toxic gases. It handles instantaneous and continuous releases, releases from jet sources, releases from evaporation of volatile liquid pools, variable terrain slopes and ground roughness, obstacles such as fences and buildings, and time-varying releases.

NAME – Numerical atmospheric-dispersion modelling environment (NAME) is a local to global scale model developed by the UK's Met Office. It is used for: forecasting of air quality, air pollution dispersion, and acid rain; tracking radioactive emissions and volcanic ash discharges; analysis of accidental air pollutant releases and assisting in emergency response; and long-term environmental impact analysis. It is an integrated model that includes boundary layer dispersion modelling.
UDM – Urban dispersion model (UDM) is a Gaussian puff based model for predicting the dispersion of atmospheric pollutants at ranges of 10 m to 25 km throughout the urban environment. It was developed by the Defence Science and Technology Laboratory for the UK Ministry of Defence. It handles instantaneous, continuous, and pool releases, and can model gases, particulates, and liquids. The model has a three-regime structure: single building (area density < 5%), urban array (area density > 5%) and open. The model can be coupled with the US model SCIPUFF to replace the open regime and extend the model's prediction range.

Models developed in continental Europe

The European Topic Centre on Air and Climate Change, which is part of the European Environment Agency (EEA), maintains an online Model Documentation System (MDS) that includes descriptions and other information for almost all of the dispersion models developed by the countries of Europe. The MDS currently (July 2012) contains 142 models, mostly developed in Europe. Of those 142 models, some were subjectively selected for inclusion here. Some of the European models listed in the MDS are public domain and some are not. Many of them include a pre-processor module for the input of meteorological and other data, and many also include a post-processor module for graphing the output data and/or plotting the area impacted by the air pollutants on maps. The country of origin is included for each of the European models listed below.

AEROPOL (Estonia) – The AERO-POLlution model developed at the Tartu Observatory in Estonia is a Gaussian plume model for simulating the dispersion of continuous, buoyant plumes from stationary point, line and area sources over flat terrain on a local to regional scale. It includes plume depletion by wet and/or dry deposition as well as the effects of buildings in the plume path.

Airviro Gauss (Sweden) – A Gaussian dispersion model developed by SMHI that handles point, road, area and grid sources. Plumes follow trajectories from a wind model, and each plume has a cutoff dependent on wind speed. The model also supports irregular calculation grids.

Airviro Grid (Sweden) – A simplified Eulerian model developed by SMHI. It can handle point, road, area and grid sources and includes dry and wet deposition and sedimentation.

Airviro Heavy Gas (Sweden) – A model for heavy gas dispersion developed by SMHI.

Airviro receptor model (Sweden) – An inverse dispersion model developed by SMHI, used to find emission sources.

ATSTEP (Germany) – A Gaussian puff dispersion and deposition model used in the decision support system RODOS (real-time on-line decision support) for nuclear emergency management. RODOS is operational in Germany at the Federal Office for Radiation Protection (BfS) and test-operational in many other European countries.

AUSTAL2000 (Germany) – The official air dispersion model to be used in the permitting of industrial sources by the German Federal Environmental Agency. The model accommodates point, line, area and volume sources of buoyant plumes. It has capabilities for building effects, complex terrain, plume depletion by wet or dry deposition, and first-order chemical reactions. It is based on the LASAT model developed by Ingenieurbüro Janicke Gesellschaft für Umweltphysik.
BUO-FMI (Finland) – This model was developed by the Finnish Meteorological Institute (FMI) specifically for estimating the atmospheric dispersion of neutral or buoyant plume gases and particles emitted from fires in warehouses and chemical stores. It is a hybrid of a local scale Gaussian plume model and another model type. Plume depletion by dry deposition is included, but wet deposition is not.

CAR-FMI (Finland) – This model was developed by the Finnish Meteorological Institute (FMI) for evaluating atmospheric dispersion and chemical transformation of vehicular emissions of inert (CO, NOx) and reactive (NO, NO2, O3) gases from a road network of line sources on a local scale. It is a Gaussian line source model which includes an analytical solution for the chemical cycle NO–O3–NO2 (a worked sketch of this photostationary cycle follows the list of European models).

CAR-International (The Netherlands) – Calculation of air pollution from road traffic (CAR-International) is an atmospheric dispersion model developed by the Netherlands Organisation for Applied Scientific Research. It is used for simulating the dispersion of vehicular emissions from roadway traffic.

DIPCOT (Greece) – Dispersion over complex terrain (DIPCOT) is a model developed at the National Centre of Scientific Research "DEMOKRITOS" of Greece that simulates dispersion of buoyant plumes from multiple point sources over complex terrain on a local to regional scale. It does not include wet deposition or chemical reactions.

DISPERSION21 (Sweden) – This model was developed by the Swedish Meteorological and Hydrological Institute (SMHI) for evaluating air pollutant emissions from existing or planned industrial or urban sources on a local scale. It is a Gaussian plume model for point, area, line and vehicular traffic sources. It includes plume penetration of inversions aloft, building effects and NOx chemistry, and it can handle street canyons. It does not include wet or dry deposition, complex atmospheric chemistry, or the effects of complex terrain.

DISPLAY-2 (Greece) – A vapour cloud dispersion model for neutral or denser-than-air pollution plumes over irregular, obstructed terrain on a local scale. It accommodates jet releases as well as two-phase (i.e., liquid-vapour mixture) releases. This model was also developed at the National Centre of Scientific Research "DEMOKRITOS" of Greece.

EK100W (Poland) – A Gaussian plume model used for air quality impact assessments of pollutants from industrial point sources as well as for urban air quality studies on a local scale. It includes wet and dry deposition. The effects of complex terrain are not included.

FARM (Italy) – The flexible air quality regional model (FARM) is a multi-grid Eulerian model for dispersion, transformation and deposition of airborne pollutants in gas and aerosol phases, including photo-oxidants, aerosols, heavy metals and other toxics. It is suited for case studies, air quality assessments, scenario analyses and pollutant forecasts.

FLEXPART (Austria/Germany/Norway) – An efficient and flexible Lagrangian particle transport and diffusion model for regional to global applications, with capability for forward and backward modes. It is freely available and was developed at BOKU Vienna, the Technical University of Munich, and NILU.

GRAL (Austria) – The GRAz Lagrangian model was initially developed at the Graz University of Technology. It is a dispersion model for buoyant plumes from multiple point, line, area and tunnel portal sources. It handles flat or complex terrain (mesoscale prognostic flow field model), including building effects (microscale prognostic flow field model), but it has no chemistry capabilities.
The model is freely available at http://lampz.tugraz.at/~gral/

HAVAR (Czech Republic) – A Gaussian plume model integrated with a puff model and a hybrid plume-puff model, developed by the Czech Academy of Sciences and intended for routine and/or accidental releases of radionuclides from single point sources within nuclear power plants. The model includes radioactive plume depletion by dry and wet deposition as well as by radioactive decay. For the decay of some nuclides, the creation of daughter products that then grow into the plume is taken into account.

IFDM (Belgium) – The immission frequency distribution model, developed at the Flemish Institute for Technological Research (VITO), is a Gaussian dispersion model used for point and area sources dispersing over flat terrain on a local scale. The model includes plume depletion by dry or wet deposition and has been updated to handle building effects and O3-NOx chemistry. It is not designed for complex terrain or other chemically reactive pollutants.

INPUFF-U (Romania) – This model was developed by the National Institute of Meteorology and Hydrology in Bucharest, Romania. It is a Gaussian puff model for calculating the dispersion of radionuclides from passive emission plumes on a local to urban scale. It can simulate accidental or continuous releases from stationary or mobile point sources. It includes wet and dry deposition. Building effects, buoyancy effects, chemical reactions and the effects of complex terrain are not included.

LAPMOD (Italy) – The LAPMOD (LAgrangian Particle MODel) modeling system, developed by Enviroware, is available for free. LAPMOD is a Lagrangian particle model fully coupled to the diagnostic meteorological model CALMET. It includes dry and wet deposition algorithms and advanced numerical schemes for plume rise (Janicke and Janicke; Webster and Thomson). It can simulate inert pollutants, odors and radioactive substances, and it is part of ARIES, the official Italian modeling system for nuclear emergencies operated by ISPRA and by the regional environmental protection agency of Emilia-Romagna, Italy.

LOTOS-EUROS (The Netherlands) – The long term ozone simulation – European operational smog (LOTOS-EUROS) model was developed by the Netherlands Organisation for Applied Scientific Research (TNO) and the Netherlands National Institute for Public Health and the Environment (RIVM). It is designed for modelling the dispersion of pollutants (such as photo-oxidants, aerosols and heavy metals) over all of Europe. It includes simple reaction chemistry as well as wet and dry deposition.

MATCH (Sweden) – Multi-scale atmospheric transport and chemistry (MATCH) is a three-dimensional Eulerian model suitable for scales from urban to global.

MEMO (Greece) – A Eulerian non-hydrostatic prognostic mesoscale model for wind flow simulation. It was developed by the Aristotle University of Thessaloniki in collaboration with the Universität Karlsruhe. This model is designed for describing atmospheric transport phenomena on the local-to-regional scale, often referred to as mesoscale air pollution modelling.

MERCURE (France) – An atmospheric dispersion modeling CFD code developed by Electricité de France (EDF) and distributed by ARIA Technologies, a French company. The code is a version of the CFD software ESTET, developed by EDF's Laboratoire National d'Hydraulique.
MODIM (Slovak Republic) – A model for calculating the dispersion of continuous, neutral or buoyant plumes on a local to regional scale. It integrates a Gaussian plume model for single or multiple point and area sources with a numerical model for line sources, street networks and street canyons. It is intended for regulatory and planning purposes.

MSS (France) – Micro-swift-spray (MSS) is a Lagrangian particle model used to predict the transport and dispersion of contaminants in urban environments. The SWIFT portion of the model predicts a mass-consistent wind field that considers terrain; no-penetration conditions for building boundaries; Röckle zones for recirculation, edge, and rooftop separation; and background and locally generated turbulence. The SPRAY portion handles the dispersion of passive gases, dense gases, and particulates. SPRAY also accounts for plume buoyancy effects and wet and dry deposition, and calculates microscale pressure fields for integration with building models. The MSS development team is found at ARIA Technologies (France), and U.S. integration activities are led by Leidos. Validation testing of MSS has been done in conjunction with JEM and HPAC tool releases, and the model is coupled with SCIPUFF/UDM to create a nested dispersion capability inside HPAC. For more information on MSS see http://www.aria.fr.

MUSE (Greece) – A photochemical atmospheric dispersion model developed by Professor Nicolas Moussiopoulos at the Aristotle University of Thessaloniki in Greece. It is intended for the study of photochemical smog formation in urban areas and assessment of control strategies on a local to regional scale. It can simulate dry deposition, and transformation of pollutants can be treated using any suitable chemical reaction mechanism.

OML (Denmark) – A model for dispersion calculations of continuous, neutral or buoyant plumes from single or multiple stationary point and area sources. It has some simple methods for handling photochemistry (primarily for NO2) and for handling complex terrain. The model was developed by the National Environmental Research Institute of Denmark and is now maintained by the Department of Environmental Science, Aarhus University.

ONM9440 (Austria) – A Gaussian dispersion model for continuous, buoyant plumes from stationary sources for use in flat terrain areas. It includes plume depletion by dry deposition of solid particulates.

OSPM (Denmark) – The operational street pollution model (OSPM) is a practical street pollution model, developed by the National Environmental Research Institute of Denmark and now maintained by the Department of Environmental Science, Aarhus University. For almost 20 years, OSPM has been routinely used in many countries for studying traffic pollution, performing analyses of field campaign measurements, studying the efficiency of pollution abatement strategies, carrying out exposure assessments, and as a reference in comparisons with other models. OSPM is generally considered state of the art in applied street pollution modelling.

PANACHE (France) – Fluidyn-PANACHE is a self-contained, fully 3D fluid dynamics software package designed to simulate accidental or continuous industrial and urban pollutant dispersion into the atmosphere. It simulates the release and dispersion of toxic and flammable pollutants in various weather conditions within calculated 3D complex wind and turbulence fields.
Gas, particle and droplet induced flow and transport/diffusion are simulated with the Navier–Stokes equations for jet-like, dense, cold, cryogenic or hot, buoyant releases. The application covers the very short scale (tens of meters) and the local scale (ten kilometers), where the complex flow pattern related to obstacles, variable land use and topography is calculated explicitly.

PROKAS-V (Germany) – A Gaussian dispersion model for evaluating the atmospheric dispersion of air pollutants emitted from vehicular traffic on a road network of line sources on a local scale.

PLUME (Bulgaria) – A conventional Gaussian plume model used in many regulatory applications. The basis of the model is a single simple formula which assumes constant wind speed and reflection from the ground surface. The horizontal and vertical dispersion parameters are a function of downwind distance and stability. The model was developed for routine applications in air quality assessment, regulatory purposes and policy support.

POLGRAPH (Portugal) – This model was developed at the University of Aveiro, Portugal by Professor Carlos Borrego. It was designed for evaluating the impact of industrial pollutant releases and for air quality assessments. It is a Gaussian plume dispersion model for continuous, elevated point sources to be used on a local scale over flat or gently rolling terrain.

RADM (France) – The random-walk advection and dispersion model (RADM) was developed by ACRI-ST, an independent research and development organization in France. It can model gas plumes and particles (including pollutants with exponential decay or formation rates) from single or multiple stationary, mobile or area sources. Chemical reaction, radioactive decay, deposition, complex terrain, and inversion conditions are accommodated.

RIMPUFF (Denmark) – A local and regional scale real-time puff diffusion model developed by Risø National Laboratory for Sustainable Energy, Technical University of Denmark (Risø DTU). RIMPUFF is an operational emergency response model used to assist emergency management organisations dealing with chemical, nuclear, biological and radiological (CBRN) releases to the atmosphere. RIMPUFF is in operation in several European national emergency centres for preparedness and prediction of nuclear accidental releases (RODOS, EURANOS, ARGOS) and chemical gas releases (ARGOS), and it also serves as a decision support tool during active combatting of airborne transmission of various biological infections, including, for example, foot-and-mouth disease outbreaks.

SAFE AIR II (Italy) – The simulation of air pollution from emissions II (SAFE AIR II) model was developed at the Department of Physics, University of Genoa, Italy to simulate the dispersion of air pollutants above complex terrain at local and regional scales. It can handle point, line, area and volume sources and continuous plumes as well as puffs. It includes first-order chemical reactions and plume depletion by wet and dry deposition, but it does not include any photochemistry.

SEVEX (Belgium) – The Seveso expert model simulates the accidental release of toxic and/or flammable material over flat or complex terrain from multiple pipe and vessel sources or from evaporation of volatile liquid spill pools. The accidental releases may be continuous, transient or catastrophic. The integrated model can handle denser-than-air gases as well as neutral gases (i.e., gases neither denser nor lighter than air).
It does not include handling of multi-component material, nor does it provide for chemical transformation of the releases. The model's name is derived from the major disaster caused by the accidental release of highly toxic gases that occurred in Seveso, Italy in 1976.

SNAP (Norway) – The severe nuclear accident programme (SNAP) model is a Lagrangian atmospheric dispersion model specialized in modelling the dispersion of radioactive debris.

SPRAY (Italy, France) – A Lagrangian particle dispersion model (LPDM) which simulates the transport, dispersion and deposition of pollutants emitted from sources of different kinds over complex terrain and in the presence of obstacles. The model readily accounts for complex situations, such as the presence of breeze cycles, strong meteorological inhomogeneities, and non-stationary, low-wind calm conditions and recirculations. Simulations can cover areas ranging from very local (less than one kilometre) to regional (hundreds of kilometres) scales. Plume rise of hot emissions from stacks is taken into account using a Briggs formulation. Algorithms for particle-oriented dry/wet deposition processes and for gravitational settling are included. Dry deposition can be computed on the ground and also on the roofs and lateral faces of obstacles. Dispersion under generalized geometries such as arches, tunnels and walkways can be simulated. Dense gas dispersion is simulated using five conservation equations (mass, energy, vertical momentum and two horizontal momenta) based on Glandening et al. (1984) and Hurley and Manins (1995). Plume spread at the ground due to gravity is also simulated, by a method (Anfossi et al., 2009) based on Eidsvik (1980).

STACKS (The Netherlands) – A Gaussian plume dispersion model for point and area buoyant plumes to be used over flat terrain on a local scale. It includes building effects, NO2 chemistry and plume depletion by deposition. It is used for environmental impact studies and evaluation of emission reduction strategies.

STOER.LAG (Germany) – A dispersion model designed to evaluate accidental releases of hazardous and/or flammable materials from point or area sources in industrial plants. It can handle neutral and denser-than-air gases or aerosols from ground-level or elevated sources. The model accommodates building and terrain effects, evaporation of volatile liquid spill pools, and combustion or explosion of flammable gas-air mixtures (including the impact of heat and pressure waves caused by a fire or explosion).

SYMOS'97 (Czech Republic) – A model developed by the Czech Hydrometeorological Institute for dispersion calculations of continuous, neutral or buoyant plumes from single or multiple point, area or line sources. It can handle complex terrain and can also be used to simulate the dispersion of cooling tower plumes.

TCAM (Italy) – A multiphase, three-dimensional Eulerian grid model designed by the ESMA group of the University of Brescia for modelling the dispersion of pollutants (in particular photochemical pollutants and aerosols) at the mesoscale.

UDM-FMI (Finland) – This model was developed by the Finnish Meteorological Institute (FMI) as an integrated Gaussian urban scale model intended for regulatory pollution control. It handles multiple point, line, area and volume sources, and it includes chemical transformation (for NO2), wet and dry deposition (for SO2), and downwash phenomena (but no building effects).

VANADIS (Poland) – A three-dimensional, unsteady-state Eulerian dispersion model.
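The NO–O3–NO2 photostationary cycle that CAR-FMI solves analytically (see that entry above) balances NO2 photolysis, NO2 + hv -> NO + O (followed by rapid O3 formation), against the titration reaction NO + O3 -> NO2 + O2. Setting production equal to loss gives the classic photostationary-state (Leighton) relationship. A minimal worked sketch; the rate constants below are typical order-of-magnitude daytime values chosen for illustration, not values from CAR-FMI:

def photostationary_o3(no_ppb, no2_ppb, j_no2=8.0e-3, k=4.4e-4):
    """Photostationary-state ozone estimate from the NO-O3-NO2 cycle.

    Steady state of NO2 + hv -> NO + O (photolysis rate j_no2, 1/s)
    against NO + O3 -> NO2 + O2 (rate k, 1/(ppb*s)) gives
        [O3] = j_no2 * [NO2] / (k * [NO]).
    j_no2 and k here are illustrative daytime magnitudes.
    """
    return j_no2 * no2_ppb / (k * no_ppb)

# Roadside example: 40 ppb NO and 30 ppb NO2 imply roughly 14 ppb of O3,
# showing how fresh NO emissions depress ozone near the road:
print(photostationary_o3(no_ppb=40.0, no2_ppb=30.0), "ppb O3")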
Models developed in Australia

AUSPLUME – A dispersion model that was designated as the primary model accepted by the Environmental Protection Authority (EPA) of the Australian state of Victoria. (Update: AUSPLUME V6 ceased to be the air pollution dispersion regulatory model in Victoria on 1 January 2014; from that date the regulatory model in Victoria is AERMOD.)

pDsAUSMOD – An Australian graphical user interface for AERMOD.

pDsAUSMET – An Australian meteorological data processor for AERMOD.

LADM – An advanced model developed by Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) for simulating the dispersion of buoyant pollution plumes and predicting the photochemical formation of smog over complex terrain on a local to regional scale. The model can also handle fumigated plumes (see the books listed below in the "Further reading" section for an explanation of fumigated plumes).

TAPM – An advanced dispersion model integrated with a pre-processor for providing meteorological data inputs. It can handle multiple pollutants and point, line, area and volume sources on a local, city or regional scale. The model's capabilities include building effects, plume depletion by deposition, and a photochemistry module. This model was also developed by Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO).

DISPMOD – A Gaussian atmospheric dispersion model for point sources located in coastal regions. It was designed specifically by the Western Australian Department of Environment to simulate the plume fumigation that occurs when an elevated onshore pollution plume intersects a growing thermal internal boundary layer (TIBL) contained within offshore air flow coming onshore.

AUSPUFF – A Gaussian puff model designed for regulatory use by CSIRO. It includes some simple algorithms for the chemical transformation of reactive air pollutants.

Models developed in Canada

MLCD – Modèle lagrangien à courte distance (MLCD) is a Lagrangian particle dispersion model (LPDM) developed jointly by Environment Canada's Canadian Meteorological Centre (CMC) and the Department of Earth and Atmospheric Sciences of the University of Alberta. This atmospheric dispersion and deposition model is designed to estimate air concentrations and surface deposition of pollutants for very short range emergency problems (less than ~10 km from the source).

MLDPn – Modèle lagrangien de dispersion de particules d'ordre n (MLDPn) is a Lagrangian particle dispersion model (LPDM) developed by Environment Canada's Canadian Meteorological Centre (CMC). This atmospheric and aquatic transport and dispersion model is designed to estimate air and water concentrations and ground deposition of pollutants for various emergency response problems at different scales (local to global). It is used to forecast and track volcanic ash, radioactive material, forest fire smoke, and chemical hazardous substances, as well as oil slicks.

Trajectory – The trajectory model, developed by Environment Canada's Canadian Meteorological Centre (CMC), is a simple tool designed to calculate the trajectory of a few air parcels moving in the 3D wind field of the atmosphere. The model provides a quick estimate of the expected trajectory of an air parcel moved by the advection transport mechanism, originating from (forward trajectory) or arriving at (backward trajectory) a specified geographical location and vertical level; the sketch below illustrates the idea.
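A forward (or backward) trajectory of the kind the CMC trajectory model computes, and that HYSPLIT computes in its simplest mode, is just the numerical integration of parcel position through a wind field, dx/dt = u(x, t). A minimal sketch, assuming a synthetic analytic wind function in place of the gridded NWP winds the real models interpolate:

import numpy as np

def wind(x, y, t):
    """Synthetic horizontal wind (m/s); real models interpolate NWP grids."""
    u = 8.0 + 2.0 * np.sin(2 * np.pi * t / 86400.0)   # westerly with a daily cycle
    v = 1.5 * np.cos(x / 2.0e5)                       # weak meandering component
    return u, v

def trajectory(x0, y0, hours=24.0, dt=600.0, backward=False):
    """Integrate dx/dt = wind(x, t) with Heun's method (predictor-corrector)."""
    sign = -1.0 if backward else 1.0
    x, y, t = x0, y0, 0.0
    path = [(x, y)]
    for _ in range(int(hours * 3600 / dt)):
        u1, v1 = wind(x, y, t)
        xp, yp = x + sign * u1 * dt, y + sign * v1 * dt   # predictor step
        u2, v2 = wind(xp, yp, t + dt)
        x += sign * 0.5 * (u1 + u2) * dt                  # corrector step
        y += sign * 0.5 * (v1 + v2) * dt
        t += dt
        path.append((x, y))
    return path

print(trajectory(0.0, 0.0)[-1])   # parcel position (m) after 24 hours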
Models developed in India

HAMS-GPS – Software used for management of environment, health and safety (EHS). It can be used for training as well as research involving dispersion modeling, accident analysis, fires, explosions, risk assessments and other related subjects.

See also

Air pollution dispersion terminology
Atmospheric dispersion modeling
Bibliography of atmospheric dispersion modeling
Roadway air dispersion modeling
Wind profile power law

Further reading

For those who would like to learn more about atmospheric dispersion models, it is suggested that either one of the following books be read: www.crcpress.com

External links

Air Quality Modeling – From the website of Stuff in the Air
The Model Documentation System (MDS) of the European Topic Centre on Air and Climate Change (part of the European Environment Agency)
USA EPA Preferred/Recommended Models, Alternative Models, Screening Models and Photochemical Models
Wiki on Atmospheric Dispersion Modelling – addresses the international community of atmospheric dispersion modellers (primarily researchers, but also users of models); its purpose is to pool the experience gained by dispersion modellers during their work
The ADMS models and the GASTAR model
The AUSPLUME model
The CHARM model
Fluidyn-PANACHE: 3D computational fluid dynamics (CFD) model for dispersion analysis
The HAMS-GPS software
The LADM, DISPMOD and AUSPUFF models
The LAPMOD model
The NAME model
The RIMPUFF model
The SPRAY model
The TAPM model
Validation of the Urban Dispersion Model (UDM)
Rubidium-82 chloride
Rubidium-82 chloride is a form of rubidium chloride containing a radioactive isotope of rubidium. It is marketed under the brand name Cardiogen-82 by Bracco Diagnostics for use in myocardial perfusion imaging. It is rapidly taken up by heart muscle cells, and therefore can be used to identify regions of heart muscle that are receiving poor blood flow, in a technique called PET perfusion imaging. The half-life of rubidium-82 is only 1.27 minutes; it is therefore normally produced at the place of use by rubidium generators.
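The 1.27-minute half-life means the activity delivered to the patient falls off very quickly, which is why on-site generator production is needed. A short worked sketch of the exponential decay (the timing values are arbitrary illustrative choices):

import math

def remaining_fraction(t_min, half_life_min=1.27):
    """Fraction of Rb-82 activity remaining after t_min minutes."""
    return 2.0 ** (-t_min / half_life_min)

# Five minutes after elution the activity has fallen to about 6.5 percent:
for t in (1.27, 5.0, 10.0):
    print(f"after {t:5.2f} min: {remaining_fraction(t):.4f} of initial activity")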
Blancmange curve
In mathematics, the blancmange curve is a self-affine fractal curve constructible by midpoint subdivision. It is also known as the Takagi curve, after Teiji Takagi who described it in 1901, or as the Takagi–Landsberg curve, a generalization of the curve named after Takagi and Georg Landsberg. The name blancmange comes from its resemblance to a blancmange pudding. It is a special case of the more general de Rham curve.

Definition

The blancmange function is defined on the unit interval by

$$\operatorname{blanc}(x) = \sum_{n=0}^{\infty} \frac{s(2^n x)}{2^n},$$

where $s$ is the triangle wave, defined by $s(x) = \min_{n \in \mathbb{Z}} |x - n|$; that is, $s(x)$ is the distance from $x$ to the nearest integer.

The Takagi–Landsberg curve is a slight generalization, given by

$$T_w(x) = \sum_{n=0}^{\infty} w^n s(2^n x)$$

for a parameter $w$; thus the blancmange curve is the case $w = 1/2$. The associated value $H$ is known as the Hurst parameter.

The function can be extended to all of the real line: applying the definition given above shows that the function repeats on each unit interval.

Functional equation definition

The periodic version of the Takagi curve can also be defined as the unique bounded solution $T = T_w$ of the functional equation

$$T(x) = s(x) + w\,T(2x).$$

Indeed, the blancmange function $T_w$ is certainly bounded, and it solves the functional equation, since

$$T_w(x) = \sum_{n=0}^{\infty} w^n s(2^n x) = s(x) + w \sum_{n=0}^{\infty} w^n s\big(2^{n+1} x\big) = s(x) + w\,T_w(2x).$$

Conversely, if $T$ is a bounded solution of the functional equation, iterating the equality one has, for any $N$,

$$T(x) = \sum_{n=0}^{N-1} w^n s(2^n x) + w^N T\big(2^N x\big),$$

whence $T = T_w$. Incidentally, the above functional equation possesses infinitely many continuous, non-bounded solutions, obtained by adding to $T_w$ any continuous solution of $g(x) = w\,g(2x)$, for example multiples of $|x|^{\log_2(1/w)}$.

Graphical construction

The blancmange curve can be visually built up out of triangle wave functions if the infinite sum is approximated by finite sums of the first few terms. In the illustrations below, progressively finer triangle functions (shown in red) are added to the curve at each stage.

Properties

Convergence and continuity

The infinite sum defining $T_w(x)$ converges absolutely for all $x$: since $0 \le s(x) \le 1/2$ for all $x$, we have

$$\sum_{n=0}^{\infty} \left| w^n s(2^n x) \right| \le \frac{1}{2(1 - |w|)} \quad \text{if } |w| < 1.$$

Therefore the Takagi curve of parameter $w$ is defined on the unit interval (and indeed on all of $\mathbb{R}$) if $|w| < 1$.

The Takagi function of parameter $w$ is continuous. Indeed, the functions $T_w^{(n)}$ defined by the partial sums $T_w^{(n)}(x) = \sum_{k=0}^{n} w^k s(2^k x)$ are continuous and converge uniformly toward $T_w$, since

$$\left| T_w(x) - T_w^{(n)}(x) \right| \le \frac{|w|^{n+1}}{2(1 - |w|)} \quad \text{for all } x,$$

and this bound decreases as $n \to \infty$. By the uniform limit theorem, $T_w$ is continuous if $|w| < 1$.

Subadditivity

Since the absolute value is a subadditive function, so is the function $s(x)$, and so are its dilations $s(2^k x)$; since positive linear combinations and point-wise limits of subadditive functions are subadditive, the Takagi function is subadditive for any non-negative value of the parameter $w$.

The special case of the parabola

For $w = 1/4$, one obtains the parabola $T_{1/4}(x) = 2x(1-x)$: the construction of the parabola by midpoint subdivision was described by Archimedes.

Differentiability

For values of the parameter $0 < w < 1/2$, the Takagi function $T_w$ is differentiable in the classical sense at any $x$ which is not a dyadic rational. By derivation under the sign of series, for any non-dyadic rational $x$ one finds

$$T_w'(x) = \sum_{n=0}^{\infty} (2w)^n \left(1 - 2 b_{n+1}\right),$$

where $(b_n)_{n \ge 1}$ is the sequence of binary digits in the base-2 expansion of $x$, that is, $x = \sum_{n \ge 1} 2^{-n} b_n$. Equivalently, the bits in the binary expansion can be understood as a sequence of square waves, the Haar wavelets, scaled to width $2^{-n}$. This follows since the derivative of the triangle wave is just the square wave,

$$s'(x) = \begin{cases} +1 & \text{if } 0 \le \{x\} < 1/2, \\ -1 & \text{if } 1/2 < \{x\} < 1, \end{cases}$$

so that differentiating each term of the series gives $(2w)^n s'(2^n x)$.

For the parameter $0 < w < 1/2$, the function $T_w$ is Lipschitz of constant $\tfrac{1}{1-2w}$. In particular, for the special value $w = 1/4$ one finds, for any non-dyadic rational $x \in [0,1]$,

$$T_{1/4}'(x) = 2 - 4x,$$

in accordance with the mentioned $T_{1/4}(x) = 2x(1-x)$.

For $w = 1/2$, the blancmange function, the curve is of bounded variation on no non-empty open set; it is not even locally Lipschitz, but it is quasi-Lipschitz: indeed, it admits the function $\omega(t) = t \log_2(1/t)$ as a modulus of continuity.

Fourier series expansion

The Takagi–Landsberg function admits an absolutely convergent Fourier series expansion. Indeed, the triangle wave has the absolutely convergent Fourier series expansion

$$s(x) = \frac{1}{4} - \frac{2}{\pi^2} \sum_{k=0}^{\infty} \frac{\cos\big(2\pi (2k+1) x\big)}{(2k+1)^2}.$$

By absolute convergence, one can substitute this expansion into the series defining $T_w$ and reorder the corresponding double series, which yields an absolutely convergent Fourier series for $T_w$; the coefficient at frequency $n \ge 1$ depends on $\nu(n)$, where $2^{\nu(n)}$ is the maximum power of $2$ that divides $n$.

Self similarity

The recursive definition allows the monoid of self-symmetries of the curve to be given. This monoid is given by two generators, $g$ and $r$, which act on the curve (restricted to the unit interval) as

$$[g \cdot T_w](x) = T_w\!\left(\tfrac{x}{2}\right) = \tfrac{x}{2} + w\,T_w(x)$$

and

$$[r \cdot T_w](x) = T_w(1 - x) = T_w(x).$$

A general element of the monoid has the form $\gamma = g^{a_1} r\, g^{a_2} r \cdots g^{a_n}$ for some integers $a_1, a_2, \ldots, a_n$, and acts on the curve as a linear function, $\gamma \cdot T_w = a + b x + c\,T_w$, for some constants $a$, $b$ and $c$. Because the action is linear, it can be described in terms of a vector space with the basis $\{1,\, x,\, T_w(x)\}$, on which $g$ and $r$ act as $3 \times 3$ matrices. The action of a general element $\gamma$ maps the blancmange curve on the unit interval $[0,1]$ to a sub-interval $[m/2^p,\, n/2^p]$ for some integers $m$, $n$, $p$, and the values of $a$, $b$ and $c$ can be obtained directly by multiplying out the corresponding matrices.

The monoid generated by $g$ and $r$ is sometimes called the dyadic monoid; it is a sub-monoid of the modular group. When discussing the modular group, the more common notation for $g$ and $r$ is $T$ and $S$, but that notation conflicts with the symbols used here. The above three-dimensional representation is just one of many representations it can have; it shows that the blancmange curve is one possible realization of the action. That is, there are representations for any dimension, not just 3; some of these give the de Rham curves.

Integrating the blancmange curve

The integral of $\operatorname{blanc}(x)$ from 0 to 1 is 1/2, and the self-similarity identities above allow the integral over any dyadic interval to be computed by a recursion whose computing time is on the order of the logarithm of the accuracy required. Defining

$$I(x) = \int_0^x \operatorname{blanc}(y)\,dy,$$

one has $I(1) = 1/2$, and the functional equation yields, for $0 \le x \le 1$,

$$I\!\left(\tfrac{x}{2}\right) = \frac{x^2}{8} + \frac{1}{4} I(x).$$

This integral is also self-similar on the unit interval, under an action of the dyadic monoid described in the section Self similarity; here the representation is 4-dimensional, having the basis $\{1,\, x,\, x^2,\, I(x)\}$. Repeated integrals transform under 5-, 6-, ... dimensional representations.

Relation to simplicial complexes

Let $N = \binom{n_t}{t} + \binom{n_{t-1}}{t-1} + \cdots + \binom{n_j}{j}$, with $n_t > n_{t-1} > \cdots > n_j \ge j \ge 1$. Define the Kruskal–Katona function

$$\kappa_t(N) = \binom{n_t}{t+1} + \binom{n_{t-1}}{t} + \cdots + \binom{n_j}{j+1}.$$

The Kruskal–Katona theorem states that this is the minimum number of $(t-1)$-simplexes that are faces of a set of $N$ $t$-simplexes. As $t$ and $N$ approach infinity, $\kappa_t(N) - N$ (suitably normalized) approaches the blancmange curve.

See also

Cantor function (also known as the Devil's staircase)
Minkowski's question mark function
Weierstrass function
Dyadic transformation

External links

Takagi Explorer (some properties of the Takagi function)
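A direct way to see the uniform convergence described above is to evaluate the truncated series numerically. A minimal sketch; the truncation depth of 30 terms is an arbitrary choice that already gives roughly 9 decimal digits for w = 1/2:

def takagi(x, w=0.5, terms=30):
    """Partial sum of the Takagi-Landsberg series T_w(x)."""
    total, scale = 0.0, 1.0
    for n in range(terms):
        y = (2.0**n * x) % 1.0
        total += scale * min(y, 1.0 - y)   # s(2^n x): distance to nearest integer
        scale *= w
    return total

print(takagi(1/3))          # blancmange value at x = 1/3 (equals 2/3)
print(takagi(0.25, w=0.25)) # parabola case: 2 * 0.25 * 0.75 = 0.375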
Vertical handover
Vertical handover or vertical handoff refers to a network node changing the type of connectivity it uses to access a supporting infrastructure, usually to support node mobility. For example, a suitably equipped laptop might be able to use both high-speed wireless LAN and cellular technology for Internet access. Wireless LAN connections generally provide higher speeds, while cellular technologies generally provide more ubiquitous coverage. Thus the laptop user might want to use a wireless LAN connection whenever one is available and to revert to a cellular connection when the wireless LAN is unavailable. Vertical handovers refer to the automatic transition from one technology to another in order to maintain communication. This is different from a horizontal handover between different wireless access points that use the same technology.

Vertical handoffs between WLAN and UMTS (WCDMA) have attracted a great deal of attention in all the research areas of the 4G wireless network, due to the benefit of utilizing the higher bandwidth and lower cost of WLAN as well as the better mobility support and larger coverage of UMTS. Vertical handovers among a range of wired and wireless access technologies, including WiMAX, can be achieved using media-independent handover, which is standardized as IEEE 802.21.

Related issues

Dual mode card

To support vertical handover, a mobile terminal needs to have a dual mode card, for example one that can work under both WLAN and UMTS frequency bands and modulation schemes.

Interworking architecture

For vertical handover between UMTS and WLAN, there are two main interworking architectures: tight coupling and loose coupling. The tight coupling scheme, which 3GPP adopted, introduces two additional elements: the WAG (wireless access gateway) and the PDG (packet data gateway). Data transferred from a WLAN AP to a corresponding node on the Internet must therefore go through the core network of UMTS. Loose coupling is more often used when the WLAN is operated not by the cellular operator but by a private user, so data transmitted through the WLAN does not go through the cellular network.

Handover metrics

In traditional handovers, such as a handover between cellular networks, the handover decision is based mainly on RSS (received signal strength) in the border region of two cells, and may also be based on call drop rate, etc., for resource management reasons. In vertical handover, the situation is more complex: two different kinds of wireless networks normally have incomparable signal strength metrics, for example WLAN compared to UMTS. Where WLAN and UMTS networks cover an area at the same time, the handover metrics should include RSS, user preference, network conditions, application types, cost, and so on.

Handover decision algorithm

Based on the handover metrics mentioned above, the decision about how and when to switch the interface to which network is made. Many papers have given reasonable flow charts based on better service and lower cost, etc., while others use fuzzy logic, neural networks or MADM (multiple-attribute decision-making) methods to solve the problem; a simple scoring sketch follows the lists below.

Mobility management

When a mobile station transfers a user's session from one network to another, the IP address will change. In order to allow the corresponding node that the MS is communicating with to find it correctly and allow the session to continue, mobility management is used. The mobility management problem can be solved in different layers, such as the application layer, transport layer, IP layer, etc. The most common method is to use SIP (Session Initiation Protocol) and Mobile IP.

Handoff procedure

The handover procedure specifies the control signalling used to perform the handover and is invoked by the handover decision algorithm.

See also

Load balancing (computing)
Media-independent handover
Multihoming
Access network discovery and selection function

Related standards

3GPP TS 23.234 – 3GPP system to WLAN interworking; system description
3GPP TS 23.228 – IP Multimedia Subsystem
3GPP TS 23.237 – IP Multimedia Subsystem (IMS) service continuity; stage 2
IEEE 802.21 – Media independent handover
Mobile IP
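The decision algorithms discussed above range from flow charts to fuzzy logic, but the simplest MADM-style approach is a weighted score per candidate network. A minimal sketch; the metric names, weights, normalized values and hysteresis threshold are illustrative assumptions, not values from any standard or paper:

def handover_score(network, weights):
    """Weighted sum of normalized metrics (each scaled to 0..1, higher is better)."""
    return sum(weights[m] * network[m] for m in weights)

weights = {"rss": 0.4, "bandwidth": 0.3, "cost": 0.2, "preference": 0.1}

wlan = {"rss": 0.55, "bandwidth": 0.9, "cost": 0.9, "preference": 1.0}
umts = {"rss": 0.80, "bandwidth": 0.3, "cost": 0.4, "preference": 0.5}

HYSTERESIS = 0.05   # margin that suppresses ping-pong handovers between near-equal networks
current, candidate = umts, wlan
if handover_score(candidate, weights) > handover_score(current, weights) + HYSTERESIS:
    print("trigger vertical handover to WLAN")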
Famciclovir
Famciclovir is a guanosine analogue antiviral drug used for the treatment of various herpesvirus infections, most commonly for herpes zoster (shingles). It is a prodrug form of penciclovir with improved oral bioavailability. Famciclovir is marketed under the trade name Famvir (Novartis).

Famciclovir was patented in 1983 and approved for medical use in 1994. In 2007, the United States Food and Drug Administration approved the first generic version of famciclovir. Generic tablets are manufactured by TEVA Pharmaceuticals and Mylan Pharmaceuticals.

Medical uses

Famciclovir is indicated for the treatment of herpes zoster (shingles), treatment of herpes simplex virus 2 (genital herpes) and herpes labialis (cold sores) in immunocompetent patients, and for the suppression of recurring episodes of herpes simplex virus 2. It is also indicated for treatment of recurrent episodes of herpes simplex in HIV patients.

Adverse effects

Side effects include mild to extreme stomach upset, headaches, and mild fever.

Herpes

Early treatment

Several studies in humans and mice provide evidence that early treatment with famciclovir soon after the first infection with herpes can significantly lower the chance of future outbreaks. Use of famciclovir in this manner has been shown to reduce the amount of latent virus in the neural ganglia compared to no treatment or treatment with valaciclovir. A review of human subjects treated for five days with famciclovir 250 mg three times daily during their first herpes episode found that only 4.2 percent experienced a recurrence within six months after the first outbreak, roughly a fivefold decrease compared to the 19 percent recurrence in acyclovir-treated patients. Neither drug affected latency if treatment was delayed for several months.
Pole splitting
Pole splitting is a phenomenon exploited in some forms of frequency compensation used in an electronic amplifier. When a capacitor is introduced between the input and output sides of the amplifier with the intention of moving the pole lowest in frequency (usually an input pole) to lower frequencies, pole splitting causes the pole next in frequency (usually an output pole) to move to a higher frequency. This pole movement increases the stability of the amplifier and improves its step response at the cost of decreased speed.

Example of pole splitting

This example shows that introduction of the capacitor referred to as CC in the amplifier of Figure 1 has two results: first it causes the lowest frequency pole of the amplifier to move still lower in frequency, and second, it causes the higher pole to move higher in frequency. The amplifier of Figure 1 has a low frequency pole due to the added input resistance Ri and capacitance Ci, with the time constant Ci ( RA || Ri ). This pole is moved down in frequency by the Miller effect. The amplifier is given a high frequency output pole by addition of the load resistance RL and capacitance CL, with the time constant CL ( Ro || RL ). The upward movement of the high-frequency pole occurs because the Miller-amplified compensation capacitor CC alters the frequency dependence of the output voltage divider.

The first objective, to show the lowest pole moves down in frequency, is established using the same approach as the Miller's theorem article. Following the procedure described in the article on Miller's theorem, the circuit of Figure 1 is transformed to that of Figure 2, which is electrically equivalent to Figure 1. Application of Kirchhoff's current law to the input side of Figure 2 determines the input voltage vi to the ideal op amp as a function of the applied signal voltage vs, namely a single-pole low-pass relation,

  vi / vs = [ Ri / (RA + Ri) ] · 1 / (1 + jω τ1),

which exhibits a roll-off with frequency beginning at f1, where

  f1 = 1 / (2π τ1), with τ1 = (CM + Ci) (RA || Ri),

which introduces the notation τ1 for the time constant of the lowest pole, with CM the Miller capacitance presented at the input. This frequency is lower than the initial low frequency of the amplifier, which for CC = 0 F is 1 / (2π Ci (RA || Ri)).

Turning to the second objective, showing the higher pole moves still higher in frequency, it is necessary to look at the output side of the circuit, which contributes a second factor to the overall gain, and additional frequency dependence. The output of the ideal op amp inside the amplifier is Av vi. Using this relation and applying Kirchhoff's current law to the output side of the circuit determines the load voltage vL as a function of the voltage vi at the input to the ideal op amp, approximately

  vL / vi ≈ Av · [ RL / (RL + Ro) ] · 1 / (1 + jω (CL + CC) (Ro || RL)),

for large Av. This expression is combined with the gain factor found earlier for the input side of the circuit to obtain the overall gain. This gain formula appears to show a simple two-pole response with two time constants. (It also exhibits a zero in the numerator but, assuming the amplifier gain Av is large, this zero is important only at frequencies too high to matter in this discussion, so the numerator can be approximated as unity.) However, although the amplifier does have a two-pole behavior, the two time constants are more complicated than the above expressions suggest, because the Miller capacitance contains a buried frequency dependence that has no importance at low frequencies but has considerable effect at high frequencies. That is, assuming the output R-C product, CL ( Ro || RL ), corresponds to a frequency well above the low frequency pole, the accurate form of the Miller capacitance must be used, rather than the Miller approximation. According to the article on Miller effect, the Miller capacitance is given by

  CM = CC ( 1 − vL / vi ),

where vL / vi is the frequency-dependent gain from the op amp input to the load. (For a positive Miller capacitance, Av is negative.) Upon substitution of this result into the gain expression and collecting terms, the gain is rewritten with a denominator Dω that is a quadratic in jω. Every quadratic has two factors, and this expression looks simpler if it is rewritten as

  Dω = (1 + jω τ1)(1 + jω τ2),

where τ1 and τ2 are combinations of the capacitances and resistances in the formula for Dω. They correspond to the time constants of the two poles of the amplifier. One or the other time constant is the longest; suppose τ1 is the longest time constant, corresponding to the lowest pole, and suppose τ1 >> τ2. (Good step response requires τ1 >> τ2; see Selection of CC below.)

At low frequencies near the lowest pole of this amplifier, ordinarily the linear term in ω is more important than the quadratic term, so the low frequency behavior of Dω is

  Dω ≈ 1 + jω (τ1 + τ2) ≈ 1 + jω τ1,

where now CM is redefined using the Miller approximation as its low-frequency value,

  CM = CC ( 1 − Av RL / (RL + Ro) ),

which is simply the previous Miller capacitance evaluated at low frequencies. On this basis τ1 = (CM + Ci)(RA || Ri) is determined, provided τ1 >> τ2. Because CM is large, the time constant τ1 is much larger than its original value of Ci ( RA || Ri ).

At high frequencies the quadratic term becomes important. Assuming the above result for τ1 is valid, the second time constant, the position of the high frequency pole, is found from the quadratic term in Dω, whose coefficient is the product τ1 τ2. Substituting in this expression the quadratic coefficient along with the estimate for τ1, an estimate for the position of the second pole is found:

  τ2 ≈ [ Ci CL + CC ( Ci + CL ) ] (Ro || RL) / CM,

and because CM is large, it follows that τ2 is reduced in size from its original value CL ( Ro || RL ); that is, the higher pole has moved still higher in frequency because of CC. In short, introduction of capacitor CC moved the low pole lower and the high pole higher, so the term pole splitting seems a good description.

Selection of CC

What value is a good choice for CC? For general purpose use, traditional design (often called dominant-pole or single-pole compensation) requires the amplifier gain to drop at 20 dB/decade from the corner frequency down to 0 dB gain, or even lower. With this design the amplifier is stable and has near-optimal step response even as a unity gain voltage buffer. A more aggressive technique is two-pole compensation.

The way to position f2 to obtain the design is shown in Figure 3. At the lowest pole f1, the Bode gain plot breaks slope to fall at 20 dB/decade. The aim is to maintain the 20 dB/decade slope all the way down to zero dB. Taking the ratio of the desired drop in gain (in dB) of 20 log10 Av to the required change in frequency (on a log frequency scale) of ( log10 f2 − log10 f1 ) = log10 ( f2 / f1 ), the slope of the segment between f1 and f2 is

  20 log10 Av / log10 ( f2 / f1 )   dB per decade of frequency,

which is 20 dB/decade provided f2 = Av f1. If f2 is not this large, the second break in the Bode plot that occurs at the second pole interrupts the plot before the gain drops to 0 dB, with consequent lower stability and degraded step response.

Figure 3 shows that to obtain the correct gain dependence on frequency, the second pole is at least a factor Av higher in frequency than the first pole. The gain is reduced a bit by the voltage dividers at the input and output of the amplifier, so with corrections to Av for the voltage dividers at input and output, the pole-ratio condition for good step response becomes

  f2 ≈ [ Ri / (Ri + RA) ] [ RL / (RL + Ro) ] Av f1.

Using the approximations for the time constants developed above, this condition provides a quadratic equation to determine an appropriate value for CC.

Figure 4 shows an example using this equation. At low values of gain this example amplifier satisfies the pole-ratio condition without compensation (that is, in Figure 4 the compensation capacitor CC is small at low gain), but as gain increases, a compensation capacitance rapidly becomes necessary (that is, in Figure 4 the compensation capacitor CC increases rapidly with gain) because the necessary pole ratio increases with gain. For still larger gain, the necessary CC drops with increasing gain because the Miller amplification of CC, which increases with gain (see the Miller equation), allows a smaller value for CC. (A numerical sketch of the pole movement appears at the end of this article.)

To provide more safety margin for design uncertainties, often Av is increased to two or three times Av on the right side of this equation. See Sansen or Huijsing and the article on step response.

Slew rate

The above is a small-signal analysis. However, when large signals are used, the need to charge and discharge the compensation capacitor adversely affects the amplifier slew rate; in particular, the response to an input ramp signal is limited by the need to charge CC.

See also

Frequency compensation
Miller effect
Common source
Bode plot
Step response
CMOS amplifier

External links

Bode Plots in the Circuit Theory Wikibook
Bode Plots in the Control Systems Wikibook
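The pole movement described above can be checked numerically by forming the two-node nodal equations of a circuit like Figure 1 and finding the roots of the resulting quadratic denominator. A minimal sketch with illustrative component values (assumptions, not values taken from the article's figures):

import numpy as np

# Illustrative small-signal values (assumed, not from Figure 1):
RA, Ri, Ci = 1e6, 1e6, 5e-12      # source resistance, input R and C
Ro, RL, CL = 1e3, 1e4, 50e-12     # output resistance, load R and C
Av = -1e4                          # ideal op-amp voltage gain (inverting)

def pole_frequencies(CC):
    """Return the two pole frequencies (Hz) of the two-node circuit.

    Nodal analysis at the op-amp input (v1) and load (v2) nodes gives a
    2x2 system whose determinant is a quadratic a*s^2 + b*s + c in s;
    its roots are the poles.
    """
    GA, Gi, Go, GL = 1/RA, 1/Ri, 1/Ro, 1/RL
    a = Ci*CL + CC*(Ci + CL)
    b = (GA + Gi)*(CL + CC) + (Go + GL)*(Ci + CC) - Av*Go*CC
    c = (GA + Gi)*(Go + GL)
    return sorted(abs(np.roots([a, b, c])) / (2*np.pi))

print(pole_frequencies(0.0))      # uncompensated poles (about 63 kHz and 3.6 MHz here)
print(pole_frequencies(10e-12))   # with CC = 10 pF the low pole drops and the high pole rises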
CHRNA1
Neuronal acetylcholine receptor subunit alpha-1, also known as nAChRα1, is a protein that in humans is encoded by the CHRNA1 gene. The protein encoded by this gene is a subunit of certain nicotinic acetylcholine receptors (nAChR). The muscle acetylcholine receptor consists of five subunits of four different types: two alpha isoforms and one each of the beta, gamma, and delta subunits. This gene encodes an alpha subunit that plays a role in acetylcholine binding and channel gating. Alternatively spliced transcript variants encoding different isoforms have been identified.

Interactions

CHRNA1 (cholinergic receptor, nicotinic, alpha 1) has been shown to interact with CHRND.

See also

Nicotinic acetylcholine receptor
CHRNA1
[ "Chemistry" ]
188
[ "Neurochemistry", "Ion channels" ]
14,760,036
https://en.wikipedia.org/wiki/KCNA4
Potassium voltage-gated channel subfamily A member 4 also known as Kv1.4 is a protein that in humans is encoded by the KCNA4 gene. It contributes to the cardiac transient outward potassium current (Ito1), the main contributing current to the repolarizing phase 1 of the cardiac action potential. Description Potassium channels represent the most complex class of voltage-gated ion channels from both functional and structural standpoints. Their diverse functions include regulating neurotransmitter release, heart rate, insulin secretion, neuronal excitability, epithelial electrolyte transport, smooth muscle contraction, and cell volume. Four sequence-related potassium channel genes - shaker, shaw, shab, and shal - have been identified in Drosophila, and each has been shown to have human homolog(s). This gene encodes a member of the potassium channel, voltage-gated, shaker-related subfamily. This member contains six membrane-spanning domains with a shaker-type repeat in the fourth segment. It belongs to the A-type potassium current class, the members of which may be important in the regulation of the fast repolarizing phase of action potentials in heart and thus may influence the duration of cardiac action potential. The coding region of this gene is intronless, and the gene is clustered with genes KCNA3 and KCNA10 on chromosome 1 in humans. KCNA4 (Kv1.4) contains a tandem inactivation domain at the N terminus. It is composed of two subdomains. Inactivation domain 1 (ID1, residues 1-38) consists of a flexible N terminus anchored at a 5-turn helix, and is thought to work by occluding the ion pathway, as is the case with a classical ball domain. Inactivation domain 2 (ID2, residues 40-50) is a 2.5 turn helix with a high proportion of hydrophobic residues that probably serves to attach ID1 to the cytoplasmic face of the channel. In this way, it can promote rapid access of ID1 to the receptor site in the open channel. ID1 and ID2 function together to bring about fast inactivation of the Kv1.4 channel, which is important for the role of the channel in short-term plasticity. Interactions KCNA4 has been shown to interact with DLG4, KCNA2 and DLG1. See also Voltage-gated potassium channel Voltage-gated potassium-channel Kv1.4 IRES References Further reading External links Protein families Ion channels
KCNA4
[ "Chemistry", "Biology" ]
531
[ "Protein families", "Neurochemistry", "Ion channels", "Protein classification" ]
14,760,042
https://en.wikipedia.org/wiki/KCNN4
Potassium intermediate/small conductance calcium-activated channel, subfamily N, member 4, also known as KCNN4, is a human gene encoding the KCa3.1 protein. Function The KCa3.1 protein is part of a potentially heterotetrameric voltage-independent potassium channel that is activated by intracellular calcium. Activation is followed by membrane hyperpolarization, which promotes calcium influx. The encoded protein may be part of the predominant calcium-activated potassium channel in T-lymphocytes. This gene is similar to other KCNN family potassium channel genes, but it differs sufficiently that it may represent a new subfamily. History The channel activity was first described in 1958 by György Gárdos in human erythrocytes. The channel is also called the Gárdos channel, after its discoverer. See also SK channel Voltage-gated potassium channel Senicapoc References Further reading Ion channels
KCNN4
[ "Chemistry" ]
192
[ "Neurochemistry", "Ion channels" ]
14,760,584
https://en.wikipedia.org/wiki/FluoroPOSS
FluoroPOSS (Fluorinated Polyhedral Oligomeric Silsesquioxanes) is a synthetic microfiber with very low surface energy, which makes it oil-repellent. Mixed with an ordinary polymer, it forms a material which can be applied to other materials such as metal, glass, plastic, plant fibers or leaves. The application process is called electrospinning. FluoroPOSS has been developed at the Massachusetts Institute of Technology (MIT). See also Wetting External links MIT creates oil-repelling materials Fluid mechanics Plastics additives
FluoroPOSS
[ "Physics", "Engineering" ]
119
[ "Materials stubs", "Materials", "Civil engineering", "Fluid mechanics", "Matter" ]
14,761,030
https://en.wikipedia.org/wiki/HMGB2
High-mobility group protein B2 also known as high-mobility group protein 2 (HMG-2) is a protein that in humans is encoded by the HMGB2 gene. Function This gene encodes a member of the non-histone chromosomal high-mobility group protein family. The proteins of this family are chromatin-associated and ubiquitously distributed in the nucleus of higher eukaryotic cells. In vitro studies have demonstrated that this protein is able to efficiently bend DNA and form DNA circles. These studies suggest a role in facilitating cooperative interactions between cis-acting proteins by promoting DNA flexibility. This protein was also reported to be involved in the final ligation step in DNA end-joining processes of DNA double-strand break repair and V(D)J recombination. References Further reading Loss of HMGB2 (high-mobility group protein box 2) during senescence blunts SASP (senescence-associated secretory phenotype) gene expression by allowing repressive heterochromatin to spread into SASP gene loci. This correlates with incorporation of SASP gene loci into SAHF (senescence-associated heterochromatin foci), which in turn represses SASP gene expression. External links Transcription factors
HMGB2
[ "Chemistry", "Biology" ]
265
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,761,062
https://en.wikipedia.org/wiki/IRF4
Interferon regulatory factor 4 (IRF4) also known as MUM1 is a protein that in humans is encoded by the IRF4 gene. IRF4 functions as a key regulatory transcription factor in the development of human immune cells. The expression of IRF4 is essential for the differentiation of T lymphocytes and B lymphocytes as well as certain myeloid cells. Dysregulation of the IRF4 gene can result in IRF4 functioning either as an oncogene or a tumor-suppressor, depending on the context of the modification. The MUM1 symbol is also the current HGNC official symbol for melanoma associated antigen (mutated) 1 (HGNC:29641). Immune cell development IRF4 is a transcription factor belonging to the Interferon Regulatory Factor (IRF) family of transcription factors. In contrast to some other IRF family members, IRF4 expression is not initiated by interferons; rather, IRF4 expression is promoted by a variety of bioactive stimuli, including antigen receptor engagement, lipopolysaccharide (LPS), IL-4, and CD40. IRF4 can function either as an activating or an inhibitory transcription factor depending on its transcription cofactors. IRF4 frequently cooperates with the cofactors B-cell lymphoma 6 protein (BCL6) and nuclear factor of activated T-cells (NFATs). IRF4 expression is limited to cells of the immune system, in particular T cells, B cells, macrophages and dendritic cells. T cell differentiation IRF4 plays an important role in the regulation of T cell differentiation. In particular, IRF4 ensures the differentiation of CD4+ T helper cells into distinct subsets. IRF4 is essential for the development of Th2 cells and Th17 cells. IRF4 regulates this differentiation via apoptosis and cytokine production, which can change depending on the stage of T cell development. For example, IRF4 limits production of Th2-associated cytokines in naïve T cells while it upregulates the production of Th2 cytokines in effector and memory T cells. While not essential, IRF4 is also believed to play a role in CD8+ cytotoxic T cell differentiation through its regulation of factors directly involved in this process, including BLIMP-1, BATF, T-bet, and RORγt. IRF4 is necessary for the effector function of T regulatory cells due to its role as a regulatory factor for BLIMP-1. B cell differentiation IRF4 is an essential regulatory component at various stages of B cell development. In early B cell development, IRF4 functions alongside IRF8 to induce the expression of the Ikaros and Aiolos transcription factors, which decrease expression of the pre-B-cell receptor. IRF4 then regulates the secondary rearrangement of κ and λ chains, making IRF4 essential for the continued development of the BCR. IRF4 also occupies an essential position in the adaptive immune response of mature B cells. When IRF4 is absent, mature B cells fail to form germinal centers (GCs) and proliferate excessively in both the spleen and lymph nodes. IRF4 expression commences GC formation through its upregulation of the transcription factors BCL6 and POU2AF1, which promote germinal center formation. IRF4 expression decreases in B cells once the germinal center forms, since IRF4 expression is not necessary for sustained GC function; however, IRF4 expression increases significantly when B cells prepare to leave the germinal center to form plasma cells. Long-lived plasma cells Long-lived plasma cells are memory B cells that secrete high-affinity antibodies and help preserve immunological memory to specific antigens. 
IRF4 plays a significant role at multiple stages of long-lived plasma cell differentiation. The effects of IRF4 expression are heavily dependent on the quantity of IRF4 present. A limited presence of IRF4 activates BCL6, which is essential for the formation of germinal centers, from which plasma cells differentiate. In contrast, elevated expression of IRF4 represses BCL6 expression and upregulates BLIMP-1 and Zbtb20 expression. This response, dependent on a high dose of IRF4, helps initiate the differentiation of germinal center B cells into plasma cells. IRF4 expression is necessary for isotype class switch recombination in germinal center B cells that will become plasma cells. B cells that lack IRF4 fail to undergo immunoglobulin class switching. Without IRF4, B cells fail to upregulate the AID enzyme, a component necessary for inducing mutations in immunoglobulin switch regions of B cell DNA during somatic hypermutation. In the absence of IRF4, B cells will not differentiate into Ig-secreting plasma cells. IRF4 expression continues to be necessary for long-lived plasma cells once differentiation has occurred. In the absence of IRF4, long-lived plasma cells disappear, suggesting that IRF4 plays a role in regulating molecules essential for the continued survival of these cells. Myeloid cell differentiation Among myeloid cells, IRF4 expression has been identified in dendritic cells (DCs) and macrophages. Dendritic cells (DCs) The transcription factors IRF4 and IRF8 work in concert to achieve DC differentiation. IRF4 expression is responsible for inducing development of CD4+ DCs, while IRF8 expression is necessary for the development of CD8+ DCs. Expression of either IRF4 or IRF8 can result in CD4-/CD8- DCs. Differentiation of DC subtypes also depends on IRF4's interaction with the growth factor GM-CSF. IRF4 expression is necessary for ensuring that monocyte-derived dendritic cells (Mo-DCs) can cross-present antigen to CD8+ cells. Macrophages IRF4 and IRF8 are also significant transcription factors in the differentiation of common myeloid progenitors (CMPs) into macrophages. IRF4 is expressed at a lower level than IRF8 in these progenitor cells; however, IRF4 expression appears to be particularly important for the development of M2 macrophages. JMJD3, which regulates IRF4, has been identified as an important regulator of M2 macrophage polarization, suggesting that IRF4 may also take part in this regulatory process. Clinical significance In melanocytic cells the IRF4 gene may be regulated by MITF. IRF4 is a transcription factor that has been implicated in acute leukemia. This gene is strongly associated with pigmentation: sensitivity of skin to sun exposure, freckles, blue eyes, and brown hair color. A variant has been implicated in greying of hair. The World Health Organization (2016) provisionally defined large B-cell lymphoma with IRF4 rearrangement as a rare indolent large B-cell lymphoma of children and adolescents. This indolent lymphoma mimics, and must be distinguished from, pediatric-type follicular lymphoma. The hallmark of large B-cell lymphoma with IRF4 rearrangement is the overexpression of the IRF4 gene by the disease's malignant cells. This overexpression is forced by the acquisition in these cells of a translocation of IRF4 from its site on the short (i.e. p) arm of chromosome 6 at position 25.3 to a site near the IGH@ immunoglobulin heavy locus on the long (i.e. 
q) arm of chromosome 14 at position 32.33. Interactions IRF4 has been shown to interact with: Aiolos, BATF3, Blimp-1, BCL6, CD40, GM-CSF, IL-4, Ikaros, IRF8, JMJD3, MMP12, NFATC2, SPI1, and STAT6. See also Interferon regulatory factors References Further reading External links Transcription factors
IRF4
[ "Chemistry", "Biology" ]
1,753
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,761,571
https://en.wikipedia.org/wiki/ZBTB17
Zinc finger and BTB domain-containing protein 17 is a protein that in humans is encoded by the ZBTB17 gene. Interactions ZBTB17 has been shown to interact with TOPBP1, Host cell factor C1 and Myc. References Further reading External links Transcription factors
ZBTB17
[ "Chemistry", "Biology" ]
59
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,761,583
https://en.wikipedia.org/wiki/PRDM2
PR domain zinc finger protein 2 is a protein that in humans is encoded by the PRDM2 gene. Function This tumor suppressor gene is a member of a nuclear histone/protein methyltransferase superfamily. It encodes a zinc finger protein that can bind to retinoblastoma protein, estrogen receptor, and the macrophage-specific TPA-responsive element (MTE) of the heme-oxygenase-1 gene. Although the functions of this protein have not been fully characterized, it may (1) play a role in transcriptional regulation during neuronal differentiation and pathogenesis of retinoblastoma, (2) act as a transcriptional activator of the heme-oxygenase-1 gene, and (3) be a specific effector of estrogen action. Three transcript variants encoding different isoforms have been found for this gene. Interactions PRDM2 has been shown to interact with Estrogen receptor alpha and Retinoblastoma protein. References Further reading External links Transcription factors
PRDM2
[ "Chemistry", "Biology" ]
208
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,762,406
https://en.wikipedia.org/wiki/TBX5%20%28gene%29
T-box transcription factor TBX5 (T-box protein 5) is a protein that in humans is encoded by the TBX5 gene. Abnormalities in the TBX5 gene can result in altered limb development, Holt–Oram syndrome, Tetra-amelia syndrome, and cardiac and skeletal problems. This gene is a member of a phylogenetically conserved family of genes that share a common DNA-binding domain, the T-box. T-box genes encode transcription factors involved in the regulation of developmental processes. This gene is closely linked to related family member T-box 3 (ulnar mammary syndrome) on human chromosome 12. TBX5 is located on the long arm of chromosome 12. TBX5 produces a protein called T-box protein 5 that acts as a transcription factor. TBX5 is involved with forelimb and heart development. This gene impacts the early development of the forelimb by triggering fibroblast growth factor 10 (FGF10). Function TBX5 encodes the transcription factor T-box protein 5, which is necessary for development, especially in the pattern formation of the upper limbs and in cardiac growth. TBX5 is involved with the development of the four heart chambers, the electrical conducting system, and the septum separating the right and left sides of the heart. Along with playing roles in the development of the heart, septum, and electrical system of the heart, it also activates genes that are involved in the development of the upper limbs, the arms and hands. This gene is also involved in the muscle connective tissue for muscle and tendon patterning. A study showed that deletion of TBX5 in forelimbs causes disruption in the muscle and tendon patterning without affecting the skeleton's development. T-box protein 5 is expressed in the cells of the lateral plate mesoderm that form the forelimb bud, triggering the cascade of limb initiation. In its absence, no forelimb bud forms. The encoded protein plays a major role in limb development, specifically during limb bud initiation. For instance, in chickens Tbx5 specifies forelimb status. The activation of Tbx5 and other T-box proteins by Hox genes activates signaling cascades that involve the Wnt signaling pathway and FGF signals in limb buds. Ultimately, Tbx5 leads to the development of apical ectodermal ridge (AER) and zone of polarizing activity (ZPA) signaling centers in the developing limb bud, which specify the orientation and growth of the developing limb. Together with Tbx4, Tbx5 plays a role in patterning the soft tissues (muscles and tendons) of the musculoskeletal system. As a protein-coding gene, TBX5 encodes the protein T-box Transcription Factor 5, which is a part of the T-box family of transcription factors. It also interacts with other genes, such as GATA4 and NKX2-5, and with the BAF chromatin-remodeling complex to drive and repress gene expression during development. Role in non-human animals Mice that were genetically modified to lack the TBX5 gene did not survive gestation, because the heart did not develop past embryonic day E10.5. Mice that had only one working copy of TBX5 were born with morphological problems such as enlarged hearts, atrial and ventricular septal defects, and limb malformations similar to those found in Holt–Oram syndrome. Pigeons with feathered feet have Tbx5 active in the hind feet, which causes them to develop feathered hindlimbs with thicker bones, more similar to their forelimb wings. Role in human embryonic development A gene "knockout" model for TBX5 by CRISPR/Cas9 genome editing has been created. 
This homozygous TBX5 knockout human embryonic stem cell line, called TBX5-KO, maintained stem cell-like morphology, pluripotency markers, and a normal karyotype, and could differentiate into all three germ layers in vivo. This cell line can provide an in vitro platform for studying the pathogenic mechanisms and biological function of TBX5 in heart development. By understanding what happens in development without this gene, further treatment options for fetuses with a TBX5 mutation might become possible, preventing the severe cardiac defects associated with Holt–Oram syndrome. Clinical significance Mutations in this gene can result in Holt–Oram syndrome, a developmental disorder affecting the heart and upper limbs. Holt–Oram syndrome can cause a hole in the septum, bone abnormalities in the fingers, wrists, or arms, and a conduction disease leading to abnormal heart rates and arrhythmias. The most common cardiac issue associated with this condition is the malformation of the septum, which separates the left and right sides of the heart. Tetra-amelia syndrome is a condition in which forelimb malformation occurs because FGF10 is not triggered, owing to Tbx5 mutations. This condition can lead to the absence of one or both forelimbs. Skeletally, there may be abnormally bent fingers, sloping shoulders, and phocomelia. Cardiac defects include ventricular and atrial septal defects and problems with the conduction system. Several transcript variants encoding different isoforms have been described for this gene. Interactions TBX5 (gene) has been shown to interact with: GATA4 and NKX2-5. References Further reading External links GeneReviews/NCBI/NIH/UW entry on Holt-Oram Syndrome Transcription factors
TBX5 (gene)
[ "Chemistry", "Biology" ]
1,161
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,762,468
https://en.wikipedia.org/wiki/HIST1H4I
Histone H4 is a protein that, in humans, is encoded by the HIST1H4I gene. Histones are basic nuclear proteins that are responsible for the nucleosome structure of the chromosomal fiber in eukaryotes. Two molecules of each of the four core histones (H2A, H2B, H3, and H4) form an octamer, around which approximately 146 bp of DNA is wrapped in repeating units, called nucleosomes. The linker histone, H1, interacts with linker DNA between nucleosomes and functions in the compaction of chromatin into higher order structures. This gene is intronless and encodes a member of the histone H4 family. Transcripts from this gene lack polyA tails but instead contain a palindromic termination element. This gene is found in the histone microcluster on chromosome 6p21.33. References Further reading
HIST1H4I
[ "Chemistry" ]
202
[ "Biochemistry stubs", "Protein stubs" ]
14,763,419
https://en.wikipedia.org/wiki/DAZ3
Deleted in azoospermia protein 3 is a protein that in humans is encoded by the DAZ3 gene. This gene is a member of the DAZ gene family and is a candidate for the human Y-chromosomal azoospermia factor (AZF). Its expression is restricted to premeiotic germ cells, particularly in spermatogonia. It encodes an RNA-binding protein that is important for spermatogenesis. Four copies of this gene are found on chromosome Y within palindromic duplications; one pair of genes is part of the P2 palindrome and the second pair is part of the P1 palindrome. Each gene contains a 2.4 kb repeat including a 72-bp exon, called the DAZ repeat; the number of DAZ repeats is variable and there are several variations in the sequence of the DAZ repeat. Each copy of the gene also contains a 10.8 kb region that may be amplified; this region includes five exons that encode an RNA recognition motif (RRM) domain. This gene contains one copy of the 10.8 kb repeat. References Further reading
DAZ3
[ "Chemistry" ]
239
[ "Biochemistry stubs", "Protein stubs" ]