https://en.wikipedia.org/wiki/Bose%E2%80%93Einstein%20condensate
In condensed matter physics, a Bose–Einstein condensate (BEC) is a state of matter that is typically formed when a gas of bosons at very low densities is cooled to temperatures very close to absolute zero (−273.15 °C or −459.67 °F). Under such conditions, a large fraction of bosons occupy the lowest quantum state, at which point microscopic quantum-mechanical phenomena, particularly wavefunction interference, become apparent macroscopically. More generally, condensation refers to the appearance of macroscopic occupation of one or several states: for example, in BCS theory, a superconductor is a condensate of Cooper pairs. As such, condensation can be associated with a phase transition, and the macroscopic occupation of the state is the order parameter.

Bose–Einstein condensation was first predicted, generally, in 1924–1925 by Albert Einstein, crediting a pioneering paper by Satyendra Nath Bose on the new field now known as quantum statistics. In 1995, the first Bose–Einstein condensate was created by Eric Cornell and Carl Wieman of the University of Colorado Boulder using rubidium atoms; later that year, Wolfgang Ketterle of MIT produced a BEC using sodium atoms. In 2001, Cornell, Wieman, and Ketterle shared the Nobel Prize in Physics "for the achievement of Bose–Einstein condensation in dilute gases of alkali atoms, and for early fundamental studies of the properties of the condensates".

History

Bose first sent a paper to Einstein on the quantum statistics of light quanta (now called photons), in which he derived Planck's quantum radiation law without any reference to classical physics. Einstein was impressed, translated the paper himself from English to German and submitted it for Bose to the Zeitschrift für Physik, which published it in 1924. (The Einstein manuscript, once believed to be lost, was found in a library at Leiden University in 2005.) Einstein then extended Bose's ideas to matter in two other papers. The result of their efforts is the concept of a Bose gas, governed by Bose–Einstein statistics, which describes the statistical distribution of identical particles with integer spin, now called bosons. Bosons are allowed to share a quantum state. Einstein proposed that cooling bosonic atoms to a very low temperature would cause them to fall (or "condense") into the lowest accessible quantum state, resulting in a new form of matter. Bosons include the photon, polaritons, magnons, and some atoms and molecules (depending on the number of nucleons; see Isotopes below), such as atomic hydrogen, helium-4, lithium-7, rubidium-87 or strontium-84.

In 1938, Fritz London proposed the BEC as a mechanism for superfluidity in liquid helium-4 and for superconductivity.

The quest to produce a Bose–Einstein condensate in the laboratory was stimulated by a paper published in 1976 by two program directors at the National Science Foundation (William Stwalley and Lewis Nosanow), proposing to use spin-polarized atomic hydrogen to produce a gaseous BEC. This led to the immediate pursuit of the idea by four independent research groups, led by Isaac Silvera (University of Amsterdam), Walter Hardy (University of British Columbia), Thomas Greytak (Massachusetts Institute of Technology) and David Lee (Cornell University). However, cooling atomic hydrogen turned out to be technically difficult, and Bose–Einstein condensation of atomic hydrogen was only realized in 1998.
On 5 June 1995, the first gaseous condensate was produced by Eric Cornell and Carl Wieman at the University of Colorado at Boulder NIST–JILA lab, in a gas of rubidium atoms cooled to 170 nanokelvins (nK). Shortly thereafter, Wolfgang Ketterle at MIT produced a Bose–Einstein condensate in a gas of sodium atoms. For their achievements, Cornell, Wieman, and Ketterle received the 2001 Nobel Prize in Physics. Bose–Einstein condensation of alkali gases is easier because they can be pre-cooled with laser cooling techniques, unlike atomic hydrogen at the time, which gives a significant head start when performing the final forced evaporative cooling to cross the condensation threshold. These early studies founded the field of ultracold atoms, and hundreds of research groups around the world now routinely produce BECs of dilute atomic vapors in their labs. Since 1995, many other atomic species have been condensed (see Isotopes below), and BECs have also been realized using molecules, polaritons, and other quasiparticles. BECs of photons can also be made, for example in dye microcavities with wavelength-scale mirror separation, making a two-dimensional harmonically confined photon gas with tunable chemical potential.

Critical temperature

This transition to BEC occurs below a critical temperature, which for a uniform three-dimensional gas consisting of non-interacting particles with no apparent internal degrees of freedom is given by

T_c = \frac{2\pi\hbar^2}{m k_B}\left(\frac{n}{\zeta(3/2)}\right)^{2/3}

where T_c is the critical temperature, n is the particle density, m is the mass per boson, \hbar is the reduced Planck constant, k_B is the Boltzmann constant, and \zeta is the Riemann zeta function (\zeta(3/2) \approx 2.6124). Interactions shift the value, and the corrections can be calculated by mean-field theory. This formula is derived from finding the gas degeneracy in the Bose gas using Bose–Einstein statistics.

The critical temperature depends on the density. A more concise and experimentally relevant condition involves the phase-space density \mathcal{D} = n \lambda_T^3, where \lambda_T = \sqrt{2\pi\hbar^2/(m k_B T)} is the thermal de Broglie wavelength. It is a dimensionless quantity. The transition to BEC occurs when the phase-space density exceeds the critical value \mathcal{D} > \zeta(3/2) \approx 2.612 in 3D uniform space. This is equivalent to the above condition on the temperature. In a 3D harmonic potential, the critical value is instead \zeta(3) \approx 1.202, where n has to be understood as the peak density.

Derivation

Ideal Bose gas

For an ideal Bose gas we have the equation of state

\frac{p}{k_B T} = \frac{1}{\lambda^3} g_{5/2}(z), \qquad \frac{1}{v} = \frac{1}{\lambda^3} g_{3/2}(z) + \frac{1}{V}\frac{z}{1-z},

where v = V/N is the per-particle volume, \lambda is the thermal wavelength, z is the fugacity, and g_\nu(z) = \sum_{n=1}^{\infty} z^n / n^\nu. It is noticeable that g_{3/2}(z) is a monotonically growing function of z in z \in [0, 1], which are the only values for which the series converges. Recognizing that the second term on the right-hand side contains the expression for the average occupation number of the fundamental state \langle n_0 \rangle = z/(1-z), the equation of state can be rewritten as

\frac{1}{v} = \frac{1}{\lambda^3} g_{3/2}(z) + \frac{\langle n_0 \rangle}{V} \quad\Longleftrightarrow\quad \frac{\langle n_0 \rangle \lambda^3}{V} = \frac{\lambda^3}{v} - g_{3/2}(z).

Because the left term of the second equation must always be positive, \lambda^3 / v > g_{3/2}(z), and because g_{3/2}(z) \le g_{3/2}(1), a stronger condition is

\frac{\lambda^3}{v} > g_{3/2}(1),

which defines a transition between a gas phase and a condensed phase. On the critical region it is possible to define a critical temperature and thermal wavelength:

\lambda_c^3 = g_{3/2}(1)\, v = \zeta(3/2)\, v, \qquad T_c = \frac{2\pi\hbar^2}{m k_B \lambda_c^2},

recovering the value indicated in the previous section. The critical values are such that if T < T_c or \lambda > \lambda_c, we are in the presence of a Bose–Einstein condensate.

Understanding what happens with the fraction of particles in the fundamental level is crucial. To that end, write the equation of state for T < T_c, obtaining

\frac{\langle n_0 \rangle}{N} = 1 - \left(\frac{\lambda_c}{\lambda}\right)^3 \quad \text{and equivalently} \quad \frac{\langle n_0 \rangle}{N} = 1 - \left(\frac{T}{T_c}\right)^{3/2}.

So, if T \ll T_c, the fraction \langle n_0 \rangle / N \to 1, and if T \gg T_c, the fraction \langle n_0 \rangle / N \to 0. At temperatures near absolute 0, particles tend to condense in the fundamental state, which is the state with momentum \vec{p} = 0.
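As a numerical illustration of the critical-temperature formula and the phase-space-density criterion above, the following Python sketch computes T_c for a uniform non-interacting gas; the choice of rubidium-87 and the density n are illustrative assumptions, not values taken from the text:

```python
# Hypothetical worked example (not from the article): critical temperature of a
# uniform, non-interacting gas of rubidium-87 at an assumed experimental density.
import math

hbar = 1.054571817e-34    # reduced Planck constant, J s
kB   = 1.380649e-23       # Boltzmann constant, J/K
u    = 1.66053906660e-27  # atomic mass unit, kg

zeta_3_2 = 2.6123753486854883  # Riemann zeta(3/2)

m = 86.909 * u   # mass of one Rb-87 atom, kg
n = 1e20         # assumed particle density, atoms per m^3 (illustrative)

# T_c = (2*pi*hbar^2 / (m*kB)) * (n / zeta(3/2))^(2/3)
Tc = (2 * math.pi * hbar**2 / (m * kB)) * (n / zeta_3_2) ** (2.0 / 3.0)
print(f"T_c ~ {Tc * 1e9:.0f} nK")

# Equivalent phase-space-density criterion: n * lambda_T^3 >= zeta(3/2) at T_c
lam = math.sqrt(2 * math.pi * hbar**2 / (m * kB * Tc))  # thermal de Broglie wavelength
print(f"n * lambda^3 at T_c = {n * lam**3:.3f} (should be ~2.612)")
```

With these assumed numbers the result is a few hundred nanokelvin, the same order as the temperatures quoted for the first condensates.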
Experimental observation

Superfluid helium-4

In 1938, Pyotr Kapitsa, John Allen and Don Misener discovered that helium-4 became a new kind of fluid, now known as a superfluid, at temperatures less than 2.17 K (the lambda point). Superfluid helium has many unusual properties, including zero viscosity (the ability to flow without dissipating energy) and the existence of quantized vortices. It was quickly believed that the superfluidity was due to partial Bose–Einstein condensation of the liquid. In fact, many properties of superfluid helium also appear in gaseous condensates created by Cornell, Wieman and Ketterle (see below). Superfluid helium-4 is a liquid rather than a gas, which means that the interactions between the atoms are relatively strong; the original theory of Bose–Einstein condensation must be heavily modified in order to describe it. Bose–Einstein condensation remains, however, fundamental to the superfluid properties of helium-4. Note that helium-3, a fermion, also enters a superfluid phase (at a much lower temperature), which can be explained by the formation of bosonic Cooper pairs of two atoms (see also fermionic condensate).

Dilute atomic gases

The first "pure" Bose–Einstein condensate was created by Eric Cornell, Carl Wieman, and co-workers at JILA on 5 June 1995. They cooled a dilute vapor of approximately two thousand rubidium-87 atoms to below 170 nK using a combination of laser cooling (a technique that won its inventors Steven Chu, Claude Cohen-Tannoudji, and William D. Phillips the 1997 Nobel Prize in Physics) and magnetic evaporative cooling. About four months later, an independent effort led by Wolfgang Ketterle at MIT condensed sodium-23. Ketterle's condensate had a hundred times more atoms, allowing important results such as the observation of quantum mechanical interference between two different condensates. Cornell, Wieman and Ketterle won the 2001 Nobel Prize in Physics for their achievements. A group led by Randall Hulet at Rice University announced a condensate of lithium atoms only one month following the JILA work. Lithium has attractive interactions, causing the condensate to be unstable and collapse for all but a few atoms. Hulet's team subsequently showed the condensate could be stabilized by confinement quantum pressure for up to about 1000 atoms. Various isotopes have since been condensed.

Velocity-distribution data graph

In the image accompanying this article, the velocity-distribution data indicates the formation of a Bose–Einstein condensate out of a gas of rubidium atoms. The false colors indicate the number of atoms at each velocity, with red being the fewest and white being the most. The areas appearing white and light blue are at the lowest velocities. The peak is not infinitely narrow because of the Heisenberg uncertainty principle: spatially confined atoms have a minimum-width velocity distribution. This width is given by the curvature of the magnetic potential in the given direction. More tightly confined directions have bigger widths in the ballistic velocity distribution. This anisotropy of the peak on the right is a purely quantum-mechanical effect and does not exist in the thermal distribution on the left. This graph served as the cover design for the 1999 textbook Thermal Physics by Ralph Baierlein.

Quasiparticles

Bose–Einstein condensation also applies to quasiparticles in solids. Magnons, excitons, and polaritons have integer spin, which means they are bosons that can form condensates.
Magnons, electron spin waves, can be controlled by a magnetic field. Densities from the limit of a dilute gas to a strongly interacting Bose liquid are possible. Magnetic ordering is the analog of superfluidity. In 1999 condensation was demonstrated in antiferromagnetic TlCuCl3, at temperatures as great as 14 K. The high transition temperature (relative to atomic gases) is due to the magnons' small mass (near that of an electron) and greater achievable density. In 2006, condensation in a ferromagnetic yttrium-iron-garnet thin film was seen even at room temperature, with optical pumping.

Excitons, electron-hole pairs, were predicted to condense at low temperature and high density by Boer et al. in 1961. Bilayer system experiments first demonstrated condensation in 2003, by Hall voltage disappearance. Fast optical exciton creation was used to form condensates in sub-kelvin Cu2O from 2005 on. Polariton condensation was first detected for exciton-polaritons in a quantum well microcavity kept at 5 K.

In zero gravity

In June 2020, the Cold Atom Laboratory experiment on board the International Space Station successfully created a BEC of rubidium atoms and observed them for over a second in free-fall. Although initially just a proof of function, early results showed that, in the microgravity environment of the ISS, about half of the atoms formed into a magnetically insensitive halo-like cloud around the main body of the BEC.

Models

Bose–Einstein's non-interacting gas

Consider a collection of N non-interacting particles, which can each be in one of two quantum states, |0⟩ and |1⟩. If the two states are equal in energy, each different configuration is equally likely. If we can tell which particle is which, there are 2^N different configurations, since each particle can be in |0⟩ or |1⟩ independently. In almost all of the configurations, about half the particles are in |0⟩ and the other half in |1⟩. The balance is a statistical effect: the number of configurations is largest when the particles are divided equally.

If the particles are indistinguishable, however, there are only N + 1 different configurations. If there are K particles in state |1⟩, there are N − K particles in state |0⟩. Whether any particular particle is in state |0⟩ or in state |1⟩ cannot be determined, so each value of K determines a unique quantum state for the whole system.

Suppose now that the energy of state |1⟩ is slightly greater than the energy of state |0⟩ by an amount E. At temperature T, a particle will have a lesser probability to be in state |1⟩ by a factor e^{−E/kT}. In the distinguishable case, the particle distribution will be biased slightly towards state |0⟩. But in the indistinguishable case, since there is no statistical pressure toward equal numbers, the most-likely outcome is that most of the particles will collapse into state |0⟩.

In the distinguishable case, for large N, the fraction in state |0⟩ can be computed. It is the same as flipping a coin with probability proportional to p = e^{−E/kT} to land tails. In the indistinguishable case, each value of K is a single state, which has its own separate Boltzmann probability. So the probability distribution is exponential:

P(K) = C e^{-K E / k T} = C p^K.

For large N, the normalization constant C is 1 − p. The expected total number of particles not in the lowest energy state, in the limit that N \to \infty, is equal to

\sum_{K > 0} K\, C\, p^K = \frac{p}{1 - p}.

It does not grow when N is large; it just approaches a constant. This will be a negligible fraction of the total number of particles. So a collection of enough Bose particles in thermal equilibrium will mostly be in the ground state, with only a few in any excited state, no matter how small the energy difference.
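The counting argument above can be checked numerically. In this minimal sketch, all parameters are illustrative (N particles, energy splitting E with E/kT = 0.1): distinguishable particles split nearly evenly between the two states, while the expected indistinguishable occupation of the upper state stays finite no matter how large N becomes:

```python
# Illustrative check of the two-state counting argument (assumed parameters).
import math

N = 1000                     # number of particles (illustrative)
p = math.exp(-0.1)           # Boltzmann factor p = exp(-E/kT), here E/kT = 0.1

# Distinguishable case: each particle independently picks the upper state
# with probability p/(1 + p), like a biased coin flip.
frac_upper_distinguishable = p / (1 + p)

# Indistinguishable case: each occupation number K of the upper state is ONE
# state with weight p^K, so P(K) = (1 - p) p^K and <K> = p/(1 - p),
# independent of N for large N.
mean_upper_indistinguishable = p / (1 - p)

print(f"distinguishable: ~{frac_upper_distinguishable * N:.0f} of {N} in upper state")
print(f"indistinguishable: <K> ~ {mean_upper_indistinguishable:.1f} (does not grow with N)")
```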
Consider now a gas of particles, which can be in different momentum states labeled |k⟩. If the number of particles is less than the number of thermally accessible states, for high temperatures and low densities, the particles will all be in different states. In this limit, the gas is classical. As the density increases or the temperature decreases, the number of accessible states per particle becomes smaller, and at some point, more particles will be forced into a single state than the maximum allowed for that state by statistical weighting. From this point on, any extra particle added will go into the ground state.

To calculate the transition temperature at any density, integrate, over all momentum states, the expression for the maximum number of excited particles, p/(1 − p):

N = V \int \frac{d^3 k}{(2\pi)^3} \frac{p(k)}{1 - p(k)}, \qquad p(k) = e^{-k^2 / 2mT}.

When the integral (also known as the Bose–Einstein integral) is evaluated with factors of \hbar and k_B restored by dimensional analysis, it gives the critical temperature formula of the preceding section. Therefore, this integral defines the critical temperature and particle number corresponding to the conditions of negligible chemical potential \mu = 0. In the Bose–Einstein statistics distribution, \mu is actually still nonzero for BECs; however, \mu is less than the ground state energy. Except when specifically talking about the ground state, \mu can be approximated for most energy or momentum states as \mu \approx 0.

Bogoliubov theory for weakly interacting gas

Nikolay Bogoliubov considered perturbations on the limit of dilute gas, finding a finite pressure at zero temperature and positive chemical potential. This leads to corrections for the ground state. The Bogoliubov state has pressure (T = 0): P = U_0 n^2 / 2. The original interacting system can be converted to a system of non-interacting particles with a dispersion law.

Gross–Pitaevskii equation

In some of the simplest cases, the state of condensed particles can be described with a nonlinear Schrödinger equation, also known as the Gross–Pitaevskii or Ginzburg–Landau equation. The validity of this approach is actually limited to the case of ultracold temperatures, which fits most alkali-atom experiments well. This approach originates from the assumption that the state of the BEC can be described by the unique wavefunction of the condensate \psi(\vec{r}). For a system of this nature, |\psi(\vec{r})|^2 is interpreted as the particle density, so the total number of atoms is

N = \int |\psi(\vec{r})|^2 \, d^3 r.

Provided essentially all atoms are in the condensate (that is, have condensed to the ground state), and treating the bosons using mean-field theory, the energy (E) associated with the state \psi(\vec{r}) is:

E = \int d^3 r \left[ \frac{\hbar^2}{2m} |\nabla \psi(\vec{r})|^2 + V(\vec{r}) |\psi(\vec{r})|^2 + \frac{1}{2} U_0 |\psi(\vec{r})|^4 \right].

Minimizing this energy with respect to infinitesimal variations in \psi(\vec{r}), and holding the number of atoms constant, yields the Gross–Pitaevskii equation (GPE) (also a non-linear Schrödinger equation):

i \hbar \frac{\partial \psi(\vec{r})}{\partial t} = \left( -\frac{\hbar^2 \nabla^2}{2m} + V(\vec{r}) + U_0 |\psi(\vec{r})|^2 \right) \psi(\vec{r}),

where m is the mass of the bosons, V(\vec{r}) is the external potential, and U_0 represents the inter-particle interactions.

In the case of zero external potential, the dispersion law of interacting Bose–Einstein-condensed particles is given by the so-called Bogoliubov spectrum (for T = 0):

\epsilon_p = \sqrt{\frac{p^2}{2m} \left( \frac{p^2}{2m} + 2 U_0 n_0 \right)}.

The Gross–Pitaevskii equation (GPE) provides a relatively good description of the behavior of atomic BECs. However, the GPE does not take into account the temperature dependence of dynamical variables, and is therefore valid only for temperatures near zero. It is not applicable, for example, to the condensates of excitons, magnons and photons, where the critical temperature is comparable to room temperature.

Numerical solution

The Gross–Pitaevskii equation is a partial differential equation in space and time variables.
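Because the equation must usually be solved numerically (as discussed next), the following is a minimal sketch of one standard approach, a split-step Fourier integrator for the 1D GPE in dimensionless units \hbar = m = 1; the grid, the harmonic trap, the interaction strength g and the time step are illustrative assumptions, not values from the text:

```python
# Minimal sketch (not from the article) of a split-step Fourier integrator for
# the 1D Gross-Pitaevskii equation in dimensionless units (hbar = m = 1):
#   i dpsi/dt = [-(1/2) d^2/dx^2 + V(x) + g |psi|^2] psi
import numpy as np

Nx, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, Nx, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(Nx, d=L / Nx)
V = 0.5 * x**2            # assumed harmonic trap
g = 1.0                   # assumed repulsive contact interaction
dt = 1e-3

psi = np.exp(-x**2 / 2).astype(complex)             # initial Gaussian guess
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L / Nx))   # normalize to N = 1

for _ in range(5000):
    # half step in the potential + nonlinear term (diagonal in x)
    psi *= np.exp(-0.5j * dt * (V + g * np.abs(psi)**2))
    # full kinetic step (diagonal in k after an FFT)
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))
    # half step in the potential + nonlinear term again
    psi *= np.exp(-0.5j * dt * (V + g * np.abs(psi)**2))

print("norm:", np.sum(np.abs(psi)**2) * (L / Nx))   # conserved by the method
```

The split-step structure alternates an exact kinetic step in momentum space with a pointwise potential-plus-nonlinearity step in position space; each step is unitary, so the norm is conserved by construction.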
It usually has no analytic solution, and different numerical methods, such as split-step Crank–Nicolson and Fourier spectral methods (as in the sketch above), are used for its solution. There are different Fortran and C programs for its solution for contact interaction and long-range dipolar interaction which can be freely used.

Weaknesses of the Gross–Pitaevskii model

The Gross–Pitaevskii model of BEC is a physical approximation valid for certain classes of BECs. By construction, the GPE uses the following simplifications: it assumes that interactions between condensate particles are of the contact two-body type and also neglects anomalous contributions to self-energy. These assumptions are suitable mostly for dilute three-dimensional condensates. If one relaxes any of these assumptions, the equation for the condensate wavefunction acquires terms containing higher-order powers of the wavefunction. Moreover, for some physical systems the number of such terms turns out to be infinite; therefore, the equation becomes essentially non-polynomial. The examples where this could happen are the Bose–Fermi composite condensates, effectively lower-dimensional condensates, and dense condensates and superfluid clusters and droplets. It is found that one has to go beyond the Gross–Pitaevskii equation. For example, the logarithmic term found in the logarithmic Schrödinger equation must be added to the Gross–Pitaevskii equation along with a Ginzburg–Sobyanin contribution to correctly determine that the speed of sound scales as the cubic root of pressure for helium-4 at very low temperatures, in close agreement with experiment.

Other

In a general case, the behaviour of a Bose–Einstein condensate can be described by coupled evolution equations for the condensate density, the superfluid velocity and the distribution function of elementary excitations. This problem was solved in 1977 by Peletminskii et al. in a microscopic approach. The Peletminskii equations are valid for any finite temperatures below the critical point. Years later, in 1985, Kirkpatrick and Dorfman obtained similar equations using another microscopic approach. The Peletminskii equations also reproduce the Khalatnikov hydrodynamical equations for superfluids as a limiting case.

Superfluidity of BEC and Landau criterion

The phenomena of superfluidity of a Bose gas and superconductivity of a strongly-correlated Fermi gas (a gas of Cooper pairs) are tightly connected to Bose–Einstein condensation. Under corresponding conditions, below the temperature of phase transition, these phenomena were observed in helium-4 and different classes of superconductors. In this sense, superconductivity is often called the superfluidity of a Fermi gas. In the simplest form, the origin of superfluidity can be seen from the weakly interacting bosons model.

Peculiar properties

Quantized vortices

As in many other systems, vortices can exist in BECs. Vortices can be created, for example, by "stirring" the condensate with lasers, rotating the confining trap, or by rapid cooling across the phase transition. The vortex created will be a quantum vortex with core shape determined by the interactions. Fluid circulation around any point is quantized due to the single-valued nature of the BEC order parameter or wavefunction, which can be written in the form

\psi(\vec{r}) = \phi(\rho, z)\, e^{i \ell \theta},

where \rho, z and \theta are as in the cylindrical coordinate system, and \ell is the angular quantum number (a.k.a. the "charge" of the vortex).
Since the energy of a vortex is proportional to the square of its angular momentum, in trivial topology only \ell = 1 vortices can exist in the steady state; higher-charge vortices will have a tendency to split into \ell = 1 vortices, if allowed by the topology of the geometry.

An axially symmetric (for instance, harmonic) confining potential is commonly used for the study of vortices in BEC. To determine \phi(\rho, z), the energy of \psi(\vec{r}) must be minimized, according to the constraint \psi(\vec{r}) = \phi(\rho, z)\, e^{i \ell \theta}. This is usually done computationally; however, in a uniform medium, the following analytic form demonstrates the correct behavior, and is a good approximation:

\phi = \sqrt{n}\,\frac{x}{\sqrt{2 + x^2}}, \qquad x = \frac{\rho}{\xi}.

Here, n is the density far from the vortex and x = \rho/\xi, where \xi is the healing length of the condensate.

A singly charged vortex (\ell = 1) is in the ground state, with its energy \epsilon_v given by

\epsilon_v = \pi n \frac{\hbar^2}{m} \ln\left(1.464\, \frac{b}{\xi}\right),

where b is the farthest distance from the vortices considered. (To obtain an energy which is well defined it is necessary to include this boundary b.)

For multiply charged vortices (\ell > 1) the energy is approximated by

\epsilon_v \approx \ell^2 \pi n \frac{\hbar^2}{m} \ln\left(\frac{b}{\xi}\right),

which is greater than that of singly charged vortices, indicating that these multiply charged vortices are unstable to decay. Research has, however, indicated they are metastable states, so they may have relatively long lifetimes.

Closely related to the creation of vortices in BECs is the generation of so-called dark solitons in one-dimensional BECs. These topological objects feature a phase gradient across their nodal plane, which stabilizes their shape even in propagation and interaction. Although solitons carry no charge and are thus prone to decay, relatively long-lived dark solitons have been produced and studied extensively.

Attractive interactions

Experiments led by Randall Hulet at Rice University from 1995 through 2000 showed that lithium condensates with attractive interactions could stably exist up to a critical atom number. Quench cooling the gas, they observed the condensate to grow, then subsequently collapse as the attraction overwhelmed the zero-point energy of the confining potential, in a burst reminiscent of a supernova, with an explosion preceded by an implosion.

Further work on attractive condensates was performed in 2000 by the JILA team of Cornell, Wieman and coworkers. Their instrumentation now had better control, so they used naturally attracting atoms of rubidium-85 (having a negative atom–atom scattering length). Through Feshbach resonance involving a sweep of the magnetic field causing spin-flip collisions, they lowered the characteristic, discrete energies at which rubidium bonds, making their Rb-85 atoms repulsive and creating a stable condensate. The reversible flip from attraction to repulsion stems from quantum interference among wave-like condensate atoms.

When the JILA team raised the magnetic field strength further, the condensate suddenly reverted to attraction, imploded and shrank beyond detection, then exploded, expelling about two-thirds of its 10,000 atoms. About half of the atoms in the condensate seemed to have disappeared from the experiment altogether, not seen in the cold remnant or expanding gas cloud. Carl Wieman explained that under current atomic theory this characteristic of Bose–Einstein condensate could not be explained because the energy state of an atom near absolute zero should not be enough to cause an implosion; however, subsequent mean-field theories have been proposed to explain it. Most likely they formed molecules of two rubidium atoms; the energy gained by this bond imparts velocity sufficient to leave the trap without being detected.
The process of creation of a molecular Bose condensate during the sweep of the magnetic field through the Feshbach resonance, as well as the reverse process, are described by an exactly solvable model that can explain many experimental observations.

Current research

Compared to more commonly encountered states of matter, Bose–Einstein condensates are extremely fragile. The slightest interaction with the external environment can be enough to warm them past the condensation threshold, eliminating their interesting properties and forming a normal gas. Nevertheless, they have proven useful in exploring a wide range of questions in fundamental physics, and the years since the initial discoveries by the JILA and MIT groups have seen an increase in experimental and theoretical activity.

Bose–Einstein condensates composed of a wide range of isotopes have been produced; see below.

Fundamental research

Examples include experiments that have demonstrated interference between condensates due to wave–particle duality, the study of superfluidity and quantized vortices, the creation of bright matter wave solitons from Bose condensates confined to one dimension, and the slowing of light pulses to very low speeds using electromagnetically induced transparency. Vortices in Bose–Einstein condensates are also currently the subject of analogue gravity research, studying the possibility of modeling black holes and their related phenomena in such environments in the laboratory.

Experimenters have also realized "optical lattices", where the interference pattern from overlapping lasers provides a periodic potential. These are used to explore the transition between a superfluid and a Mott insulator. They are also useful in studying Bose–Einstein condensation in fewer than three dimensions, for example the Lieb–Liniger model (in the limit of strong interactions, the Tonks–Girardeau gas) in 1D and the Berezinskii–Kosterlitz–Thouless transition in 2D. Indeed, a deep optical lattice allows the experimentalist to freeze the motion of the particles along one or two directions, effectively eliminating one or two dimensions from the system.

Further, the sensitivity of the pinning transition of strongly interacting bosons confined in a shallow one-dimensional optical lattice, originally observed by Haller, has been explored via a tweaking of the primary optical lattice by a secondary weaker one. Thus for a resulting weak bichromatic optical lattice, it has been found that the pinning transition is robust against the introduction of the weaker secondary optical lattice.

Studies of vortices in nonuniform Bose–Einstein condensates, as well as excitations of these systems by the application of moving repulsive or attractive obstacles, have also been undertaken. Within this context, the conditions for order and chaos in the dynamics of a trapped Bose–Einstein condensate have been explored by the application of moving blue- and red-detuned laser beams (hitting frequencies slightly above and below the resonance frequency, respectively) via the time-dependent Gross–Pitaevskii equation.

Applications

In 1999, Danish physicist Lene Hau led a team from Harvard University which slowed a beam of light to about 17 meters per second using a superfluid.
Hau and her associates have since made a group of condensate atoms recoil from a light pulse such that they recorded the light's phase and amplitude, recovered by a second nearby condensate, in what they term "slow-light-mediated atomic matter-wave amplification" using Bose–Einstein condensates.

Another current research interest is the creation of Bose–Einstein condensates in microgravity in order to use their properties for high-precision atom interferometry. The first demonstration of a BEC in weightlessness was achieved in 2008 at a drop tower in Bremen, Germany by a consortium of researchers led by Ernst M. Rasel from Leibniz University Hannover. The same team demonstrated in 2017 the first creation of a Bose–Einstein condensate in space, and it is also the subject of two upcoming experiments on the International Space Station.

Researchers in the new field of atomtronics use the properties of Bose–Einstein condensates in the emerging quantum technology of matter-wave circuits.

In 1970, BECs were proposed by Emmanuel David Tannenbaum for anti-stealth technology.

Isotopes

Bose–Einstein condensation has mainly been observed in alkali atoms, some of which have collisional properties particularly suitable for evaporative cooling in traps, and which were the first to be laser-cooled. As of 2021, using ultra-low temperatures of 10⁻⁷ K or below, Bose–Einstein condensates had been obtained for a multitude of isotopes with more or less ease, mainly of alkali metal, alkaline earth metal, and lanthanide atoms, as well as metastable helium-4 (orthohelium). Research was finally successful in atomic hydrogen with the aid of the newly developed method of 'evaporative cooling'. In contrast, the superfluid state of helium-4 below 2.17 K differs significantly from dilute degenerate atomic gases because the interaction between the atoms is strong: only 8% of the atoms are in the condensed fraction near absolute zero, rather than near 100% for a weakly interacting BEC.

The bosonic behavior of some of these alkali gases appears odd at first sight, because their nuclei have half-integer total spin. It arises from the interplay of electronic and nuclear spins: at ultra-low temperatures and corresponding excitation energies, the half-integer total spin of the electronic shell (one outer electron) and the half-integer total spin of the nucleus are coupled by a very weak hyperfine interaction. The total spin of the atom, arising from this coupling, is an integer value. Conversely, alkali isotopes which have an integer nuclear spin (such as lithium-6 and potassium-40) are fermions and can form degenerate Fermi gases, also called "Fermi condensates".

Cooling fermions to extremely low temperatures has created degenerate gases, subject to the Pauli exclusion principle. To exhibit Bose–Einstein condensation, the fermions must "pair up" to form bosonic compound particles (e.g. molecules or Cooper pairs). The first molecular condensates were created in November 2003 by the groups of Rudolf Grimm at the University of Innsbruck, Deborah S. Jin at the University of Colorado at Boulder and Wolfgang Ketterle at MIT. Jin quickly went on to create the first fermionic condensate, working with the same system but outside the molecular regime.

Continuous Bose–Einstein condensation

Limitations of evaporative cooling have restricted atomic BECs to "pulsed" operation, involving a highly inefficient duty cycle that discards more than 99% of atoms to reach BEC.
Achieving continuous BEC has been a major open problem of experimental BEC research, driven by the same motivations as continuous optical laser development: high-flux, high-coherence matter waves produced continuously would enable new sensing applications. Continuous BEC was achieved for the first time in 2022 with strontium atoms.

In solid state physics

In 2020, researchers reported the development of a superconducting BEC and that there appears to be a "smooth transition between" the BEC and Bardeen–Cooper–Schrieffer regimes.

Dark matter

P. Sikivie and Q. Yang showed that cold dark matter axions would form a Bose–Einstein condensate by thermalisation because of gravitational self-interactions. Axions have not yet been confirmed to exist. However, the important search for them has been greatly enhanced with the completion of upgrades to the Axion Dark Matter Experiment (ADMX) at the University of Washington in early 2018.

In 2014, a potential dibaryon was detected at the Jülich Research Center at about 2380 MeV. The center claimed that the measurements confirm results from 2011, via a more replicable method. The particle existed for 10⁻²³ seconds and was named d*(2380). This particle is hypothesized to consist of three up and three down quarks. It is theorized that groups of d* (d-stars) could form Bose–Einstein condensates due to prevailing low temperatures in the early universe, and that BECs made of such hexaquarks with trapped electrons could behave like dark matter.

In fiction

In the 2016 film Spectral, the US military battles mysterious enemy creatures fashioned out of Bose–Einstein condensates. In the 2003 novel Blind Lake, scientists observe sentient life on a planet 51 light-years away using telescopes powered by Bose–Einstein condensate-based quantum computers. The video game franchise Mass Effect has cryonic ammunition whose flavour text describes it as being filled with Bose–Einstein condensates. Upon impact, the bullets rupture and spray supercooled liquid on the enemy.

See also

Atom laser
Atomic coherence
Bose–Einstein correlations
Bose–Einstein condensation: a network theory approach
Bose–Einstein condensation of quasiparticles
Bose–Einstein statistics
Cold Atom Laboratory
Electromagnetically induced transparency
Fermionic condensate
Gas in a box
Gross–Pitaevskii equation
Macroscopic quantum phenomena
Macroscopic quantum self-trapping
Slow light
Super-heavy atom
Superconductivity
Superfluid film
Superfluid helium-4
Supersolid
Tachyon condensation
Timeline of low-temperature technology
Ultracold atom
Wiener sausage

Further reading

C. J. Pethick and H. Smith, Bose–Einstein Condensation in Dilute Gases, Cambridge University Press, Cambridge, 2001.
Lev P. Pitaevskii and S. Stringari, Bose–Einstein Condensation, Clarendon Press, Oxford, 2003.
Monique Combescot and Shiue-Yuan Shiau, Excitons and Cooper Pairs: Two Composite Bosons in Many-Body Physics, Oxford University Press.
External links

Bose–Einstein Condensation 2009 Conference – Frontiers in Quantum Gases
BEC Homepage – general introduction to Bose–Einstein condensation
Nobel Prize in Physics 2001 – for the achievement of Bose–Einstein condensation in dilute gases of alkali atoms, and for early fundamental studies of the properties of the condensates
Bose–Einstein condensates at JILA
Atomcool at Rice University
Alkali Quantum Gases at MIT
Atom Optics at UQ
Einstein's manuscript on the Bose–Einstein condensate discovered at Leiden University
Bose–Einstein condensate on arxiv.org
Bosons – The Birds That Flock and Sing Together
Easy BEC machine – information on constructing a Bose–Einstein condensate machine
Verging on absolute zero – Cosmos Online
Lecture by W Ketterle at MIT in 2001
Bose–Einstein Condensation at NIST – NIST resource on BEC
https://en.wikipedia.org/wiki/Black%20hole
A black hole is a region of spacetime wherein gravity is so strong that no matter or electromagnetic energy (e.g. light) can escape it. Albert Einstein's theory of general relativity predicts that a sufficiently compact mass can deform spacetime to form a black hole. The boundary of no escape is called the event horizon. A black hole has a great effect on the fate and circumstances of an object crossing it, but it has no locally detectable features according to general relativity. In many ways, a black hole acts like an ideal black body, as it reflects no light. Quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly.

Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterise a black hole. Due to his influential research, the Schwarzschild metric is named after him. David Finkelstein, in 1958, first published the interpretation of "black hole" as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The discovery of neutron stars by Jocelyn Bell Burnell in 1967 sparked interest in gravitationally collapsed compact objects as a possible astrophysical reality. The first black hole known was Cygnus X-1, identified by several researchers independently in 1971.

Black holes of stellar mass form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses may form by absorbing other stars and merging with other black holes, or via direct collapse of gas clouds. There is consensus that supermassive black holes exist in the centres of most galaxies.

The presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Any matter that falls toward a black hole can form an external accretion disk heated by friction, forming quasars, some of the brightest objects in the universe. Stars passing too close to a supermassive black hole can be shredded into streamers that shine very brightly before being "swallowed." If other stars are orbiting a black hole, their orbits can be used to determine the black hole's mass and location. Such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have identified numerous stellar black hole candidates in binary systems and established that the radio source known as Sagittarius A*, at the core of the Milky Way galaxy, contains a supermassive black hole of about 4.3 million solar masses.

History

The idea of a body so big that even light could not escape was briefly proposed by English astronomical pioneer and clergyman John Michell and independently by French scientist Pierre-Simon Laplace. Both scholars proposed very large stars rather than the modern model of stars with extraordinary density. Michell's idea appeared in a letter published in November 1784. Michell's simplistic calculations assumed such a body might have the same density as the Sun, and concluded that one would form when a star's diameter exceeds the Sun's by a factor of 500, and its surface escape velocity exceeds the usual speed of light. Michell correctly noted that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies.
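Michell's factor of 500 can be reconstructed with a short Newtonian calculation (an illustration, not part of the original text), assuming a uniform star of solar density \rho_\odot \approx 1.41 \times 10^{3}\ \mathrm{kg\,m^{-3}} and requiring the surface escape velocity to reach the speed of light:

v_{\mathrm{esc}} = \sqrt{\frac{2GM}{R}}, \qquad M = \frac{4}{3}\pi R^3 \rho \quad\Longrightarrow\quad v_{\mathrm{esc}} = R \sqrt{\frac{8\pi G \rho}{3}}.

Setting v_{\mathrm{esc}} = c gives

R = c \sqrt{\frac{3}{8\pi G \rho_\odot}} \approx 3.4 \times 10^{11}\ \mathrm{m} \approx 490\, R_\odot,

i.e. a diameter roughly 500 times the Sun's, consistent with Michell's conclusion.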
In 1796, Laplace mentioned that a star could be invisible if it were sufficiently large while speculating on the origin of the Solar System in his book Exposition du Système du Monde. Franz Xaver von Zach asked Laplace for a mathematical analysis, which Laplace provided and published in a journal edited by von Zach. Scholars of the time were initially excited by the proposal that giant but invisible 'dark stars' might be hiding in plain view, but enthusiasm dampened when the wavelike nature of light became apparent in the early nineteenth century: if light were a wave rather than a particle, it was unclear what, if any, influence gravity would have on escaping light waves.

General relativity

In 1915, Albert Einstein developed his theory of general relativity, having earlier shown that gravity does influence light's motion. Only a few months later, Karl Schwarzschild found a solution to the Einstein field equations that describes the gravitational field of a point mass and a spherical mass. A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution for the point mass and wrote more extensively about its properties. This solution had a peculiar behaviour at what is now called the Schwarzschild radius, where it became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this surface was not quite understood at the time. In 1924, Arthur Eddington showed that the singularity disappeared after a change of coordinates. In 1933, Georges Lemaître realised that this meant the singularity at the Schwarzschild radius was a non-physical coordinate singularity. Arthur Eddington commented on the possibility of a star with mass compressed to the Schwarzschild radius in a 1926 book, noting that Einstein's theory allows us to rule out overly large densities for visible stars like Betelgeuse because "a star of 250 million km radius could not possibly have so high a density as the Sun. Firstly, the force of gravitation would be so great that light would be unable to escape from it, the rays falling back to the star like a stone to the earth. Secondly, the red shift of the spectral lines would be so great that the spectrum would be shifted out of existence. Thirdly, the mass would produce so much curvature of the spacetime metric that space would close up around the star, leaving us outside (i.e., nowhere)."

In 1931, Subrahmanyan Chandrasekhar calculated, using special relativity, that a non-rotating body of electron-degenerate matter above a certain limiting mass (now called the Chandrasekhar limit, at 1.4 solar masses) has no stable solutions. His arguments were opposed by many of his contemporaries like Eddington and Lev Landau, who argued that some yet unknown mechanism would stop the collapse. They were partly correct: a white dwarf slightly more massive than the Chandrasekhar limit will collapse into a neutron star, which is itself stable.
In 1939, Robert Oppenheimer and others predicted that neutron stars above another limit, the Tolman–Oppenheimer–Volkoff limit, would collapse further for the reasons presented by Chandrasekhar, and concluded that no law of physics was likely to intervene and stop at least some stars from collapsing to black holes. Their original calculations, based on the Pauli exclusion principle, gave it as 0.7 solar masses. Subsequent consideration of neutron-neutron repulsion mediated by the strong force raised the estimate to approximately 1.5 to 3.0 solar masses. Observations of the neutron star merger GW170817, which is thought to have generated a black hole shortly afterward, have refined the TOV limit estimate to ~2.17 solar masses.

Oppenheimer and his co-authors interpreted the singularity at the boundary of the Schwarzschild radius as indicating that this was the boundary of a bubble in which time stopped. This is a valid point of view for external observers, but not for infalling observers. The hypothetical collapsed stars were called "frozen stars", because an outside observer would see the surface of the star frozen in time at the instant where its collapse takes it to the Schwarzschild radius.

Also in 1939, Einstein attempted to prove that black holes were impossible in his publication "On a Stationary System with Spherical Symmetry Consisting of Many Gravitating Masses", using his theory of general relativity to defend his argument. Months later, Oppenheimer and his student Hartland Snyder provided the Oppenheimer–Snyder model in their paper "On Continued Gravitational Contraction", which predicted the existence of black holes. In the paper, which made no reference to Einstein's recent publication, Oppenheimer and Snyder used Einstein's own theory of general relativity to show the conditions on how a black hole could develop, for the first time in contemporary physics.

Golden age

In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, "a perfect unidirectional membrane: causal influences can cross it in only one direction". This did not strictly contradict Oppenheimer's results, but extended them to include the point of view of infalling observers. Finkelstein's solution extended the Schwarzschild solution for the future of observers falling into a black hole. A complete extension had already been found by Martin Kruskal, who was urged to publish it.

These results came at the beginning of the golden age of general relativity, which was marked by general relativity and black holes becoming mainstream subjects of research. This process was helped by the discovery of pulsars by Jocelyn Bell Burnell in 1967, which, by 1969, were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities; but the discovery of pulsars showed their physical relevance and spurred a further interest in all types of compact objects that might be formed by gravitational collapse.

In this period more general black hole solutions were found. In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the axisymmetric solution for a black hole that is both rotating and electrically charged. Through the work of Werner Israel, Brandon Carter, and David Robinson the no-hair theorem emerged, stating that a stationary black hole solution is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge.
At first, it was suspected that the strange features of the black hole solutions were pathological artefacts from the symmetry conditions imposed, and that the singularities would not appear in generic situations. This view was held in particular by Vladimir Belinsky, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions. However, in the late 1960s Roger Penrose and Stephen Hawking used global techniques to prove that singularities appear generically. For this work, Penrose received half of the 2020 Nobel Prize in Physics, Hawking having died in 2018. Based on observations in Greenwich and Toronto in the early 1970s, Cygnus X-1, a galactic X-ray source discovered in 1964, became the first astronomical object commonly accepted to be a black hole.

Work by James Bardeen, Jacob Bekenstein, Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation.

Observation

On 11 February 2016, the LIGO Scientific Collaboration and the Virgo collaboration announced the first direct detection of gravitational waves, representing the first observation of a black hole merger. On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole in Messier 87's galactic centre. As of 2023, the nearest known body thought to be a black hole, Gaia BH1, is around 1,560 light-years away. Though only a couple dozen black holes have been found so far in the Milky Way, there are thought to be hundreds of millions, most of which are solitary and do not cause emission of radiation. Therefore, they would only be detectable by gravitational lensing.

Etymology

Science writer Marcia Bartusiak traces the term "black hole" to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive. The term "black hole" was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article "'Black Holes' in Space", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio. In December 1967, a student reportedly suggested the phrase "black hole" at a lecture by John Wheeler; Wheeler adopted the term for its brevity and "advertising value", and it quickly caught on, leading some to credit Wheeler with coining the phrase.

Properties and structure

The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture is true for real black holes under the laws of modern physics is currently an unsolved problem.
These properties are special because they are visible from outside a black hole. For example, a charged black hole repels other like charges just like any other charged object. Similarly, the total mass inside a sphere containing a black hole can be found by using the gravitational analogue of Gauss's law (through the ADM mass), far away from the black hole. Likewise, the angular momentum (or spin) can be measured from far away using frame dragging by the gravitomagnetic field, through for example the Lense–Thirring effect.

When an object falls into a black hole, any information about the shape of the object or distribution of charge on it is evenly distributed along the horizon of the black hole, and is lost to outside observers. The behaviour of the horizon in this situation is a dissipative system that is closely analogous to that of a conductive stretchy membrane with friction and electrical resistance—the membrane paradigm. This is different from other field theories such as electromagnetism, which do not have any friction or resistivity at the microscopic level, because they are time-reversible.

Because a black hole eventually achieves a stable state with only three parameters, there is no way to avoid losing information about the initial conditions: the gravitational and electric fields of a black hole give very little information about what went in. The information that is lost includes every quantity that cannot be measured far away from the black hole horizon, including approximately conserved quantum numbers such as the total baryon number and lepton number. This behaviour is so puzzling that it has been called the black hole information loss paradox.

Physical properties

The simplest static black holes have mass but neither electric charge nor angular momentum. These black holes are often referred to as Schwarzschild black holes after Karl Schwarzschild who discovered this solution in 1916. According to Birkhoff's theorem, it is the only vacuum solution that is spherically symmetric. This means there is no observable difference at a distance between the gravitational field of such a black hole and that of any other spherical object of the same mass. The popular notion of a black hole "sucking in everything" in its surroundings is therefore correct only near a black hole's horizon; far away, the external gravitational field is identical to that of any other body of the same mass.

Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a non-charged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum.

While the mass of a black hole can take any positive value, the charge and angular momentum are constrained by the mass. The total electric charge Q and the total angular momentum J are expected to satisfy the inequality

\frac{Q^2}{4\pi\epsilon_0} + \frac{c^2 J^2}{G M^2} \le G M^2

for a black hole of mass M. Black holes with the minimum possible mass satisfying this inequality are called extremal. Solutions of Einstein's equations that violate this inequality exist, but they do not possess an event horizon. These solutions have so-called naked singularities that can be observed from the outside, and hence are deemed unphysical. The cosmic censorship hypothesis rules out the formation of such singularities, when they are created through the gravitational collapse of realistic matter. This is supported by numerical simulations.
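As a numerical illustration of the inequality above (not part of the original article), the extremal bounds for a black hole of one solar mass can be evaluated by setting J = 0 or Q = 0 in turn:

```python
# Extremal bounds from Q^2/(4*pi*eps0) + c^2 J^2/(G M^2) <= G M^2.
# The one-solar-mass choice is purely illustrative.
import math

G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m/s
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
Msun = 1.98892e30        # solar mass, kg

M = Msun

# Setting J = 0: maximum (extremal) charge Q_max = M * sqrt(4*pi*eps0*G)
Q_max = M * math.sqrt(4 * math.pi * eps0 * G)
# Setting Q = 0: maximum (extremal) angular momentum J_max = G M^2 / c
J_max = G * M**2 / c

print(f"Q_max ~ {Q_max:.2e} C")         # ~1.7e20 coulombs
print(f"J_max ~ {J_max:.2e} kg m^2/s")  # ~8.8e41 kg m^2/s
```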
Due to the relatively large strength of the electromagnetic force, black holes forming from the collapse of stars are expected to retain the nearly neutral charge of the star. Rotation, however, is expected to be a universal feature of compact astrophysical objects. The black-hole candidate binary X-ray source GRS 1915+105 appears to have an angular momentum near the maximum allowed value. That uncharged limit is

J \le \frac{G M^2}{c},

allowing definition of a dimensionless spin parameter such that

0 \le \frac{c J}{G M^2} \le 1.

Black holes are commonly classified according to their mass, independent of angular momentum, J. The size of a black hole, as determined by the radius of the event horizon, or Schwarzschild radius, is proportional to the mass, M, through

r_s = \frac{2 G M}{c^2} \approx 2.95\,\frac{M}{M_\odot}\ \mathrm{km},

where r_s is the Schwarzschild radius and M_\odot is the mass of the Sun. For a black hole with nonzero spin or electric charge, the radius is smaller, until an extremal black hole could have an event horizon close to r = GM/c^2.

Event horizon

The defining feature of a black hole is the appearance of an event horizon—a boundary in spacetime through which matter and light can pass only inward towards the mass of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach an outside observer, making it impossible to determine whether such an event occurred.

As predicted by general relativity, the presence of a mass deforms spacetime in such a way that the paths taken by particles bend towards the mass. At the event horizon of a black hole, this deformation becomes so strong that there are no paths that lead away from the black hole. To a distant observer, clocks near a black hole would appear to tick more slowly than those farther away from the black hole. Due to this effect, known as gravitational time dilation, an object falling into a black hole appears to slow as it approaches the event horizon, taking an infinite amount of time to reach it. At the same time, all processes on this object slow down, from the viewpoint of a fixed outside observer, causing any light emitted by the object to appear redder and dimmer, an effect known as gravitational redshift. Eventually, the falling object fades away until it can no longer be seen. Typically this process happens very rapidly with an object disappearing from view within less than a second.

On the other hand, indestructible observers falling into a black hole do not notice any of these effects as they cross the event horizon. According to their own clocks, which appear to them to tick normally, they cross the event horizon after a finite time without noting any singular behaviour; in classical general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle.

The topology of the event horizon of a black hole at equilibrium is always spherical. For non-rotating (static) black holes the geometry of the event horizon is precisely spherical, while for rotating black holes the event horizon is oblate.

Singularity

At the centre of a black hole, as described by general relativity, may lie a gravitational singularity, a region where the spacetime curvature becomes infinite. For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole it is smeared out to form a ring singularity that lies in the plane of rotation.
In both cases, the singular region has zero volume. It can also be shown that the singular region contains all the mass of the black hole solution. The singular region can thus be thought of as having infinite density.

Observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity once they cross the event horizon. They can prolong the experience by accelerating away to slow their descent, but only up to a limit. When they reach the singularity, they are crushed to infinite density and their mass is added to the total of the black hole. Before that happens, they will have been torn apart by the growing tidal forces in a process sometimes referred to as spaghettification or the "noodle effect".

In the case of a charged (Reissner–Nordström) or rotating (Kerr) black hole, it is possible to avoid the singularity. Extending these solutions as far as possible reveals the hypothetical possibility of exiting the black hole into a different spacetime with the black hole acting as a wormhole. The possibility of travelling to another universe is, however, only theoretical, since any perturbation would destroy this possibility. It also appears to be possible to follow closed timelike curves (returning to one's own past) around the Kerr singularity, which leads to problems with causality like the grandfather paradox. It is expected that none of these peculiar effects would survive in a proper quantum treatment of rotating and charged black holes.

The appearance of singularities in general relativity is commonly perceived as signalling the breakdown of the theory. This breakdown, however, is expected; it occurs in a situation where quantum effects should describe these actions, due to the extremely high density and therefore particle interactions. To date, it has not been possible to combine quantum and gravitational effects into a single theory, although there exist attempts to formulate such a theory of quantum gravity. It is generally expected that such a theory will not feature any singularities.

Photon sphere

The photon sphere is a spherical boundary where photons that move on tangents to that sphere would be trapped in a non-stable but circular orbit around the black hole. For non-rotating black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius. Their orbits would be dynamically unstable, hence any small perturbation, such as a particle of infalling matter, would cause an instability that would grow over time, either setting the photon on an outward trajectory causing it to escape the black hole, or on an inward spiral where it would eventually cross the event horizon.

While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Hence any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. For a Kerr black hole the radius of the photon sphere depends on the spin parameter and on the details of the photon orbit, which can be prograde (the photon rotates in the same sense of the black hole spin) or retrograde.

Ergosphere

Rotating black holes are surrounded by a region of spacetime in which it is impossible to stand still, called the ergosphere.
This is the result of a process known as frame-dragging; general relativity predicts that any rotating mass will tend to slightly "drag" along the spacetime immediately surrounding it. Any object near the rotating mass will tend to start moving in the direction of rotation. For a rotating black hole, this effect is so strong near the event horizon that an object would have to move faster than the speed of light in the opposite direction to just stand still. The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but is at a much greater distance around the equator. Objects and radiation can escape normally from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole; the rotation of the black hole thereby slows down. A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process, is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei. Innermost stable circular orbit (ISCO) In Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists an innermost stable circular orbit (often called the ISCO), inside which any infinitesimal inward perturbation to a circular orbit will lead to spiraling into the black hole, while any outward perturbation will, depending on the energy, result in spiraling in, stably orbiting between apastron and periastron, or escaping to infinity. The location of the ISCO depends on the spin of the black hole; in the case of a Schwarzschild black hole (spin zero) it is $r_{\mathrm{ISCO}} = 6GM/c^2 = 3r_s$, and it decreases with increasing black hole spin for particles orbiting in the same direction as the spin. Plunging region The final observable region of spacetime around a black hole is called the plunging region. In this area it is no longer possible for matter to follow circular orbits or to stop a final descent into the black hole. Instead it will rapidly plunge toward the black hole close to the speed of light. Formation and evolution Given the bizarre character of black holes, it was long questioned whether such objects could actually exist in nature or whether they were merely pathological solutions to Einstein's equations. Einstein himself wrongly thought black holes would not form, because he held that the angular momentum of collapsing particles would stabilise their motion at some radius. This led the general relativity community to dismiss all results to the contrary for many years. However, a minority of relativists continued to contend that black holes were physical objects, and by the end of the 1960s they had persuaded the majority of researchers in the field that there is no obstacle to the formation of an event horizon. Penrose demonstrated that once an event horizon forms, general relativity without quantum mechanics requires that a singularity will form within. Shortly afterwards, Hawking showed that many cosmological solutions that describe the Big Bang have singularities without scalar fields or other exotic matter. The Kerr solution, the no-hair theorem, and the laws of black hole thermodynamics showed that the physical properties of black holes were simple and comprehensible, making them respectable subjects for research.
Conventional black holes are formed by gravitational collapse of heavy objects such as stars, but they can also, in theory, be formed by other processes. Gravitational collapse Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. For stars this usually occurs either because a star has too little "fuel" left to maintain its temperature through stellar nucleosynthesis, or because a star that would have been stable receives extra matter in a way that does not raise its core temperature. In either case the star's temperature is no longer high enough to prevent it from collapsing under its own weight. The collapse may be stopped by the degeneracy pressure of the star's constituents, allowing the condensation of matter into an exotic denser state. The result is one of the various types of compact star. Which type forms depends on the mass of the remnant of the original star left after the outer layers have been blown away (for example, in a Type II supernova). The mass of the remnant, the collapsed object that survives the explosion, can be substantially less than that of the original star. Remnants exceeding about 5 solar masses are produced by stars that were over 20 solar masses before the collapse. If the mass of the remnant exceeds about 3–4 solar masses (the Tolman–Oppenheimer–Volkoff limit), either because the original star was very heavy or because the remnant collected additional mass through accretion of matter, even the degeneracy pressure of neutrons is insufficient to stop the collapse. No known mechanism (except possibly quark degeneracy pressure) is powerful enough to stop the implosion, and the object will inevitably collapse to form a black hole. The gravitational collapse of heavy stars is assumed to be responsible for the formation of stellar mass black holes. Star formation in the early universe may have resulted in very massive stars, which upon their collapse would have produced black holes of up to around 1,000 solar masses. These black holes could be the seeds of the supermassive black holes found in the centres of most galaxies. It has further been suggested that massive black holes with typical masses of roughly 100,000 solar masses could have formed from the direct collapse of gas clouds in the young universe. These massive objects have been proposed as the seeds that eventually formed the earliest quasars, observed already at redshift z ≈ 7. Some candidates for such objects have been found in observations of the young universe. While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time from the reference frame of infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the light emitted just before the event horizon forms delayed an infinite amount of time. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away. Primordial black holes and the Big Bang Gravitational collapse requires great density. In the current epoch of the universe these high densities are found only in stars, but in the early universe, shortly after the Big Bang, densities were much greater, possibly allowing for the creation of black holes.
High density alone is not enough to allow black hole formation, since a uniform mass distribution will not allow the mass to bunch up. In order for primordial black holes to have formed in such a dense medium, there must have been initial density perturbations that could then grow under their own gravity. Different models for the early universe vary widely in their predictions of the scale of these fluctuations. Various models predict the creation of primordial black holes ranging in size from a Planck mass ($m_P = \sqrt{\hbar c/G}$ ≈ 1.2×10^19 GeV/c^2 ≈ 2.2×10^-8 kg) to hundreds of thousands of solar masses. Despite the early universe being extremely dense, it did not re-collapse into a black hole during the Big Bang, since the expansion rate was greater than the attraction. According to inflation theory, there was a net repulsive gravitation in the beginning until the end of inflation. Since then the Hubble flow has been slowed by the energy density of the universe. Models for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space such as the Big Bang. High-energy collisions Gravitational collapse is not the only process that could create black holes. In principle, black holes could be formed in high-energy collisions that achieve sufficient density. As of 2002, no such events had been detected, either directly or indirectly as a deficiency of the mass balance in particle accelerator experiments. This suggests that there must be a lower limit for the mass of black holes. Theoretically, this boundary is expected to lie around the Planck mass, where quantum effects are expected to invalidate the predictions of general relativity. This would put the creation of black holes firmly out of reach of any high-energy process occurring on or near the Earth. However, certain developments in quantum gravity suggest that the minimum black hole mass could be much lower: some braneworld scenarios, for example, put the boundary as low as 1 TeV/c^2. This would make it conceivable for micro black holes to be created in the high-energy collisions that occur when cosmic rays hit the Earth's atmosphere, or possibly in the Large Hadron Collider at CERN. These theories are very speculative, and the creation of black holes in these processes is deemed unlikely by many specialists. Even if micro black holes could be formed, it is expected that they would evaporate in about 10^-25 seconds, posing no threat to the Earth. Growth Once a black hole has formed, it can continue to grow by absorbing additional matter. Any black hole will continually absorb gas and interstellar dust from its surroundings. This growth process is one possible way through which some supermassive black holes may have been formed, although the formation of supermassive black holes is still an open field of research. A similar process has been suggested for the formation of intermediate-mass black holes found in globular clusters. Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important, especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes. Evaporation In 1974, Hawking predicted that black holes are not entirely black but emit small amounts of thermal radiation at a temperature $T = \hbar c^3/(8\pi G M k_B)$; this effect has become known as Hawking radiation.
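This temperature formula is easy to evaluate. The sketch below is an illustration added for this rewrite, not part of the original article; it reproduces the 62-nanokelvin figure quoted in the next paragraph for a one-solar-mass black hole:

```python
# Sketch: Hawking temperature T = hbar * c^3 / (8 * pi * G * M * k_B)
# for a Schwarzschild black hole. Constants are rounded SI values.
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8       # speed of light, m/s
HBAR = 1.055e-34  # reduced Planck constant, J s
K_B = 1.381e-23   # Boltzmann constant, J/K
M_SUN = 1.989e30  # solar mass, kg

def hawking_temperature(mass_kg: float) -> float:
    """Temperature in kelvin; note the inverse proportionality to mass."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

print(hawking_temperature(M_SUN))        # ~6.2e-8 K, i.e. about 62 nK
print(hawking_temperature(M_SUN) < 2.7)  # True: colder than the CMB, so such a hole grows
```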
By applying quantum field theory to a static black hole background, Hawking determined that a black hole should emit particles that display a perfect black body spectrum. Since Hawking's publication, many others have verified the result through various approaches. If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (Hawking temperature) is proportional to the surface gravity of the black hole, which, for a Schwarzschild black hole, is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes. A stellar black hole of one solar mass has a Hawking temperature of 62 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrinking. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole would need a mass less than that of the Moon. Such a black hole would have a diameter of less than a tenth of a millimetre. If a black hole is very small, the radiation effects are expected to become very strong. A black hole with the mass of a car would have a diameter of about 10^-24 m and take a nanosecond to evaporate, during which time it would briefly have a luminosity of more than 200 times that of the Sun. Lower-mass black holes are expected to evaporate even faster; for example, a black hole of mass 1 TeV/c^2 would take less than 10^-88 seconds to evaporate completely. For such a small black hole, quantum gravity effects are expected to play an important role and could hypothetically make such a small black hole stable, although current developments in quantum gravity do not indicate this is the case. The Hawking radiation for an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception, however, is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent limits on the possibility of existence of low mass primordial black holes. NASA's Fermi Gamma-ray Space Telescope, launched in 2008, continues the search for these flashes. If black holes evaporate via Hawking radiation, a solar mass black hole will evaporate (beginning once the temperature of the cosmic microwave background drops below that of the black hole) over a period of roughly 10^67 years. A supermassive black hole with a mass of 10^11 solar masses will evaporate in around 2×10^100 years. Some monster black holes in the universe are predicted to continue to grow, up to perhaps 10^14 solar masses during the collapse of superclusters of galaxies. Even these would evaporate, over a timescale of up to around 10^109 years. Observational evidence By nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. For example, a black hole's existence can sometimes be inferred by observing its gravitational influence on its surroundings. Direct interferometry The Event Horizon Telescope (EHT) is an active program that directly observes the immediate environment of black holes' event horizons, such as the black hole at the centre of the Milky Way.
In April 2017, EHT began observing the black hole at the centre of Messier 87. "In all, eight radio observatories on six mountains and four continents observed the galaxy in Virgo on and off for 10 days in April 2017" to provide the data yielding the image in April 2019. After two years of data processing, EHT released the first direct image of a black hole: specifically, the supermassive black hole that lies in the centre of that galaxy. What is visible is not the black hole itself, which appears black because all light is lost within this dark region; rather, it is the gases at the edge of the event horizon, displayed as orange or red, that define the black hole. On 12 May 2022, the EHT released the first image of Sagittarius A*, the supermassive black hole at the centre of the Milky Way galaxy. The published image displayed the same ring-like structure and circular shadow as seen in the M87* black hole, and the image was created using the same techniques as for the M87 black hole. The imaging process for Sagittarius A*, which is more than a thousand times smaller and less massive than M87*, was significantly more complex because of the instability of its surroundings. The image of Sagittarius A* was partially blurred by turbulent plasma on the way to the galactic centre, an effect which prevents resolution of the image at longer wavelengths. The brightening of this material in the 'bottom' half of the processed EHT image is thought to be caused by Doppler beaming, whereby material approaching the viewer at relativistic speeds is perceived as brighter than material moving away. In the case of a black hole, this phenomenon implies that the visible material is rotating at relativistic speeds, the only speeds at which it is possible to centrifugally balance the immense gravitational attraction of the singularity and thereby remain in orbit above the event horizon. This configuration of bright material implies that the EHT observed M87* from a perspective catching the black hole's accretion disc nearly edge-on, as the whole system rotated clockwise. The extreme gravitational lensing associated with black holes produces the illusion of a perspective that sees the accretion disc from above. In reality, most of the ring in the EHT image was created when the light emitted by the far side of the accretion disc bent around the black hole's gravity well and escaped, meaning that most of the possible perspectives on M87* can see the entire disc, even that directly behind the "shadow". In 2015, the EHT detected magnetic fields just outside the event horizon of Sagittarius A* and even discerned some of their properties. The field lines that pass through the accretion disc were a complex mixture of ordered and tangled. Theoretical studies of black holes had predicted the existence of magnetic fields. In April 2023, an image of the shadow of the Messier 87 black hole and the related high-energy jet, viewed together for the first time, was presented. Detection of gravitational waves from merging black holes On 14 September 2015, the LIGO gravitational wave observatory made the first-ever successful direct observation of gravitational waves. The signal was consistent with theoretical predictions for the gravitational waves produced by the merger of two black holes: one with about 36 solar masses, and the other around 29 solar masses. This observation provides the most concrete evidence for the existence of black holes to date.
For instance, the gravitational wave signal suggests that the separation of the two objects before the merger was just 350 km, or roughly four times the Schwarzschild radius corresponding to the inferred masses. The objects must therefore have been extremely compact, leaving black holes as the most plausible interpretation. More importantly, the signal observed by LIGO also included the start of the post-merger ringdown, the signal produced as the newly formed compact object settles down to a stationary state. Arguably, the ringdown is the most direct way of observing a black hole. From the LIGO signal, it is possible to extract the frequency and damping time of the dominant mode of the ringdown. From these, it is possible to infer the mass and angular momentum of the final object, which match independent predictions from numerical simulations of the merger. The frequency and decay time of the dominant mode are determined by the geometry of the photon sphere. Hence, observation of this mode confirms the presence of a photon sphere; however, it cannot exclude possible exotic alternatives to black holes that are compact enough to have a photon sphere. The observation also provides the first observational evidence for the existence of stellar-mass black hole binaries. Furthermore, it is the first observational evidence of stellar-mass black holes weighing 25 solar masses or more. Since then, many more gravitational wave events have been observed. Stars orbiting Sagittarius A* The proper motions of stars near the centre of our own Milky Way provide strong observational evidence that these stars are orbiting a supermassive black hole. Since 1995, astronomers have tracked the motions of 90 stars orbiting an invisible object coincident with the radio source Sagittarius A*. By fitting their motions to Keplerian orbits, the astronomers were able to infer, in 1998, that an object of about 2.6 million solar masses must be contained in a volume with a radius of 0.02 light-years to cause the motions of those stars. Since then, one of the stars, called S2, has completed a full orbit. From the orbital data, astronomers were able to refine the calculations of the mass to about 4.3 million solar masses and the radius to less than 0.002 light-years for the object causing the orbital motion of those stars. The upper limit on the object's size is still too large to test whether it is smaller than its Schwarzschild radius. Nevertheless, these observations strongly suggest that the central object is a supermassive black hole, as there are no other plausible scenarios for confining so much invisible mass into such a small volume. Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes. Accretion of matter Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object. Artists' impressions of a black hole with corona commonly depict the black hole as if it were a flat-space body hiding the part of the disk just behind it, but in reality gravitational lensing would greatly distort the image of the accretion disk. Within such a disk, friction would cause angular momentum to be transported outward, allowing matter to fall farther inward, thus releasing potential energy and increasing the temperature of the gas.
When the accreting object is a neutron star or a black hole, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the compact object. The resulting friction is so significant that it heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays). These bright X-ray sources may be detected by telescopes. This process of accretion is one of the most efficient energy-producing processes known: up to 40% of the rest mass of the accreted material can be emitted as radiation, whereas nuclear fusion releases only about 0.7% of the rest mass as energy. In many cases, accretion disks are accompanied by relativistic jets that are emitted along the poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data. As such, many of the universe's more energetic phenomena have been attributed to the accretion of matter on black holes. In particular, active galactic nuclei and quasars are believed to be the accretion disks of supermassive black holes. Similarly, X-ray binaries are generally accepted to be binary star systems in which one of the two stars is a compact object accreting matter from its companion. It has also been suggested that some ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes. Stars have been observed to get torn apart by tidal forces in the immediate vicinity of supermassive black holes in galaxy nuclei, in what is known as a tidal disruption event (TDE). Some of the material from the disrupted star forms an accretion disk around the black hole, which emits observable electromagnetic radiation. In November 2011 the first direct observation of a quasar accretion disk around a supermassive black hole was reported. X-ray binaries X-ray binaries are binary star systems that emit a majority of their radiation in the X-ray part of the spectrum. These X-ray emissions are generally thought to result when one of the stars (a compact object) accretes matter from another (regular) star. The presence of an ordinary star in such a system provides an opportunity for studying the central object and for determining whether it might be a black hole. If such a system emits signals that can be directly traced back to the compact object, it cannot be a black hole. The absence of such a signal does not, however, exclude the possibility that the compact object is a neutron star. By studying the companion star it is often possible to obtain the orbital parameters of the system and an estimate for the mass of the compact object. If this is much larger than the Tolman–Oppenheimer–Volkoff limit (the maximum mass a star can have without collapsing), then the object cannot be a neutron star and is generally expected to be a black hole. The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. Some doubt remained, due to the uncertainties that result from the companion star being much heavier than the candidate black hole. Currently, better candidates for black holes are found in a class of X-ray binaries called soft X-ray transients. In this class of system, the companion star is of relatively low mass, allowing for more accurate estimates of the black hole mass. These systems actively emit X-rays for only several months once every 10–50 years.
During the period of low X-ray emission, called quiescence, the accretion disk is extremely faint, allowing detailed observation of the companion star during this period. One of the best such candidates is V404 Cygni. Quasi-periodic oscillations The X-ray emissions from accretion disks sometimes flicker at certain frequencies. These signals are called quasi-periodic oscillations and are thought to be caused by material moving along the inner edge of the accretion disk (the innermost stable circular orbit). As such, their frequency is linked to the mass of the compact object. They can thus be used as an alternative way to determine the mass of candidate black holes. Galactic nuclei Astronomers use the term "active galaxy" to describe galaxies with unusual characteristics, such as unusual spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the activity in these active galactic nuclei (AGN) may be explained by the presence of supermassive black holes, which can be millions of times more massive than stellar ones. The models of these AGN consist of a central black hole that may be millions or billions of times more massive than the Sun; a disk of interstellar gas and dust called an accretion disk; and two jets perpendicular to the accretion disk. Although supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been more carefully studied in attempts to both identify and measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, M32, M87, NGC 3115, NGC 3377, NGC 4258, NGC 4889, NGC 1277, OJ 287, APM 08279+5255 and the Sombrero Galaxy. It is now widely accepted that the centre of nearly every galaxy, not just active ones, contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M–sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself. Microlensing Another way the black hole nature of an object may be tested is through observation of effects caused by a strong gravitational field in its vicinity. One such effect is gravitational lensing: the deformation of spacetime around a massive object causes light rays to be deflected, much as light passing through an optical lens. Observations have been made of weak gravitational lensing, in which light rays are deflected by only a few arcseconds. Microlensing occurs when the sources are unresolved and the observer sees a small brightening. The turn of the millennium saw the first three candidate detections of black holes in this way, and in January 2022, astronomers reported the first confirmed detection of a microlensing event from an isolated black hole. Another possibility for observing gravitational lensing by a black hole would be to observe stars orbiting the black hole. There are several candidates for such an observation in orbit around Sagittarius A*. Alternatives The evidence for stellar black holes strongly relies on the existence of an upper limit for the mass of a neutron star. The size of this limit heavily depends on the assumptions made about the properties of dense matter. New exotic phases of matter could push up this bound.
A phase of free quarks at high density might allow the existence of dense quark stars, and some supersymmetric models predict the existence of Q stars. Some extensions of the standard model posit the existence of preons as fundamental building blocks of quarks and leptons, which could hypothetically form preon stars. These hypothetical models could potentially explain a number of observations of stellar black hole candidates. However, it can be shown from arguments in general relativity that any such object will have a maximum mass. Since the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass, supermassive black holes are much less dense than stellar black holes; the average density of a 10^8 solar mass black hole is comparable to that of water. Consequently, the physics of matter forming a supermassive black hole is much better understood, and the possible alternative explanations for supermassive black hole observations are much more mundane. For example, a supermassive black hole could be modelled by a large cluster of very dark objects. However, such alternatives are typically not stable enough to explain the supermassive black hole candidates. The evidence for the existence of stellar and supermassive black holes implies that, in order for black holes not to form, general relativity must fail as a theory of gravity, perhaps due to the onset of quantum mechanical corrections. A much anticipated feature of a theory of quantum gravity is that it will not feature singularities or event horizons; if so, black holes would be artefacts of the classical theory rather than real objects. For example, in the fuzzball model based on string theory, the individual states of a black hole solution do not generally have an event horizon or singularity, but for a classical/semiclassical observer the statistical average of such states appears just as an ordinary black hole as deduced from general relativity. A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically, but which function via a different mechanism. These include the gravastar, the black star, the related nestar, and the dark-energy star. Open questions Entropy and thermodynamics In 1971, Hawking showed under general conditions that the total area of the event horizons of any collection of classical black holes can never decrease, even if they collide and merge. This result, now known as the second law of black hole mechanics, is remarkably similar to the second law of thermodynamics, which states that the total entropy of an isolated system can never decrease. As with classical objects at absolute zero temperature, it was assumed that black holes had zero entropy. If this were the case, the second law of thermodynamics would be violated by entropy-laden matter entering a black hole, resulting in a decrease in the total entropy of the universe. Therefore, Bekenstein proposed that a black hole should have an entropy, and that it should be proportional to its horizon area. The link with the laws of thermodynamics was further strengthened by Hawking's discovery in 1974 that quantum field theory predicts that a black hole radiates blackbody radiation at a constant temperature. This seemingly causes a violation of the second law of black hole mechanics, since the radiation will carry away energy from the black hole, causing it to shrink.
The radiation also carries away entropy, and it can be proven under general assumptions that the sum of the entropy of the matter surrounding a black hole and one quarter of the area of the horizon as measured in Planck units is in fact always increasing. This allows the formulation of the first law of black hole mechanics as an analogue of the first law of thermodynamics, with the mass acting as energy, the surface gravity as temperature and the area as entropy. One puzzling feature is that the entropy of a black hole scales with its area rather than with its volume, whereas entropy is normally an extensive quantity that scales linearly with the volume of the system. This odd property led Gerard 't Hooft and Leonard Susskind to propose the holographic principle, which suggests that anything that happens in a volume of spacetime can be described by data on the boundary of that volume. Although general relativity can be used to perform a semiclassical calculation of black hole entropy, this situation is theoretically unsatisfying. In statistical mechanics, entropy is understood as counting the number of microscopic configurations of a system that have the same macroscopic qualities, such as mass, charge, pressure, etc. Without a satisfactory theory of quantum gravity, one cannot perform such a computation for black holes. Some progress has been made in various approaches to quantum gravity. In 1995, Andrew Strominger and Cumrun Vafa showed that counting the microstates of a specific supersymmetric black hole in string theory reproduced the Bekenstein–Hawking entropy. Since then, similar results have been reported for different black holes both in string theory and in other approaches to quantum gravity like loop quantum gravity. Information loss paradox Because a black hole has only a few internal parameters, most of the information about the matter that went into forming the black hole is lost. Regardless of the type of matter which goes into a black hole, it appears that only information concerning the total mass, charge, and angular momentum is conserved. As long as black holes were thought to persist forever, this information loss was not that problematic, as the information could be thought of as existing inside the black hole, inaccessible from the outside but represented on the event horizon in accordance with the holographic principle. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information appears to be gone forever. The question whether information is truly lost in black holes (the black hole information paradox) has divided the theoretical physics community. In quantum mechanics, loss of information corresponds to the violation of a property called unitarity, and it has been argued that loss of unitarity would also imply violation of conservation of energy, though this has also been disputed. Over recent years evidence has been building that indeed information and unitarity are preserved in a full quantum gravitational treatment of the problem. One attempt to resolve the black hole information paradox is known as black hole complementarity. In 2012, the "firewall paradox" was introduced with the goal of demonstrating that black hole complementarity fails to solve the information paradox. According to quantum field theory in curved spacetime, a single emission of Hawking radiation involves two mutually entangled particles.
The outgoing particle escapes and is emitted as a quantum of Hawking radiation; the infalling particle is swallowed by the black hole. Assume a black hole formed a finite time in the past and will fully evaporate away in some finite time in the future. Then, it will emit only a finite amount of information encoded within its Hawking radiation. According to research by physicists like Don Page and Leonard Susskind, there will eventually be a time by which an outgoing particle must be entangled with all the Hawking radiation the black hole has previously emitted. This seemingly creates a paradox: a principle called "monogamy of entanglement" requires that, like any quantum system, the outgoing particle cannot be fully entangled with two other systems at the same time; yet here the outgoing particle appears to be entangled both with the infalling particle and, independently, with past Hawking radiation. In order to resolve this contradiction, physicists may eventually be forced to give up one of three time-tested principles: Einstein's equivalence principle, unitarity, or local quantum field theory. One possible solution, which violates the equivalence principle, is that a "firewall" destroys incoming particles at the event horizon. In general, which (if any) of these assumptions should be abandoned remains a topic of debate. In science fiction Christopher Nolan's 2014 science fiction epic Interstellar features a black hole known as Gargantua, which is the central object of a planetary system in a distant galaxy. Humanity accessed this system via a wormhole in the outer solar system, near Saturn. See also: Black brane, Black string, Black Hole Initiative, Black hole starship, Black holes in fiction, Blanet, BTZ black hole, Golden binary, Hypothetical black hole (disambiguation), Kugelblitz (astrophysics), List of black holes, List of nearest black holes, Outline of black holes, Sonic black hole, Virtual black hole, Susskind–Hawking battle, Timeline of black hole physics, White hole, Planck star, Dark star (dark matter). External links: Stanford Encyclopedia of Philosophy, "Singularities and Black Holes" by Erik Curiel and Peter Bokulich; Black Holes: Gravity's Relentless Pull, an interactive multimedia site about the physics and astronomy of black holes from the Space Telescope Science Institute (HubbleSite); ESA's Black Hole Visualization; Frequently Asked Questions (FAQs) on Black Holes; Schwarzschild Geometry; Black holes - basic (NYT; April 2021). Videos: 16-year-long study tracks stars orbiting Sagittarius A*; Movie of Black Hole Candidate from Max Planck Institute; Computer visualisation of the signal detected by LIGO; Two Black Holes Merge into One (based upon the signal GW150914).
Black hole
[ "Physics", "Astronomy" ]
12,394
[ "Black holes", "Physical phenomena", "Physical quantities", "Concepts in astronomy", "Galaxies", "Unsolved problems in physics", "Astrophysics", "Density", "Theory of relativity", "Stellar phenomena", "Astronomical objects" ]
4,651
https://en.wikipedia.org/wiki/Beta%20decay
In nuclear physics, beta decay (β-decay) is a type of radioactive decay in which an atomic nucleus emits a beta particle (a fast energetic electron or positron), transforming into an isobar of that nuclide. For example, beta decay of a neutron transforms it into a proton by the emission of an electron accompanied by an antineutrino; or, conversely, a proton is converted into a neutron by the emission of a positron with a neutrino in what is called positron emission. Neither the beta particle nor its associated (anti-)neutrino exists within the nucleus prior to beta decay; both are created in the decay process. By this process, unstable atoms obtain a more stable ratio of protons to neutrons. The probability of a nuclide decaying due to beta and other forms of decay is determined by its nuclear binding energy. The binding energies of all existing nuclides form what is called the nuclear band or valley of stability. For either electron or positron emission to be energetically possible, the energy release (see below) or Q value must be positive. Beta decay is a consequence of the weak force, which is characterized by relatively long decay times. Nucleons are composed of up quarks and down quarks, and the weak force allows a quark to change its flavour by means of a virtual W boson, leading to the creation of an electron/antineutrino or positron/neutrino pair. For example, a neutron, composed of two down quarks and an up quark, decays to a proton composed of a down quark and two up quarks. Electron capture is sometimes included as a type of beta decay, because the basic nuclear process, mediated by the weak force, is the same. In electron capture, an inner atomic electron is captured by a proton in the nucleus, transforming it into a neutron, and an electron neutrino is released. Description The two types of beta decay are known as beta minus and beta plus. In beta minus (β−) decay, a neutron is converted to a proton, and the process creates an electron and an electron antineutrino; while in beta plus (β+) decay, a proton is converted to a neutron and the process creates a positron and an electron neutrino. β+ decay is also known as positron emission. Beta decay conserves a quantum number known as the lepton number, or the number of electrons and their associated neutrinos (other leptons are the muon and tau particles). These particles have lepton number +1, while their antiparticles have lepton number −1. Since a proton or neutron has lepton number zero, β+ decay (emitting a positron, or antielectron, of lepton number −1) must be accompanied by an electron neutrino (lepton number +1), while β− decay (emitting an electron, of lepton number +1) must be accompanied by an electron antineutrino (lepton number −1). An example of electron emission (β− decay) is the decay of carbon-14 into nitrogen-14 with a half-life of about 5,730 years: $^{14}_{6}\mathrm{C} \rightarrow {}^{14}_{7}\mathrm{N} + e^- + \bar{\nu}_e$. In this form of decay, the original element becomes a new chemical element in a process known as nuclear transmutation. This new element has an unchanged mass number A, but an atomic number Z that is increased by one. As in all nuclear decays, the decaying element (in this case 14C) is known as the parent nuclide while the resulting element (in this case 14N) is known as the daughter nuclide. Another example is the decay of hydrogen-3 (tritium) into helium-3 with a half-life of about 12.3 years: $^{3}_{1}\mathrm{H} \rightarrow {}^{3}_{2}\mathrm{He} + e^- + \bar{\nu}_e$. An example of positron emission (β+ decay) is the decay of magnesium-23 into sodium-23 with a half-life of about 11.3 s: $^{23}_{12}\mathrm{Mg} \rightarrow {}^{23}_{11}\mathrm{Na} + e^+ + \nu_e$. β+ decay also results in nuclear transmutation, with the resulting element having an atomic number that is decreased by one.
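The transmutation bookkeeping in these examples (A unchanged, Z shifted by one) is simple enough to sketch in code. The snippet below is an illustration added for this rewrite, not part of the original text; the element lookup table is deliberately tiny:

```python
# Toy sketch of beta-decay transmutation rules: the mass number A is unchanged,
# the atomic number Z goes up by one (beta-minus) or down by one (beta-plus / EC).

ELEMENTS = {6: "C", 7: "N", 11: "Na", 12: "Mg"}  # small illustrative lookup

def beta_minus(A: int, Z: int) -> tuple[int, int]:
    """n -> p inside the nucleus: (A, Z) -> (A, Z+1), emitting e- and an antineutrino."""
    return A, Z + 1

def beta_plus(A: int, Z: int) -> tuple[int, int]:
    """p -> n inside the nucleus: (A, Z) -> (A, Z-1), emitting e+ and a neutrino."""
    return A, Z - 1

# Carbon-14 -> nitrogen-14 (beta minus):
A, Z = beta_minus(14, 6)
print(f"14C -> {A}{ELEMENTS[Z]}")   # 14N

# Magnesium-23 -> sodium-23 (beta plus):
A, Z = beta_plus(23, 12)
print(f"23Mg -> {A}{ELEMENTS[Z]}")  # 23Na
```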
The beta spectrum, or distribution of energy values for the beta particles, is continuous. The total energy of the decay process is divided between the electron, the antineutrino, and the recoiling nuclide. For example, in the beta decay of 210Bi the total decay energy is 1.16 MeV, so an electron emitted with 0.40 MeV leaves the antineutrino the remaining 1.16 − 0.40 = 0.76 MeV (neglecting the small nuclear recoil). An electron at the endpoint of the spectrum would have the maximum possible kinetic energy, leaving the energy of the neutrino to be only its small rest mass. History Discovery and initial characterization Radioactivity was discovered in 1896 by Henri Becquerel in uranium, and subsequently observed by Marie and Pierre Curie in thorium and in the new elements polonium and radium. In 1899, Ernest Rutherford separated radioactive emissions into two types: alpha and beta (now beta minus), based on penetration of objects and ability to cause ionization. Alpha rays could be stopped by thin sheets of paper or aluminium, whereas beta rays could penetrate several millimetres of aluminium. In 1900, Paul Villard identified a still more penetrating type of radiation, which Rutherford identified as a fundamentally new type in 1903 and termed gamma rays. Alpha, beta, and gamma are the first three letters of the Greek alphabet. In 1900, Becquerel measured the mass-to-charge ratio (m/e) for beta particles by the method of J. J. Thomson used to study cathode rays and identify the electron. He found that m/e for a beta particle is the same as for Thomson's electron, and therefore suggested that the beta particle is in fact an electron. In 1901, Rutherford and Frederick Soddy showed that alpha and beta radioactivity involves the transmutation of atoms into atoms of other chemical elements. In 1913, after the products of more radioactive decays were known, Soddy and Kazimierz Fajans independently proposed their radioactive displacement law, which states that beta (i.e., β−) emission from one element produces another element one place to the right in the periodic table, while alpha emission produces an element two places to the left. Neutrinos The study of beta decay provided the first physical evidence for the existence of the neutrino. In both alpha and gamma decay, the resulting alpha or gamma particle has a narrow energy distribution, since the particle carries the energy from the difference between the initial and final nuclear states. However, the kinetic energy distribution, or spectrum, of beta particles measured by Lise Meitner and Otto Hahn in 1911 and by Jean Danysz in 1913 showed multiple lines on a diffuse background. These measurements offered the first hint that beta particles have a continuous spectrum. In 1914, James Chadwick used a magnetic spectrometer with one of Hans Geiger's new counters to make more accurate measurements which showed that the spectrum was continuous. The results, which appeared to be in contradiction to the law of conservation of energy, were validated by means of calorimetric measurements in 1929 by Lise Meitner and Wilhelm Orthmann. If beta decay were simply electron emission, as assumed at the time, then the energy of the emitted electron should have a particular, well-defined value. For beta decay, however, the observed broad distribution of energies suggested that energy is lost in the beta decay process. This spectrum was puzzling for many years. A second problem is related to the conservation of angular momentum.
Molecular band spectra showed that the nuclear spin of nitrogen-14 is 1 (i.e., one unit of the reduced Planck constant ħ) and, more generally, that the spin is integral for nuclei of even mass number and half-integral for nuclei of odd mass number. This was later explained by the proton-neutron model of the nucleus. Beta decay leaves the mass number unchanged, so the change of nuclear spin must be an integer. However, the electron spin is 1/2, hence angular momentum would not be conserved if beta decay were simply electron emission. From 1920 to 1927, Charles Drummond Ellis (along with Chadwick and colleagues) further established that the beta decay spectrum is continuous. In 1933, Ellis and Nevill Mott obtained strong evidence that the beta spectrum has an effective upper bound in energy. Niels Bohr had suggested that the beta spectrum could be explained if conservation of energy was true only in a statistical sense, thus this principle might be violated in any given decay. However, the upper bound in beta energies determined by Ellis and Mott ruled out that notion. Now, the problem of how to account for the variability of energy in known beta decay products, as well as for conservation of momentum and angular momentum in the process, became acute. In a famous letter written in 1930, Wolfgang Pauli attempted to resolve the beta-particle energy conundrum by suggesting that, in addition to electrons and protons, atomic nuclei also contained an extremely light neutral particle, which he called the neutron. He suggested that this "neutron" was also emitted during beta decay (thus accounting for the known missing energy, momentum, and angular momentum), but it had simply not yet been observed. In 1931, Enrico Fermi renamed Pauli's "neutron" the "neutrino" ('little neutral one' in Italian). In 1933, Fermi published his landmark theory for beta decay, where he applied the principles of quantum mechanics to matter particles, supposing that they can be created and annihilated, just as the light quanta in atomic transitions. Thus, according to Fermi, neutrinos are created in the beta-decay process, rather than contained in the nucleus; the same happens to electrons. The neutrino interaction with matter was so weak that detecting it proved a severe experimental challenge. Further indirect evidence of the existence of the neutrino was obtained by observing the recoil of nuclei that emitted such a particle after absorbing an electron. Neutrinos were finally detected directly in 1956 by the American physicists Clyde Cowan and Frederick Reines in the Cowan–Reines neutrino experiment. The properties of neutrinos were (with a few minor modifications) as predicted by Pauli and Fermi. β+ decay and electron capture In 1934, Frédéric and Irène Joliot-Curie bombarded aluminium with alpha particles to effect the nuclear reaction $^{27}_{13}\mathrm{Al} + {}^{4}_{2}\mathrm{He} \rightarrow {}^{30}_{15}\mathrm{P} + {}^{1}_{0}\mathrm{n}$, and observed that the product isotope 30P emits a positron identical to those found in cosmic rays (discovered by Carl David Anderson in 1932). This was the first example of β+ decay (positron emission), which they termed artificial radioactivity since 30P is a short-lived nuclide which does not exist in nature. In recognition of their discovery, the couple were awarded the Nobel Prize in Chemistry in 1935. The theory of electron capture was first discussed by Gian-Carlo Wick in a 1934 paper, and then developed by Hideki Yukawa and others. K-electron capture was first observed in 1937 by Luis Alvarez, in the nuclide 48V. Alvarez went on to study electron capture in 67Ga and other nuclides.
Non-conservation of parity In 1956, Tsung-Dao Lee and Chen Ning Yang noticed that there was no evidence that parity was conserved in weak interactions, and so they postulated that this symmetry may not be preserved by the weak force. They sketched the design for an experiment for testing conservation of parity in the laboratory. Later that year, Chien-Shiung Wu and coworkers conducted the Wu experiment, showing an asymmetrical beta decay of cobalt-60 at cold temperatures that proved that parity is not conserved in beta decay. This surprising result overturned long-held assumptions about parity and the weak force. In recognition of their theoretical work, Lee and Yang were awarded the Nobel Prize for Physics in 1957; Wu, however, was not awarded the Nobel Prize. β− decay In β− decay, the weak interaction converts an atomic nucleus into a nucleus with atomic number increased by one, while emitting an electron (e−) and an electron antineutrino (ν̄e). β− decay generally occurs in neutron-rich nuclei. The generic equation is: $^{A}_{Z}\mathrm{X} \rightarrow {}^{A}_{Z+1}\mathrm{X'} + e^- + \bar{\nu}_e$, where A and Z are the mass number and atomic number of the decaying nucleus, and X and X′ are the initial and final elements, respectively. Another example is when the free neutron (n) decays by β− decay into a proton (p): $n \rightarrow p + e^- + \bar{\nu}_e$. At the fundamental level, this is caused by the conversion of a negatively charged (−1/3 e) down quark to a positively charged (+2/3 e) up quark by emission of a virtual W− boson; the W− boson subsequently decays into an electron and an electron antineutrino: $W^- \rightarrow e^- + \bar{\nu}_e$. β+ decay In β+ decay, or positron emission, the weak interaction converts an atomic nucleus into a nucleus with atomic number decreased by one, while emitting a positron (e+) and an electron neutrino (νe). β+ decay generally occurs in proton-rich nuclei. The generic equation is: $^{A}_{Z}\mathrm{X} \rightarrow {}^{A}_{Z-1}\mathrm{X'} + e^+ + \nu_e$. This may be considered as the decay of a proton inside the nucleus to a neutron: $p \rightarrow n + e^+ + \nu_e$. However, β+ decay cannot occur in an isolated proton because it requires energy, due to the mass of the neutron being greater than the mass of the proton. β+ decay can only happen inside nuclei when the daughter nucleus has a greater binding energy (and therefore a lower total energy) than the mother nucleus. The difference between these energies goes into the reaction of converting a proton into a neutron, a positron, and a neutrino and into the kinetic energy of these particles. This process is opposite to negative beta decay, in that the weak interaction converts a proton into a neutron by converting an up quark into a down quark, resulting in the emission of a W+ or the absorption of a W−. When a W+ boson is emitted, it decays into a positron and an electron neutrino: $W^+ \rightarrow e^+ + \nu_e$. Electron capture (K-capture/L-capture) In all cases where β+ decay (positron emission) of a nucleus is allowed energetically, so too is electron capture allowed. This is a process during which a nucleus captures one of its atomic electrons, resulting in the emission of a neutrino: $^{A}_{Z}\mathrm{X} + e^- \rightarrow {}^{A}_{Z-1}\mathrm{X'} + \nu_e$. An example of electron capture is one of the decay modes of krypton-81 into bromine-81: $^{81}_{36}\mathrm{Kr} + e^- \rightarrow {}^{81}_{35}\mathrm{Br} + \nu_e$. All emitted neutrinos are of the same energy. In proton-rich nuclei where the energy difference between the initial and final states is less than $2m_ec^2$, β+ decay is not energetically possible, and electron capture is the sole decay mode. If the captured electron comes from the innermost shell of the atom, the K-shell, which has the highest probability to interact with the nucleus, the process is called K-capture.
If it comes from the L-shell, the process is called L-capture, etc. Electron capture is a competing (simultaneous) decay process for all nuclei that can undergo β+ decay. The converse, however, is not true: electron capture is the only type of decay that is allowed in proton-rich nuclides that do not have sufficient energy to emit a positron and neutrino. Nuclear transmutation If the proton and neutron are part of an atomic nucleus, the above described decay processes transmute one chemical element into another. For example:
$^{A}_{Z}\mathrm{X} \rightarrow {}^{A}_{Z+1}\mathrm{X'} + e^- + \bar{\nu}_e$ (beta minus decay)
$^{A}_{Z}\mathrm{X} \rightarrow {}^{A}_{Z-1}\mathrm{X'} + e^+ + \nu_e$ (beta plus decay)
$^{A}_{Z}\mathrm{X} + e^- \rightarrow {}^{A}_{Z-1}\mathrm{X'} + \nu_e$ (electron capture)
Beta decay does not change the number (A) of nucleons in the nucleus, but changes only its charge Z. Thus the set of all nuclides with the same A can be introduced; these isobaric nuclides may turn into each other via beta decay. For a given A there is one nuclide that is most stable. It is said to be beta stable, because it presents a local minimum of the mass excess: if such a nucleus has numbers (A, Z), the neighbouring nuclei (A, Z−1) and (A, Z+1) have higher mass excess and can beta decay into (A, Z), but not vice versa. For all odd mass numbers A, there is only one known beta-stable isobar. For even A, there are up to three different beta-stable isobars experimentally known; for example, 96Zr, 96Mo, and 96Ru are all beta-stable. There are about 350 known beta-decay stable nuclides. Competition of beta decay types Usually unstable nuclides are clearly either "neutron rich" or "proton rich", with the former undergoing beta decay and the latter undergoing electron capture (or, more rarely, due to the higher energy requirements, positron decay). However, in a few cases of odd-proton, odd-neutron radionuclides, it may be energetically favourable for the radionuclide to decay to an even-proton, even-neutron isobar either by undergoing beta-positive or beta-negative decay. An often-cited example is the single isotope 64Cu (29 protons, 35 neutrons), which illustrates three types of beta decay in competition. Copper-64 has a half-life of about 12.7 hours. This isotope has one unpaired proton and one unpaired neutron, so either the proton or the neutron can decay. This particular nuclide (though not all nuclides in this situation) is almost equally likely to decay through proton decay, by positron emission (about 18%) or electron capture (about 43%), to 64Ni, as it is through neutron decay, by electron emission (about 39%), to 64Zn. Stability of naturally occurring nuclides Most naturally occurring nuclides on earth are beta stable. Nuclides that are not beta stable have half-lives ranging from under a second to periods of time significantly greater than the age of the universe. One common example of a long-lived isotope is the odd-proton odd-neutron nuclide 40K, which undergoes all three types of beta decay (β−, β+ and electron capture) with a half-life of 1.25×10^9 years. Conservation rules for beta decay Baryon number is conserved: $B = \frac{n_q - n_{\bar{q}}}{3}$, where $n_q$ is the number of constituent quarks and $n_{\bar{q}}$ the number of constituent antiquarks. Beta decay just changes a neutron to a proton or, in the case of positive beta decay (or electron capture), a proton to a neutron, so the number of individual quarks does not change. Only the baryon flavour changes, here labelled as the isospin. Up and down quarks have total isospin $I = 1/2$ and isospin projections $I_z = +1/2$ (up quark) and $I_z = -1/2$ (down quark); all other quarks have $I = 0$. In general, $I_z = \frac{1}{2}(n_u - n_d)$, where $n_u$ and $n_d$ are the numbers of up and down quarks. Lepton number is also conserved: all leptons are assigned a value of +1, antileptons −1, and non-leptonic particles 0.
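These conservation rules can be checked mechanically. Below is a small illustrative sketch, added for this rewrite rather than taken from the original article, that verifies baryon number, lepton number, and electric charge in free-neutron decay:

```python
# Sketch: verify B, L and charge conservation in n -> p + e- + anti-nu_e.
# Each particle maps to (baryon number, lepton number, electric charge).

PARTICLES = {
    "n":         (1, 0,  0),
    "p":         (1, 0, +1),
    "e-":        (0, 1, -1),
    "anti-nu_e": (0, -1, 0),
}

def totals(names):
    """Sum (B, L, Q) over a list of particle names."""
    return tuple(sum(PARTICLES[n][i] for n in names) for i in range(3))

initial = totals(["n"])
final = totals(["p", "e-", "anti-nu_e"])
print(initial, final, initial == final)  # (1, 0, 0) (1, 0, 0) True
```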
Angular momentum For allowed decays, the net orbital angular momentum is zero, hence only spin quantum numbers are considered. The electron and antineutrino are fermions, spin-1/2 objects, therefore they may couple to total spin $S = 1$ (parallel) or $S = 0$ (anti-parallel). For forbidden decays, orbital angular momentum must also be taken into consideration. Energy release The Q value is defined as the total energy released in a given nuclear decay. In beta decay, Q is therefore also the sum of the kinetic energies of the emitted beta particle, neutrino, and recoiling nucleus. (Because of the large mass of the nucleus compared to that of the beta particle and neutrino, the kinetic energy of the recoiling nucleus can generally be neglected.) Beta particles can therefore be emitted with any kinetic energy ranging from 0 to Q. A typical Q is around 1 MeV, but it can range from a few keV to a few tens of MeV. Since the rest mass energy of the electron is 511 keV, the most energetic beta particles are ultrarelativistic, with speeds very close to the speed of light; in the case of 187Re, by contrast, the maximum speed of the beta particle is only 9.8% of the speed of light. β− decay Consider the generic equation for beta decay $^{A}_{Z}\mathrm{X} \rightarrow {}^{A}_{Z+1}\mathrm{X'} + e^- + \bar{\nu}_e$. The Q value for this decay is $Q = \left[m_N(^{A}_{Z}\mathrm{X}) - m_N(^{A}_{Z+1}\mathrm{X'}) - m_e - m_{\bar{\nu}_e}\right]c^2$, where $m_N(^{A}_{Z}\mathrm{X})$ is the mass of the nucleus of the $^{A}_{Z}\mathrm{X}$ atom, $m_e$ is the mass of the electron, and $m_{\bar{\nu}_e}$ is the mass of the electron antineutrino. In other words, the total energy released is the mass energy of the initial nucleus, minus the mass energy of the final nucleus, electron, and antineutrino. The mass of the nucleus is related to the standard atomic mass $m$ by $m(^{A}_{Z}\mathrm{X})\,c^2 = m_N(^{A}_{Z}\mathrm{X})\,c^2 + Z m_e c^2 - \sum_{i=1}^{Z} B_i$. That is, the total atomic mass is the mass of the nucleus, plus the mass of the electrons, minus the sum of all electron binding energies $B_i$ for the atom. This equation is rearranged to find $m_N(^{A}_{Z}\mathrm{X})$, and $m_N(^{A}_{Z+1}\mathrm{X'})$ is found similarly. Substituting these nuclear masses into the Q-value equation, while neglecting the nearly-zero antineutrino mass and the difference in electron binding energies, which is very small for high-Z atoms, we have $Q = \left[m(^{A}_{Z}\mathrm{X}) - m(^{A}_{Z+1}\mathrm{X'})\right]c^2$. This energy is carried away as kinetic energy by the electron and antineutrino. Because the reaction will proceed only when the Q value is positive, β− decay can occur when the mass of atom $^{A}_{Z}\mathrm{X}$ is greater than the mass of atom $^{A}_{Z+1}\mathrm{X'}$. β+ decay The equations for β+ decay are similar, with the generic equation $^{A}_{Z}\mathrm{X} \rightarrow {}^{A}_{Z-1}\mathrm{X'} + e^+ + \nu_e$ giving $Q = \left[m_N(^{A}_{Z}\mathrm{X}) - m_N(^{A}_{Z-1}\mathrm{X'}) - m_e - m_{\nu_e}\right]c^2$. However, in this equation the electron masses do not cancel, and we are left with $Q = \left[m(^{A}_{Z}\mathrm{X}) - m(^{A}_{Z-1}\mathrm{X'}) - 2m_e\right]c^2$. Because the reaction will proceed only when the Q value is positive, β+ decay can occur when the mass of atom $^{A}_{Z}\mathrm{X}$ exceeds that of $^{A}_{Z-1}\mathrm{X'}$ by at least twice the mass of the electron. Electron capture The analogous calculation for electron capture must take into account the binding energy of the electrons. This is because the atom will be left in an excited state after capturing the electron, and the binding energy of the captured innermost electron is significant. Using the generic equation for electron capture $^{A}_{Z}\mathrm{X} + e^- \rightarrow {}^{A}_{Z-1}\mathrm{X'} + \nu_e$, we have $Q = \left[m_N(^{A}_{Z}\mathrm{X}) + m_e - m_N(^{A}_{Z-1}\mathrm{X'}) - m_{\nu_e}\right]c^2$, which simplifies to $Q = \left[m(^{A}_{Z}\mathrm{X}) - m(^{A}_{Z-1}\mathrm{X'})\right]c^2 - B_n$, where $B_n$ is the binding energy of the captured electron. Because the binding energy of the electron is much less than the mass energy of the electron, nuclei that can undergo β+ decay can always also undergo electron capture, but the reverse is not true. Beta emission spectrum Beta decay can be considered as a perturbation as described in quantum mechanics, and thus Fermi's Golden Rule can be applied.
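To make the atomic-mass bookkeeping above concrete, here is a small numerical sketch, added for this rewrite; the mass values are rounded literature numbers quoted only for illustration. It reproduces the well-known ~156 keV endpoint of carbon-14:

```python
# Sketch: beta-decay Q values from atomic masses (rounded literature values,
# for illustration only). Q > 0 means the decay is energetically allowed.

U_TO_MEV = 931.494   # 1 atomic mass unit (u) in MeV/c^2
TWO_ME = 1.022       # 2 * m_e * c^2 in MeV

M_C14 = 14.003242    # atomic mass of 14C, u
M_N14 = 14.003074    # atomic mass of 14N, u

def q_beta_minus(m_parent_u: float, m_daughter_u: float) -> float:
    """Q = [m(A,Z) - m(A,Z+1)] c^2, using atomic (not nuclear) masses."""
    return (m_parent_u - m_daughter_u) * U_TO_MEV

def q_beta_plus(m_parent_u: float, m_daughter_u: float) -> float:
    """Q = [m(A,Z) - m(A,Z-1) - 2 m_e] c^2; the 1.022 MeV penalty is why
    electron capture can be allowed when positron emission is not."""
    return (m_parent_u - m_daughter_u) * U_TO_MEV - TWO_ME

print(q_beta_minus(M_C14, M_N14))  # ~0.156 MeV: the 156 keV endpoint of 14C
```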
Beta emission spectrum Beta decay can be considered as a perturbation as described in quantum mechanics, and thus Fermi's Golden Rule can be applied. This leads to an expression for the kinetic energy spectrum N(T) of emitted betas as follows: N(T) = CL(T) F(Z, T) p E (Q − T)², where T is the kinetic energy, CL is a shape function that depends on the forbiddenness of the decay (it is constant for allowed decays), F(Z, T) is the Fermi function (see below) with Z the charge of the final-state nucleus, E is the total energy, p is the momentum, and Q is the Q value of the decay. The kinetic energy of the emitted neutrino is given approximately by Q minus the kinetic energy of the beta. As an example, the beta decay spectrum of 210Bi (originally called RaE) is shown to the right. Fermi function The Fermi function that appears in the beta spectrum formula accounts for the Coulomb attraction / repulsion between the emitted beta and the final state nucleus. Approximating the associated wavefunctions to be spherically symmetric, the Fermi function can be analytically calculated to be: F(Z, T) = [2(1 + S) / Γ(1 + 2S)²] (2pρ)^(2S − 2) e^(πη) |Γ(S + iη)|², where p is the final momentum, Γ the Gamma function, and (if α is the fine-structure constant and rN the radius of the final state nucleus) S = √(1 − α²Z²), η = ±αZE/(pc) (+ for electrons, − for positrons), and ρ = rN/ħ. For non-relativistic betas (Q ≪ mec²), this expression can be approximated by: F(Z, T) ≈ 2πη / (1 − e^(−2πη)). Other approximations can be found in the literature. Kurie plot A Kurie plot (also known as a Fermi–Kurie plot) is a graph used in studying beta decay developed by Franz N. D. Kurie, in which the square root of the number of beta particles whose momenta (or energy) lie within a certain narrow range, divided by the Fermi function, is plotted against beta-particle energy. It is a straight line for allowed transitions and some forbidden transitions, in accord with the Fermi beta-decay theory. The energy-axis (x-axis) intercept of a Kurie plot corresponds to the maximum energy imparted to the electron/positron (the decay's Q value). With a Kurie plot one can find the limit on the effective mass of a neutrino. Helicity (polarization) of neutrinos, electrons and positrons emitted in beta decay After the discovery of parity non-conservation (see History), it was found that, in beta decay, electrons are emitted mostly with negative helicity, i.e., they move, naively speaking, like left-handed screws driven into a material (they have negative longitudinal polarization). Conversely, positrons have mostly positive helicity, i.e., they move like right-handed screws. Neutrinos (emitted in positron decay) have negative helicity, while antineutrinos (emitted in electron decay) have positive helicity. The higher the energy of the particles, the higher their polarization. Types of beta decay transitions Beta decays can be classified according to the angular momentum (L value) and total spin (S value) of the emitted radiation. Since total angular momentum must be conserved, including orbital and spin angular momentum, beta decay occurs by a variety of quantum state transitions to various nuclear angular momentum or spin states, known as "Fermi" or "Gamow–Teller" transitions. When beta decay particles carry no angular momentum (L = 0), the decay is referred to as "allowed", otherwise it is "forbidden". Other decay modes, which are rare, are known as bound state decay and double beta decay.
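The allowed-decay spectrum and the non-relativistic Fermi function given above fit in a few lines of code. The sketch below is an illustration, not part of the article; Q ≈ 1.16 MeV and Z = 84 are example values chosen to roughly mimic the 210Bi spectrum mentioned in the text (polonium being the final-state nucleus), and the non-relativistic approximation is used for simplicity even though it degrades at the high-energy end.

```python
import numpy as np

ME = 0.511  # electron rest energy, MeV

def fermi_nonrel(Z, T, sign=+1):
    """Non-relativistic Fermi function F ~ 2*pi*eta / (1 - exp(-2*pi*eta)),
    with eta = +/- alpha*Z*E/(p c); sign=+1 for electrons, -1 for positrons."""
    alpha = 1 / 137.036
    E = T + ME                        # total energy, MeV
    p = np.sqrt(E**2 - ME**2)         # momentum times c, MeV
    eta = sign * alpha * Z * E / p
    return 2 * np.pi * eta / (1 - np.exp(-2 * np.pi * eta))

def allowed_spectrum(T, Q, Z):
    """N(T) ~ F(Z, T) * p * E * (Q - T)^2 for an allowed decay
    (shape function C_L taken as constant)."""
    E = T + ME
    p = np.sqrt(E**2 - ME**2)
    return fermi_nonrel(Z, T) * p * E * (Q - T)**2

# Example values only: Q and Z loosely based on the 210Bi -> 210Po decay
T = np.linspace(1e-4, 1.16, 200)
N = allowed_spectrum(T, Q=1.16, Z=84)
print("spectrum peaks near T =", T[np.argmax(N)], "MeV")
```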
Fermi transitions A Fermi transition is a beta decay in which the spins of the emitted electron (positron) and anti-neutrino (neutrino) couple to total spin S = 0, leading to an angular momentum change ΔJ = 0 between the initial and final states of the nucleus (assuming an allowed transition). In the non-relativistic limit, the nuclear part of the operator for a Fermi transition is given by OF = GV Σa τa±, with GV the weak vector coupling constant, τ± the isospin raising and lowering operators, and a running over all protons and neutrons in the nucleus. Gamow–Teller transitions A Gamow–Teller transition is a beta decay in which the spins of the emitted electron (positron) and anti-neutrino (neutrino) couple to total spin S = 1, leading to an angular momentum change ΔJ = 0, ±1 between the initial and final states of the nucleus (assuming an allowed transition). In this case, the nuclear part of the operator is given by OGT = GA Σa σa τa±, with GA the weak axial-vector coupling constant, and σ the spin Pauli matrices, which can produce a spin-flip in the decaying nucleon. Forbidden transitions When L > 0, the decay is referred to as "forbidden". Nuclear selection rules require high L values to be accompanied by changes in nuclear spin (J) and parity (π). The selection rules for the Lth forbidden transitions are ΔJ = L − 1, L, L + 1 and Δπ = (−1)^L, where Δπ = 1 or Δπ = −1 corresponds to no parity change or parity change, respectively. The special case of a transition between isobaric analogue states, where the structure of the final state is very similar to the structure of the initial state, is referred to as "superallowed" for beta decay, and proceeds very quickly. The following table lists the ΔJ and Δπ values for the first few values of L:

Transition type | L | ΔJ | Δπ
Allowed | 0 | 0, 1 | no
First forbidden | 1 | 0, 1, 2 | yes
Second forbidden | 2 | 1, 2, 3 | no
Third forbidden | 3 | 2, 3, 4 | yes
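The selection rules and the table above are regular enough to generate programmatically. The helper below is an illustrative sketch, not part of the article.

```python
def selection_rules(L):
    """Permitted spin changes and parity change for an Lth forbidden
    beta transition: delta_J in {|L - 1|, L, L + 1}, and a parity
    change iff (-1)**L == -1. L = 0 corresponds to allowed decays."""
    delta_J = sorted({abs(L - 1), L, L + 1})
    parity_change = (L % 2 == 1)
    return delta_J, parity_change

labels = ["allowed", "first forbidden", "second forbidden", "third forbidden"]
for L, label in enumerate(labels):
    dJ, dpi = selection_rules(L)
    print(f"{label}: delta_J = {dJ}, parity change = {dpi}")
```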
Rare decay modes Bound-state β decay A very small minority of free neutron decays (about four per million) are "two-body decays": the proton, electron and antineutrino are produced, but the electron fails to gain the 13.6 eV energy necessary to escape the proton, and therefore simply remains bound to it, as a neutral hydrogen atom. In this type of beta decay, in essence all of the neutron decay energy is carried off by the antineutrino. For fully ionized atoms (bare nuclei), it is possible in likewise manner for electrons to fail to escape the atom, and to be emitted from the nucleus into low-lying atomic bound states (orbitals). This cannot occur for neutral atoms with low-lying bound states which are already filled by electrons. Bound-state β decays were predicted by Daudel, Jean, and Lecoin in 1947, and the phenomenon in fully ionized atoms was first observed for ¹⁶³Dy in 1992 by Jung et al. of the Darmstadt Heavy-Ion Research Center. Though neutral ¹⁶³Dy is stable, the fully ionized nuclide undergoes β− decay into the K and L shells with a half-life of 47 days. The resulting nucleus – ¹⁶³Ho – is stable only in this almost fully ionized state and will decay via electron capture into ¹⁶³Dy in the neutral state. Likewise, while being stable in the neutral state, fully ionized ²⁰⁵Tl undergoes bound-state β− decay to ²⁰⁵Pb. The half-lives of neutral ¹⁶³Ho and ²⁰⁵Pb are respectively 4570 years and 1.73 × 10⁷ years. In addition, it is estimated that β decay is energetically impossible for neutral atoms but theoretically possible when fully ionized also for 193Ir, 194Au, 202Tl, 215At, 243Am, and 246Bk. Another possibility is that a fully ionized atom undergoes greatly accelerated β decay, as observed for ¹⁸⁷Re by Bosch et al., also at Darmstadt. Neutral ¹⁸⁷Re does undergo β decay, with a half-life of 4.16 × 10¹⁰ years, but for fully ionized ¹⁸⁷Re this is shortened to only 32.9 years. This is because the fully ionized nuclide is energetically allowed to undergo β decay to the first-excited state in ¹⁸⁷Os, a process energetically disallowed for neutral ¹⁸⁷Re. Similarly, neutral ²⁴¹Pu undergoes β decay with a half-life of 14.3 years, but in its fully ionized state the beta-decay half-life of ²⁴¹Pu decreases to 4.2 days. For comparison, the variation of decay rates of other nuclear processes due to chemical environment is less than 1%. Moreover, current mass determinations cannot decisively determine whether ²²²Rn is energetically able to undergo β decay (the decay energy given in AME2020 is (−6 ± 8) keV), but in either case it is predicted that β decay will be greatly accelerated for fully ionized ²²²Rn. Double beta decay Some nuclei can undergo double beta decay (2β) where the charge of the nucleus changes by two units. Double beta decay is difficult to study, as it has an extremely long half-life. In nuclei for which both β decay and 2β are possible, the rarer 2β process is effectively impossible to observe. However, in nuclei where β decay is forbidden but 2β is allowed, the process can be seen and a half-life measured. Thus, 2β is usually studied only for beta stable nuclei. Like single beta decay, double beta decay does not change A; thus, at least one of the nuclides with some given A has to be stable with regard to both single and double beta decay. "Ordinary" 2β results in the emission of two electrons and two antineutrinos. If neutrinos are Majorana particles (i.e., they are their own antiparticles), then a decay known as neutrinoless double beta decay will occur. Most neutrino physicists believe that neutrinoless 2β has never been observed. See also Common beta emitters Neutrino Betavoltaics Particle radiation Radionuclide Tritium illumination, a form of fluorescent lighting powered by beta decay Pandemonium effect Total absorption spectroscopy References Bibliography External links The Live Chart of Nuclides - IAEA with filter on decay type Beta decay simulation Nuclear physics Radioactivity
Beta decay
[ "Physics", "Chemistry" ]
6,572
[ "Radioactivity", "Nuclear physics" ]
15,983,150
https://en.wikipedia.org/wiki/Himalayan%20salt
Himalayan salt is rock salt (halite) mined from the Punjab region of Pakistan. The salt, which often has a pinkish tint due to trace minerals, is primarily used as a food additive to replace refined table salt but is also used for cooking and food presentation, decorative lamps, and spa treatments. The product is often promoted with unsupported claims that it has health benefits. Geology Himalayan salt is mined from the Salt Range mountains, the southern edge of a fold-and-thrust belt that underlies the Pothohar Plateau south of the Himalayas in Pakistan. Himalayan salt comes from a thick layer of Ediacaran to early Cambrian evaporites of the Salt Range Formation. This geological formation consists of crystalline halite intercalated with potash salts, overlain by gypsiferous marl and interlayered with beds of gypsum and dolomite with infrequent seams of oil shale that accumulated between 600 and 540 million years ago. These strata and the overlying Cambrian to Eocene sedimentary rocks were thrust southward over younger sedimentary rocks, and eroded to create the Salt Range. History Local legend traces the discovery of the Himalayan salt deposits to the army of Alexander the Great. However, the first records of mining are from the Janjua clan in the 1200s. The salt is mostly mined at the Khewra Salt Mine in Khewra, Jhelum District, Punjab, Pakistan, which is situated in the foothills of the Salt Range hill system between the Indus River and the Punjab Plain. It is primarily exported in bulk, and processed in other countries for the consumer market. Mineral composition Himalayan salt is a table salt. There is a common misconception that Himalayan salt has lower sodium than conventional table salt, but the levels are similar. Analysis of a range of Khewra salt samples showed them to be between 96% and 99% sodium chloride, with trace presence of calcium, iron, zinc, chromium, magnesium, and sulfates, all at varying safe levels below 1%. Some salt crystals from this region have an off-white to transparent color, while the trace minerals in some veins of salt give it a pink, reddish, or beet-red color. Nutritionally, Himalayan salt is similar to common table salt. A study of pink salts in Australia showed Himalayan salt to contain higher levels of a range of trace elements compared to table salt, but that the levels were too low for nutritional significance without an "exceedingly high intake", at which point any nutritional benefit would be outweighed by the risks of elevated sodium consumption. One notable exception regards the essential mineral iodine. Commercial table salt in many countries is supplemented with iodine, and this has significantly reduced disorders of iodine deficiency. Himalayan salt lacks these beneficial effects of iodine supplementation. Uses Himalayan salt is used to flavor food. Due mainly to marketing costs, pink Himalayan salt is up to 20 times more expensive than table salt or sea salt. The impurities giving it its distinctive pink hue, as well as its unprocessed state and lack of anti-caking agents, have given rise to the unsupported belief that it is healthier than common table salt. There is no scientific basis for such claimed health benefits. In the United States, the Food and Drug Administration warned a manufacturer of dietary supplements, including one consisting of Himalayan salt, to discontinue marketing the products using unproven claims of health benefits. 
Slabs of salt are used as serving dishes, baking stones, and griddles, and it is also used to make tequila shot glasses. In such uses, small amounts of salt transfer to the food or drink and alter its flavor profile. It is also used to make salt lamps that radiate a pinkish or orangish hue, manufactured by placing a light source within the hollowed-out interior of a block of Himalayan salt. Claims that their use results in the release of ions that benefit health have no scientific foundation. Similar scientifically unsupported claims underlie the use of Himalayan salt to line the walls of spas, along with its use for salt-inhalation spa treatments. Salt lamps can be a danger to pets, who may suffer salt poisoning after licking them. See also Health effects of salt List of edible salts List of topics characterized as pseudoscience Sea salt Table salt References External links Edible salt Pseudoscience Salt industry in Pakistan
Himalayan salt
[ "Chemistry" ]
901
[ "Edible salt", "Salts" ]
10,807,783
https://en.wikipedia.org/wiki/Allotype%20%28immunology%29
The word allotype comes from two Greek roots, allo meaning 'other or differing from the norm' and typos meaning 'mark'. In immunology, an allotype is an immunoglobulin variation (in addition to isotypic variation) that can be found among antibody classes and is manifested as heterogeneity of the immunoglobulins present in a single vertebrate species. The structure of an immunoglobulin polypeptide chain is dictated and controlled by a number of genes encoded in the germ line. As serologic and chemical methods revealed, however, these genes can be highly polymorphic, and this polymorphism is reflected in the overall amino acid structure of the antibody chains. Polymorphic epitopes can be present on immunoglobulin constant regions of both heavy and light chains, differing between individuals or ethnic groups, and in some cases they may act as immunogenic determinants. Exposure of an individual to a non-self allotype might elicit an anti-allotype response and become a cause of problems, for example in a patient after a blood transfusion or in a pregnant woman. It is important to mention, however, that not all variation in immunoglobulin amino acid sequence acts as a determinant responsible for an immune response. Some allotypic determinants are present at places that are not well exposed and therefore can hardly be discriminated serologically. In other cases, variation in one isotype can be compensated by the presence of the determinant on another antibody isotype in the same individual: a divergent allotype of the heavy chain of an IgG antibody may be balanced by the presence of this allotype on the heavy chain of, for example, an IgA antibody, and such a variant is therefore called an isoallotypic variant. An especially large number of polymorphisms was discovered in the IgG antibody subclasses; these were used in forensic medicine and in paternity testing before being replaced by modern DNA fingerprinting. Definition and organisation of allotypes in humans Human allotype nomenclature was first described with an alphabetical system and later systematized in a numerical system, and both can be found in the literature. For example, allotypes expressed on the constant region of the heavy chain of IgG are designated by Gm, which stands for 'genetic marker', together with the IgG subclass (IgG1 → G1m, IgG2 → G2m) and the allotype number or letter [G1m1 / G1m(a)]. Polymorphisms within IgA are denoted in the same way, as A2m (e.g. A2m1/2), and kappa light chain constant region polymorphisms as Km (e.g. Km1). Although there are multiple known lambda chain isotypes, no lambda chain serological polymorphisms have been reported. All of the allotypes mentioned above are expressed on constant regions of the immunoglobulin. The genes encoding the structure of the constant regions of heavy chains are closely linked and therefore inherited together as one haplotype with a low number of crossovers. Some crossovers did occur during human evolution, however, resulting in the characteristic haplotypes of current populations; hence the importance of the allotype system in population studies. Implications for monoclonal antibody therapy Antibody allotypes came back into the spotlight with the development and use of therapies based on monoclonal antibodies. These recombinant human glycoproteins and proteins are now well established in clinical practice, but they sometimes lead to adverse effects such as the generation of antitherapeutic antibodies, which can negate the therapy or even cause severe reactions to it.
Such reactions may be attributed to differences between the therapeutics themselves, or may arise between the same therapeutic produced by different companies, or even between different lots produced by the same company. To prevent the production of such antitherapeutic antibodies, all clinically used proteins and glycoproteins should ideally possess the same allotype as the patient's natural product; in this way the presence of 'altered self', which poses a potential target for the immune system, is limited. While many parameters of the development and manufacturing process that might predispose monoclonal antibodies to cause an immune response are well known, and appropriate steps are taken to monitor and control these unwanted effects, the complications linked with administration of monoclonal antibodies to a genetically diverse human population are less well described. Humans exhibit an abundance of genotypes and phenotypes, yet all currently licensed therapeutic IgG immunoglobulins are developed as a single allotypic/polymorphic form. Patients who are homozygous for an alternative phenotype are therefore at higher risk of developing a potential immune response to the therapy. See also Allotype (disambiguation) Idiotype Isotype References External links Genetics Transplantation medicine
Allotype (immunology)
[ "Biology" ]
1,000
[ "Genetics" ]
10,809,455
https://en.wikipedia.org/wiki/46%C2%B0%20halo
A 46° halo is a rare atmospheric optical phenomenon that consists of a halo with an apparent radius of approximately 46° around the Sun. At solar elevations of 15–27°, 46° halos are often confused with the less rare and more colourful supralateral and infralateral arcs, which cross the parhelic circle at about 46° to the left and right of the sun. The 46° halo is similar to, but much larger and fainter than, the more common 22° halo. The 46° halo forms when sunlight enters randomly oriented hexagonal ice crystals through a prism face and exits through a hexagonal base. The 90° inclination between the two faces of the crystals causes the colours of the 46° halo to be more widely dispersed than those of the 22° halo. In addition, as many rays are deflected at larger angles than the angle of minimum deviation, the outer edge of the halo is more diffuse. To tell the difference between a 46° halo and the infralateral or supralateral arcs, one should carefully observe sun elevation and the fluctuating shapes and orientations of the arcs. The supralateral arc always touches the circumzenithal arc, while the 46° halo only achieves this when the sun is located 15–27° over the horizon, leaving a gap between the two at other elevations. In contrast, supralateral arcs cannot form when the Sun is over 32°, so a halo in the region of 46° is always a 46° halo at higher elevations. If the Sun is near the zenith, however, circumhorizontal or infralateral arcs are located 46° under the Sun and can be confused with the 46° halo. References External links Atmospheric Optics - 46° Radius Halo - including a HaloSim computer simulation and an Antarctica fish eye photo. Atmospheric optical phenomena
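The 46° radius follows from the standard minimum-deviation formula for a prism, D = 2 arcsin(n sin(A/2)) − A. The short computation below is an illustration, not part of the article; n ≈ 1.31 is the usual visible-light refractive index of ice, and the 90° apex angle is the face-to-base inclination described above.

```python
import math

def min_deviation(prism_angle_deg, n):
    """Minimum deviation D = 2*asin(n*sin(A/2)) - A for a prism with
    apex angle A (degrees) and refractive index n; result in degrees."""
    A = math.radians(prism_angle_deg)
    D = 2 * math.asin(n * math.sin(A / 2)) - A
    return math.degrees(D)

# 90 deg between a prism face and the hexagonal base -> the 46 deg halo
print(min_deviation(90, 1.31))   # ~45.7 deg
# 60 deg between alternate prism faces -> the more common 22 deg halo
print(min_deviation(60, 1.31))   # ~21.8 deg
```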
46° halo
[ "Physics" ]
392
[ "Optical phenomena", "Physical phenomena", "Atmospheric optical phenomena", "Earth phenomena" ]
10,810,026
https://en.wikipedia.org/wiki/Methylcyclohexane
Methylcyclohexane (cyclohexylmethane) is an organic compound with the molecular formula CH3C6H11. Classified as a saturated hydrocarbon, it is a colourless liquid with a faint odor. Methylcyclohexane is used as a solvent. It is mainly converted in naphtha reformers to toluene. A special use is in PF-1 priming fluid in cruise missiles to aid engine start-up when they run on special nonvolatile jet fuel like JP-10. Methylcyclohexane is also used in some correction fluids (such as White-Out) as a solvent. History The hydrocarbon was first prepared from toluene in 1876, in doctoral research on the hydrogenation of arenes with hydroiodic acid. Its boiling point was determined to be 97°C, its density at 20°C to be 0.76 g/cc, and it was named hexahydrotoluene. It was soon identified in oil from Baku and obtained by other synthetic methods. Production and use Most methylcyclohexane is extracted from petroleum but it can be also produced by catalytic hydrogenation of toluene: CH3C6H5 + 3 H2 → CH3C6H11 The hydrocarbon is a minor component of automobile fuel, with its share in US gasoline varying between 0.3 and 1.7% in the early 1990s and 0.1 to 1% in 2011. Its research and motor octane numbers are 75 and 71 respectively. As a component of a mixture, it is usually dehydrogenated to toluene, which increases the octane rating of gasoline. It is also one of a host of substances in jet fuel surrogate blends, e.g., for Jet A fuel. Solvent Methylcyclohexane is used as an organic solvent, with properties similar to related saturated hydrocarbons such as heptane. It is also a solvent in many types of correction fluids. Structure Methylcyclohexane is a monosubstituted cyclohexane because it has one branch: a single methyl group attached to one carbon of the cyclohexane ring. Like all cyclohexanes, it can interconvert rapidly between two chair conformers. The lowest energy form of this monosubstituted methylcyclohexane occurs when the methyl group occupies an equatorial rather than an axial position. This equilibrium is embodied in the concept of A value. In the axial position, the methyl group experiences steric crowding (steric strain) because of the presence of axial hydrogen atoms on the same side of the ring (known as the 1,3-diaxial interactions). There are two such interactions, and together the pairwise methyl/hydrogen combinations contribute approximately 7.6 kJ/mol of strain energy. The equatorial conformation experiences no such interaction, and so it is the energetically favored conformation. Flammability and toxicity Methylcyclohexane is flammable. Furthermore, it is considered "very toxic to aquatic life". Note, while methylcyclohexane is a substructure of 4-methylcyclohexanemethanol (MCHM), it is distinct in its physical, chemical, and biological (ecologic, metabolic, and toxicologic) properties. References Hydrocarbons Hydrocarbon solvents Cyclohexyl compounds Methyl compounds
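The equatorial preference described above translates directly into conformer populations through the Boltzmann relation K = exp(−ΔG/RT). The sketch below is an illustration, not part of the article, taking the ~7.6 kJ/mol figure from the text as the equatorial–axial free-energy difference.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def equatorial_fraction(delta_g_kj=7.6, temp_k=298.0):
    """Fraction of molecules with the methyl group equatorial, from
    K = exp(-dG/RT) for the axial <-> equatorial equilibrium."""
    K_axial_over_eq = math.exp(-delta_g_kj * 1000 / (R * temp_k))
    return 1 / (1 + K_axial_over_eq)

# Roughly 95% equatorial at room temperature
print(f"{equatorial_fraction():.1%} equatorial at 25 C")
```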
Methylcyclohexane
[ "Chemistry" ]
718
[ "Organic compounds", "Hydrocarbons" ]
10,810,064
https://en.wikipedia.org/wiki/Methylenecyclohexane
Methylenecyclohexane (IUPAC name: methylidenecyclohexane) is an organic compound with the molecular formula C7H12. Synthesis It can be produced from cyclohexanone by a Wittig reaction or by reaction with Tebbe's reagent. It can also be synthesized as a side product of the dehydration of 2-methylcyclohexanol into 1-methylcyclohexene. Structure Methylenecyclohexane is an unsaturated hydrocarbon, containing a cyclohexane ring with a methylene (methylidene) group attached. See also Methylcyclohexane Methylenecyclopropane References Vinylidene compounds Hydrocarbons
Methylenecyclohexane
[ "Chemistry" ]
163
[ "Organic compounds", "Hydrocarbons" ]
10,810,380
https://en.wikipedia.org/wiki/Propylamphetamine
Propylamphetamine (code name PAL-424; also known as N-propylamphetamine or NPA) is a psychostimulant of the amphetamine family which was never marketed. It was first developed in the 1970s, mainly for research into the metabolism of, and as a comparison tool to, other amphetamines. Propylamphetamine is inactive as a dopamine releasing agent in vitro and instead acts as a low-potency dopamine reuptake inhibitor with a reported potency of 1,013 nM. The drug can be N-dealkylated to form amphetamine (10–20% excreted in urine after 24 hours). A study in rats found propylamphetamine to be approximately 4-fold less potent than amphetamine. See also Methamphetamine Ethylamphetamine Isopropylamphetamine Butylamphetamine Phenylpropylaminopentane References Norepinephrine-dopamine releasing agents Prodrugs Substituted amphetamines
Propylamphetamine
[ "Chemistry" ]
226
[ "Chemicals in medicine", "Prodrugs" ]
10,810,905
https://en.wikipedia.org/wiki/Post%20Irradiation%20Examination
Post Irradiation Examination (PIE) is the study of used nuclear materials such as nuclear fuel. It has several purposes. By examination of used fuel, the failure modes which occur during normal use (and the manner in which the fuel will behave during an accident) can be studied. In addition, information is gained which enables the users of fuel to assure themselves of its quality, and it also assists in the development of new fuels. After major accidents, the core (or what is left of it) is normally subject to PIE in order to find out what happened. One site where PIE is done is the ITU, which is the EU centre for the study of highly radioactive materials. Materials in a high radiation environment (such as a reactor) can undergo unique behaviors such as swelling and non-thermal creep. If there are nuclear reactions within the material (such as what happens in the fuel), the stoichiometry will also change slowly over time. These behaviors can lead to new material properties, cracking, and fission gas release: Fission gas release As the fuel is degraded or heated, the more volatile fission products which are trapped within the uranium dioxide may become free. Fuel cracking As the fuel expands on heating, the core of the pellet expands more than the rim, which may lead to cracking. Because of the thermal stress thus formed the fuel cracks; the cracks tend to go from the centre to the edge in a star-shaped pattern. In order to better understand and control these changes in materials, these behaviors are studied. Due to the intensely radioactive nature of the used fuel this is done in a hot cell. A combination of nondestructive and destructive methods of PIE are common. In addition to the effects of radiation and the fission products on materials, scientists also need to consider the temperature of materials in a reactor, and in particular, the fuel. Too high fuel temperatures can compromise the fuel, and therefore it is important to control the temperature in order to control the fission chain reaction. The temperature of the fuel varies as a function of the distance from the centre to the rim. At distance x from the centre, the temperature Tx is described by the equation

Tx = Trim + ρ (rpellet² − x²) / (4 Kf),

where ρ is the power density (W m−3) and Kf is the thermal conductivity. To illustrate this, a series of fuel pellets with a rim temperature of 200 °C (typical for a BWR), different diameters, and a power density of 250 W m−3 have been modeled using the above equation. Note that these fuel pellets are rather large; it is normal to use oxide pellets which are about 10 mm in diameter. Further reading Radiochemistry and Nuclear Chemistry, G. Choppin, J-O Liljenzin and J. Rydberg, 3rd Ed, 2002, Butterworth-Heinemann. References External links IAEA (International Atomic Energy Agency) - Post Irradiation Examination Facilities Database Nuclear materials Nuclear reprocessing Nuclear technology Nuclear chemistry Radioactive waste Actinides
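The parabolic profile above is straightforward to evaluate numerically. The sketch below is an illustration, not part of the article: the thermal conductivity is an assumed placeholder roughly representative of uranium dioxide, and the power density chosen here is deliberately much larger than the 250 W m−3 quoted above so that the centre-to-rim temperature difference is visible.

```python
def pellet_temperature(x, t_rim, power_density, radius, k_f):
    """T(x) = T_rim + rho * (r^2 - x^2) / (4 * K_f), with x the distance
    from the pellet centre (m), rho the power density (W/m^3), and
    K_f the thermal conductivity (W/(m K))."""
    return t_rim + power_density * (radius**2 - x**2) / (4 * k_f)

# Assumed illustrative numbers: 10 mm diameter pellet, 200 C rim
# (typical for a BWR), conductivity ~3 W/(m K) for UO2, and a
# power density large enough to produce a visible gradient.
rho = 2.5e8   # W/m^3 (assumption for illustration)
print(pellet_temperature(0.0, 200.0, rho, 0.005, 3.0))   # centre, ~720 C
print(pellet_temperature(0.005, 200.0, rho, 0.005, 3.0)) # rim, 200 C
```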
Post Irradiation Examination
[ "Physics", "Chemistry", "Technology" ]
632
[ "Nuclear chemistry", "Radioactive waste", "Nuclear technology", "Materials", "Nuclear materials", "Hazardous waste", "nan", "Nuclear physics", "Radioactivity", "Environmental impact of nuclear power", "Matter" ]
10,811,867
https://en.wikipedia.org/wiki/Inorganic%20polymer
In polymer chemistry, an inorganic polymer is a polymer with a skeletal structure that does not include carbon atoms in the backbone. Polymers containing inorganic and organic components are sometimes called hybrid polymers, and most so-called inorganic polymers are hybrid polymers. One of the best known examples is polydimethylsiloxane, otherwise known commonly as silicone rubber. Inorganic polymers offer some properties not found in organic materials including low-temperature flexibility, electrical conductivity, and nonflammability. The term inorganic polymer refers generally to one-dimensional polymers, rather than to heavily crosslinked materials such as silicate minerals. Inorganic polymers with tunable or responsive properties are sometimes called smart inorganic polymers. A special class of inorganic polymers are geopolymers, which may be anthropogenic or naturally occurring. Main group backbone Traditionally, the area of inorganic polymers focuses on materials in which the backbone is composed exclusively of main-group elements. Homochain polymers Homochain polymers have only one kind of atom in the main chain. One member is polymeric sulfur, which forms reversibly upon melting any of the cyclic allotropes, such as S8. Organic polysulfides and polysulfanes feature short chains of sulfur atoms, capped respectively with alkyl and H. Elemental tellurium and the gray allotrope of elemental selenium also are polymers, although they are not processable. Polymeric forms of the group IV elements are well known. The premier materials are polysilanes, which are analogous to polyethylene and related organic polymers. They are more fragile than the organic analogues and, because of the longer bonds, carry larger substituents. Poly(dimethylsilane) is prepared by reduction of dimethyldichlorosilane. Pyrolysis of poly(dimethylsilane) gives SiC fibers. Heavier analogues of polysilanes are also known to some extent. These include polygermanes and polystannanes. Heterochain polymers Si-based Heterochain polymers have more than one type of atom in the main chain. Typically two types of atoms alternate along the main chain. Of great commercial interest are the polysiloxanes, where the main chain features alternating Si and O centers. Each Si center has two substituents, usually methyl or phenyl. Examples include polydimethylsiloxane (PDMS, [(CH3)2SiO]n), polymethylhydrosiloxane (PMHS, [CH3(H)SiO]n) and polydiphenylsiloxane ([(C6H5)2SiO]n). Related to the siloxanes are the polysilazanes, which have a backbone of alternating Si and N centers. One example is perhydridopolysilazane PHPS. Such materials are of academic interest. P-based A related family of well studied inorganic polymers are the polyphosphazenes. They feature a backbone of alternating P and N centers. With two substituents on phosphorus, they are structurally similar to the polysiloxanes. Such materials are generated by ring-opening polymerization of hexachlorophosphazene followed by substitution of the chloride groups by alkoxide. Such materials find specialized applications as elastomers. B-based Boron–nitrogen polymers feature B–N backbones. Examples are polyborazylenes and polyaminoboranes. S-based The polythiazyls have the backbone –S=N–. Unlike most inorganic polymers, these materials lack substituents on the main chain atoms. Such materials exhibit high electrical conductivity, a finding that attracted much attention during the era when polyacetylene was discovered. It is superconducting below 0.26 K. Ionomers Usually not classified with charge-neutral inorganic polymers are ionomers.
Phosphorus–oxygen and boron–oxygen polymers include the polyphosphates and polyborates. Transition-metal-containing polymers Inorganic polymers also include materials with transition metals in the backbone. Examples are polyferrocenes, Krogmann's salt and Magnus's green salt. Polymerization methods Inorganic polymers are formed, like organic polymers, by: Step-growth polymerization: polysiloxanes; Chain-growth polymerization: polysilanes; Ring-opening polymerization: poly(dichlorophosphazene). Reactions Inorganic polymers are precursors to inorganic solids. This type of reaction is illustrated by the stepwise conversion of ammonia borane to discrete rings and oligomers, which upon pyrolysis give boron nitrides. References
Inorganic polymer
[ "Chemistry" ]
936
[ "Inorganic polymers", "Inorganic compounds" ]
10,813,212
https://en.wikipedia.org/wiki/SPC%20file%20format
The SPC file format is a file format for storing spectroscopic data. All kinds of spectroscopic data can be stored in it, including among others infrared spectra, Raman spectra and UV/VIS spectra. The format can be regarded as a database with records of variable length, where each record stores a different kind of data (instrumental information, information on one spectrum of a dataset, the spectrum itself, or extra logs). It was invented by Galactic Industries as a generic file format for its programs. Their original specification was implemented in 1986, but a more versatile format was created in 1996. Galactic Industries was purchased by Thermo Fisher Scientific, who now maintain and develop the GRAMS Software Suite for which the format was defined. They provide free tools and libraries to allow developers to create and maintain SPC files consistently. This file format is not a plaintext format such as XML or CSV, but a binary format; it is therefore not readable with a standard text editor and requires a special reader or software to interpret the file data. The Environmental Protection Agency publishes a free spectra reader called ShowSPC that is open to the public. Additionally, the company AnalyzeIQ produces a free SPC-to-CSV converter aptly titled SPC2CSV; the open-source project OpenSpectralWorks is an alternative free reader, and SpectraGryph offers analytic and display capabilities for reading SPC files. The Essential FTIR software offers a file reader that can read, display, analyze and export .spc files as well as many other spectroscopy file formats. References External links Python module to read and convert SPC files on GitHub hyperSpec R (programming language) package on GitHub Computer file formats Spectroscopy
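As a rough illustration of what reading this binary format involves, the sketch below peeks at the first two header bytes of a candidate file. The byte layout assumed here — a flags byte followed by a version byte, with 0x4B commonly documented for the 1996 "new" LSB-first format — is an assumption that should be verified against the official specification and the free tools mentioned above before any serious use.

```python
import struct

def sniff_spc_version(path):
    """Read the first two header bytes of a candidate SPC file.
    Assumed layout (check against the official spec): byte 0 is a
    flags byte (ftflgs), byte 1 a version byte (fversn), where 0x4B
    is commonly documented as the 1996 'new' LSB-first format,
    0x4C the MSB-first variant, and 0x4D the older 1986 format."""
    with open(path, "rb") as f:
        ftflgs, fversn = struct.unpack("<BB", f.read(2))
    versions = {0x4B: "new format (LSB-first)",
                0x4C: "new format (MSB-first)",
                0x4D: "old (1986) format"}
    return versions.get(fversn, f"unknown (0x{fversn:02X})"), ftflgs

# Usage, with a hypothetical file name:
# print(sniff_spc_version("sample.spc"))
```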
SPC file format
[ "Physics", "Chemistry" ]
363
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
10,813,277
https://en.wikipedia.org/wiki/Upper-atmospheric%20lightning
Upper-atmospheric lightning and ionospheric lightning are terms sometimes used by researchers to refer to a family of short-lived electrical-breakdown phenomena that occur well above the altitudes of normal lightning and storm clouds. Upper-atmospheric lightning is believed to be electrically induced forms of luminous plasma. The preferred usage is transient luminous event (TLE), because the various types of electrical-discharge phenomena in the upper atmosphere lack several characteristics of the more familiar tropospheric lightning. Transient luminous events have also been observed in far-ultraviolet images of Jupiter's upper atmosphere, high above the altitude of lightning-producing water clouds. Characteristics There are several types of TLEs, the most common being sprites. Sprites are flashes of bright red light that occur above storm systems. C-sprites (short for "columniform sprites") is the name given to vertical columns of red light. C-sprites exhibiting tendrils are sometimes called "carrot sprites". Other types of TLEs include sprite halos, ghosts, blue jets, gigantic jets, pixies, gnomes, trolls, blue starters, and ELVES. The acronym ELVES ("emission of light and very low frequency perturbations due to electromagnetic pulse sources") refers to a singular event which is commonly thought of as being plural. TLEs are secondary phenomena that occur in the upper atmosphere in association with underlying thunderstorm lightning. TLEs generally last anywhere from less than a millisecond to more than 2 seconds. The first video recording of a TLE was captured accidentally on July 6, 1989, when researcher R. C. Franz left a camera running overnight to view the night sky. When reviewing the video taken, two finger-like vertical images appeared in two frames of the film. Further video recordings of TLEs were taken later in 1989, when the Space Shuttle mission STS-34 was conducting the Mesoscale Lightning Observation Experiment. On October 21, 1989, TLEs were recorded during orbits 44 and 45. TLEs have been captured by a variety of optical recording systems, with the total number of recent recorded events (early 2009) estimated at many tens-of-thousands. The global rate of TLE occurrence has been estimated from satellite (FORMOSAT-2) observations to be several million events per year. History In the 1920s, the Scottish physicist C.T.R. Wilson predicted that electrical breakdown should occur in the atmosphere high above large thunderstorms. In ensuing decades, high altitude electrical discharges were reported by aircraft pilots and discounted by meteorologists until the first direct visual evidence was documented in 1989. Several years later, the optical signatures of these events were named 'sprites' by researchers to avoid inadvertently implying physical properties that were, at the time, still unknown. The terms red sprites and blue jets gained popularity after a video clip was circulated following an aircraft research campaign to study sprites in 1994. Sprites Sprites are large-scale electrical discharges which occur high above a thunderstorm cloud, or cumulonimbus, giving rise to a quite varied range of visual shapes. They are triggered by the discharges of positive lightning between the thundercloud and the ground. The phenomena were named after the mischievous sprite, e.g., Shakespeare's Ariel or Puck, and the name is also a backronym for stratospheric/mesospheric perturbations resulting from intense thunderstorm electrification.
They normally are colored reddish-orange or greenish-blue, with hanging tendrils below and arcing branches above. They can also be preceded by a reddish halo, known as a sprite halo. They often occur in clusters, reaching 50–90 km (31–56 mi) above the Earth's surface. Sprites have been witnessed thousands of times. Sprites have been held responsible for otherwise unexplained accidents involving high-altitude vehicular operations above thunderstorms. Jets Although jets are considered to be a type of upper-atmospheric lightning, it has been found that they are components of tropospheric lightning and a type of cloud-to-air discharge that initiates within a thunderstorm and travels upwards. In contrast, other types of TLEs are not electrically connected with tropospheric lightning—despite being triggered by it. The two main types of jets are blue jets and gigantic jets. Blue starters are considered to be a weaker form of blue jets. Blue jets Blue jets are believed to be initiated as "normal" lightning discharges between the upper positive charge region in a thundercloud and a negative "screening layer" present above this charge region. The positive end of the leader network fills the negative charge region before the negative end fills the positive charge region, and the positive leader subsequently exits the cloud and propagates upward. It was previously believed that blue jets were not directly related to lightning flashes, and that the presence of hail somehow led to their occurrence. They are also brighter than sprites and, as implied by their name, are blue in color. The color is believed to be due to a set of blue and near-ultraviolet emission lines from neutral and ionized molecular nitrogen. They were first recorded on October 21, 1989, on a monochrome video of a thunderstorm on the horizon taken from the Space Shuttle as it passed over Australia. Blue jets occur much less frequently than sprites. By 2007, fewer than a hundred images had been obtained. The majority of these images, which include the first color imagery, are associated with a single thunderstorm. These were taken in a series of 1994 aircraft flights to study sprites. More recently, the source and formation of blue jets has been observed from the International Space Station. Blue starters Blue starters were discovered on video from a night time research flight around thunderstorms and appear to be "an upward moving luminous phenomenon closely related to blue jets." They appear to be shorter and brighter than blue jets, reaching altitudes of only up to 20 km. "Blue starters appear to be blue jets that never quite make it," according to Dr. Victor P. Pasko, associate professor of electrical engineering. Gigantic jets Where blue jets are believed to initiate between the upper positive charge region and a negative screening layer directly above this region, gigantic jets appear to initiate as an intracloud flash between the middle negative and upper positive charge regions in the thundercloud. The negatively charged leader then escapes upward from the cloud toward the ionosphere before it can discharge within the cloud. Gigantic jets reach higher altitudes than blue jets, terminating at around 90 km. While they may appear to be visually similar to carrot-type sprites, gigantic jets differ in that they are not associated with cloud to ground lightning and propagate upward from the cloud at a slower rate.
Observations On September 14, 2001, scientists at the Arecibo Observatory photographed a gigantic jet—double the height of those previously observed—reaching around 70 km (43 mi) into the atmosphere. The jet was located above a thunderstorm over an ocean, and lasted under a second. The jet was initially observed to be traveling upward at a speed similar to that of typical lightning, but it then split in two and sped upward to the ionosphere, where it spread out in a bright burst of light. On July 22, 2002, five gigantic jets were observed over the South China Sea from Taiwan, reported in Nature. The jets lasted under a second, with shapes likened by the researchers to giant trees and carrots. On November 10, 2012, the Chinese Science Bulletin reported a gigantic jet event observed over a thunderstorm in mainland China on August 12, 2010. "GJ event that was clearly recorded in eastern China (storm center located at 35.6°N,119.8°E, near the Huanghai Sea)". On February 2, 2014, the Oro Verde Observatory of Argentina reported ten or more gigantic jet events observed over a thunderstorm in southern Entre Ríos. The storm center was located at 33°S, 60°W, near the city of Rosario. On August 13, 2016, photographer Phebe Pan caught a clear photo of a gigantic jet with a wide-angle lens while shooting Perseid meteors atop Shi Keng Kong peak in Guangdong province, and Li Hualong captured the same jet from a more distant location in Jiahe, Hunan, China. On March 28, 2017, photographer Jeff Miles captured four gigantic jets over Australia. On July 24, 2017, the Gemini Cloudcam at the Mauna Kea Observatory in Hawaii captured several gigantic jets as well as ionosphere-height gravity waves during one thunderstorm. On October 16, 2019, pilot Chris Holmes captured a high-resolution video of a gigantic jet from 35,000 feet (10.6 km) above the Gulf of Mexico near the Yucatán Peninsula. From 35 miles (56 km), Holmes's video shows a blue streamer reach up from the top of a thunderstorm to the ionosphere, becoming red at the top. Only then does a brilliant white lightning leader crawl slowly from the top of the cloud, reaching about 10% of the height of the gigantic jet before fading. On September 20, 2021, at 10:41 pm (02:41 UTC), facing NE from Cabo Rojo, Puerto Rico, photographer Frankie Lucena recorded a video of a gigantic jet plasma event which occurred over a thunderstorm in the area. On 15 February 2024, photographer JJ Rao (Nature by JJ) captured a gigantic jet in high-resolution slow-motion video from Derby, in the Kimberley Region of Western Australia. Other types Elves ELVES often appear as a dim, flattened, expanding glow around 400 km (250 mi) in diameter that lasts for, typically, just one millisecond. They occur in the ionosphere, about 100 km (62 mi) above the ground, over thunderstorms. Their color was unknown for some time, but is now known to be red. ELVES were first recorded on another shuttle mission, this time recorded off French Guiana on October 7, 1990. That ELVES was discovered in the Shuttle Video by the Mesoscale Lightning Experiment (MLE) team at Marshall Space Flight Center, AL, led by the Principal Investigator, Otha H. "Skeet" Vaughan, Jr. ELVES is a whimsical acronym for emissions of light and very low frequency perturbations due to electromagnetic pulse sources.
This refers to the process by which the light is generated: the excitation of nitrogen molecules due to electron collisions (the electrons possibly having been energized by the electromagnetic pulse caused by a discharge from an underlying thunderstorm). Trolls TROLLs (transient red optical luminous lineaments) occur after strong sprites, and appear as red spots with faint tails; on higher-speed cameras they appear as a rapid series of events, starting as a red glow that forms after a sprite tendril, which later produces a red streak downward from itself. They are similar to jets. Pixies Pixies were first observed during the STEPS program during the summer of 2000, a multi-organizational field program investigating the electrical characteristics of thunderstorms on the High Plains. A series of unusual, white luminous events atop the thunderstorm were observed over a 20-minute period, lasting for an average of 16 milliseconds each. They were later dubbed 'pixies'. These pixies are less than 100 meters across, and are not related to lightning. Ghosts Ghosts (greenish optical emission from sprite tops) are faint, green glows that appear within the footprint of a red sprite, remaining after the red has dissipated, and fading away in milliseconds. Though possible examples of ghosts can be seen in historical images, ghosts were first noted as an exclusive phenomenon by storm chasers Hank Schyma and Paul M. Smith in 2019. The first spectroscopy study to analyze the dynamics and chemistry of ghosts was led by the Atmospheric Electricity group of the Institute of Astrophysics of Andalusia (IAA). This experimental campaign reported the main contributors to the greenish hue of a single event recorded in 2019 to be atomic iron and nickel, molecular nitrogen and ionic molecular oxygen. A weak – but certain – contribution of atomic oxygen, atomic sodium and ionic silicon was also detected. Gnomes A gnome is a type of lightning that is a small, brief spike of light that points upward from a thunderstorm cloud's anvil top, caused as strong updrafts push moist air above the anvil. It lasts for only a few microseconds. It is about 200 meters wide, and is a maximum of 1 kilometer in height. Its color is unknown as it has only been observed in black-and-white footage. Most sources unofficially refer to them as "Gnomes". See also Aurora Heat lightning Schumann resonances Sprite (lightning) St. Elmo's fire Steve (atmospheric phenomenon) References External links Homepage of the Eurosprite campaign, itself part of the CAL (Coupled Atmospheric Layers) research group March 2, 1999, University of Houston: UH Physicists Pursue Lightning-Like Mysteries Quote: "...Red sprites and blue jets are brief but powerful lightning-like flashes that appear at altitudes of 40–100 km (25–60 miles) above thunderstorms..." Ground and Balloon-Borne Observations of Sprites and Jets Barrington-Leigh, C. P., "ELVES: Ionospheric Heating By the Electromagnetic Pulses from Lightning (A primer)". Space Science Lab, Berkeley. "Darwin Sprites '97". Space Physics Group, University of Otago. Gibbs, W. Wayt, "Sprites and ELVES: Lightning's strange cousins flicker faster than light itself". San Francisco. ScientificAmerican.com. Barrington-Leigh, Christopher, "VLF Research at Palmer Station". Sprites, jets and TLE pictures and articles High speed video (10,000 frame/s) taken by Hans Stenbaek-Nielsen, University of Alaska Video Reveals 'Sprite' Lightning Secrets, Livescience article, 2007.
Video evidence Pictures and video of two separate gigantic jets above Oklahoma Gigantic jets between a thundercloud and the ionosphere. Huge Mystery Flashes Seen In Outer Atmosphere Sprite Gallery http://webarchive.loc.gov/all/20020914103454/http://elf.gi.alaska.edu/ The Endless, Short film inspired by Sprite Cloud Flashes ZT Research Electrical phenomena Lightning Terrestrial plasmas Space plasmas Weather hazards Light sources
Upper-atmospheric lightning
[ "Physics" ]
2,960
[ "Space plasmas", "Physical phenomena", "Weather hazards", "Weather", "Astrophysics", "Electrical phenomena", "Lightning" ]
10,813,955
https://en.wikipedia.org/wiki/Male%20egg
Male egg can refer to any of the following: An egg that artificially contains genetic material from a male. An egg from a haplodiploid species such as an ant or bee that is unfertilized and will hatch a male. A fertilized egg that is developing in a male organism. This article focuses on the first definition. Male eggs are the result of a process in which the eggs of a female would be emptied of their genetic contents (a technique similar to that used in the cloning process), and those contents would be replaced with male DNA. Such eggs could then be fertilized by sperm. The procedure was conceived by Calum MacKellar, a Scottish bioethicist. With this technique, two males could be the biological parents of a child. However, such a procedure would additionally require an artificial womb or a female gestational carrier. In 2023, male eggs derived from the cells of male mice were developed and used to create bi-paternal mice that grew into adulthood; bi-paternal mice had been obtained in 2008, but they only survived for a few days. See also Female sperm Male pregnancy LGBT reproduction Genomic imprinting References Applied genetics Biological engineering Biotechnology Genetic engineering Medical ethics Sexuality
Male egg
[ "Chemistry", "Engineering", "Biology" ]
244
[ "Biological engineering", "Behavior", "Sex", "Genetic engineering", "Biotechnology", "nan", "Molecular biology", "Sexuality" ]
10,813,956
https://en.wikipedia.org/wiki/Female%20sperm
Female sperm can refer to either: A sperm which contains an X chromosome, produced in the usual way in the testicles, referring to the occurrence of such a sperm fertilizing an egg and giving birth to a female. A sperm which artificially contains genetic material from a female. Since the late 1980s, scientists have explored how to produce sperm where all of the chromosomes come from a female donor. Artificial female sperm production Creating female sperm was first raised as a possibility in a patent filed in 1991 by injecting a female's cells into a male's testicles, though the patent focused mostly on injecting altered male cells into a male's testes to correct genetic diseases. In 1997, Japanese scientists partially confirmed such techniques by creating chicken female sperm in a similar manner. "However, the ratio of produced W chromosome-bearing (W-bearing) spermatozoa fell substantially below expectations. It is therefore concluded that most of the W-bearing PGC could not differentiate into spermatozoa because of restricted spermatogenesis." These simple transplantation methods follow from earlier observations by developmental biologists that germ stem cells are autonomous in the sense that they can begin the processes to become both sperm and eggs. One potential roadblock to injecting a female's cells into a male's testicles is that the male's immune system might attack and destroy the female's cells. In usual circumstances, when foreign cells (such as cells or organs from other people, or infectious bacteria) are put into a human body, the immune system will reject such cells or organs. However, a special property of testicles is that they are immune-privileged, that is, a male's immune system will not attack foreign cells (such as a female's cells) injected into the sperm-producing part of the testicles. Thus, a female's cells will remain in the male's testicles long enough to be converted into sperm. However, there are more serious challenges. Biologists have well established that male sperm production relies on certain genes on the Y chromosome, which, when missing or defective, lead to such males producing little to no sperm in their testicles. An analogy, then, is that XX cells have complete Y chromosome deficiency. While many genes on the Y chromosome have backups (homologues) on other chromosomes, a few genes such as RBMY on the Y chromosome do not have such backups, and their effects must be compensated for in order to convert a female's cells into sperm. In 2007, a patent application was filed on methods for creating human female sperm using artificial or natural Y chromosomes and testicular transplantation. Key to the successful creation of female sperm (and male eggs) will be inducing male epigenetic markings for female cells that initially have female markings, with techniques for doing so disclosed in the patent application. In 2018, Chinese research scientists produced 29 viable mouse offspring from two female mice by creating sperm-like structures from haploid embryonic stem cells, using gene editing to alter imprinted regions of DNA. Experts noted that there was little chance of these techniques being applied to humans in the near future. See also LGBT reproduction Male egg References Sex Applied genetics Biological engineering Biotechnology Genetic engineering Medical ethics
Female sperm
[ "Chemistry", "Engineering", "Biology" ]
662
[ "Biological engineering", "Sex", "Genetic engineering", "Biotechnology", "nan", "Molecular biology" ]
7,153,598
https://en.wikipedia.org/wiki/Richards%20equation
The Richards equation represents the movement of water in unsaturated soils, and is attributed to Lorenzo A. Richards who published the equation in 1931. It is a quasilinear partial differential equation; its analytical solution is often limited to specific initial and boundary conditions. Proof of the existence and uniqueness of solution was given only in 1983 by Alt and Luckhaus. The equation is based on the Darcy-Buckingham law representing flow in porous media under variably saturated conditions, which is stated as

q = −K(h) ∇(h + z),

where q is the volumetric flux; h is the liquid pressure head, which is negative for unsaturated porous media; K(h) is the unsaturated hydraulic conductivity; and ∇z is the geodetic head gradient, which is assumed as ∇z = (0, 0, 1)ᵀ for three-dimensional problems. Considering the law of mass conservation for an incompressible porous medium and constant liquid density, expressed as

∂θ/∂t = −∇ · q − S,

where θ is the volumetric water content and S is the sink term [T−1], typically root water uptake, and then substituting the flux by the Darcy-Buckingham law, the following mixed-form Richards equation is obtained:

∂θ/∂t = ∇ · [K(h) ∇(h + z)] − S.

For modeling of one-dimensional infiltration this divergence form reduces to

∂θ/∂t = ∂/∂z [K(h) (∂h/∂z + 1)] − S.

Although attributed to L. A. Richards, the equation was originally introduced 9 years earlier by Lewis Fry Richardson in 1922. Formulations The Richards equation appears in many articles in the environmental literature because it describes the flow in the vadose zone between the atmosphere and the aquifer. It also appears in pure mathematical journals because it has non-trivial solutions. The above-given mixed formulation involves two unknown variables: θ and h. This can be easily resolved by considering the constitutive relation θ = θ(h), which is known as the water retention curve. Applying the chain rule, the Richards equation may be reformulated as either the h-form (head based) or the θ-form (saturation based) Richards equation. Head-based Applying the chain rule to the temporal derivative leads to

∂θ/∂t = (dθ/dh) ∂h/∂t = C(h) ∂h/∂t,

where C(h) = dθ/dh [L−1] is known as the retention water capacity. The equation is then stated as

C(h) ∂h/∂t = ∇ · [K(h) ∇(h + z)] − S.

The head-based Richards equation is prone to the following computational issue: the discretized temporal derivative using the implicit Rothe method yields the following approximation:

∂θ/∂t ≈ C(h^(k+1)) (h^(k+1) − h^k)/Δt.

This approximation produces an error that affects the mass conservation of the numerical solution, and so special strategies for temporal derivative treatment are necessary. Saturation-based Applying the chain rule to the spatial derivative leads to

∇h = (dh/dθ) ∇θ,

where D(θ) = K(θ) dh/dθ = K(θ)/C(θ) is known as the soil water diffusivity. The equation is then stated as

∂θ/∂t = ∇ · [D(θ) ∇θ + K(θ) ∇z] − S.

The saturation-based Richards equation is prone to the following computational issues. Since the limits D(θ) → ∞ as θ → θs and h → −∞ as θ → θr, where θs is the saturated (maximal) water content and θr is the residual (minimal) water content, a successful numerical solution is restricted to ranges of water content satisfactorily below the full saturation (the saturation should be even lower than the air entry value) as well as satisfactorily above the residual water content. Parametrization The Richards equation in any of its forms involves soil hydraulic properties, which is a set of five parameters representing soil type. The soil hydraulic properties typically consist of the water retention curve parameters by van Genuchten:

θ(h) = θr + (θs − θr) [1 + (−αh)^n]^(−m),

where α is the inverse of the air entry value [L−1], n is the pore size distribution parameter [-], and m is usually assumed as m = 1 − 1/n. Further, the saturated hydraulic conductivity Ks (which for a non-isotropic environment is a tensor of second order) should also be provided.
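The van Genuchten retention curve above is straightforward to code directly. The sketch below is an illustration, not part of the article: the Mualem conductivity model added alongside it is one common companion choice for K(h), not prescribed by the text, and the parameter values are placeholders loosely typical of a loam.

```python
import numpy as np

def theta_vg(h, theta_r, theta_s, alpha, n):
    """van Genuchten water retention curve with m = 1 - 1/n.
    h is the pressure head (negative when unsaturated)."""
    m = 1 - 1 / n
    h = np.asarray(h, dtype=float)
    se = np.where(h < 0, (1 + (alpha * np.abs(h))**n)**(-m), 1.0)
    return theta_r + (theta_s - theta_r) * se

def k_mualem(h, k_s, alpha, n):
    """Mualem-type unsaturated conductivity: an assumed, commonly
    used companion model, not prescribed by the article."""
    m = 1 - 1 / n
    h = np.asarray(h, dtype=float)
    se = np.where(h < 0, (1 + (alpha * np.abs(h))**n)**(-m), 1.0)
    return k_s * np.sqrt(se) * (1 - (1 - se**(1 / m))**m)**2

# Placeholder loam-like parameters: theta_r, theta_s [-], alpha [1/m], n [-]
h = np.array([-10.0, -1.0, -0.1, 0.0])   # pressure heads in metres
print(theta_vg(h, 0.078, 0.43, 3.6, 1.56))
print(k_mualem(h, 2.9e-6, 3.6, 1.56))    # K_s in m/s (placeholder)
```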
Identification of these parameters is often non-trivial and has been the subject of numerous publications over several decades.

Limitations

The numerical solution of the Richards equation is one of the most challenging problems in earth science. Richards' equation has been criticized for being computationally expensive and unpredictable because there is no guarantee that a solver will converge for a particular set of soil constitutive relations. Advanced computational and software solutions are required to overcome this obstacle. The method has also been criticized for over-emphasizing the role of capillarity, and for being in some ways 'overly simplistic'. In one-dimensional simulations of rainfall infiltration into dry soils, fine spatial discretization (less than one cm) is required near the land surface, which is due to the small size of the representative elementary volume for multiphase flow in porous media. In three-dimensional applications the numerical solution of the Richards equation is subject to aspect ratio constraints, where the ratio of horizontal to vertical resolution in the solution domain should be less than about 7.

References

See also

Infiltration (hydrology) Water retention curve Finite water-content vadose zone flow method Soil Moisture Velocity Equation Soil physics Hydrology Partial differential equations
Richards equation
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
924
[ "Environmental engineering", "Hydrology", "Applied and interdisciplinary physics", "Soil physics" ]
7,154,332
https://en.wikipedia.org/wiki/Skewness%20risk
Skewness risk in forecasting models utilized in the financial field is the risk that results when observations are not spread symmetrically around an average value, but instead have a skewed distribution. As a result, the mean and the median can be different. Skewness risk can arise in any quantitative model that assumes a symmetric distribution (such as the normal distribution) but is applied to skewed data. Ignoring skewness risk, by assuming that variables are symmetrically distributed when they are not, will cause any model to understate the risk of variables with high skewness. Skewness risk plays an important role in hypothesis testing. The analysis of variance, one of the most common tests used in hypothesis testing, assumes that the data is normally distributed. If the variables tested are not normally distributed because they are too skewed, the test cannot be used. Instead, nonparametric tests can be used, such as the Mann–Whitney test for unpaired situations or the sign test for paired situations. Skewness risk and kurtosis risk also have technical implications in the calculation of value at risk. If either is ignored, the value at risk calculations will be flawed. Benoît Mandelbrot, a French mathematician, extensively researched this issue. He argued that the extensive reliance on the normal distribution for much of the body of modern finance and investment theory is a serious flaw of any related models (including the Black–Scholes model and CAPM). He explained his views and alternative finance theory in a book: The (Mis)Behavior of Markets: A Fractal View of Risk, Ruin and Reward. In options markets, the difference in implied volatility at different strike prices represents the market's view of skew, and is called volatility skew. (In pure Black–Scholes, implied volatility is constant with respect to strike and time to maturity.) Skewness for bonds Bonds have a skewed return. A bond will either pay the full amount on time (with a probability ranging from very likely to much less likely, depending on quality), or pay less than that. A normal bond never pays more than in the "good" case. See also Skewness Kurtosis risk Taleb distribution Stochastic volatility References Mandelbrot, Benoit B., and Hudson, Richard L., The (mis)behaviour of markets: a fractal view of risk, ruin and reward, London: Profile, 2004, Johansson, A. (2005) "Pricing Skewness and Kurtosis Risk on the Swedish Stock Market", Masters Thesis, Department of Economics, Lund University, Sweden Premaratne, G., Bera, A. K. (2000). Modeling Asymmetry and Excess Kurtosis in Stock Return Data. Office of Research Working Paper Number 00-0123, University of Illinois Statistical deviation and dispersion Investment Risk analysis Mathematical finance Applied probability
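A quick numerical illustration of the mean–median gap described above, using a lognormal sample as an arbitrary stand-in for a skewed financial variable (a sketch, not a model of any particular market):

import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)   # right-skewed sample

mean, median = x.mean(), np.median(x)
skew = np.mean((x - mean) ** 3) / x.std() ** 3         # sample skewness (third standardized moment)
print(mean, median, skew)   # mean > median and skewness > 0 for right-skewed data

A symmetric-distribution model calibrated to the mean of such data would systematically understate the frequency of large outcomes on the long-tail side and misstate the typical (median) one.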
Skewness risk
[ "Mathematics" ]
597
[ "Applied mathematics", "Mathematical finance", "Applied probability" ]
9,305,752
https://en.wikipedia.org/wiki/Fanno%20flow
In fluid dynamics, Fanno flow (after Italian engineer Gino Girolamo Fanno) is the adiabatic flow through a constant area duct where the effect of friction is considered. Compressibility effects often come into consideration, although the Fanno flow model certainly also applies to incompressible flow. For this model, the duct area remains constant, the flow is assumed to be steady and one-dimensional, and no mass is added within the duct. The Fanno flow model is considered an irreversible process due to viscous effects. The viscous friction causes the flow properties to change along the duct. The frictional effect is modeled as a shear stress at the wall acting on the fluid with uniform properties over any cross section of the duct. For a flow with an upstream Mach number greater than 1.0 in a sufficiently long duct, deceleration occurs and the flow can become choked. On the other hand, for a flow with an upstream Mach number less than 1.0, acceleration occurs and the flow can become choked in a sufficiently long duct. It can be shown that for the flow of a calorically perfect gas the maximum entropy occurs at M = 1.0.

Theory

The Fanno flow model begins with a differential equation that relates the change in Mach number with respect to the length of the duct, dM/dx. Other terms in the differential equation are the heat capacity ratio, γ, the Fanning friction factor, f, and the hydraulic diameter, Dh:

dM²/dx = (4f/Dh) · γM⁴ (1 + ((γ − 1)/2)M²) / (1 − M²)

Assuming the Fanning friction factor is a constant along the duct wall, the differential equation can be solved easily. One must keep in mind, however, that the value of the Fanning friction factor can be difficult to determine for supersonic and especially hypersonic flow velocities. The resulting relation is shown below, where L* is the required duct length to choke the flow from the given upstream Mach number:

4fL*/Dh = (1 − M²)/(γM²) + ((γ + 1)/(2γ)) · ln[(γ + 1)M² / (2 + (γ − 1)M²)]

The left-hand side is often called the Fanno parameter. Equally important to the Fanno flow model is the dimensionless ratio of the change in entropy over the heat capacity at constant pressure, cp:

Δs/cp = ln[ M^((γ−1)/γ) · ((1 + ((γ − 1)/2)M²) / ((γ + 1)/2))^(−(γ+1)/(2γ)) ]

The above equation can be rewritten in terms of the static to stagnation temperature ratio, which, for a calorically perfect gas, is equal to the dimensionless enthalpy ratio, H = T/T0. Using 1 + ((γ − 1)/2)M² = 1/H,

Δs/cp = ln[ (2(1 − H)/((γ − 1)H))^((γ−1)/(2γ)) · ((γ + 1)H/2)^((γ+1)/(2γ)) ]

The equation above can be used to plot the Fanno line, which represents a locus of states for given Fanno flow conditions on an H–ΔS diagram. In the diagram, the Fanno line reaches maximum entropy at H = 0.833 and the flow is choked. According to the second law of thermodynamics, entropy must always increase for Fanno flow. This means that a subsonic flow entering a duct with friction will have an increase in its Mach number until the flow is choked. Conversely, the Mach number of a supersonic flow will decrease until the flow is choked. Each point on the Fanno line corresponds with a different Mach number, and the movement to choked flow is shown in the diagram. The Fanno line defines the possible states for a gas when the mass flow rate and total enthalpy are held constant, but the momentum varies. Each point on the Fanno line will have a different momentum value, and the change in momentum is attributable to the effects of friction.

Additional Fanno flow relations

As was stated earlier, the area and mass flow rate in the duct are held constant for Fanno flow. Additionally, the stagnation temperature remains constant:

ṁ = ρvA = constant, T0 = T (1 + ((γ − 1)/2)M²) = constant.

The relations below are written with the * symbol representing the throat location where choking can occur. A stagnation property contains a 0 subscript.
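As a numeric companion to the choking-length relation above, the following sketch (Python; the function name and example values are mine) evaluates the Fanno parameter 4fL*/Dh and converts it to a choking length:

import math

def fanno_parameter(M, gamma=1.4):
    # 4 f L* / Dh: dimensionless duct length needed to choke a Fanno flow at Mach M.
    return (1.0 - M**2) / (gamma * M**2) \
        + (gamma + 1.0) / (2.0 * gamma) * math.log((gamma + 1.0) * M**2 / (2.0 + (gamma - 1.0) * M**2))

# Example: subsonic inlet at M = 0.5, Fanning friction factor f = 0.005, Dh = 0.1 m.
M, f, Dh = 0.5, 0.005, 0.1
L_star = fanno_parameter(M) * Dh / (4.0 * f)
print(round(fanno_parameter(M), 4), round(L_star, 2))   # about 1.0691 and 5.35 m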
Differential equations can also be developed and solved to describe Fanno flow property ratios with respect to the values at the choking location. The ratios for the pressure, density, temperature, velocity and stagnation pressure are, respectively:

p/p* = (1/M) · [(γ + 1) / (2 + (γ − 1)M²)]^(1/2)
ρ/ρ* = (1/M) · [(2 + (γ − 1)M²) / (γ + 1)]^(1/2)
T/T* = (γ + 1) / (2 + (γ − 1)M²)
v/v* = M · [(γ + 1) / (2 + (γ − 1)M²)]^(1/2)
p0/p0* = (1/M) · [(2 + (γ − 1)M²) / (γ + 1)]^((γ+1)/(2(γ−1)))

They are represented graphically along with the Fanno parameter.

Applications

The Fanno flow model is often used in the design and analysis of nozzles. In a nozzle, the converging or diverging area is modeled with isentropic flow, while the constant area section afterwards is modeled with Fanno flow. For given upstream conditions at point 1 as shown in Figures 3 and 4, calculations can be made to determine the nozzle exit Mach number and the location of a normal shock in the constant area duct. Point 2 labels the nozzle throat, where M = 1 if the flow is choked. Point 3 labels the end of the nozzle where the flow transitions from isentropic to Fanno. With a high enough initial pressure, supersonic flow can be maintained through the constant area duct, similar to the desired performance of a blowdown-type supersonic wind tunnel. However, these figures show the shock wave before it has moved entirely through the duct. If a shock wave is present, the flow transitions from the supersonic portion of the Fanno line to the subsonic portion before continuing towards M = 1. The movement in Figure 4 is always from the left to the right in order to satisfy the second law of thermodynamics.

The Fanno flow model is also used extensively with the Rayleigh flow model. These two models intersect at points on the enthalpy–entropy and Mach number–entropy diagrams, which is meaningful for many applications. However, the entropy values for each model are not equal at the sonic state. The change in entropy is 0 at M = 1 for each model, but the previous statement means the change in entropy from the same arbitrary point to the sonic point is different for the Fanno and Rayleigh flow models. If initial values of si and Mi are defined, a new equation for dimensionless entropy versus Mach number can be defined for each model. These equations are shown below for Fanno and Rayleigh flow, respectively:

(s − si)/cp = ln[ (M/Mi)^((γ−1)/γ) · ((1 + ((γ − 1)/2)Mi²) / (1 + ((γ − 1)/2)M²))^((γ+1)/(2γ)) ]
(s − si)/cp = ln[ (M/Mi)² · ((1 + γMi²) / (1 + γM²))^((γ+1)/γ) ]

Figure 5 shows the Fanno and Rayleigh lines intersecting with each other for initial conditions of si = 0 and Mi = 3. The intersection points are calculated by equating the new dimensionless entropy equations with each other, resulting in the relation below:

M₂² = ((γ − 1)M₁² + 2) / (2γM₁² − (γ − 1))

The intersection points occur at the given initial Mach number and its post-normal shock value. For Figure 5, these values are M = 3 and 0.4752, which can be found in the normal shock tables listed in most compressible flow textbooks. A given flow with a constant duct area can switch between the Fanno and Rayleigh models at these points. See also Rayleigh flow Mass injection flow Isentropic process Isothermal flow Gas dynamics Compressible flow Choked flow Enthalpy Entropy Isentropic nozzle flow References External links Purdue University Adiabatic and Isothermal Fanno flow calculators University of Kentucky Fanno flow Webcalculator Maurice W. Downey, Gino Fanno Flow regimes Aerodynamics
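The property ratios quoted above are easy to tabulate programmatically; a small sketch under the same perfect-gas assumptions (names are mine):

import math

def fanno_ratios(M, gamma=1.4):
    # Fanno flow properties relative to the choking (*) state for Mach number M.
    a = (gamma + 1.0) / (2.0 + (gamma - 1.0) * M**2)
    return {
        "T/T*": a,
        "p/p*": math.sqrt(a) / M,
        "rho/rho*": 1.0 / (M * math.sqrt(a)),
        "v/v*": M * math.sqrt(a),
        "p0/p0*": (1.0 / M) * (1.0 / a) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))),
    }

print(fanno_ratios(2.0))   # e.g. T/T* = 0.667, p/p* = 0.408, p0/p0* = 1.688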
Fanno flow
[ "Chemistry", "Engineering" ]
1,396
[ "Aerospace engineering", "Aerodynamics", "Flow regimes", "Fluid dynamics" ]
9,306,272
https://en.wikipedia.org/wiki/Shock%20%28fluid%20dynamics%29
Shock is an abrupt discontinuity in the flow field that occurs in flows where the local flow speed exceeds the local sound speed; more specifically, it occurs where the Mach number exceeds 1.

Explanation of phenomena

A shock is formed due to the coalescence of various small pressure pulses. Sound waves are pressure waves, and it is at the speed of sound that disturbances are communicated through the medium. When an object moves in a flow field, it sends out disturbances which propagate at the speed of sound and adjust the remaining flow field accordingly. However, if the object itself travels at a speed greater than sound, the disturbances it creates cannot travel ahead of it and be communicated to the rest of the flow field, and this results in an abrupt change of properties, which is termed a shock in gas dynamics terminology. Shocks are characterized by discontinuous changes in flow properties such as velocity, pressure, temperature, etc. Typically, shock thickness is of a few mean free paths (of the order of 10⁻⁸ m). Shocks are irreversible occurrences in supersonic flows (i.e. the entropy increases).

Normal shock formulas

M₂² = (2 + (γ − 1)M₁²) / (2γM₁² − (γ − 1))
P₂/P₁ = 1 + (2γ/(γ + 1))(M₁² − 1)
ρ₂/ρ₁ = (γ + 1)M₁² / (2 + (γ − 1)M₁²)
T₂/T₁ = (P₂/P₁)(ρ₁/ρ₂)
T₀₂ = T₀₁

where the index 1 refers to upstream properties, and the index 2 refers to downstream properties. The subscript 0 refers to total or stagnation properties. T is temperature, M is the Mach number, P is pressure, ρ is density, and γ is the ratio of specific heats.

See also Mach number Sound barrier Supersonic flow References Fluid dynamics
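The normal shock formulas above translate directly into a small calculator; a sketch for a calorically perfect gas (the function name is mine):

import math

def normal_shock(M1, gamma=1.4):
    # Downstream Mach number and static jump ratios across a normal shock.
    assert M1 > 1.0, "a normal shock requires supersonic upstream flow"
    M2 = math.sqrt((2.0 + (gamma - 1.0) * M1**2) / (2.0 * gamma * M1**2 - (gamma - 1.0)))
    p2_p1 = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1**2 - 1.0)
    r2_r1 = (gamma + 1.0) * M1**2 / (2.0 + (gamma - 1.0) * M1**2)
    T2_T1 = p2_p1 / r2_r1
    return M2, p2_p1, r2_r1, T2_T1

print(normal_shock(2.0))   # (0.577, 4.50, 2.667, 1.688) for gamma = 1.4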
Shock (fluid dynamics)
[ "Chemistry", "Engineering" ]
309
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
9,307,607
https://en.wikipedia.org/wiki/Planar%20laser-induced%20fluorescence
Planar laser-induced fluorescence (PLIF) is an optical diagnostic technique widely used for flow visualization and quantitative measurements. PLIF has been used for velocity, concentration, temperature and pressure measurements.

Working

A PLIF setup consists of a source of light (usually a laser), an arrangement of lenses to form a sheet, a fluorescent medium, collection optics and a detector. The light from the source illuminates the medium, which then fluoresces. This signal is captured by the detector and can be related to the various properties of the medium. The typical lasers used as light sources are pulsed, which provide a higher peak power than continuous-wave lasers. The short pulse duration is also useful for good temporal resolution. Some of the widely used laser sources are Nd:YAG lasers, dye lasers, excimer lasers, and ion lasers. The light from the laser (usually a beam) is passed through a set of lenses and/or mirrors to form a sheet, which is then used to illuminate the medium. This medium is either made up of fluorescent material or can be seeded with a fluorescent substance. The signal is usually captured by a CCD or CMOS camera (sometimes intensified cameras are also used). Timing electronics are often used to synchronize pulsed light sources with intensified cameras.

Basic principles

Comparison with other techniques

Advantages - Unlike several other flow imaging techniques, PLIF may be combined with particle image velocimetry (PIV). This allows for the simultaneous measurement of a fluid velocity field and species concentration.

Limitations

the flowfield must contain molecular species with an optical resonance wavelength that can be accessed by a laser
temperature measurements typically require two laser sources
velocity measurements are typically practical only for high Mach number flows (near sonic or supersonic)
the signal-to-noise ratio is often limited by detector shot noise
fluorescence interferences from other species, especially from hydrocarbons in high pressure reacting flows
attenuation of the laser sheet across the flow field, or reabsorption of fluorescence before it reaches the detector, can lead to systematic errors

Applications

See also Fluorescence Laser-induced fluorescence Flow visualization Particle image velocimetry (PIV) References Measurement Fluid dynamics
Planar laser-induced fluorescence
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
448
[ "Physical quantities", "Chemical engineering", "Quantity", "Measurement", "Size", "Piping", "Fluid dynamics" ]
1,522,626
https://en.wikipedia.org/wiki/Wigner%E2%80%93Seitz%20cell
The Wigner–Seitz cell, named after Eugene Wigner and Frederick Seitz, is a primitive cell which has been constructed by applying Voronoi decomposition to a crystal lattice. It is used in the study of crystalline materials in crystallography. The unique property of a crystal is that its atoms are arranged in a regular three-dimensional array called a lattice. All the properties attributed to crystalline materials stem from this highly ordered structure. Such a structure exhibits discrete translational symmetry. In order to model and study such a periodic system, one needs a mathematical "handle" to describe the symmetry and hence draw conclusions about the material properties consequent to this symmetry. The Wigner–Seitz cell is a means to achieve this. A Wigner–Seitz cell is an example of a primitive cell, which is a unit cell containing exactly one lattice point. For any given lattice, there are an infinite number of possible primitive cells. However, there is only one Wigner–Seitz cell for any given lattice. It is the locus of points in space that are closer to that lattice point than to any of the other lattice points. A Wigner–Seitz cell, like any primitive cell, is a fundamental domain for the discrete translation symmetry of the lattice. The primitive cell of the reciprocal lattice in momentum space is called the Brillouin zone.

Overview

Background

The concept of Voronoi decomposition was investigated by Peter Gustav Lejeune Dirichlet, leading to the name Dirichlet domain. Further contributions were made by Evgraf Fedorov (Fedorov parallelohedron), Georgy Voronoy (Voronoi polyhedron), and Paul Niggli (Wirkungsbereich). The application to condensed matter physics was first proposed by Eugene Wigner and Frederick Seitz in a 1933 paper, where it was used to solve the Schrödinger equation for free electrons in elemental sodium. They approximated the shape of the Wigner–Seitz cell in sodium, which is a truncated octahedron, as a sphere of equal volume, and solved the Schrödinger equation exactly using periodic boundary conditions, which require ∂ψ/∂r = 0 at the surface of the sphere. A similar calculation which also accounted for the non-spherical nature of the Wigner–Seitz cell was performed later by John C. Slater. There are only five topologically distinct polyhedra which tile three-dimensional space. These are referred to as the parallelohedra. They are the subject of mathematical interest, such as in higher dimensions. These five parallelohedra can be used to classify the three-dimensional lattices using the concept of a projective plane, as suggested by John Horton Conway and Neil Sloane. However, while a topological classification considers any affine transformation to lead to an identical class, a more specific classification leads to 24 distinct classes of Voronoi polyhedra with parallel edges which tile space. For example, the rectangular cuboid, right square prism, and cube belong to the same topological class, but are distinguished by different ratios of their sides. This classification of the 24 types of Voronoi polyhedra for Bravais lattices was first laid out by Boris Delaunay.

Definition

The Wigner–Seitz cell around a lattice point is defined as the locus of points in space that are closer to that lattice point than to any of the other lattice points. It can be shown mathematically that a Wigner–Seitz cell is a primitive cell. This implies that the cell spans the entire direct space without leaving any gaps or holes, a property known as tessellation.
Constructing the cell

The general mathematical concept embodied in a Wigner–Seitz cell is more commonly called a Voronoi cell, and the partition of the plane into these cells for a given set of point sites is known as a Voronoi diagram. The cell may be chosen by first picking a lattice point. After a point is chosen, lines are drawn to all nearby lattice points. At the midpoint of each line, another line is drawn normal to each of the first set of lines. The smallest area enclosed in this way is called the Wigner–Seitz primitive cell. For a three-dimensional lattice, the steps are analogous, but instead of drawing perpendicular lines at the midpoints, perpendicular planes are drawn at the midpoints of the lines between the lattice points. As in the case of all primitive cells, all area or space within the lattice can be filled by Wigner–Seitz cells and there will be no gaps. Nearby lattice points are continually examined until the area or volume enclosed is the correct area or volume for a primitive cell. Alternatively, if the basis vectors of the lattice are reduced using lattice reduction, only a set number of lattice points need to be used. In two dimensions, only the lattice points that make up the 4 unit cells that share a vertex with the origin need to be used. In three dimensions, only the lattice points that make up the 8 unit cells that share a vertex with the origin need to be used.

Composite lattices

For composite lattices (crystals which have more than one vector in their basis), each single lattice point represents multiple atoms. We can break apart each Wigner–Seitz cell into subcells by further Voronoi decomposition according to the closest atom, instead of the closest lattice point. For example, the diamond crystal structure contains a two-atom basis. In diamond, carbon atoms have tetrahedral sp3 bonding, but since tetrahedra do not tile space, the Voronoi decomposition of the diamond crystal structure is actually the triakis truncated tetrahedral honeycomb. Another example is applying Voronoi decomposition to the atoms in the A15 phases, which forms the polyhedral approximation of the Weaire–Phelan structure.

Symmetry

The Wigner–Seitz cell always has the same point symmetry as the underlying Bravais lattice. For example, the cube, truncated octahedron, and rhombic dodecahedron have point symmetry Oh, since the respective Bravais lattices used to generate them all belong to the cubic lattice system, which has Oh point symmetry.

Brillouin zone

In practice, the Wigner–Seitz cell itself is rarely used as a description of direct space, where the conventional unit cells are usually used instead. However, the same decomposition is extremely important when applied to reciprocal space. The Wigner–Seitz cell in reciprocal space is called the Brillouin zone, which is used in constructing band diagrams to determine whether a material will be a conductor, semiconductor or insulator.

See also Delaunay triangulation Coordination geometry Crystal field theory Wigner crystal References Crystallography Mineralogy
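Because the Wigner–Seitz cell is just the Voronoi cell of a lattice point, membership can be tested by the nearest-lattice-point comparison described above. A minimal sketch in Python (function name mine); the 26 neighbors checked correspond to the 8 unit cells sharing a vertex with the origin mentioned in the text, which suffices for a reduced basis:

import numpy as np

def in_wigner_seitz(point, basis):
    # True if `point` lies in the Wigner-Seitz cell of the lattice point at the origin.
    # `basis` is a 3x3 array whose rows are the (reduced) lattice basis vectors.
    point = np.asarray(point, dtype=float)
    d0 = np.linalg.norm(point)                    # distance to the origin's lattice point
    for i in (-1, 0, 1):
        for j in (-1, 0, 1):
            for k in (-1, 0, 1):
                if (i, j, k) == (0, 0, 0):
                    continue
                neighbor = i * basis[0] + j * basis[1] + k * basis[2]
                if np.linalg.norm(point - neighbor) < d0:
                    return False                  # another lattice point is closer
    return True

cubic = np.eye(3)                                  # simple cubic lattice, spacing 1
print(in_wigner_seitz([0.2, 0.1, -0.3], cubic))    # True: inside the central cube
print(in_wigner_seitz([0.9, 0.0, 0.0], cubic))     # False: the (1,0,0) site is closer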
Wigner–Seitz cell
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,371
[ "Crystallography", "Condensed matter physics", "Materials science" ]
1,522,677
https://en.wikipedia.org/wiki/Cope%20rearrangement
The Cope rearrangement is an extensively studied organic reaction involving the [3,3]-sigmatropic rearrangement of 1,5-dienes. It was developed by Arthur C. Cope and Elizabeth Hardy. For example, 3-methyl-hexa-1,5-diene heated to 300 °C yields hepta-1,5-diene. The Cope rearrangement causes the fluxional states of the molecules in the bullvalene family.

Mechanism

The Cope rearrangement is the prototypical example of a concerted sigmatropic rearrangement. It is classified as a [3,3]-sigmatropic rearrangement with the Woodward–Hoffmann symbol [π2s+σ2s+π2s] and is therefore thermally allowed. It is sometimes useful to think of it as going through a transition state energetically and structurally equivalent to a diradical, although the diradical is not usually a true intermediate (potential energy minimum). The chair transition state is preferred in open-chain systems (as shown by the Doering–Roth experiments). However, conformationally constrained systems like cis-1,2-divinylcyclopropanes can undergo the rearrangement in the boat conformation. It is currently generally accepted that most Cope rearrangements follow an allowed concerted route through a Hückel aromatic transition state and that a diradical intermediate is not formed. However, the concerted reaction can often be asynchronous, and electronically perturbed systems may have considerable diradical character at the transition state. A representative illustration of the transition state of the Cope rearrangement of the electronically neutral hexa-1,5-diene is presented below. Here one can see that the two π-bonds are breaking while two new π-bonds are forming, and simultaneously the σ-bond is breaking while a new σ-bond is forming. In contrast to the Claisen rearrangement, Cope rearrangements without strain release or electronic perturbation are often close to thermally neutral, and may therefore reach only partial conversion due to an insufficiently favorable equilibrium constant. In the case of hexa-1,5-diene, the rearrangement is degenerate (the product is identical to the starting material), so K = 1 by necessity. In asymmetric dienes one often needs to consider the stereochemistry, which in the case of pericyclic reactions such as the Cope rearrangement can be predicted with the Woodward–Hoffmann rules and consideration of the preference for the chair transition state geometry.

Examples

The rearrangement is widely used in organic synthesis. It is symmetry-allowed when it is suprafacial on all components. The transition state of the molecule passes through a boat- or chair-like geometry. An example of the Cope rearrangement is the expansion of a cyclobutane ring to a cycloocta-1,5-diene ring: In this case, the reaction must pass through the boat transition state to produce the two cis double bonds. A trans double bond in the ring would be too strained. The reaction occurs under thermal conditions. The driving force of the reaction is the loss of strain from the cyclobutane ring. An organocatalytic Cope rearrangement was first reported in 2016. In this process, an aldehyde-substituted 1,5-diene is used, allowing "iminium catalysis" with a hydrazide catalyst and moderate levels of enantioselectivity (up to 47% ee). A number of enzymes catalyze the Cope rearrangement, although its occurrence is rare in nature.
Oxy-Cope rearrangement and related variants

In the oxy-Cope rearrangement, a hydroxyl group is added at C3, forming an enal or enone after keto–enol tautomerism of the intermediate enol. In its original implementation, the oxy-Cope reaction required high temperatures. Subsequent work showed that the corresponding potassium alkoxides rearrange faster by factors of 10¹⁰ to 10¹⁷. By virtue of this innovation, reactions proceed well at room temperature or even 0 °C. Typically potassium hydride and 18-crown-6 are employed to generate the dissociated potassium alkoxide: The diastereomer of the starting material shown above with an equatorial vinyl group does not react, providing evidence of the concerted nature of this reaction. Nevertheless, the transition state of the reaction is believed to have a high degree of diradical character. Consequently, the anion-accelerated oxy-Cope reaction can proceed with high efficiency even in systems that do not permit efficient orbital overlap, as illustrated by a key step in the synthesis of periplanone B: The corresponding neutral oxy-Cope and siloxy-Cope rearrangements failed, giving only elimination products at 200 °C. Another variation of the Cope rearrangement is the aza-Cope rearrangement. See also Claisen rearrangement, another widely studied [3,3]-sigmatropic rearrangement divinylcyclopropane-cycloheptadiene rearrangement References Rearrangement reactions Name reactions
Cope rearrangement
[ "Chemistry" ]
1,100
[ "Name reactions", "Rearrangement reactions", "Organic reactions" ]
1,522,954
https://en.wikipedia.org/wiki/Network%20performance
Network performance refers to measures of service quality of a network as seen by the customer. There are many different ways to measure the performance of a network, as each network is different in nature and design. Performance can also be modeled and simulated instead of measured; one example of this is using state transition diagrams to model queuing performance, or using a network simulator.

Performance measures

The following measures are often considered important:

Bandwidth – commonly measured in bits/second, the maximum rate at which information can be transferred
Throughput – the actual rate at which information is transferred
Latency – the delay between the sender transmitting information and the receiver decoding it; this is mainly a function of the signal's travel time and the processing time at any nodes the information traverses
Jitter – the variation in packet delay at the receiver of the information
Error rate – the number of corrupted bits expressed as a percentage or fraction of the total sent

Bandwidth

The available channel bandwidth and achievable signal-to-noise ratio determine the maximum possible throughput. It is not generally possible to send more data than dictated by the Shannon–Hartley theorem.

Throughput

Throughput is the number of messages successfully delivered per unit time. Throughput is controlled by available bandwidth, as well as the available signal-to-noise ratio and hardware limitations. Throughput for the purpose of this article will be understood to be measured from the arrival of the first bit of data at the receiver, to decouple the concept of throughput from the concept of latency. For discussions of this type, the terms 'throughput' and 'bandwidth' are often used interchangeably. The time window is the period over which the throughput is measured. The choice of an appropriate time window will often dominate calculations of throughput, and whether latency is taken into account or not will determine whether the latency affects the throughput or not.

Latency

The speed of light imposes a minimum propagation time on all electromagnetic signals. It is not possible to reduce the latency below

t = s / c_m,

where s is the distance and c_m is the speed of light in the medium (roughly 200,000 km/s for most fiber or electrical media, depending on their velocity factor). This means approximately an additional millisecond of round-trip delay (RTT) per 100 km (or 62 miles) of distance between hosts. Other delays also occur in intermediate nodes. In packet-switched networks delays can occur due to queueing.

Jitter

Jitter is the undesired deviation from true periodicity of an assumed periodic signal in electronics and telecommunications, often in relation to a reference clock source. Jitter may be observed in characteristics such as the frequency of successive pulses, the signal amplitude, or the phase of periodic signals. Jitter is a significant, and usually undesired, factor in the design of almost all communications links (e.g., USB, PCI-e, SATA, OC-48). In clock recovery applications it is called timing jitter.

Error rate

In digital transmission, the number of bit errors is the number of received bits of a data stream over a communication channel that have been altered due to noise, interference, distortion or bit synchronization errors. The bit error rate or bit error ratio (BER) is the number of bit errors divided by the total number of transferred bits during a studied time interval. BER is a unitless performance measure, often expressed as a percentage. The bit error probability pe is the expectation value of the BER.
The BER can be considered as an approximate estimate of the bit error probability. This estimate is accurate for a long time interval and a high number of bit errors.

Interplay of factors

All of the factors above, coupled with user requirements and user perceptions, play a role in determining the perceived 'fastness' or utility of a network connection. The relationship between throughput, latency, and user experience is most aptly understood in the context of a shared network medium, and as a scheduling problem.

Algorithms and protocols

For some systems, latency and throughput are coupled entities. In TCP/IP, latency can also directly affect throughput. In TCP connections, the large bandwidth-delay product of high-latency connections, combined with relatively small TCP window sizes on many devices, effectively causes the throughput of a high-latency connection to drop sharply with latency (a worked sketch of this effect follows at the end of this article). This can be remedied with various techniques, such as increasing the TCP congestion window size, or more drastic solutions, such as packet coalescing, TCP acceleration, and forward error correction, all of which are commonly used for high-latency satellite links. TCP acceleration converts the TCP packets into a stream that is similar to UDP. Because of this, the TCP acceleration software must provide its own mechanisms to ensure the reliability of the link, taking the latency and bandwidth of the link into account, and both ends of the high-latency link must support the method used. In the Media Access Control (MAC) layer, performance issues such as throughput and end-to-end delay are also addressed.

Examples of latency or throughput dominated systems

Many systems can be characterized as dominated either by throughput limitations or by latency limitations in terms of end-user utility or experience. In some cases, hard limits such as the speed of light present unique problems to such systems and nothing can be done to correct this. Other systems allow for significant balancing and optimization for best user experience.

Satellite

A telecom satellite in geosynchronous orbit imposes a path length of at least 71,000 km between transmitter and receiver, which means a minimum delay between message request and message receipt, or latency, of 473 ms. This delay can be very noticeable and affects satellite phone service regardless of available throughput capacity.

Deep space communication

These long path length considerations are exacerbated when communicating with space probes and other long-range targets beyond Earth's atmosphere. The Deep Space Network implemented by NASA is one such system that must cope with these problems. Its design is largely latency driven, and the GAO has criticized the current architecture. Several different methods have been proposed to handle the intermittent connectivity and long delays between packets, such as delay-tolerant networking.

Even deeper space communication

At interstellar distances, the difficulties in designing radio systems that can achieve any throughput at all are massive. In these cases, maintaining communication is a bigger issue than how long that communication takes.

Offline data transport

Transportation is concerned almost entirely with throughput, which is why physical deliveries of backup tape archives are still largely done by vehicle.
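As a back-of-the-envelope illustration of the window/latency coupling described under "Algorithms and protocols" above (all numbers are purely illustrative):

def max_tcp_throughput(window_bytes, rtt_seconds):
    # Upper bound on single-stream TCP throughput imposed by the receive window.
    return window_bytes * 8 / rtt_seconds          # bits per second

# A 64 KiB window over a ~500 ms geostationary-satellite round trip:
print(max_tcp_throughput(64 * 1024, 0.5) / 1e6)    # ~1.05 Mbit/s, whatever the link bandwidth
# The same window over a 10 ms terrestrial path:
print(max_tcp_throughput(64 * 1024, 0.01) / 1e6)   # ~52 Mbit/s

This is why window scaling or the other remedies listed above are needed before a high-latency link can be filled.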
See also Bitrate Measuring network throughput Network traffic measurement Response time Notes References Fall, Kevin, "A Delay-Tolerant Network Architecture for Challenged Internets", Intel Corporation, February, 2003, Doc No: IRB-TR-03-003 Government Accountability Office (GAO) report 06-445, NASA'S DEEP SPACE NETWORK: Current Management Structure is Not Conducive to Effectively Matching Resources with Future Requirements, April 27, 2006 External links NASA's Deep Space Network Website "It's the Latency, Stupid" Computing comparisons Information theory
Network performance
[ "Mathematics", "Technology", "Engineering" ]
1,443
[ "Telecommunications engineering", "Applied mathematics", "Computing comparisons", "Computer science", "Information theory" ]
1,524,030
https://en.wikipedia.org/wiki/Cyclohexane%20conformation
Cyclohexane conformations are any of several three-dimensional shapes adopted by cyclohexane. Because many compounds feature structurally similar six-membered rings, the structure and dynamics of cyclohexane are important prototypes of a wide range of compounds. The internal angles of a regular, flat hexagon are 120°, while the preferred angle between successive bonds in a carbon chain is about 109.5°, the tetrahedral angle (the arc cosine of −1/3). Therefore, the cyclohexane ring tends to assume non-planar (warped) conformations, which have all angles closer to 109.5° and therefore a lower strain energy than the flat hexagonal shape. Consider the carbon atoms numbered from 1 to 6 around the ring. If we hold carbon atoms 1, 2, and 3 stationary, with the correct bond lengths and the tetrahedral angle between the two bonds, and then continue by adding carbon atoms 4, 5, and 6 with the correct bond length and the tetrahedral angle, we can vary the three dihedral angles for the sequences (2,3,4), (3,4,5), and (4,5,6). The next bond, from atom 6, is also oriented by a dihedral angle, so we have four degrees of freedom. But that last bond has to end at the position of atom 1, which imposes three conditions in three-dimensional space. If the bond angle in the chain (6,1,2) should also be the tetrahedral angle then we have four conditions. In principle this means that there are no degrees of freedom of conformation, assuming all the bond lengths are equal and all the angles between bonds are equal. It turns out that, with atoms 1, 2, and 3 fixed, there are two solutions called chair, depending on whether the dihedral angle for (1,2,3,4) is positive or negative, and these two solutions are the same under a rotation. But there is also a continuum of solutions, a topological circle where angle strain is zero, including the twist boat and the boat conformations. All the conformations on this continuum have a twofold axis of symmetry running through the ring, whereas the chair conformations do not (they have D3d symmetry, with a threefold axis running through the ring). It is because of the symmetry of the conformations on this continuum that it is possible to satisfy all four constraints with a range of dihedral angles at (1,2,3,4). On this continuum the energy varies because of Pitzer strain related to the dihedral angles. The twist boat has a lower energy than the boat. In order to go from the chair conformation to a twist-boat conformation or the other chair conformation, bond angles have to be changed, leading to a high-energy half-chair conformation. So the relative stabilities are: chair > twist-boat > boat > half-chair. All relative conformational energies are shown below. At room temperature the molecule can easily move among these conformations, but only chair and twist-boat can be isolated in pure form, because the others are not at local energy minima. The boat and twist-boat conformations, as said, lie along a continuum of zero angle strain. If there are substituents that allow the different carbon atoms to be distinguished, then this continuum is like a circle with six boat conformations and six twist-boat conformations between them, three "right-handed" and three "left-handed". (Which should be called right-handed is unimportant.)
But if the carbon atoms are indistinguishable, as in cyclohexane itself, then moving along the continuum takes the molecule from the boat form to a "right-handed" twist-boat, and then back to the same boat form (with a permutation of the carbon atoms), then to a "left-handed" twist-boat, and then back again to the achiral boat. The passage boat → twist-boat → boat → twist-boat → boat constitutes a pseudorotation.

Coplanar carbons

Another way to compare the stability of two molecules of cyclohexane in the same conformation is to evaluate the number of coplanar carbons in each molecule. Coplanar carbons are carbons that all lie in the same plane. Increasing the number of coplanar carbons increases the number of eclipsing substituents forced towards a 120° arrangement, which is unattainable due to the overlapping hydrogens. This overlap increases the overall torsional strain and decreases the stability of the conformation. Cyclohexane diminishes the torsional strain from eclipsing substituents by adopting a conformation with a lower number of coplanar carbons. For example, if one half-chair conformation contains four coplanar carbons and another half-chair conformation contains five coplanar carbons, the conformation with four coplanar carbons will be more stable.

Principal conformers

The different conformations are called "conformers", a blend of the words "conformation" and "isomer".

Chair conformation

The chair conformation is the most stable conformer. At room temperature, 99.99% of all molecules in a cyclohexane solution adopt this conformation. The C–C ring of the chair conformation has the same shape as the 6-membered rings in the diamond cubic lattice. This can be modeled as follows. Consider a carbon atom to be a point with four half-bonds sticking out towards the vertices of a tetrahedron. Place it on a flat surface with one half-bond pointing straight up. Looking from directly above, the other three half-bonds will appear to point outwards towards the vertices of an equilateral triangle, so the bonds will appear to have an angle of 120° between them. Arrange six such atoms above the surface so that these 120° angles form a regular hexagon. Reflecting three of the atoms to be below the surface yields the desired geometry. All carbon centers are equivalent. They alternate between two parallel planes, one containing C1, C3 and C5, and the other containing C2, C4, and C6. The chair conformation is left unchanged after a rotation of 120° about the symmetry axis perpendicular to these planes, as well as after a rotation of 60° followed by a reflection in the midpoint plane, resulting in a symmetry group of D3d. While all C–C bonds are tilted relative to the plane, diametrically opposite bonds (such as C1–C2 and C4–C5) are parallel to each other. Six of the twelve C–H bonds are axial, pointing upwards or downwards almost parallel to the symmetry axis. The other six C–H bonds are equatorial, oriented radially outwards with an upwards or downwards tilt. Each carbon center has one axial C–H bond (pointed alternately upwards or downwards) and one equatorial C–H bond (tilted alternately downwards or upwards), enabling each X–C–C–Y unit to adopt a staggered conformation with minimal torsional strain. In this model, the dihedral angles for a series of four carbon atoms going around the ring alternate between exactly +60° (gauche+) and −60° (gauche−). The chair conformation cannot be deformed without changing bond angles or lengths.
It can be represented as two linked chains, C1–C2–C3–C4 and C1–C6–C5–C4, each mirroring the other, with opposite dihedral angles. The C1–C4 distance depends on the absolute value of this dihedral angle, so in a rigid model, changing one angle requires changing the other angle. If both dihedral angles change while remaining opposites of each other, it is not possible to maintain the correct C–C–C bond angles at C1 and C4. The chair geometry is often preserved when the hydrogen atoms are replaced by halogens or other simple groups. However, when these hydrogens are substituted by a larger group, additional strain may occur due to diaxial interactions between pairs of substituents occupying same-orientation axial positions, which are typically repulsive due to steric crowding.

Boat and twist-boat conformations

The boat conformations have higher energy than the chair conformations. The interaction between the two flagpole hydrogens, in particular, generates steric strain. Torsional strain also exists between the C2–C3 and C5–C6 bonds (carbon number 1 is one of the two on a mirror plane), which are eclipsed; that is, these two bonds are parallel to each other across a mirror plane. Because of this strain, the boat configuration is unstable (i.e. it is not a local energy minimum). The molecular symmetry is C2v. The boat conformation spontaneously distorts to a twist-boat conformation. Here the symmetry is D2, a purely rotational point group with three twofold axes. This conformation can be derived from the boat conformation by applying a slight twist to the molecule so as to remove the eclipsing of two pairs of methylene groups. The twist-boat conformation is chiral, existing in right-handed and left-handed versions. The concentration of the twist-boat conformation at room temperature is less than 0.1%, but at elevated temperatures it can reach 30%. Rapid cooling of a hot sample of cyclohexane to a very low temperature will freeze in a large concentration of the twist-boat conformation, which will then slowly convert to the chair conformation upon heating.

Dynamics

Chair to chair

The interconversion of chair conformers is called ring flipping or chair-flipping. Carbon–hydrogen bonds that are axial in one configuration become equatorial in the other, and vice versa. At room temperature the two chair conformations rapidly equilibrate. The proton NMR spectrum of cyclohexane is a singlet at room temperature, with no separation into separate signals for axial and equatorial hydrogens. In one chair form, the dihedral angle of the chain of carbon atoms (1,2,3,4) is positive whereas that of the chain (1,6,5,4) is negative, but in the other chair form, the situation is the opposite. So both these chains have to undergo a reversal of dihedral angle. When one of these two four-atom chains flattens to a dihedral angle of zero, we have the half-chair conformation, at a maximum energy along the conversion path. When the dihedral angle of this chain then becomes equal (in sign as well as magnitude) to that of the other four-atom chain, the molecule has reached the continuum of conformations, including the twist boat and the boat, where the bond angles and lengths can all be at their normal values and the energy is therefore relatively low. After that, the other four-carbon chain has to switch the sign of its dihedral angle in order to attain the target chair form, so again the molecule has to pass through the half-chair as the dihedral angle of this chain goes through zero.
Switching the signs of the two chains sequentially in this way minimizes the maximum energy state along the way (at the half-chair state); having the dihedral angles of both four-atom chains switch sign simultaneously would mean going through a conformation of even higher energy due to angle strain at carbons 1 and 4. The detailed mechanism of the chair-to-chair interconversion has been the subject of much study and debate. The half-chair state (D, in the figure below) is the key transition state in the interconversion between the chair and twist-boat conformations. The half-chair has C2 symmetry. The interconversion between the two chair conformations involves the following sequence: chair → half-chair → twist-boat → half-chair′ → chair′.

Twist-boat to twist-boat

The boat conformation (C, below) is a transition state, allowing the interconversion between two different twist-boat conformations. While the boat conformation is not necessary for interconversion between the two chair conformations of cyclohexane, it is often included in the reaction coordinate diagram used to describe this interconversion because its energy is considerably lower than that of the half-chair, so any molecule with enough energy to go from twist-boat to chair also has enough energy to go from twist-boat to boat. Thus, there are multiple pathways by which a molecule of cyclohexane in the twist-boat conformation can achieve the chair conformation again.

Substituted derivatives

In cyclohexane, the two chair conformations have the same energy. The situation becomes more complex with substituted derivatives.

Monosubstituted cyclohexanes

A monosubstituted cyclohexane is one in which there is one non-hydrogen substituent on the cyclohexane ring. The most energetically favorable conformation for a monosubstituted cyclohexane is the chair conformation with the non-hydrogen substituent in the equatorial position, because it avoids the high steric strain of 1,3-diaxial interactions. In methylcyclohexane the two chair conformers are not isoenergetic. The methyl group prefers the equatorial orientation. The preference of a substituent for the equatorial conformation is measured in terms of its A value, which is the Gibbs free energy difference between the two chair conformations. A positive A value indicates a preference for the equatorial position. The magnitude of the A values ranges from nearly zero for very small substituents such as deuterium, to about 5 kcal/mol (21 kJ/mol) for very bulky substituents such as the tert-butyl group. The magnitude of the A value thus corresponds to the strength of the preference for the equatorial position (a worked population calculation follows below). Though an equatorial substituent has no 1,3-diaxial interaction that causes steric strain, it has a gauche interaction in which an equatorial substituent repels electron density from a neighboring equatorial substituent.

Disubstituted cyclohexanes

For 1,2- and 1,4-disubstituted cyclohexanes, a cis configuration leads to one axial and one equatorial group. Such species undergo rapid, degenerate chair flipping. For 1,2- and 1,4-disubstituted cyclohexanes in a trans configuration, the diaxial conformation is effectively prevented by its high steric strain. For 1,3-disubstituted cyclohexanes, the cis form is diequatorial and the flipped conformation suffers additional steric interaction between the two axial groups. trans-1,3-Disubstituted cyclohexanes are like cis-1,2- and cis-1,4- and can flip between the two equivalent axial/equatorial forms.
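The population calculation referred to above follows from ΔG = −RT ln K applied to the A value. A short sketch (using the methyl A value of 1.70 kcal/mol quoted just below):

import math

R, T = 1.987e-3, 298.15          # gas constant [kcal/(mol K)] and room temperature [K]

def equatorial_fraction(a_value_kcal_per_mol):
    # A value = G(axial) - G(equatorial), so K = [equatorial]/[axial] = exp(A / RT).
    K = math.exp(a_value_kcal_per_mol / (R * T))
    return K / (1.0 + K)

print(equatorial_fraction(1.70))   # ~0.95: a methyl group is ~95% equatorial at 25 °C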
Cis-1,4-Di-tert-butylcyclohexane has an axial tert-butyl group in the chair conformation, and conversion to the twist-boat conformation places both groups in more favorable equatorial positions. As a result, the twist-boat conformation is more stable, as measured by NMR spectroscopy. Also, for a disubstituted cyclohexane, as well as for more highly substituted molecules, the aforementioned A values are additive for each substituent. For example, if calculating the A value of a dimethylcyclohexane, any methyl group in the axial position contributes 1.70 kcal/mol; this number is specific to methyl groups and is different for each possible substituent. Therefore, the overall A value for the molecule is 1.70 kcal/mol per methyl group in the axial position.

1,3-Diaxial interactions and gauche interactions

1,3-Diaxial interactions occur when a non-hydrogen substituent on a cyclohexane occupies the axial position. This axial substituent is in an eclipsed position with respect to the axial substituents on the carbons in the 3-position relative to itself (there will be two such carbons and thus two 1,3-diaxial interactions). This eclipsed position increases the steric strain on the cyclohexane conformation, and the conformation will shift towards a more energetically favorable equilibrium. Gauche interactions occur when a non-hydrogen substituent on a cyclohexane occupies the equatorial position. The equatorial substituent is in a staggered position with respect to the carbons in the 2-position relative to itself (there will be two such carbons and thus two 1,2-gauche interactions). This creates a dihedral angle of ~60°. This staggered position is generally preferred to the eclipsed positioning.

Effects of substituent size on stability

Once again, the conformation and position of groups (i.e. substituents) larger than a single hydrogen are critical to the overall stability of the molecule. The larger the group, the less likely it is to prefer the axial position on its respective carbon. Maintaining that position costs the molecule as a whole more energy because of steric repulsion between the large group's nonbonded electron pairs and the electrons of the smaller groups (i.e. hydrogens). Such steric repulsions are absent for equatorial groups. The cyclohexane model thus assesses the steric size of functional groups on the basis of gauche interactions. The gauche interaction increases in energy as the size of the substituent involved increases. For example, a t-butyl substituent sustains a higher-energy gauche interaction than a methyl group, and therefore contributes more to the instability of the molecule as a whole. In comparison, a staggered conformation is thus preferred: the larger groups maintain the equatorial position and lower the energy of the entire molecule. This preference for the equatorial position among bulkier groups lowers the energy barriers between different conformations of the ring. When the molecule is activated, there will be a loss in entropy due to the stability of the larger substituents. Therefore, the preference of large substituents (such as a methyl group) for equatorial positions inhibits the reactivity of the molecule and thus makes the molecule more stable as a whole.

Effects on conformational equilibrium

Conformational equilibrium is the tendency to favor the conformation in which cyclohexane is most stable. This equilibrium depends on the interactions between the molecules of the compound and the solvent.
Polarity and nonpolarity are the main factors in determining how well a solvent interacts with a compound. Cyclohexane is considered nonpolar: there is no significant electronegativity difference between its bonded atoms, and its overall structure is symmetrical. Because of this, when cyclohexane is immersed in a polar solvent it has less solvent distribution, which signifies a poor interaction between the solvent and solute. This produces a limited catalytic effect. Moreover, when cyclohexane comes into contact with a nonpolar solvent, the solvent distribution is much greater, showing a strong interaction between the solvent and solute. This strong interaction yields a heightened catalytic effect.

Heterocyclic analogs

Heterocyclic analogs of cyclohexane are pervasive in sugars, piperidines, dioxanes, etc. They generally follow the trends seen for cyclohexane, i.e. the chair conformer being most stable. The axial–equatorial equilibria (A values) are, however, strongly affected by the replacement of a methylene by O or NH. Illustrative are the conformations of the glucosides. 1,2,4,5-Tetrathiane lacks the unfavorable 1,3-diaxial interactions of cyclohexane. Consequently its twist-boat conformation is populated; in the corresponding tetramethyl structure, 3,3,6,6-tetramethyl-1,2,4,5-tetrathiane, the twist-boat conformation dominates.

Historical background

In 1890, Hermann Sachse, a 28-year-old assistant in Berlin, published instructions for folding a piece of paper to represent two forms of cyclohexane he called symmetrical and asymmetrical (what we would now call chair and boat). He clearly understood that these forms had two positions for the hydrogen atoms (again, to use modern terminology, axial and equatorial), that two chairs would probably interconvert, and even how certain substituents might favor one of the chair forms. Because he expressed all this in mathematical language, few chemists of the time understood his arguments. He made several attempts at publishing these ideas, but none succeeded in capturing the imagination of chemists. His death in 1893 at the age of 31 meant his ideas sank into obscurity. It was only in 1918 that Ernst Mohr, based on the molecular structure of diamond that had recently been solved using the then very new technique of X-ray crystallography, was able to successfully argue that Sachse's chair was the pivotal motif. Derek Barton and Odd Hassel shared the 1969 Nobel Prize in Chemistry for work on the conformations of cyclohexane and various other molecules.

Practical applications

Cyclohexane is the most stable of the cycloalkanes, owing to the ease with which it adopts its chair conformation. This conformational stability allows cyclohexane to be used as a standard in laboratory analyses. More specifically, cyclohexane is used as a standard for pharmaceutical reference in solvent analysis of pharmaceutical compounds and raw materials. This means that cyclohexane is used in quality analysis of food and beverages, pharmaceutical release testing, and pharmaceutical method development; these various methods test for purity, biosafety, and bioavailability of products. The stability of the chair conformer of cyclohexane gives the cycloalkane versatile and important applications regarding the safety and properties of pharmaceuticals.

References

Further reading

Colin A. Russell, 1975, "The Origins of Conformational Analysis," in Van 't Hoff–Le Bel Centennial, O. B. Ramsay, Ed. (ACS Symposium Series 12), Washington, D.C.: American Chemical Society, pp. 159–178.
William Reusch, 2010, "Ring Conformations" and "Substituted Cyclohexane Compounds," in Virtual Textbook of Organic Chemistry, East Lansing, MI, USA: Michigan State University, accessed 20 June 2015. External links Java applets of all conformations from the University of Nijmegen Stereochemistry Cyclohexanes
Cyclohexane conformation
[ "Physics", "Chemistry" ]
4,788
[ "Spacetime", "Stereochemistry", "Space", "nan" ]
1,524,630
https://en.wikipedia.org/wiki/Alternatives%20to%20the%20Standard%20Higgs%20Model
Alternative models to the Standard Higgs Model are models which are considered by many particle physicists to address outstanding problems of the Standard Model's Higgs sector; two of the most actively researched such problems are quantum triviality and the Higgs hierarchy problem.

Overview

In particle physics, elementary particles and forces give rise to the world around us. Physicists explain the behaviors of these particles and how they interact using the Standard Model—a widely accepted framework believed to explain most of the world we see around us. Initially, when these models were being developed and tested, it seemed that the mathematics behind those models, which were satisfactory in areas already tested, would also forbid elementary particles from having any mass, which showed clearly that these initial models were incomplete. In 1964 three groups of physicists almost simultaneously released papers describing how masses could be given to these particles, using approaches known as symmetry breaking. This approach allowed the particles to obtain a mass, without breaking other parts of particle physics theory that were already believed reasonably correct. This idea became known as the Higgs mechanism, and later experiments confirmed that such a mechanism does exist—but they could not show exactly how it happens. The simplest theory for how this effect takes place in nature, and the theory that became incorporated into the Standard Model, was that if one or more of a particular kind of "field" (known as a Higgs field) happened to permeate space, and if it could interact with elementary particles in a particular way, then this would give rise to a Higgs mechanism in nature. In the basic Standard Model there is one field and one related Higgs boson; in some extensions to the Standard Model there are multiple fields and multiple Higgs bosons. In the years since the Higgs field and boson were proposed as a way to explain the origins of symmetry breaking, several alternatives have been proposed that suggest how a symmetry breaking mechanism could occur without requiring a Higgs field to exist. Models which do not include a Higgs field or a Higgs boson are known as Higgsless models. In these models, strongly interacting dynamics rather than an additional (Higgs) field produce the non-zero vacuum expectation value that breaks electroweak symmetry.

List of alternative models

A partial list of proposed alternatives to a Higgs field as a source for symmetry breaking includes:

Technicolor models, which break electroweak symmetry through new gauge interactions, originally modeled on quantum chromodynamics.
Extra-dimensional Higgsless models, which use the fifth component of the gauge fields to play the role of the Higgs fields. It is possible to produce electroweak symmetry breaking by imposing certain boundary conditions on the extra-dimensional fields, increasing the unitarity breakdown scale up to the energy scale of the extra dimension. Through the AdS/QCD correspondence this model can be related to technicolor models and to "UnHiggs" models in which the Higgs field is of unparticle nature.
Models of composite W and Z vector bosons.
Top quark condensate.
"Unitary Weyl gauge". By adding a suitable gravitational term to the standard model action in curved spacetime, the theory develops a local conformal (Weyl) invariance. The conformal gauge is fixed by choosing a reference mass scale based on the gravitational coupling constant.
This approach generates the masses for the vector bosons and matter fields similarly to the Higgs mechanism, without traditional spontaneous symmetry breaking. Asymptotically safe weak interactions based on some nonlinear sigma models. Preon models and models inspired by preons, such as the ribbon model of Standard Model particles by Sundance Bilson-Thompson, which is based on braid theory and compatible with loop quantum gravity and similar theories. This model not only explains mass but leads to an interpretation of electric charge as a topological quantity (twists carried on the individual ribbons) and colour charge as modes of twisting. Symmetry breaking driven by non-equilibrium dynamics of quantum fields above the electroweak scale. Unparticle physics and the UnHiggs. These are models that posit that the Higgs sector and Higgs boson are scale invariant, also known as unparticle physics. In the theory of superfluid vacuum, masses of elementary particles can arise as a result of interaction with the physical vacuum, similarly to the gap generation mechanism in superconductors. UV-completion by classicalization, in which the unitarization of the WW scattering happens by creation of classical configurations. See also Composite Higgs models References External links Higgsless model on arxiv.org Physics beyond the Standard Model Electroweak theory Mass
Alternatives to the Standard Higgs Model
[ "Physics", "Mathematics" ]
933
[ "Scalar physical quantities", "Physical phenomena", "Physical quantities", "Quantity", "Mass", "Unsolved problems in physics", "Electroweak theory", "Size", "Fundamental interactions", "Particle physics", "Wikipedia categories named after physical quantities", "Physics beyond the Standard Mode...
1,524,792
https://en.wikipedia.org/wiki/ATP%20hydrolysis
ATP hydrolysis is the catabolic reaction process by which chemical energy that has been stored in the high-energy phosphoanhydride bonds in adenosine triphosphate (ATP) is released after splitting these bonds, for example in muscles, by producing work in the form of mechanical energy. The product is adenosine diphosphate (ADP) and an inorganic phosphate (Pi). ADP can be further hydrolyzed to give energy, adenosine monophosphate (AMP), and another inorganic phosphate (Pi). ATP hydrolysis is the final link between the energy derived from food or sunlight and useful work such as muscle contraction, the establishment of electrochemical gradients across membranes, and biosynthetic processes necessary to maintain life. Anhydride bonds are often labelled as "high-energy bonds". P-O bonds are in fact fairly strong (~30 kJ/mol stronger than C-N bonds) and are themselves not particularly easy to break. As noted below, energy is released by the hydrolysis of ATP; however, breaking the P-O bonds themselves requires an input of energy. It is the formation of new, lower-energy bonds in the products, including the stabilized inorganic phosphate, with a release of a larger amount of energy, that lowers the total energy of the system and makes it more stable. Hydrolysis of the phosphate groups in ATP is especially exergonic, because the resulting inorganic phosphate molecular ion is greatly stabilized by multiple resonance structures, making the products (ADP and Pi) lower in energy than the reactant (ATP). The high negative charge density associated with the three adjacent phosphate units of ATP also destabilizes the molecule, making it higher in energy. Hydrolysis relieves some of these electrostatic repulsions, liberating useful energy in the process by causing conformational changes in enzyme structure. In humans, approximately 60 percent of the energy released from the hydrolysis of ATP produces metabolic heat rather than fuelling the actual reactions taking place. Due to the acid-base properties of ATP, ADP, and inorganic phosphate, the hydrolysis of ATP has the effect of lowering the pH of the reaction medium. Under certain conditions, high levels of ATP hydrolysis can contribute to lactic acidosis. Amount of energy produced Hydrolysis of the terminal phosphoanhydride bond is a highly exergonic process. The amount of released energy depends on the conditions in a particular cell. Specifically, the energy released is dependent on the concentrations of ATP, ADP and Pi. As the concentrations of these molecules deviate from their values at equilibrium, the value of the Gibbs free energy change (ΔG) will be increasingly different. In standard conditions (ATP, ADP and Pi concentrations equal to 1 M, water concentration equal to 55 M) the value of ΔG is between -28 and -34 kJ/mol. The range of the ΔG value exists because this reaction is dependent on the concentration of Mg2+ cations, which stabilize the ATP molecule. The cellular environment also contributes to differences in the ΔG value, since ATP hydrolysis is dependent not only on the studied cell, but also on the surrounding tissue and even the compartment within the cell. Variability in the ΔG values is therefore to be expected. The relationship between the standard Gibbs free energy change ΔrG° and chemical equilibrium is revealing. This relationship is defined by the equation ΔrG° = -RT ln(K), where K is the equilibrium constant, which is equal to the reaction quotient Q at equilibrium. 
The standard value of ΔG for this reaction is, as mentioned, between -28 and -34 kJ/mol; however, experimentally determined concentrations of the involved molecules reveal that the reaction is not at equilibrium. Given this fact, a comparison between the equilibrium constant, K, and the reaction quotient, Q, provides insight. K takes into consideration reactions taking place in standard conditions, but in the cellular environment the concentrations of the involved molecules (namely, ATP, ADP, and Pi) are far from the standard 1 M. In fact, the concentrations are more appropriately measured in mM, which is smaller than M by three orders of magnitude. Using these nonstandard concentrations, the calculated value of Q is much less than one. By relating Q to ΔG using the equation ΔG = ΔrG° + RT ln(Q), where ΔrG° is the standard change in Gibbs free energy for the hydrolysis of ATP, it is found that the magnitude of ΔG is much greater than the standard value. The nonstandard conditions of the cell actually result in a more favorable reaction. In one particular study, to determine ΔG in vivo in humans, the concentration of ATP, ADP, and Pi was measured using nuclear magnetic resonance. In human muscle cells at rest, the concentration of ATP was found to be around 4 mM and the concentration of ADP was around 9 μM. Inputting these values into the above equations yields ΔG = -64 kJ/mol. After ischemia, when the muscle is recovering from exercise, the concentration of ATP is as low as 1 mM and the concentration of ADP is around 7 μM. Therefore, the magnitude of ΔG can be as high as 69 kJ/mol (ΔG = -69 kJ/mol). By comparing the standard value of ΔG and the experimental value of ΔG, one can see that the energy released from the hydrolysis of ATP, as measured in humans, is almost twice as much as the energy produced under standard conditions. See also Dephosphorylation References Further reading Cellular respiration Exercise physiology
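The arithmetic in the study quoted above can be reproduced directly from ΔG = ΔrG° + RT ln(Q) with Q = [ADP][Pi]/[ATP]. A minimal sketch in Python, assuming ΔrG° ≈ -30 kJ/mol, T = 310 K, and a resting inorganic phosphate concentration of about 1 mM (the Pi value and the exact ΔrG° are assumptions chosen from the ranges given above, not figures from the cited study):

```python
import math

R = 8.314e-3  # gas constant in kJ/(mol*K)
T = 310.0     # approximate human body temperature in K

def delta_g(atp, adp, pi, dg_standard=-30.0):
    """Gibbs free energy of ATP hydrolysis in kJ/mol at molar concentrations."""
    q = (adp * pi) / atp  # reaction quotient for ATP + H2O -> ADP + Pi
    return dg_standard + R * T * math.log(q)

# Resting human muscle values quoted above: ATP ~4 mM, ADP ~9 uM; Pi ~1 mM assumed.
print(delta_g(atp=4e-3, adp=9e-6, pi=1e-3))  # approximately -64 kJ/mol
```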
ATP hydrolysis
[ "Chemistry", "Biology" ]
1,155
[ "Biochemistry", "Cellular respiration", "Metabolism" ]
1,524,935
https://en.wikipedia.org/wiki/K-65%20residues
K-65 residues are the very radioactive mill residues resulting from the uniquely concentrated uranium ore discovered before World War II in Katanga province (Shinkolobwe mine) of the Democratic Republic of the Congo (formerly Belgian Congo). This ore, dubbed "K-65", had a record 65% uranium content. It also held very high concentrations of thorium and radium (and their decay products, including radon gas), which are retained in the tailings (residues). According to Zoellner, "Remnants from typical uranium from the southwestern United States give a radioactive signature of about forty picocuries per gram, about ten times the amount of picocuries per liter of air that is considered safe for humans to breathe. The Shinkolobwe remnants, by contrast, emit a stunning 520,000 picocuries per gram." The very high concentrations of these extremely toxic, long-lived radionuclides present in these wastes prompted the National Academy of Sciences' National Research Council to categorize them as indistinguishable in hazard from High-Level Waste in its 1995 report, "Safety of the High-Level Uranium Ore Residues at the Niagara Falls Storage Site, Lewiston, New York". The K-65 ores were refined as a key part of the Manhattan Project during World War II at the Linde Ceramics Plant at Tonawanda, NY, and at the Mallinckrodt Chemical Works in St. Louis, MO; these ores were the primary raw material source of ~80% of the uranium used in the Hiroshima bomb. The Linde Air Products Company and the Electro Metallurgical plant near Niagara Falls built the ring-and-plug components used in Little Boy. The Mallinckrodt "K-65 residues" were later moved to the Feed Materials Production Center, a Cold War era uranium refinery at Fernald, OH (outside of Cincinnati), which commenced operations in 1951; the refining of "K-65" ore was continued at Fernald. The Linde "K-65 residues" were transported to a storage silo built at the federally appropriated Lake Ontario Ordnance Works site outside of Lewiston, NY, a short distance from Niagara Falls, NY. Linde Air used the Lake Ontario Ordnance Works site at the end of the war to dispose of its atomic waste, cuttings from the African uranium, some 200 dump trucks' worth. The eventual location of all this waste is called the "Interim Waste Containment Structure" of the Niagara Falls Storage Site of the U.S. Army Corps of Engineers. References Nuclear materials Uranium
K-65 residues
[ "Physics" ]
533
[ "Materials", "Nuclear materials", "Matter" ]
1,525,710
https://en.wikipedia.org/wiki/Assimilation%20%28biology%29
Assimilation is the process of absorption of vitamins, minerals, and other chemicals from food as part of the nutrition of an organism. In humans, this is always preceded by chemical breakdown (by enzymes and acids) and physical breakdown (oral mastication and stomach churning). The second process of assimilation is the chemical alteration of substances in the bloodstream by the liver or by cellular secretions. Although a few similar compounds can be absorbed during digestive bio-assimilation, the bioavailability of many compounds is dictated by this second process, since both the liver and cellular secretions can be very specific in their metabolic action (see chirality). This second process is where the absorbed food reaches the cells via the liver. Most foods are composed of largely indigestible components, depending on the enzymes and effectiveness of an animal's digestive tract. The most well-known of these indigestible compounds is cellulose, the basic chemical polymer in the makeup of plant cell walls. Most animals, however, do not produce cellulase, the enzyme needed to digest cellulose. However, some animal species have developed symbiotic relationships with cellulase-producing bacteria (see termites and metamonads). This allows termites to use the energy-dense cellulose carbohydrate. Other such enzymes are known to significantly improve the bio-assimilation of nutrients. Because of the use of bacterial derivatives, enzymatic dietary supplements now contain such enzymes as amylase, glucoamylase, protease, invertase, peptidase, lipase, lactase, phytase, and cellulase. Examples of biological assimilation Photosynthesis, a process whereby carbon dioxide and water are transformed into a number of organic molecules in plant cells. Nitrogen fixation from the soil into organic molecules by symbiotic bacteria which live in the roots of certain plants, such as Leguminosae. Magnesium supplements orotate, oxide, sulfate, citrate, and glycerate are all structurally similar. However, oxide and sulfate are not water-soluble and do not enter the bloodstream, while orotate and glycerate have normal exiguous liver conversion. Chlorophyll sources or magnesium citrate are highly bioassimilable. The absorption of nutrients into the body after digestion in the intestine and its transformation in biological tissues and fluids. See also Anabolism Biochemistry Nutrition Respiration Transportation Excretion References Biological processes Metabolism
Assimilation (biology)
[ "Chemistry", "Biology" ]
507
[ "Biochemistry", "Metabolism", "nan", "Cellular processes" ]
1,526,836
https://en.wikipedia.org/wiki/Plasmogamy
Plasmogamy is a stage in the sexual reproduction of fungi, in which the protoplasm of two parent cells (usually from the mycelia) fuses without the fusion of nuclei, effectively bringing two haploid nuclei close together in the same cell. This state is followed by karyogamy, where the two nuclei fuse and then undergo meiosis to produce spores. The dikaryotic state that comes after plasmogamy will often persist for many generations before the fungus undergoes karyogamy. In lower fungi, however, plasmogamy is usually immediately followed by karyogamy. A comparative genomic study indicated the presence of the machinery for plasmogamy, karyogamy and meiosis in the Amoebozoa. References Mycology
Plasmogamy
[ "Biology" ]
166
[ "Mycology" ]
1,527,098
https://en.wikipedia.org/wiki/Clairaut%27s%20theorem%20%28gravity%29
Clairaut's theorem characterizes the surface gravity on a viscous rotating ellipsoid in hydrostatic equilibrium under the action of its gravitational field and centrifugal force. It was published in 1743 by Alexis Claude Clairaut in a treatise which synthesized physical and geodetic evidence that the Earth is an oblate rotational ellipsoid. It was initially used to relate the gravity at any point on the Earth's surface to the position of that point, allowing the ellipticity of the Earth to be calculated from measurements of gravity at different latitudes. Today it has been largely supplanted by the Somigliana equation. History Although it had been known since antiquity that the Earth was spherical, by the 17th century evidence was accumulating that it was not a perfect sphere. In 1672 Jean Richer found the first evidence that gravity was not constant over the Earth (as it would be if the Earth were a sphere); he took a pendulum clock to Cayenne, French Guiana and found that it lost about two and a half minutes per day compared to its rate at Paris. This indicated the acceleration of gravity was less at Cayenne than at Paris. Pendulum gravimeters began to be taken on voyages to remote parts of the world, and it was slowly discovered that gravity increases smoothly with increasing latitude, gravitational acceleration being about 0.5% greater at the poles than at the equator. British physicist Isaac Newton explained this in his Principia Mathematica (1687) in which he outlined his theory and calculations on the shape of the Earth. Newton theorized correctly that the Earth was not precisely a sphere but had an oblate ellipsoidal shape, slightly flattened at the poles due to the centrifugal force of its rotation. Using geometric calculations, he gave a concrete argument as to the hypothetical ellipsoid shape of the Earth. The goal of Principia was not to provide exact answers for natural phenomena, but to theorize potential solutions to these unresolved factors in science. Newton pushed for scientists to look further into the unexplained variables. Two prominent researchers that he inspired were Alexis Clairaut and Pierre Louis Maupertuis. They both sought to prove the validity of Newton's theory on the shape of the Earth. In order to do so, they went on an expedition to Lapland in an attempt to accurately measure a meridian arc. From such measurements they could calculate the eccentricity of the Earth, its degree of departure from a perfect sphere. Clairaut confirmed that Newton's theory that the Earth was ellipsoidal was correct, but that his calculations were in error, and he wrote a letter to the Royal Society of London with his findings. The society published an article in Philosophical Transactions the following year, 1737. In it Clairaut pointed out (Section XVIII) that Newton's Proposition XX of Book 3 does not apply to the real earth. It stated that the weight of an object at some point in the earth depended only on the proportion of its distance from the centre of the earth to the distance from the centre to the surface at or above the object, so that the total weight of a column of water at the centre of the earth would be the same no matter in which direction the column went up to the surface. Newton had in fact said that this was on the assumption that the matter inside the earth was of a uniform density (in Proposition XIX). 
Newton realized that the density was probably not uniform, and proposed this as an explanation for why gravity measurements found a greater difference between polar regions and equatorial regions than his theory predicted. However, he also thought this would mean the equator was further from the centre than his theory predicted, and Clairaut points out that the opposite is true. Clairaut points out at the beginning of his article that Newton did not explain why he thought the earth was an ellipsoid rather than some other oval, but that Clairaut, and James Stirling almost simultaneously, had shown why the earth should be an ellipsoid in 1736. Clairaut's article, however, did not provide a valid equation to back up his argument, and this created much controversy in the scientific community. It was not until Clairaut wrote Théorie de la figure de la terre in 1743 that a proper answer was provided. In it, he promulgated what is more formally known today as Clairaut's theorem. Formula Clairaut's theorem says that the acceleration due to gravity g (including the effect of centrifugal force) on the surface of a spheroid in hydrostatic equilibrium (being a fluid or having been a fluid in the past, or having a surface near sea level) at latitude φ is: g(φ) = g_e [1 + (5m/2 − f) sin²φ], where g_e is the value of the acceleration of gravity at the equator, m the ratio of the centrifugal force to gravity at the equator, and f the flattening of a meridian section of the earth, defined as: f = (a − b)/a (where a = semimajor axis, b = semiminor axis). Of the increase in g from the equator to the poles, the contribution of centrifugal force is approximately g_e m sin²φ, whereas the gravitational attraction itself varies approximately as g_e (3m/2 − f) sin²φ. This formula holds when the surface is perpendicular to the direction of gravity (including centrifugal force), even if (as usually) the density is not constant (in which case the gravitational attraction can be calculated at any point from the shape alone, without reference to the density ρ). For the earth, m ≈ 1/289 while f ≈ 1/298, so 5m/2 − f is positive and g is greater at the poles than on the equator. Clairaut derived the formula under the assumption that the body was composed of concentric coaxial spheroidal layers of constant density. This work was subsequently pursued by Laplace, who assumed surfaces of equal density which were nearly spherical. The English mathematician George Stokes showed in 1849 that the theorem applied to any law of density so long as the external surface is a spheroid of equilibrium. A history of more recent developments and more detailed equations for g can be found in Khan. The above expression for g has been supplanted by the Somigliana equation (after Carlo Somigliana). Geodesy The spheroidal shape of the Earth is the result of the interplay between gravity and centrifugal force caused by the Earth's rotation about its axis. In his Principia, Newton proposed the equilibrium shape of a homogeneous rotating Earth was a rotational ellipsoid with a flattening f given by 1/230. As a result, gravity increases from the equator to the poles. By applying Clairaut's theorem, Laplace found from 15 gravity values that f = 1/330. A modern estimate is 1/298.25642. See Figure of the Earth for more detail. For a detailed account of the construction of the reference Earth model of geodesy, see Chatfield. References Eponymous theorems of physics Geodesy Navigation Surveying Gravimetry
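A minimal numerical sketch of the theorem in Python, using m ≈ 1/289, the modern flattening estimate quoted above (1/298.25642), and a conventional equatorial gravity of 9.780 m/s² (this last value is an assumption for illustration, not taken from the text):

```python
import math

g_e = 9.780        # equatorial gravity in m/s^2 (assumed reference value)
m = 1 / 289        # ratio of centrifugal force to gravity at the equator
f = 1 / 298.25642  # flattening of a meridian section of the earth

def clairaut_g(latitude_deg):
    """Surface gravity at a given geographic latitude per Clairaut's theorem."""
    s2 = math.sin(math.radians(latitude_deg)) ** 2
    return g_e * (1 + (2.5 * m - f) * s2)

print(clairaut_g(0))   # 9.780 at the equator
print(clairaut_g(90))  # about 9.832 at the poles, roughly 0.5% greater
```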
Clairaut's theorem (gravity)
[ "Physics", "Mathematics", "Engineering" ]
1,417
[ "Equations of physics", "Applied mathematics", "Eponymous theorems of physics", "Surveying", "Civil engineering", "Geodesy", "Physics theorems" ]
2,997,610
https://en.wikipedia.org/wiki/Bauer%E2%80%93Fike%20theorem
In mathematics, the Bauer–Fike theorem is a standard result in the perturbation theory of the eigenvalues of a complex-valued diagonalizable matrix. In its substance, it states an absolute upper bound for the deviation of one perturbed matrix eigenvalue from a properly chosen eigenvalue of the exact matrix. Informally speaking, what it says is that the sensitivity of the eigenvalues is estimated by the condition number of the matrix of eigenvectors. The theorem was proved by Friedrich L. Bauer and C. T. Fike in 1960. The setup In what follows we assume that: A is an n×n complex diagonalizable matrix; V is the non-singular eigenvector matrix such that A = VΛV⁻¹, where Λ is a diagonal matrix. We write σ(A) for the spectrum of A. If X is invertible, its condition number in the p-norm is denoted by κ_p(X) and defined by: κ_p(X) = ||X||_p ||X⁻¹||_p. The Bauer–Fike Theorem Bauer–Fike Theorem. Let μ be an eigenvalue of A + δA. Then there exists λ ∈ σ(A) such that: |λ − μ| ≤ κ_p(V) ||δA||_p. Proof. We can suppose μ ∉ σ(A), otherwise take λ = μ and the result is trivially true since κ_p(V) ≥ 1. Since μ is an eigenvalue of A + δA, we have det(A + δA − μI) = 0 and so 0 = det(V⁻¹) det(A + δA − μI) det(V) = det(Λ + V⁻¹(δA)V − μI) = det(Λ − μI) det((Λ − μI)⁻¹V⁻¹(δA)V + I). However our assumption, μ ∉ σ(A), implies that det(Λ − μI) ≠ 0 and therefore we can write: det((Λ − μI)⁻¹V⁻¹(δA)V + I) = 0. This reveals −1 to be an eigenvalue of (Λ − μI)⁻¹V⁻¹(δA)V. Since all p-norms are consistent matrix norms we have |λ| ≤ ||M||_p where λ is an eigenvalue of M. In this instance this gives us: 1 = |−1| ≤ ||(Λ − μI)⁻¹V⁻¹(δA)V||_p ≤ ||(Λ − μI)⁻¹||_p κ_p(V) ||δA||_p. But (Λ − μI)⁻¹ is a diagonal matrix, the p-norm of which is easily computed: ||(Λ − μI)⁻¹||_p = max over λ ∈ σ(A) of |λ − μ|⁻¹ = 1 / (min over λ ∈ σ(A) of |λ − μ|), whence: min over λ ∈ σ(A) of |λ − μ| ≤ κ_p(V) ||δA||_p. An Alternate Formulation The theorem can also be reformulated to better suit numerical methods. In fact, dealing with real eigensystem problems, one often has an exact matrix A, but knows only an approximate eigenvalue-eigenvector couple (λ*, v*), and needs to bound the error. The following version comes in help. Bauer–Fike Theorem (Alternate Formulation). Let (λ*, v*) be an approximate eigenvalue-eigenvector couple, and r = Av* − λ*v*. Then there exists λ ∈ σ(A) such that: |λ − λ*| ≤ κ_p(V) ||r||_p / ||v*||_p. Proof. We can suppose λ* ∉ σ(A), otherwise take λ = λ* and the result is trivially true since κ_p(V) ≥ 1. So (A − λ*I)⁻¹ exists, and we can write: v* = (A − λ*I)⁻¹r = V(Λ − λ*I)⁻¹V⁻¹r, since A is diagonalizable; taking the p-norm of both sides, we obtain: ||v*||_p ≤ ||(Λ − λ*I)⁻¹||_p κ_p(V) ||r||_p. However (Λ − λ*I)⁻¹ is a diagonal matrix and its p-norm is easily computed: ||(Λ − λ*I)⁻¹||_p = 1 / (min over λ ∈ σ(A) of |λ − λ*|), whence: min over λ ∈ σ(A) of |λ − λ*| ≤ κ_p(V) ||r||_p / ||v*||_p. A Relative Bound Both formulations of the Bauer–Fike theorem yield an absolute bound. The following corollary is useful whenever a relative bound is needed: Corollary. Suppose A is invertible and that μ is an eigenvalue of A + δA. Then there exists λ ∈ σ(A) such that: |λ − μ| / |λ| ≤ κ_p(V) ||A⁻¹(δA)||_p. Note. |λ − μ| / |λ| can be formally viewed as the relative variation of λ, just as ||A⁻¹(δA)||_p is the relative variation of A. Proof. Since μ is an eigenvalue of A + δA and det(A) ≠ 0, by multiplying by −A⁻¹ from the left we have: −A⁻¹(A + δA − μI)v = 0, that is, (μA⁻¹ − A⁻¹(δA) − I)v = 0. If we set: Aᵃ = μA⁻¹ and (δA)ᵃ = −A⁻¹(δA), then we have: (Aᵃ + (δA)ᵃ − I)v = 0, which means that 1 is an eigenvalue of Aᵃ + (δA)ᵃ, with v as an eigenvector. Now, the eigenvalues of Aᵃ are μ/λᵢ, while it has the same eigenvector matrix as A. Applying the Bauer–Fike theorem to Aᵃ + (δA)ᵃ with eigenvalue 1 gives us: min over λ ∈ σ(A) of |μ/λ − 1| = min over λ ∈ σ(A) of |λ − μ| / |λ| ≤ κ_p(V) ||A⁻¹(δA)||_p. The Case of Normal Matrices If A is normal, V is a unitary matrix, therefore ||V||₂ = ||V⁻¹||₂ = 1, so that κ₂(V) = 1. The Bauer–Fike theorem then becomes: there exists λ ∈ σ(A) with |λ − μ| ≤ ||δA||₂. Or in the alternate formulation: there exists λ ∈ σ(A) with |λ − λ*| ≤ ||r||₂ / ||v*||₂, which obviously remains true if A is a Hermitian matrix. In this case, however, a much stronger result holds, known as Weyl's theorem on eigenvalues. In the Hermitian case one can also restate the Bauer–Fike theorem in the form that the map σ that maps a matrix to its spectrum is a non-expansive function with respect to the Hausdorff distance on the set of compact subsets of ℂ. References Spectral theory Theorems in analysis Articles containing proofs
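The first formulation can be checked numerically. A minimal sketch in Python with NumPy; the matrix and the size of the perturbation are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])               # diagonalizable, eigenvalues 2 and 3
dA = 1e-3 * rng.standard_normal((2, 2))  # small perturbation

lam, V = np.linalg.eig(A)       # A = V diag(lam) V^{-1}
mu = np.linalg.eigvals(A + dA)  # eigenvalues of the perturbed matrix

kappa = np.linalg.cond(V, 2)           # condition number of V in the 2-norm
bound = kappa * np.linalg.norm(dA, 2)  # Bauer-Fike bound

for m in mu:
    dist = np.min(np.abs(lam - m))  # distance to the nearest exact eigenvalue
    assert dist <= bound
    print(dist, "<=", bound)
```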
Bauer–Fike theorem
[ "Mathematics" ]
756
[ "Theorems in mathematical analysis", "Mathematical analysis", "Mathematical problems", "Articles containing proofs", "Mathematical theorems" ]
2,997,965
https://en.wikipedia.org/wiki/Sewer%20alligator
The sewer alligator is an urban legend centered around alligators that live in sewers outside alligators' native range. Some cities in which sewer alligators have supposedly been found are New York City and Paris. Accounts of fully grown sewer alligators are unproven, but small alligators are sometimes rescued from sewers. Stories date back to the late 1920s and early 1930s; in most instances they are part of contemporary legend. The New York Times reports the city rescues 100 alligators per year, some directly from homes where they are kept as illegal pets (which can be legally ordered online in other states and are legal to mail when small), and some from outside (where they can attract considerable attention) though mostly above-ground. Though escapees and former pets may survive for a short time in New York sewers, longer-term survival is not possible due to the low temperatures and the bacteria in human feces. Sewer maintenance crews insist there is no underground population of alligators in sewers. Legend The legend of alligators inhabiting the sewer system of New York City is a widely circulated urban myth. It suggests that alligators navigate the city's sewers, preying on rats and other refuse, and posing a threat to sewer workers, who are said to carry firearms for protection. According to the lore, these alligators are often described as large and vicious, with some attributing a lack of pigmentation to their purported status as "albinos." The urban myth has permeated popular culture, featuring in various forms of media including books, television shows, and movies. It has also inspired hoaxes and artistic projects, and is commemorated in the city with a quasi-holiday known as Alligator in the Sewer Day, celebrated on February 9. Following reports of sewer alligators in the 1930s, the story built up over the decades and became more of a contemporary legend. It is questionable how accurate the original stories are, and some have even suggested they are fictions created by Teddy May, who was the Commissioner of Sewers at the time. Interviews with him were the basis of the first published accounts of sewer alligators. In their honor, February 9 is Alligators in the Sewers Day in Manhattan. A similar story from 1851 involves feral pigs in the sewers of Hampstead, London. Louisiana or Florida to New York City As late as the middle of the 20th century, souvenir shops in Florida sold live baby alligators (in small fish tanks) as novelty souvenirs. Tourists from New York City would buy a baby alligator and try to raise it as a pet. When the alligator grew too large for comfort, the family would proceed to flush the reptile down the toilet. The most common story is that the alligators survive and reside within the sewer and reproduce, feeding on rats and garbage, growing to huge sizes and striking fear into sewer workers. In Robert Daley's book The World Beneath the City (1959) he comments that one night a sewer worker in New York City was shocked to find a large albino alligator swimming toward him. Weeks of hunting followed. The Journal of American Folklore has this to say on the subject: An additional reference to the sewer alligator exists in Thomas Pynchon's first novel, V. It fictionalizes the account, stating Macy's was selling them for a time for 50 cents. Eventually the children became bored with the pets, setting them loose in the streets as well as flushing them into the sewers. Rather than poison, shotguns were used as the remedy. 
Benny Profane, one of the main characters in the book, continues to hunt them as a full-time job until the population is reduced. A 1973 children's book, The Great Escape: Or, The Sewer Story by Peter Lippman, anthropomorphizes these alligators and has them dress up in disguise as humans and charter an airplane to fly them home to the Florida swamps. Versions including albinos and mutants Some versions go further to suggest that, after the alligator was disposed of at such a young age, it would live the majority of its life in an environment not exposed to sunlight, and thus it would apparently in time lose its eyesight and the pigment in its hide and that the reptile would grow to be blind and completely albino (pure white in color with red or pink eyes). Another reason why an albino alligator would retreat to an underground sewer is its vulnerability to the sun in the wild; as there is no dark pigment in the creature's skin, it has no protection from the sun, which makes it very hard for it to survive in the wild. Some people even spoke of mutant alligators living in the sewers which have been exposed to many different types of toxic chemical waste which altered them, making them deformed and sometimes even larger and with strange colouring. A gigantic mutant alligator based on these myths appears in the 1980 film Alligator. Contemporary accounts One 1927 account describes an experience of a Pittsburgh Bureau of Highways and Sewers employee who was assigned the task of clearing out a section of sewer pipe on Royal Street in the Northside Section of the city. The account reads, "[He] removed the manhole cover and began to clear an obstruction when he realized that a set of 'evil looking eyes' was staring at him." He then removed an alligator and took it home with him. There are other numerous recent media accounts of alligators occupying storm drains and sewer pipes, all from states in the southern US. In Paris, France, a Nile crocodile was captured by firefighters in the sewers below the Pont Neuf bridge on March 7, 1984. The crocodile, named Eleonore (or Eleanore), lived at the Aquarium in Vannes and died in May 2022. A baby alligator was caught in August 2010 by the NYPD in the sewers in Queens. However, it is unlikely that a fully grown adult would survive for long in New York, due to the cold winter temperatures. Alligators have been sighted in the drains and sewers of Florida as recently as 2017, due to many of these waste outlets backing onto the swamps. During storm surges and in the colder winter months, alligators sometimes shelter in convenient drains and hunt for rats to supplement their diet. See also Sewer Gators (film) Killer Croc Leatherhead (Teenage Mutant Ninja Turtles) References Notes Sources Tales From the Urban Crypt: Legendary whoppers about Gotham run the ghastly and ghostly gamut Urbanlegends.com. Retrieved April 26, 2010 External links Gatorhole.com Urbanlegends.About.com SewerGator.com Gator Guide Lake Eufaula IMDB page for 'Alligator' News. Alligator found in sewer in Florida. Oct. 2005 Man Falls in with Alligator – St. Petersburg Times, June 16, 2000 Gator Aid – Houston Press, May 25, 2006 Alligator Stomp – Houston Press, January 27, 2005 See Ya Later, Alligator – Bluffton Today, May 8, 2006 No. 3: Reggie – DailyBreeze.com, December 28, 2005 Alligators and humans Culture of New York City American urban legends Legendary reptiles American legendary creatures Subterranea (geography) Alligator
Sewer alligator
[ "Chemistry", "Engineering", "Environmental_science" ]
1,474
[ "Sewerage", "Environmental engineering", "Water pollution" ]
3,000,842
https://en.wikipedia.org/wiki/Unique%20games%20conjecture
In computational complexity theory, the unique games conjecture (often referred to as UGC) is a conjecture made by Subhash Khot in 2002. The conjecture postulates that the problem of determining the approximate value of a certain type of game, known as a unique game, has NP-hard computational complexity. It has broad applications in the theory of hardness of approximation. If the unique games conjecture is true and P ≠ NP, then for many important problems it is not only impossible to get an exact solution in polynomial time (as postulated by the P versus NP problem), but also impossible to get a good polynomial-time approximation. The problems for which such an inapproximability result would hold include constraint satisfaction problems, which crop up in a wide variety of disciplines. The conjecture is unusual in that the academic world seems about evenly divided on whether it is true or not. Formulations The unique games conjecture can be stated in a number of equivalent ways. Unique label cover The following formulation of the unique games conjecture is often used in hardness of approximation. The conjecture postulates the NP-hardness of the following promise problem known as label cover with unique constraints. For each edge, the colors on the two vertices are restricted to some particular ordered pairs. Unique constraints means that for each edge none of the ordered pairs have the same color for the same node. This means that an instance of label cover with unique constraints over an alphabet of size k can be represented as a directed graph together with a collection of permutations πe: [k] → [k], one for each edge e of the graph. An assignment to a label cover instance gives to each vertex of G a value in the set [k] = {1, 2, ... k}, often called "colours". Such instances are strongly constrained in the sense that the colour of a vertex uniquely defines the colours of its neighbours, and hence for its entire connected component. Thus, if the input instance admits a valid assignment, then such an assignment can be found efficiently by iterating over all colours of a single node. In particular, the problem of deciding if a given instance admits a satisfying assignment can be solved in polynomial time. The value of a unique label cover instance is the fraction of constraints that can be satisfied by any assignment. For satisfiable instances, this value is 1 and is easy to find. On the other hand, it seems to be very difficult to determine the value of an unsatisfiable game, even approximately. The unique games conjecture formalises this difficulty. More formally, the (c, s)-gap label-cover problem with unique constraints is the following promise problem (Lyes, Lno): Lyes = {G: Some assignment satisfies at least a c-fraction of constraints in G} Lno = {G: Every assignment satisfies at most an s-fraction of constraints in G} where G is an instance of the label cover problem with unique constraints. The unique games conjecture states that for every sufficiently small pair of constants ε, δ > 0, there exists a constant k such that the (1 − δ, ε)-gap label-cover problem with unique constraints over alphabet of size k is NP-hard. Maximizing Linear Equations Modulo k Consider the following system of linear equations over the integers modulo k, in which each equation constrains the difference of two variables: x1 − x2 ≡ c1 (mod k), x2 − x5 ≡ c2 (mod k), ..., xi − xj ≡ cm (mod k). When each equation involves exactly two variables, this is an instance of the label cover problem with unique constraints; such instances are known as instances of the Max2Lin(k) problem. 
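A small sketch in Python of a unique label cover instance, its value, and the polynomial-time satisfiability check described above (propagating the colour of a single node through its connected component); the instance itself is an arbitrary example:

```python
# Unique label cover: directed edges (u, v) carry a permutation pi of [k];
# a constraint is satisfied when colour[v] == pi[colour[u]].
k = 3
edges = [
    (0, 1, (1, 2, 0)),
    (1, 2, (2, 0, 1)),
    (2, 0, (0, 1, 2)),
]

def value(colour):
    """Fraction of constraints satisfied by a full assignment."""
    ok = sum(1 for u, v, pi in edges if colour[v] == pi[colour[u]])
    return ok / len(edges)

def satisfiable():
    """Decide satisfiability by trying each colour of node 0 and propagating."""
    for c0 in range(k):
        colour, consistent = {0: c0}, True
        for _ in range(len(edges)):  # propagate until colours stabilise
            for u, v, pi in edges:
                if u in colour:
                    forced = pi[colour[u]]
                    if colour.setdefault(v, forced) != forced:
                        consistent = False
        if consistent and value(colour) == 1.0:
            return True
    return False

print(satisfiable())  # True for this instance
```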
It is not immediately obvious that the inapproximability of Max2Lin(k) is equivalent to the UGC, but this is in fact the case, by a reduction. Namely, the UGC is equivalent to: for every sufficiently small pair of constants ε, δ > 0, there exists a constant k such that the (1 − δ, ε)-gap Max2Lin(k) problem is NP-hard. Connection with computational topology It has been argued that the UGC is essentially a question of computational topology, involving local-global principles (the latter are also evident in the proof of the 2-2 Games Conjecture, see below). Linial observed that unique label cover is an instance of the Maximum Section of a Covering Graph problem (covering graphs is the terminology from topology; in the context of unique games these are often referred to as graph lifts). To date, all known problems whose inapproximability is equivalent to the UGC are instances of this problem, including Unique Label Cover and Max2Lin(k). When the latter two problems are viewed as instances of Max Section of a Covering Graph, the reduction between them preserves the structure of the graph covering spaces, so not only the problems, but also the reduction between them has a natural topological interpretation. Grochow and Tucker-Foltz exhibited a third computational topology problem whose inapproximability is equivalent to the UGC: 1-Cohomology Localization on Triangulations of 2-Manifolds. Two-prover proof systems A unique game is a special case of a two-prover one-round (2P1R) game. A two-prover one-round game has two players (also known as provers) and a referee. The referee sends each player a question drawn from a known probability distribution, and the players each have to send an answer. The answers come from a set of fixed size. The game is specified by a predicate that depends on the questions sent to the players and the answers provided by them. The players may decide on a strategy beforehand, although they cannot communicate with each other during the game. The players win if the predicate is satisfied by their questions and their answers. A two-prover one-round game is called a unique game if for every question and every answer by the first player, there is exactly one answer by the second player that results in a win for the players, and vice versa. The value of a game is the maximum winning probability for the players over all strategies. The unique games conjecture states that for every sufficiently small pair of constants ε, δ > 0, there exists a constant k such that the following promise problem (Lyes, Lno) is NP-hard: Lyes = {G: the value of G is at least 1 − δ} Lno = {G: the value of G is at most ε} where G is a unique game whose answers come from a set of size k. Probabilistically checkable proofs Alternatively, the unique games conjecture postulates the existence of a certain type of probabilistically checkable proof for problems in NP. A unique game can be viewed as a special kind of nonadaptive probabilistically checkable proof with query complexity 2, where for each pair of possible queries of the verifier and each possible answer to the first query, there is exactly one possible answer to the second query that makes the verifier accept, and vice versa. The unique games conjecture states that for every sufficiently small pair of constants ε, δ > 0 there is a constant k such that every problem in NP has a probabilistically checkable proof over an alphabet of size k with completeness 1 − δ, soundness ε, and randomness complexity O(log n) which is a unique game. 
Relevance The unique games conjecture was introduced by Subhash Khot in 2002 in order to make progress on certain questions in the theory of hardness of approximation. The truth of the unique games conjecture would imply the optimality of many known approximation algorithms (assuming P ≠ NP). For example, the approximation ratio achieved by the algorithm of Goemans and Williamson for approximating the maximum cut in a graph is optimal to within any additive constant assuming the unique games conjecture and P ≠ NP. A list of results that the unique games conjecture is known to imply is shown in the adjacent table together with the corresponding best results for the weaker assumption P ≠ NP. A constant of c + ε or c − ε means that the result holds for every constant (with respect to the problem size) strictly greater than or less than c, respectively. Discussion and alternatives Currently, there is no consensus regarding the truth of the unique games conjecture. Certain stronger forms of the conjecture have been disproved. A different form of the conjecture postulates that distinguishing the case when the value of a unique game is at least 1 − δ from the case when the value is at most ε is impossible for polynomial-time algorithms (but perhaps not NP-hard). This form of the conjecture would still be useful for applications in hardness of approximation. The constant δ > 0 in the above formulations of the conjecture is necessary unless P = NP. If the uniqueness requirement is removed the corresponding statement is known to be true by the parallel repetition theorem, even when δ = 0. Results Marek Karpinski and Warren Schudy have constructed linear time approximation schemes for dense instances of the unique games problem. In 2008, Prasad Raghavendra showed that if the unique games conjecture is true, then for every constraint satisfaction problem the best approximation ratio is given by a certain simple semidefinite programming instance, which in particular can be solved in polynomial time. In 2010, Prasad Raghavendra and David Steurer defined the gap-small-set expansion problem, and conjectured that it is NP-hard. The resulting small set expansion hypothesis implies the unique games conjecture. It has also been used to prove strong hardness of approximation results for finding complete bipartite subgraphs. In 2010, Sanjeev Arora, Boaz Barak and David Steurer found a subexponential time approximation algorithm for the unique games problem. A key ingredient in their result was the spectral algorithm of Alexandra Kolla (see also the earlier manuscript of A. Kolla and Madhur Tulsiani). The latter also re-proved that unique games on expander graphs could be solved in polynomial time, and was one of (if not the) first graph algorithms to take advantage of the full spectrum of a graph rather than just its first two eigenvalues. In 2012, it was shown that distinguishing instances with value at most from instances with value at least is NP-hard. In 2018, after a series of papers, a weaker version of the conjecture, called the 2-2 games conjecture, was proven. In a certain sense, this proves "a half" of the original conjecture. This also improves the best known gap for unique label cover: it is NP-hard to distinguish instances with value at most from instances with value at least . References Further reading 2002 in computing Approximation algorithms Computational complexity theory Computational hardness assumptions Unsolved problems in computer science Conjectures
Unique games conjecture
[ "Mathematics" ]
2,167
[ "Unsolved problems in mathematics", "Unsolved problems in computer science", "Approximation algorithms", "Conjectures", "Mathematical relations", "Mathematical problems", "Approximations" ]
3,000,968
https://en.wikipedia.org/wiki/Solvatochromism
In chemistry, solvatochromism is the phenomenon observed when the colour of a solution is different when the solute is dissolved in different solvents. The solvatochromic effect is the way the spectrum of a substance (the solute) varies when the substance is dissolved in a variety of solvents. In this context, the dielectric constant and hydrogen bonding capacity are the most important properties of the solvent. Different solvents affect the electronic ground state and excited state of the solute differently, so that the size of the energy gap between them changes as the solvent changes. This is reflected in the absorption or emission spectrum of the solute as differences in the position, intensity, and shape of the spectroscopic bands. When the spectroscopic band occurs in the visible part of the electromagnetic spectrum, solvatochromism is observed as a change of colour. This is illustrated by Reichardt's dye, as shown in the image. Negative solvatochromism corresponds to a hypsochromic shift (or blue shift) with increasing solvent polarity. An example of negative solvatochromism is provided by 4-(4-hydroxystyryl)-N-methylpyridinium iodide, which is red in 1-propanol, orange in methanol, and yellow in water. Positive solvatochromism corresponds to a bathochromic shift (or red shift) with increasing solvent polarity. An example of positive solvatochromism is provided by 4,4'-bis(dimethylamino)fuchsone, which is orange in toluene and red in acetone. The main value of the concept of solvatochromism is the context it provides to predict colors of solutions. Solvatochromism can in principle be used in sensors and in molecular electronics for the construction of molecular switches. Solvatochromic dyes are used to measure solvent parameters, which can be used to explain solubility phenomena and predict suitable solvents for particular uses. Solvatochromism of the photoluminescence/fluorescence of carbon nanotubes has been identified and used for optical sensor applications. In one such application, the wavelength of the fluorescence of peptide-coated carbon nanotubes was found to change when exposed to explosives, facilitating detection. However, more recently the small chromophore solvatochromism hypothesis has been challenged for carbon nanotubes in light of older and newer data showing electrochromic behavior. These and other observations regarding non-linear processes on the semiconducting nanotube suggest colloidal models will require new interpretations that are in line with classic semiconductor optical processes, including electrochemical processes, rather than small molecule physical descriptions. Conflicting hypotheses may be due to the fact that the nanotube is a material interface only a single atom thick, unlike other "bulk" nanomaterials. References Further reading External links Negative solvatochromism experiment Positive solvatochromism experiment Chromism Absorption spectroscopy
Solvatochromism
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
630
[ "Spectrum (physical sciences)", "Chromism", "Materials science", "Absorption spectroscopy", "Smart materials", "Spectroscopy" ]
3,001,955
https://en.wikipedia.org/wiki/Terephthaloyl%20chloride
Terephthaloyl chloride (TCL, 1,4-benzenedicarbonyl chloride) is the acyl chloride of terephthalic acid. It is a white solid at room temperature. It is one of two precursors used to make Kevlar, the other being p-phenylenediamine. TCL is used as a key component in performance polymers and aramid fibers, where it imparts flame resistance, chemical resistance, temperature stability, light weight, and very high strength. TCL is also an effective water scavenger, used to stabilize isocyanates and urethane prepolymers. Preparation Terephthalic acid dichloride is produced commercially by the reaction of 1,4-bis(trichloromethyl)benzene with terephthalic acid: C6H4(CCl3)2 + C6H4(CO2H)2 → 2 C6H4(COCl)2 + 2 HCl It can also be obtained by chlorination of dimethyl terephthalate. Use TCL is used for making various copolymers and aramid polymers such as Heracron, Twaron and Kevlar: References External links Aramid Acyl chlorides Benzene derivatives Monomers
Terephthaloyl chloride
[ "Chemistry", "Materials_science" ]
280
[ "Monomers", "Polymer chemistry" ]
3,003,010
https://en.wikipedia.org/wiki/Electroforming
Electroforming is a metal forming process in which parts are fabricated through electrodeposition on a model, known in the industry as a mandrel. Conductive (metallic) mandrels are treated to create a mechanical parting layer, or are chemically passivated to limit electroform adhesion to the mandrel and thereby allow its subsequent separation. Non-conductive (glass, silicon, plastic) mandrels require the deposition of a conductive layer prior to electrodeposition. Such layers can be deposited chemically, or using vacuum deposition techniques (e.g., gold sputtering). The outer surface of the mandrel forms the inner surface of the form. The process involves passing direct current through an electrolyte containing salts of the metal being electroformed. The anode is the solid metal being electroformed, and the cathode is the mandrel, onto which the electroform gets plated (deposited). The process continues until the required electroform thickness is achieved. The mandrel is then either separated intact, melted away, or chemically dissolved. The surface of the finished part that was in intimate contact with the mandrel is replicated in fine detail with respect to the original, and is not subject to the shrinkage that would normally be experienced in a foundry-cast metal object, or the tool marks of a milled part. The solution side of the part is less well defined, and that loss of definition increases with thickness of the deposit. In extreme cases, where a thickness of several millimetres is required, there is preferential build-up of material on sharp outside edges and corners. This tendency can be reduced by shielding, or a process known as periodic reverse, where the electroforming current is reversed for short periods and the excess is preferentially dissolved electrochemically. The finished form can either be the finished part, or can be used in a subsequent process to produce a positive of the original mandrel shape, such as with vinyl records or CD and DVD stamper manufacture. In recent years, due to its ability to replicate a mandrel surface with practically no loss of fidelity, electroforming has taken on new importance in the fabrication of micro- and nano-scale metallic devices and in producing precision injection molds with micro- and nano-scale features for production of non-metallic micro-molded objects. Process In the basic electroforming process, an electrolytic bath is used to deposit nickel or other electroformable metal onto a conductive surface of a model (mandrel). Once the deposited material has been built up to the desired thickness, the electroform is parted from the substrate. This process allows the precise replication of the mandrel surface texture and geometry at low unit cost with high repeatability and excellent process control. If the mandrel is made of a non-conductive material, then it can be coated with a thin conductive layer. Advantages and disadvantages The main advantage of electroforming is that it accurately replicates the external shape of the mandrel. Generally, machining a cavity accurately is more challenging than machining a convex shape; however, the opposite holds true for electroforming because the mandrel's exterior can be accurately machined and then used to electroform a precision cavity. Compared to other basic metal forming processes (casting, forging, stamping, deep drawing, machining, and fabricating), electroforming is very effective when requirements call for extreme tolerances, complexity, or light weight. 
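The growth of the deposit in the process described above is governed by Faraday's law of electrolysis, which links the plated thickness to the charge passed. A minimal sketch in Python for nickel, assuming 100% current efficiency; the current density and plating time are arbitrary example values:

```python
# Faraday's law: thickness = M * j * t * eta / (z * F * rho)
M = 58.69    # molar mass of nickel, g/mol
z = 2        # electrons transferred per Ni(2+) ion reduced
F = 96485.0  # Faraday constant, C/mol
rho = 8.908  # density of nickel, g/cm^3
eta = 1.0    # current efficiency (assumed ideal)

def thickness_cm(current_density_a_per_cm2, hours):
    """Deposited nickel thickness in cm for a given current density and time."""
    charge = current_density_a_per_cm2 * hours * 3600.0  # C per cm^2
    mass_per_area = eta * charge * M / (z * F)           # g per cm^2
    return mass_per_area / rho

# Example: 20 mA/cm^2 for 10 hours gives roughly 0.025 cm (about 0.25 mm).
print(thickness_cm(0.020, 10.0))
```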
The precision and resolution inherent in the photo-lithographically produced conductive patterned substrate allows finer geometries to be produced to tighter tolerances while maintaining superior edge definition with a near-optical finish. Electroformed metal can be extremely pure, with superior properties over wrought metal due to its refined crystal structure. Multiple layers of electroformed metals can be bonded together, or to different substrate materials, to produce complex structures with "grown-on" flanges and bosses. Tolerances of 1.5 to 3 nanometers have been reported. A wide variety of shapes and sizes can be made by electroforming, the principal limitation being the need to part the product from the mandrel. Since the fabrication of a product requires only a single model or mandrel, low production quantities can be made economically. See also LIGA Electrotyping Electroplating Electrochemical engineering References Further reading Spiro, P. Electroforming: A comprehensive survey of theory, practice and commercial applications, London, 1971. External links Metal forming Metallurgical processes de:Galvanik
Electroforming
[ "Chemistry", "Materials_science" ]
944
[ "Metallurgical processes", "Metallurgy" ]
3,003,284
https://en.wikipedia.org/wiki/Weighting
The process of frequency weighting involves emphasizing the contribution of particular aspects of a phenomenon (or of a set of data) over others to an outcome or result; thereby highlighting those aspects in comparison to others in the analysis. That is, rather than each variable in the data set contributing equally to the final result, some of the data is adjusted to make a greater contribution than others. This is analogous to the practice of adding (extra) weight to one side of a pair of scales in order to favour either the buyer or seller. While weighting may be applied to a set of data, such as epidemiological data, it is more commonly applied to measurements of light, heat, sound, gamma radiation, and in fact any stimulus that is spread over a spectrum of frequencies. Weighting in acoustics Weighting and loudness In the measurement of loudness, for example, a weighting filter is commonly used to emphasise frequencies around 3 to 6 kHz where the human ear is most sensitive, while attenuating very high and very low frequencies to which the ear is insensitive. A commonly used weighting is the A-weighting curve, which results in units of dBA sound pressure level. Because the frequency response of human hearing varies with loudness, the A-weighting curve is correct only at a level of 40-phon and other curves known as B-, C- and D-weighting are also used, the latter being particularly intended for the measurement of aircraft noise. Weighting in audio measurement In broadcasting and audio equipment measurements 468-weighting is the preferred weighting to use because it was specifically devised to allow subjectively valid measurements on noise, rather than pure tones. It is often not realised that equal loudness curves, and hence A-weighting, really apply only to tones, as tests with noise bands show increased sensitivity in the 5 to 7 kHz region on noise compared to tones. Other weighting curves are used in rumble measurement and flutter measurement to properly assess subjective effect. In each field of measurement, special units are used to indicate a weighted measurement as opposed to a basic physical measurement of energy level. For sound, the unit is the phon (1 kHz equivalent level). In the fields of acoustics and audio engineering, it is common to use a standard curve referred to as A-weighting, one of a set that are said to be derived from equal-loudness contours. Application to hearing in aquatic animals Auditory frequency weighting functions for marine mammals were introduced by Southall et al. (2007). Weighting in electromagnetism Weighting and gamma rays In the measurement of gamma rays or other ionising radiation, a radiation monitor or dosimeter will commonly use a filter to attenuate those energy levels or wavelengths that cause the least damage to the human body but letting through those that do the most damage, so any source of radiation may be measured in terms of its true danger rather than just its strength. The resulting unit is the sievert or microsievert. Weighting and television colour components Another use of weighting is in television, in which the red, green and blue components of the signal are weighted according to their perceived brightness. This ensures compatibility with black and white receivers and also benefits noise performance and allows separation into meaningful luminance and chrominance signals for transmission. 
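Returning to the acoustic weightings above, the A-weighting curve has a standard closed form (IEC 61672). A minimal sketch in Python; the +2.0 dB term normalises the response to approximately 0 dB at 1 kHz:

```python
import math

def a_weighting_db(f):
    """Approximate A-weighting relative response in dB at frequency f in Hz."""
    ra = (12194.0**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.0  # normalised so A(1 kHz) is about 0 dB

for f in (100, 1000, 3000, 10000):
    print(f, round(a_weighting_db(f), 1))  # the ear is most sensitive near 3-6 kHz
```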
Weighting and UV factor derivation for sun exposure Skin damage due to sun exposure is very wavelength dependent over the UV range 295 to 325 nm, with power at the shorter wavelength causing around 30 times as much damage as the longer one. In the calculation of UV Index, a weighting curve is used which is known as the McKinlay-Diffey Erythema action spectrum. See also Audio quality measurement G-weighting ITU-R 468 noise weighting M-weighting Psophometric weighting Weight function Weighting filter Z-weighting References External links Noise measurement briefing Calculator for A,C,U, and AU weighting values A-weighting filter circuit for audio measurements AES pro audio reference definition of "weighting filters" What is a decibel? Weighting filter according DIN EN 61672-1 2003-10 (DIN-IEC 651) Calculation: frequency f to dBA and dBC Statistical analysis Applied and interdisciplinary physics
Weighting
[ "Physics" ]
886
[ "Applied and interdisciplinary physics" ]
5,491,659
https://en.wikipedia.org/wiki/Complex%20metallic%20alloy
Complex metallic alloys (CMAs) or complex intermetallics (CIMs) are intermetallic compounds characterized by the following structural features: large unit cells, comprising some tens up to thousands of atoms, the presence of well-defined atom clusters, frequently of icosahedral point group symmetry, the occurrence of inherent disorder in the ideal structure. Overview Complex metallic alloys is an umbrella term for intermetallic compounds with a relatively large unit cell. There is no precise definition of how large the unit cell of a complex metallic alloy has to be, but the broadest definition includes Zintl phases, skutterudites, and Heusler compounds on the most simple end, and quasicrystals on the more complex end. Research Following the invention of X-ray crystallography techniques in the 1910s, the atomic structure of many compounds was investigated. Most metals have relatively simple structures. However, in 1923 Linus Pauling reported on the structure of the intermetallic NaCd2, which had such a complicated structure he was unable to fully explain it. Thirty years later, he concluded that NaCd2 contains 384 sodium and 768 cadmium atoms in each unit cell. Most physical properties of CMAs show distinct differences with respect to the behavior of normal metallic alloys and therefore these materials possess a high potential for technological application. The European Commission funded the Network of Excellence CMA from 2005 to 2010, uniting 19 core groups in 12 countries. From this emerged the European Integrated Center for the Development of New Metallic Alloys and Compounds (previously C-MAC, now ECMetAC), which connects researchers at 21 universities. Examples Example phases are: β-Mg2Al3: 1168 atoms per unit cell, face-centred cubic, atoms arranged in Friauf polyhedra. ξ'–Al74Pd22Mn4: 318 atoms per unit cell, face-centred orthorhombic, atoms arranged in Mackay-type clusters. (Bergman phase): 163 atoms per unit cell, body centred cubic, atoms arranged in Bergman clusters. (Taylor phase): 204 atoms per unit cell, face-centred orthorhombic, atoms arranged in Mackay-type clusters. See also High-entropy alloys, alloys of multiple elements which ideally form no intermetallics Holmium–magnesium–zinc quasicrystal Frank–Kasper phases Laves phase Hume-Rothery rules References Further reading Intermetallics Crystal structure types
Complex metallic alloy
[ "Physics", "Chemistry", "Materials_science" ]
501
[ "Inorganic compounds", "Metallurgy", "Crystal structure types", "Crystallography", "Intermetallics", "Condensed matter physics", "Alloys" ]
5,491,905
https://en.wikipedia.org/wiki/Poly%284-vinylphenol%29
Poly(4-vinylphenol), also called polyvinylphenol or PVP, is a plastic structurally similar to polystyrene. It is produced from the monomer 4-vinylphenol, which is also referred to as 4-hydroxystyrene.

PVP is used in electronics as a dielectric layer in organic transistors in organic TFT LCD displays. Thin films of cross-linked PVP can be used in this application, often in combination with pentacene. By varying the dielectric properties of PVP, the field-effect mobility of the TFTs can be tuned. Other applications include its use in photoresist materials, dielectric materials for energy storage, water-resistant adhesives and antimicrobial coatings. PVP, when mixed with a polyelectrolyte, has been demonstrated to moderately inhibit the growth of microorganisms. PVP has also been employed in gas sensors, for example by mixing polymer-carbon black with PVP to analyse organic solvents. PVP brushes are able to sense toxic gases such as hydrogen sulfide with microgravimetric techniques. Molecularly imprinted poly(4-vinylphenol) can be produced for the selective electrochemical detection of small molecules, such as cotinine or nicotine.

PVP is typically prepared by free radical polymerization of 4-vinylphenol or a protected form of 4-vinylphenol. The protected monomers can be prepared from 4-hydroxybenzaldehyde, by vinylation of phenols, or by acylation of polystyrene followed by oxidation at room temperature. If poly(4-methoxystyrene) is produced, the methoxy group can be cleaved by treating it with trimethylsilyl iodide. There are several patents on the synthesis of 4-hydroxystyrene due to its importance in the development of photoresist materials. RAFT polymerization can be used to prepare well-defined PVP chains. This can be done by mediated free radical polymerization of acetoxystyrene, followed by deacetylation. Nitroxide-mediated polymerization can also be used to prepare polyacetoxystyrene, which can be transformed into polyphenols by UV irradiation. ATRP can also be used for the preparation of defined block copolymers of PVP, by polymerization of 4-acetoxystyrene that is subsequently selectively hydrolysed.

See also
4-Vinylphenol

References

Organic polymers Plastics Vinyl polymers
Poly(4-vinylphenol)
[ "Physics", "Chemistry" ]
545
[ "Organic polymers", "Unsolved problems in physics", "Organic compounds", "Amorphous solids", "Plastics" ]
5,492,199
https://en.wikipedia.org/wiki/List%20of%20solid%20waste%20treatment%20technologies
The article contains a list of different forms of solid waste treatment technologies and facilities employed in waste management infrastructure.

Waste handling facilities
Civic amenity site (CA site)
Transfer station

Established waste treatment technologies
Incineration
Landfill
Recycling
Specific to organic waste:
Anaerobic digestion
Composting
Windrow composting

Alternative waste treatment technologies
In the UK some of these are sometimes termed advanced waste treatment technologies.
Biodrying
Gasification
Plasma gasification: gasification assisted by plasma torches
Hydrothermal carbonization
Hydrothermal liquefaction
Mechanical biological treatment (sorting into selected fractions)
Refuse-derived fuel
Mechanical heat treatment
Molten salt oxidation
Pyrolysis
UASB (applied to solid wastes)
Waste autoclave
Specific to organic waste:
Bioconversion of biomass to mixed alcohol fuels
In-vessel composting
Landfarming
Sewage treatment
Tunnel composting

See also
Bioethanol
Biodiesel
List of waste management companies
List of wastewater treatment technologies
Pollution control
Waste-to-energy
Burn pit

References

Anaerobic digestion Thermal treatment Waste treatment technology Solid waste treatment technologies
List of solid waste treatment technologies
[ "Chemistry", "Engineering" ]
213
[ "Water treatment", "Anaerobic digestion", "Environmental engineering", "Water technology", "Waste treatment technology" ]
5,494,713
https://en.wikipedia.org/wiki/Stable%20manifold%20theorem
In mathematics, especially in the study of dynamical systems and differential equations, the stable manifold theorem is an important result about the structure of the set of orbits approaching a given hyperbolic fixed point. It roughly states that, near a hyperbolic fixed point of a local diffeomorphism, there exists a local stable manifold containing that fixed point. This manifold has dimension equal to the number of eigenvalues of the Jacobian matrix at the fixed point whose absolute value is less than 1.

Stable manifold theorem
Let f : U ⊂ R^n → R^n be a smooth map with a hyperbolic fixed point at p. We denote by W^s(p) the stable set and by W^u(p) the unstable set of p. The theorem states that:
W^s(p) is a smooth manifold, and its tangent space at p has the same dimension as the stable space of the linearization of f at p;
W^u(p) is a smooth manifold, and its tangent space at p has the same dimension as the unstable space of the linearization of f at p.
Accordingly, W^s(p) is a stable manifold and W^u(p) is an unstable manifold.

See also
Center manifold theorem
Lyapunov exponent

Notes

References

External links

Dynamical systems Theorems in dynamical systems
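To make the dimension count in the theorem concrete, here is a minimal numerical sketch (the example map f is hypothetical, chosen only for illustration): it approximates the Jacobian Df(p) at a fixed point by finite differences and counts eigenvalues inside and outside the unit circle, giving the dimensions of W^s(p) and W^u(p).

```python
import numpy as np

def jacobian(f, p, eps=1e-6):
    """Numerically approximate the Jacobian Df(p) by central differences."""
    p = np.asarray(p, dtype=float)
    n = p.size
    J = np.zeros((n, n))
    for j in range(n):
        dp = np.zeros(n)
        dp[j] = eps
        J[:, j] = (f(p + dp) - f(p - dp)) / (2 * eps)
    return J

# Hypothetical example: a map on R^2 with a hyperbolic fixed point at the origin.
f = lambda x: np.array([0.5 * x[0] + x[1] ** 2, 2.0 * x[1] + x[0] ** 2])

eigvals = np.linalg.eigvals(jacobian(f, [0.0, 0.0]))
dim_stable = int(np.sum(np.abs(eigvals) < 1))    # dim W^s(p)
dim_unstable = int(np.sum(np.abs(eigvals) > 1))  # dim W^u(p)

# Eigenvalues here are 0.5 and 2.0, so the fixed point is a saddle:
# a one-dimensional stable and a one-dimensional unstable manifold.
print(eigvals, dim_stable, dim_unstable)
```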
Stable manifold theorem
[ "Physics", "Mathematics" ]
214
[ "Theorems in dynamical systems", "Mechanics", "Mathematical problems", "Mathematical theorems", "Dynamical systems" ]
5,496,415
https://en.wikipedia.org/wiki/Spin%20Hall%20effect
The spin Hall effect (SHE) is a transport phenomenon predicted by Russian physicists Mikhail I. Dyakonov and Vladimir I. Perel in 1971. It consists of the appearance of spin accumulation on the lateral surfaces of an electric current-carrying sample, the signs of the spin directions being opposite on the opposing boundaries. In a cylindrical wire, the current-induced surface spins will wind around the wire. When the current direction is reversed, the direction of spin orientation is also reversed.

Definition
The spin Hall effect is a transport phenomenon consisting of the appearance of spin accumulation on the lateral surfaces of a sample carrying electric current. The opposing surface boundaries will have spins of opposite sign. It is analogous to the classical Hall effect, where charges of opposite sign appear on the opposing lateral surfaces of an electric-current-carrying sample in a magnetic field. In the case of the classical Hall effect, the charge build-up at the boundaries compensates for the Lorentz force acting on the charge carriers in the sample due to the magnetic field. No magnetic field is needed for the spin Hall effect, which is a purely spin-based phenomenon. The spin Hall effect belongs to the same family as the anomalous Hall effect, known for a long time in ferromagnets, which also originates from spin–orbit interaction.

History
The spin Hall effect (direct and inverse) was predicted by Russian physicists Mikhail I. Dyakonov and Vladimir I. Perel in 1971. They also introduced for the first time the notion of spin current. In 1983, Averkiev and Dyakonov proposed a way to measure the inverse spin Hall effect under optical spin orientation in semiconductors. The first experimental demonstration of the inverse spin Hall effect, based on this idea, was performed by Bakun et al. in 1984. The term "spin Hall effect" was introduced by Hirsch, who re-predicted this effect in 1999. Experimentally, the (direct) spin Hall effect was observed in semiconductors more than 30 years after the original prediction.

Physical origin
Two possible mechanisms give origin to the spin Hall effect, in which an electric current (composed of moving charges) transforms into a spin current (a current of moving spins without charge flow). The original (extrinsic) mechanism devised by Dyakonov and Perel consists of spin-dependent Mott scattering, where carriers with opposite spin diffuse in opposite directions when colliding with impurities in the material. The second mechanism is due to intrinsic properties of the material, where the carriers' trajectories are distorted by spin–orbit interaction as a consequence of asymmetries in the material. One can intuitively picture the intrinsic effect by using the classical analogy between an electron and a spinning tennis ball. The tennis ball deviates from its straight path in air in a direction depending on its sense of rotation, an effect also known as the Magnus effect. In a solid, the air is replaced by an effective electric field due to asymmetries in the material; the relative motion between the magnetic moment (associated with the spin) and the electric field creates a coupling that distorts the motion of the electrons. Similar to the standard Hall effect, both the extrinsic and the intrinsic mechanisms lead to an accumulation of spins of opposite signs on opposing lateral boundaries.

Mathematical description
The spin current is described by a second-rank tensor q_ij, where the first index refers to the direction of flow and the second one to the spin component that is flowing. Thus q_xy denotes the flow density of the y-component of spin in the x-direction. Introduce also the vector q_i of charge flow density (which is related to the normal current density by j = eq, where e is the elementary charge). The coupling between spin and charge currents is due to spin–orbit interaction. It may be described in a very simple way by introducing a single dimensionless coupling parameter γ.
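A minimal numerical sketch of the charge-to-spin coupling just described, assuming the phenomenological form q_ij = −γ ε_ijk q_k for the spin current generated by a primary charge flow q (sign conventions for γ differ between authors, and the value used here is purely illustrative):

```python
import numpy as np

def levi_civita():
    """Build the rank-3 Levi-Civita tensor eps[i, j, k]."""
    eps = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[i, j, k] = 1.0   # even permutations
        eps[i, k, j] = -1.0  # odd permutations
    return eps

gamma = 0.01                   # illustrative dimensionless spin-orbit coupling
q = np.array([1.0, 0.0, 0.0])  # primary charge flow density along x

# Spin-current tensor induced by the charge flow (no primary spin current):
# q_ij = -gamma * eps_ijk * q_k (first index: flow direction, second: spin component).
eps = levi_civita()
q_spin = -gamma * np.einsum("ijk,k->ij", eps, q)
print(q_spin)
# The nonzero components q_yz and q_zy have opposite signs: z-polarized spins
# flow along y and y-polarized spins flow along z, transverse to the charge current.
```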
Spin Hall magnetoresistance
No magnetic field is needed for the spin Hall effect. However, if a strong enough magnetic field is applied in the direction perpendicular to the orientation of the spins at the surfaces, the spins will precess around the direction of the magnetic field and the spin Hall effect will disappear. Thus, in the presence of a magnetic field, the combined action of the direct and inverse spin Hall effects leads to a change of the sample resistance, an effect that is of second order in the spin–orbit interaction. This was noted by Dyakonov and Perel already in 1971 and later elaborated in more detail by Dyakonov. In recent years, the spin Hall magnetoresistance has been extensively studied experimentally, both in magnetic and in non-magnetic materials (heavy metals such as Pt, Ta, and Pd, where the spin–orbit interaction is strong).

Swapping spin currents
A transformation of spin currents consisting in interchanging (swapping) the spin and flow directions (q_ij → q_ji) was predicted by Lifshits and Dyakonov. Thus a flow in the x-direction of spins polarized along y is transformed to a flow in the y-direction of spins polarized along x. This prediction has not yet been confirmed experimentally.

Optical monitoring
The direct and inverse spin Hall effects can be monitored by optical means. The spin accumulation induces circular polarization of the emitted light, as well as Faraday (or Kerr) polarization rotation of the transmitted (or reflected) light. Observing the polarization of emitted light allows the spin Hall effect to be observed. More recently, the existence of both direct and inverse effects was demonstrated not only in semiconductors, but also in metals.

Applications
The spin Hall effect can be used to manipulate electron spins electrically. For example, in combination with the electric stirring effect, the spin Hall effect leads to spin polarization in a localized conducting region.

Further reading
For a review of the spin Hall effect, see for example:

See also
Quantum spin Hall effect
Spin Nernst effect

References

Hall effect Condensed matter physics Spintronics
Spin Hall effect
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,214
[ "Physical phenomena", "Hall effect", "Spintronics", "Phases of matter", "Electric and magnetic fields in matter", "Materials science", "Electrical phenomena", "Condensed matter physics", "Solid state engineering", "Matter" ]
5,497,456
https://en.wikipedia.org/wiki/Kinetic%20resolution
In organic chemistry, kinetic resolution is a means of differentiating two enantiomers in a racemic mixture. In kinetic resolution, two enantiomers react with different reaction rates in a chemical reaction with a chiral catalyst or reagent, resulting in an enantioenriched sample of the less reactive enantiomer. As opposed to chiral resolution, kinetic resolution does not rely on different physical properties of diastereomeric products, but rather on the different chemical properties of the racemic starting materials. The enantiomeric excess (ee) of the unreacted starting material continually rises as more product is formed, approaching 100% just before full completion of the reaction. Kinetic resolution relies upon differences in reactivity between enantiomers or enantiomeric complexes. Kinetic resolution can be used for the preparation of chiral molecules in organic synthesis. Kinetic resolution reactions utilizing purely synthetic reagents and catalysts are much less common than enzymatic kinetic resolutions in application towards organic synthesis, although a number of useful synthetic techniques have been developed in the past 30 years.

History
The first reported kinetic resolution was achieved by Louis Pasteur. After reacting aqueous racemic ammonium tartrate with a mold from Penicillium glaucum, he reisolated the remaining tartrate and found it was levorotatory. The chiral microorganisms present in the mold catalyzed the metabolization of (R,R)-tartrate selectively, leaving an excess of (S,S)-tartrate. Kinetic resolution by synthetic means was first reported by Marckwald and McKenzie in 1899 in the esterification of racemic mandelic acid with optically active (−)-menthol. With an excess of the racemic acid present, they observed that the ester derived from (+)-mandelic acid formed more quickly than the ester from (−)-mandelic acid. The unreacted acid was observed to have a slight excess of (−)-mandelic acid, and the ester was later shown to yield (+)-mandelic acid upon saponification. The importance of this observation was that, in theory, if a half equivalent of (−)-menthol had been used, a highly enantioenriched sample of (−)-mandelic acid could have been prepared. This observation led to the successful kinetic resolution of other chiral acids, the beginning of the use of kinetic resolution in organic chemistry.

Theory
Kinetic resolution is a possible method for irreversibly differentiating a pair of enantiomers due to (potentially) different activation energies. While both enantiomers are at the same Gibbs free energy level by definition, and the products of the reaction with both enantiomers are also at equal levels, the activation energy ΔG‡, or transition state energy, can differ. In the image below, the R enantiomer has a lower ΔG‡ and would thus react faster than the S enantiomer. The ideal kinetic resolution is that in which only one enantiomer reacts, i.e. kR >> kS. The selectivity (s) of a kinetic resolution is related to the rate constants of the reaction of the R and S enantiomers, kR and kS respectively, by s = kR/kS, for kR > kS. This selectivity can also be referred to as the relative rate of reaction. It can be written in terms of the free energy difference between the higher- and lower-energy transition states, ΔΔG‡:

s = kR/kS = e^(ΔΔG‡/RT)

The selectivity can also be expressed in terms of the ee of the recovered starting material and the conversion (c), if first-order kinetics (in substrate) are assumed.
If it is assumed that the S enantiomer of the starting material racemate will be recovered in excess, it is possible to express the concentrations (mole fractions) of the S and R enantiomers as

[S] = (1 − c)(1 + ee)/2 and [R] = (1 − c)(1 − ee)/2,

where ee is the ee of the starting material. Note that for c = 0, which signifies the beginning of the reaction, [S]0 = [R]0 = 1/2, where these signify the initial concentrations of the enantiomers. Then, for stoichiometric chiral resolving agent B*,

d[S]/dt = −kS[S][B*]

Note that, if the resolving agent is stoichiometric and achiral, with a chiral catalyst, the [B*] term does not appear. Regardless, with a similar expression for R, we can express s as

s = kR/kS = ln([R]/[R]0) / ln([S]/[S]0) = ln[(1 − c)(1 − ee)] / ln[(1 − c)(1 + ee)]

If we wish to express this in terms of the enantiomeric excess of the product, ee'', we must make use of the fact that, for products R' and S' formed from R and S respectively,

[R'] = [R]0 − [R] and [S'] = [S]0 − [S], with [R'] + [S'] = c and ee'' = ([R'] − [S'])/([R'] + [S']).

From here, we see that

c·ee'' = [S] − [R] = (1 − c)·ee,

which gives us

c = ee/(ee + ee''),

which, when plugged into our expression for s derived above, yields

s = ln[1 − c(1 + ee'')] / ln[1 − c(1 − ee'')]

The conversion (c) and selectivity factor (s) can therefore be expressed in terms of starting material and product enantiomeric excesses (ee and ee'', respectively) only:

c = ee/(ee + ee'') and s = ln[(1 − c)(1 − ee)] / ln[(1 − c)(1 + ee)] with this value of c.

Additionally, the expressions for c and ee can be parametrized to give explicit expressions for c and ee in terms of t. First, solving explicitly for [S] and [R] as functions of t yields

[S](t) = (1/2)e^(−kS·t) and [R](t) = (1/2)e^(−kR·t),

which, plugged into the expressions for ee and c, gives

ee(t) = (e^(−kS·t) − e^(−kR·t)) / (e^(−kS·t) + e^(−kR·t)) and c(t) = 1 − (e^(−kS·t) + e^(−kR·t))/2.

Without loss of generality, we can set kS = 1, which gives kR = s, simplifying the above expressions. Similarly, an expression for ee'' as a function of t can be derived:

ee''(t) = (e^(−kS·t) − e^(−kR·t)) / (2 − e^(−kS·t) − e^(−kR·t))

Thus, plots of ee and ee'' vs. c can be generated with t as the parameter and different values of s generating different curves, as shown below. As can be seen, high enantiomeric excesses are much more readily attainable for the unreacted starting material. There is, however, a tradeoff between ee and conversion, with higher ee (of the recovered substrate) obtained at higher conversion, and therefore lower isolated yield. For example, with a selectivity factor of just 10, 99% ee is possible with approximately 70% conversion, resulting in a yield of about 30%. In contrast, in order to get good ee and yield of the product, very high selectivity factors are necessary. For example, with a selectivity factor of 10, ee'' above approximately 80% is unattainable, and significantly lower ee'' values are obtained at more realistic conversions. A selectivity in excess of 50 is required for highly enantioenriched product in reasonable yield.

This is a simplified version of the true kinetics of kinetic resolution. The assumption that the reaction is first order in substrate is limiting, and it is possible that the dependence on substrate may change with conversion, resulting in a much more complicated picture. As a result, a common approach is to measure and report only yields and ee's, as the formula for krel only applies to an idealized kinetic resolution. It is simple to consider an initial substrate–catalyst complex forming, which could negate the first-order kinetics. However, the general conclusions drawn are still helpful for understanding the effect of selectivity and conversion on ee.
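The parametric expressions above are easy to explore numerically. The sketch below sweeps the progress variable t for several selectivity factors, printing the ee of the recovered starting material and ee'' of the product against conversion, and also converts each s into the corresponding ΔΔG‡ at 25 °C; it assumes the idealized first-order kinetics of the derivation, with kS = 1 and kR = s.

```python
import math

R_GAS = 8.314  # J/(mol*K)

def kr_point(s, t):
    """ee (recovered SM), ee'' (product) and conversion c at progress t,
    for ideal first-order kinetic resolution with kS = 1, kR = s."""
    S = 0.5 * math.exp(-t)       # slow-reacting enantiomer
    R = 0.5 * math.exp(-s * t)   # fast-reacting enantiomer
    c = 1.0 - (S + R)
    ee_sm = (S - R) / (S + R)
    ee_prod = ((0.5 - R) - (0.5 - S)) / c if c > 0 else 1.0
    return c, ee_sm, ee_prod

for s in (10, 50, 200):
    # ΔΔG‡ implied by s at 298 K (kJ/mol), from s = exp(ΔΔG‡/RT)
    ddg = R_GAS * 298.15 * math.log(s) / 1000.0
    print(f"s = {s:4d}  (ΔΔG‡ ≈ {ddg:.1f} kJ/mol at 25 °C)")
    for t in (0.2, 0.5, 1.0, 2.0):
        c, ee_sm, ee_prod = kr_point(s, t)
        print(f"  c = {c:6.1%}   ee(SM) = {ee_sm:6.1%}   ee'' = {ee_prod:6.1%}")
```

The numbers reproduce the trends described above: for s = 10 the recovered starting material reaches ~99% ee only beyond ~70% conversion, while ee'' of the product cannot exceed (s − 1)/(s + 1), about 82%.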
Practicality
With the advent of asymmetric catalysis, it is necessary to consider the practicality of utilizing kinetic resolution for the preparation of enantiopure products. Even for a product that can be attained through an asymmetric catalytic or auxiliary-based route, the racemate may be significantly less expensive than the enantiopure material, resulting in heightened cost-effectiveness even with the inherent "loss" of 50% of the material.

The following have been proposed as necessary conditions for a practical kinetic resolution:
inexpensive racemate and catalyst;
no appropriate enantioselective, chiral pool, or classical resolution route is possible;
the resolution proceeds selectively at low catalyst loadings;
separation of starting material and product is easy.
To date, a number of catalysts for kinetic resolution have been developed that satisfy most, if not all, of the above criteria, making them highly practical for use in organic synthesis. The following sections discuss a number of key examples.

Reactions utilizing synthetic reagents

Acylation reactions
Gregory Fu and colleagues have developed a methodology utilizing a chiral DMAP analogue to achieve excellent kinetic resolution of secondary alcohols. Initial studies utilizing ether as a solvent, low catalyst loadings (2 mol%), acetic anhydride as the acylating agent, and triethylamine at room temperature gave selectivities ranging from 14 to 52, corresponding to ee's of the recovered alcohol as high as 99.2%. However, solvent screening showed that the use of tert-amyl alcohol increased both the reactivity and the selectivity. With the benchmark substrate 1-phenylethanol, this corresponded to 99% ee of the unreacted alcohol at 55% conversion when run at 0 °C. This system proved to be adept at resolving a number of arylalkylcarbinols, with selectivities as high as 95 and catalyst loadings as low as 1%, as shown below utilizing the (−)-enantiomer of the catalyst. This resulted in highly enantioenriched alcohols at very low conversions, giving excellent yields as well. In addition, the high selectivities result in highly enantioenriched acylated products, with a 90% ee sample of acylated alcohol for o-tolylmethylcarbinol, with s = 71. In addition, Fu reported the first highly selective acylation of racemic diols (as well as desymmetrization of meso diols). With a low catalyst loading of 1%, enantioenriched diol was recovered in 98% ee and 43% yield, with the diacetate in 39% yield and 99% ee. The remainder of the material was recovered as a mixture of monoacetates.

The planar-chiral DMAP catalyst was also shown to be effective at kinetically resolving propargylic alcohols. In this case, though, selectivities were found to be highest without any base present. When run with 1 mol% of the catalyst at 0 °C, selectivities as high as 20 could be attained. The limitations of this method include the requirement of an unsaturated functionality, such as a carbonyl or alkene, at the remote alkynyl position. Alcohols resolved using the (+)-enantiomer of the DMAP catalyst are shown below.

Fu also showed his chiral DMAP catalyst's ability to resolve allylic alcohols. Effective selectivity was dependent upon the presence of either a geminal or cis substituent relative to the alcohol-bearing group, with the notable exception of a trans-phenyl alcohol, which exhibited the highest selectivity. Using 1–2.5 mol% of the (+)-enantiomer of the DMAP catalyst, the alcohols shown below were resolved in the presence of triethylamine.

While Fu's DMAP analogue catalyst worked exceptionally well for kinetically resolving racemic alcohols, it was not successful in the kinetic resolution of amines. A similar catalyst, PPY*, was developed that, in use with a novel acylating agent, allowed for the successful kinetic resolution acylation of amines. With 10 mol% (−)-PPY* in chloroform at −50 °C, good to very good selectivities were observed in the acylation of amines, shown below.
A similar protocol was developed for the kinetic resolution of indolines.

Epoxidations and dihydroxylations
The Sharpless epoxidation, developed by K. Barry Sharpless in 1980, has been utilized for the kinetic resolution of racemic mixtures of allylic alcohols. While extremely effective at resolving a number of allylic alcohols, this method has a number of drawbacks: reaction times can run as long as 6 days, and the catalyst is not recyclable. However, the Sharpless asymmetric epoxidation kinetic resolution remains one of the most effective synthetic kinetic resolutions to date. A number of different tartrates can be used for the catalyst; a representative scheme is shown below utilizing diisopropyl tartrate. This method has seen general use on a number of secondary allylic alcohols.

Sharpless asymmetric dihydroxylation has also seen use as a method for kinetic resolution. This method is not widely used, however, since the same resolution can be accomplished in more economical ways. Additionally, the Shi epoxidation has been shown to effect kinetic resolution of a limited selection of olefins. This method is also not widely used, but is of mechanistic interest.

Epoxide openings
While enantioselective epoxidations have been successfully achieved utilizing the Sharpless epoxidation, Shi epoxidation, and Jacobsen epoxidation, none of these methods allows for the efficient asymmetric synthesis of terminal epoxides, which are key chiral building blocks. Because most racemic terminal epoxides are inexpensive and generally cannot be subjected to classical resolution, an effective kinetic resolution of terminal epoxides would serve as a highly important synthetic methodology. In 1996, Jacobsen and coworkers developed a methodology for the kinetic resolution of epoxides via nucleophilic ring-opening by an azide anion. The (R,R) catalyst is shown. The catalyst could effectively, with loadings as low as 0.5 mol%, open the epoxide at the terminal position enantioselectively, yielding enantioenriched epoxide starting material and 1,2-azido alcohols. Yields are nearly quantitative and ee's were excellent (≥95% in nearly all cases). The 1,2-azido alcohols can be hydrogenated to give 1,2-amino alcohols, as shown below.

In 1997, Jacobsen's group published a methodology that improved upon their earlier work, allowing the use of water as the nucleophile in the epoxide opening. Utilizing a nearly identical catalyst, ee's in excess of 98% for both the recovered starting epoxide and the 1,2-diol product were observed. In the example below, hydrolytic kinetic resolution (HKR) was carried out on a 58 gram scale, resulting in 26 g (44%) of the enantioenriched epoxide in >99% ee and 38 g (50%) of the diol in 98% ee. A multitude of other substrates were examined, with yields of the recovered epoxide ranging from 36 to 48% for >99% ee.

Jacobsen hydrolytic kinetic resolution can be used in tandem with Jacobsen epoxidation to yield enantiopure epoxides from certain olefins, as shown below. The first epoxidation yields a slightly enantioenriched epoxide, and the subsequent kinetic resolution yields essentially a single enantiomer. The advantage of this approach is the ability to reduce the amount of hydrolytic cleavage necessary to achieve high enantioselectivity, allowing for overall yields up to approximately 90%, based on the olefin.
Ultimately, the Jacobsen epoxide-opening kinetic resolutions produce high enantiomeric purity in both the epoxide and the product, in solvent-free or low-solvent conditions, and have been applicable on a large scale. The Jacobsen methodology for HKR in particular is extremely attractive since it can be carried out on a multiton scale and utilizes water as the nucleophile, resulting in extremely cost-effective industrial processes. Despite impressive achievements, HKR has generally been applied to the resolution of simple terminal epoxides with one stereocentre. Quite recently, D. A. Devalankar et al. reported an elegant protocol involving a two-stereocentered Co-catalyzed HKR of racemic terminal epoxides bearing adjacent C–C bonded substituents.

Oxidations
Ryōji Noyori and colleagues have developed a methodology for the kinetic resolution of benzylic and allylic secondary alcohols via transfer hydrogenation. The ruthenium complex catalyzes oxidation of the more reactive enantiomer by acetone, yielding an unreacted enantiopure alcohol, an oxidized ketone, and isopropanol. In the example illustrated below, exposure of 1-phenylethanol to the (S,S) enantiomer of the catalyst in the presence of acetone results in a 51% yield of 94% ee (R)-1-phenylethanol, along with 49% acetophenone and isopropanol as a byproduct. This methodology is essentially the reverse of Noyori's asymmetric transfer hydrogenation of ketones, which yields enantioenriched alcohols via reduction. This limits the attractiveness of the kinetic resolution method, since there is a similar method to achieve the same products without the loss of half the material. Thus, the kinetic resolution would only be carried out in an instance for which the racemic alcohol was at most one half the price of the ketone, or was significantly easier to access.

In addition, Uemura and Hidai have developed a ruthenium catalyst for the kinetic resolution oxidation of benzylic alcohols, yielding highly enantioenriched alcohols in good yields. The complex can, like Noyori's catalyst, effect transfer hydrogenation between a ketone and isopropanol to give an enantioenriched alcohol, and can likewise effect kinetic resolution of a racemic alcohol, giving enantiopure alcohol (>99% ee) and oxidized ketone, with acetone as the byproduct. It is highly effective at reducing ketones enantioselectively, giving most benzylic alcohols in >99% ee, and can resolve a number of racemic benzylic alcohols to give high yields (up to 49%) of single enantiomers, as shown below. This method has the same disadvantages as the Noyori kinetic resolution, namely that the alcohols can also be accessed via enantioselective reduction of the ketones. Additionally, only one enantiomer of the catalyst has been reported.

Hydrogenation
Noyori has also demonstrated the kinetic resolution of allylic alcohols by asymmetric hydrogenation of the olefin. Utilizing the Ru[BINAP] complex, selective hydrogenation can give high ee's of the unsaturated alcohol in addition to the hydrogenated alcohol, as shown below. Thus, a second hydrogenation of the remaining enantioenriched allylic alcohol will give enantiomerically pure samples of both enantiomers of the saturated alcohol. Noyori has resolved a number of allylic alcohols with good to excellent yields and good to excellent ee's (up to >99%).

Ring-closing metathesis
Hoveyda and Schrock have developed a catalyst for ring-closing metathesis kinetic resolution of dienyl allylic alcohols.
The molybdenum alkylidene catalyst selectively catalyzes ring-closing metathesis of one enantiomer, resulting in an enantiopure alcohol and an enantiopure closed ring, as shown below. The catalyst is most effective at resolving 1,6-dienes. However, slight structural changes in the substrate, such as increasing the inter-alkene distance to 1,7, can sometimes necessitate the use of a different catalyst, reducing the efficacy of this method.

Enzymatic reactions

Acylations
As with synthetic kinetic resolution procedures, enzymatic acylation kinetic resolutions have seen the broadest application in a synthetic context. Especially important has been the use of enzymatic kinetic resolution to efficiently and cheaply prepare amino acids. On a commercial scale, Degussa's methodology employing acylases is capable of resolving numerous natural and unnatural amino acids. The racemic mixtures can be prepared via Strecker synthesis, and the use of either porcine kidney acylase (for straight-chain substrates) or an enzyme from the mold Aspergillus oryzae (for branched side-chain substrates) can effectively yield enantioenriched amino acids in high (85–90%) yields. The unreacted starting material can be racemized in situ, thus making this a dynamic kinetic resolution.

In addition, lipases are used extensively for kinetic resolution in both academic and industrial settings. Lipases have been used to resolve primary alcohols, secondary alcohols, a limited number of tertiary alcohols, carboxylic acids, diols, and even chiral allenes. Lipase from Pseudomonas cepacia (PSL) is the most widely used in the resolution of primary alcohols and has been used with vinyl acetate as an acylating agent to kinetically resolve the primary alcohols shown below. For the resolution of secondary alcohols, Pseudomonas cepacia lipase (PSL-C) has been employed effectively to generate excellent ee's of the (R)-enantiomer of the alcohol. The use of isopropenyl acetate as the acylating agent results in acetone as the byproduct, which is effectively removed from the reaction using molecular sieves.

Oxidations and reductions
Baker's yeast (BY) has been utilized for the kinetic resolution of α-stereogenic carbonyl compounds. The enzyme selectively reduces one enantiomer, yielding a highly enantioenriched alcohol and ketone, as shown below. Baker's yeast has also been used in the kinetic resolution of secondary benzylic alcohols by oxidation. While excellent ee's of the recovered alcohol have been reported, they typically require >60% conversion, resulting in diminished yields. Baker's yeast has also been used in kinetic resolution via reduction of β-ketoesters. However, given the success of Noyori's resolution of the same substrates, detailed later in this article, this has not seen much use.

Dynamic kinetic resolution
Dynamic kinetic resolution (DKR) occurs when the starting-material racemate is able to epimerize easily, resulting in an essentially racemic starting-material mix at all points during the reaction. Then the enantiomer with the lower barrier to activation can be formed in, theoretically, up to 100% yield. This is in contrast to standard kinetic resolution, which necessarily has a maximum yield of 50%. For this reason, dynamic kinetic resolution has extremely practical applications to organic synthesis. The observed dynamics are based on the Curtin–Hammett principle.
The barrier to reaction of either enantiomer is necessarily higher than the barrier to epimerization, resulting in a kinetic well containing the racemate. This is equivalent to writing, for kR > kS,

k_rac > kR > kS,

where k_rac is the rate constant for racemization. A number of excellent reviews have been published, most recently in 2008, detailing the theory and practical applications of DKR.

Noyori asymmetric hydrogenation
The Noyori asymmetric hydrogenation of ketones is an excellent example of dynamic kinetic resolution at work. The enantiomeric β-ketoesters can undergo epimerization, and the choice of chiral catalyst, typically of the form Ru[(R)-BINAP]X2, where X is a halogen, leads to one of the enantiomers reacting preferentially faster. The relative free energies for a representative reaction are shown below. As can be seen, the epimerization intermediate is lower in free energy than the transition states for hydrogenation, resulting in rapid racemization and high yields of a single enantiomer of the product. The enantiomers interconvert through their common enol, which is the energetic minimum located between the enantiomers. The reaction shown yields a 93% ee sample of the anti product shown above. Solvent choice appears to have a major influence on the diastereoselectivity, as dichloromethane and methanol each show effectiveness for certain substrates. Noyori and others have also developed newer catalysts that have improved both the ee and the diastereomeric ratio (dr).

Genêt and coworkers developed SYNPHOS, a BINAP analogue which forms ruthenium complexes that perform highly selective asymmetric hydrogenations. Enantiopure Ru[SYNPHOS]Br2 was shown to selectively hydrogenate racemic α-amino-β-ketoesters to enantiopure amino alcohols, as shown below utilizing (R)-SYNPHOS. 1,2-syn amino alcohols were prepared from benzoyl-protected amino compounds, whereas anti products were prepared from hydrochloride salts of the amine.

Fu acylation modification
Recently, Gregory Fu and colleagues reported a modification of their earlier kinetic resolution work to produce an effective dynamic kinetic resolution. Using the ruthenium racemization catalyst shown to the right and his planar-chiral DMAP catalyst, Fu demonstrated the dynamic kinetic resolution of secondary alcohols with up to 99% yield and 93% ee, as shown below. Work is ongoing to further develop the applications of the widely used DMAP catalyst to dynamic kinetic resolution.

Enzymatic dynamic kinetic resolutions
A number of enzymatic dynamic kinetic resolutions have been reported. A prime example using PSL effectively resolves racemic acyloins in the presence of triethylamine and vinyl acetate as the acylating agent. As shown below, the product was isolated in 75% yield and 97% ee. Without the presence of the base, regular kinetic resolution occurred, resulting in 45% yield of >99% ee acylated product and 53% of the starting material in 92% ee. Another excellent, though not high-yielding, example is the kinetic resolution of (±)-8-amino-5,6,7,8-tetrahydroquinoline. When exposed to Candida antarctica lipase B (CALB) in toluene and ethyl acetate for 3–24 hours, normal kinetic resolution occurs, resulting in 45% yield of 97% ee starting material and 45% yield of >97% ee acylated amine product. However, when the reaction is allowed to stir for 40–48 hours, racemic starting material and >60% of >95% ee acylated product are recovered. Here, the unreacted starting material racemizes in situ via a dimeric enamine, resulting in recovery of greater than 50% yield of the enantiopure acylated amine product.
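The role of the racemization rate described at the start of this section can be made concrete with a toy rate model (illustrative rate constants only, not fitted to any real system): the enantiomers interconvert with rate constant k_rac while being consumed with rate constants kR and kS. With no racemization, pushing to full conversion destroys the product ee; with fast racemization, both the yield and the ee of the product from the fast-reacting enantiomer approach their DKR limits.

```python
def dkr_yield(kR, kS, k_rac, t_end=200.0, dt=1e-3):
    """Explicit-Euler integration of a minimal DKR model. Returns the
    fractional yields of product formed from the fast (R) and slow (S)
    enantiomers of an initially racemic mixture."""
    R, S = 0.5, 0.5          # racemic starting material
    pR = pS = 0.0            # product formed from each enantiomer
    for _ in range(int(t_end / dt)):
        rac = k_rac * (S - R)            # net racemization flux S -> R
        dR = (-kR * R + rac) * dt
        dS = (-kS * S - rac) * dt
        pR += kR * R * dt
        pS += kS * S * dt
        R += dR
        S += dS
    return pR, pS

# kR = 1, kS = 0.02 (i.e. s = 50); compare no, moderate, and fast racemization.
for k_rac in (0.0, 1.0, 100.0):
    pR, pS = dkr_yield(kR=1.0, kS=0.02, k_rac=k_rac)
    ee = (pR - pS) / (pR + pS)
    print(f"k_rac = {k_rac:6.1f}: yield(R-product) = {pR:.2f}, product ee = {ee:.2f}")
```

At k_rac = 0 the run-to-completion product is nearly racemic, while at k_rac = 100 the fast-enantiomer product approaches ~98% yield with ee near (kR − kS)/(kR + kS), the fast-racemization limit.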
Chemoenzymatic dynamic kinetic resolutions
A number of procedures have been reported that take advantage of a chemical reagent/catalyst to racemize the starting material and an enzyme to react selectively with one enantiomer; these are called chemoenzymatic dynamic kinetic resolutions. PSL-C was utilized along with a ruthenium catalyst (for racemization) to produce enantiopure (>95% ee) δ-hydroxylactones. More recently, secondary alcohols have been resolved by Bäckvall with yields up to 99% and ee's up to >99%, utilizing CALB and a ruthenium racemization complex. A second type of chemoenzymatic dynamic kinetic resolution involves a π-allyl complex formed from an allylic acetate with palladium. Here, racemization occurs with loss of the acetate, forming a cationic complex with the transition metal center, as shown below. Palladium has been shown to facilitate this reaction, while ruthenium has been shown to effect a similar reaction, also shown below.

Parallel kinetic resolution
In parallel kinetic resolution (PKR), a racemic mixture reacts to form two non-enantiomeric products, often through completely different reaction pathways. With PKR, there is no tradeoff between conversion and ee, as the formed products are not enantiomers. One strategy for PKR is to remove the less reactive enantiomer (towards the desired chiral catalyst) from the reaction mixture by subjecting it to a second set of reaction conditions that preferentially react with it, ideally at an approximately equal reaction rate. Thus, both enantiomers are consumed in different pathways at equal rates. PKR experiments can be stereodivergent, regiodivergent, or structurally divergent. One of the most efficient PKRs reported to date was accomplished by Yoshito Kishi in 1998: CBS reduction of a racemic steroidal ketone resulted in stereoselective reduction, producing two diastereomers of >99% ee, as shown below. PKR has also been accomplished with the use of enzyme catalysts. Using the fungus Mortierella isabellina NRRL 1757, reduction of racemic β-ketonitriles affords two diastereomers, which can be separated and re-oxidized to give highly enantiopure β-ketonitriles. Highly synthetically useful parallel kinetic resolutions have, however, yet to be discovered. A number of procedures have been found that give acceptable ee's and yields, but there are very few examples that give highly selective parallel kinetic resolution rather than merely somewhat selective reactions. For example, Fu's parallel kinetic resolution of 4-alkynals yields highly enantioenriched cyclobutanone in low yield and slightly enantioenriched cyclopentenone, as shown below. In theory, parallel kinetic resolution can give the highest ee's of products, since only one enantiomer gives each desired product. For example, for two complementary reactions both with s = 49, 100% conversion would give both products in 50% yield and 96% ee. The same values would require s = 200 for a simple kinetic resolution. As such, the promise of PKR continues to attract much attention. The Kishi CBS reduction remains one of the few examples to fulfill this promise.

See also
Chiral auxiliaries
Chiral pool synthesis
Chiral resolution
Enantioselective synthesis

References

Further reading
Dynamic Kinetic Resolutions. A MacMillan Group Meeting. Jake Wiener. Link
Dynamic Kinetic Resolution: A Powerful Approach to Asymmetric Synthesis. Erik Alexanian, Supergroup Meeting, March 30, 2005. Link
Dynamic Kinetic Resolution: Practical Applications in Synthesis.
Valerie Keller, 3rd-Year Seminar, November 1, 2001. Link
Kinetic Resolution. David Ebner, Stoltz Group Literature Seminar, June 4, 2003. Link
Kinetic Resolutions. UT Southwestern Presentation. Link

Stereochemistry
Kinetic resolution
[ "Physics", "Chemistry" ]
6,390
[ "Spacetime", "Stereochemistry", "Space", "nan" ]
5,497,504
https://en.wikipedia.org/wiki/Seasonal%20thermal%20energy%20storage
Seasonal thermal energy storage (STES), also known as inter-seasonal thermal energy storage, is the storage of heat or cold for periods of up to several months. The thermal energy can be collected whenever it is available and be used whenever needed, such as in the opposing season. For example, heat from solar collectors or waste heat from air conditioning equipment can be gathered in hot months for space heating use when needed, including during winter months. Waste heat from industrial processes can similarly be stored and used much later, and the natural cold of winter air can be stored for summertime air conditioning. STES stores can serve district heating systems as well as single buildings or complexes. Among seasonal storages used for heating, design peak annual temperatures vary with the system, and the temperature difference occurring in the storage over the course of a year can be several tens of degrees. Some systems use a heat pump to help charge and discharge the storage during part or all of the cycle. For cooling applications, often only circulation pumps are used. Sorption and thermochemical heat storage are considered the most suitable for seasonal storage due to the theoretical absence of heat loss between charging and discharging. However, studies have shown that actual heat losses are currently usually significant. Examples for district heating include the Drake Landing Solar Community, where ground storage provides 97% of yearly consumption without heat pumps, and Danish pond storage with boosting.

STES technologies
There are several types of STES technology, covering a range of applications from single small buildings to community district heating networks. Generally, efficiency increases and the specific construction cost decreases with size.

Underground thermal energy storage
UTES (underground thermal energy storage), in which the storage medium may be geological strata ranging from earth or sand to solid bedrock, or aquifers. UTES technologies include:

ATES (aquifer thermal energy storage). An ATES store is composed of a doublet, totaling two or more wells into a deep aquifer that is contained between impermeable geological layers above and below. One half of the doublet is for water extraction and the other half for reinjection, so the aquifer is kept in hydrological balance, with no net extraction. The heat (or cold) storage medium is the water and the substrate it occupies. Germany's Reichstag building has been both heated and cooled since 1999 with ATES stores, in two aquifers at different depths. In the Netherlands there are well over 1,000 ATES systems, which are now a standard construction option. A significant system has been operating at Richard Stockton College (New Jersey) for several years. ATES has a lower installation cost than borehole thermal energy storage (BTES) because usually fewer holes are drilled, but ATES has a higher operating cost. Also, ATES requires particular underground conditions to be feasible, including the presence of an aquifer.

BTES (borehole thermal energy storage). BTES stores can be constructed wherever boreholes can be drilled, and are composed of one to hundreds of vertical boreholes. Systems of all sizes have been built, including many quite large. The strata can be anything from sand to crystalline hardrock, and, depending on engineering factors, borehole depths and spacings have varied over wide ranges.
Thermal models can be used to predict seasonal temperature variation in the ground, including the establishment of a stable temperature regime, which is achieved by matching the inputs and outputs of heat over one or more annual cycles. Warm-temperature seasonal heat stores can be created using borehole fields to store surplus heat captured in summer, actively raising the temperature of large thermal banks of soil so that heat can be extracted more easily (and more cheaply) in winter. Interseasonal heat transfer uses water circulating in pipes embedded in asphalt solar collectors to transfer heat to thermal banks created in borehole fields. A ground source heat pump is used in winter to extract the warmth from the thermal bank to provide space heating via underfloor heating. A high coefficient of performance is obtained because the heat pump starts from the warm temperature of the thermal store instead of the cold temperature of the ground.

A BTES system has been operating at Richard Stockton College since 1995, consisting of 400 boreholes under a parking lot. It has a heat loss of 2% over six months. The upper temperature limit for a BTES store is set by the characteristics of the PEX pipe used for the borehole heat exchangers, but most stores do not approach that limit. Boreholes can be either grout- or water-filled depending on geological conditions, and usually have a life expectancy in excess of 100 years. Both a BTES and its associated district heating system can be expanded incrementally after operation begins, as at Neckarsulm, Germany. BTES stores generally do not impair use of the land, and can exist under buildings, agricultural fields and parking lots.

An example of one of the several kinds of STES illustrates well the capability of interseasonal heat storage. In Alberta, Canada, the homes of the Drake Landing Solar Community (in operation since 2007) get 97% of their year-round heat from a district heating system supplied by solar heat from solar-thermal panels on garage roofs. This feat – a world record – is enabled by interseasonal heat storage in a large mass of native rock under a central park. The thermal exchange occurs via a cluster of 144 boreholes drilled into the earth. Each borehole contains a simple heat exchanger made of small-diameter plastic pipe, through which water is circulated. No heat pumps are involved.

CTES (cavern or mine thermal energy storage). STES stores are possible in flooded mines, purpose-built chambers, or abandoned underground oil stores (e.g. those mined into crystalline hardrock in Norway), if they are close enough to a heat (or cold) source and market.

Energy pilings. During construction of large buildings, borehole heat exchangers much like those used for BTES stores have been spiraled inside the cages of reinforcement bars for pilings, with concrete then poured in place. The pilings and the surrounding strata then become the storage medium.

GIITS (geo interseasonal insulated thermal storage). During construction of any building with a primary slab floor, an area approximately the footprint of the building to be heated, and more than 1 m in depth, is insulated on all six sides, typically with HDPE closed-cell insulation. Pipes are used to transfer solar energy into the insulated area, as well as to extract heat on demand. If there is significant internal ground water flow, remedial actions are needed to prevent it.

Surface and above-ground technologies
Pit storage.
Lined, shallow dug pits filled with gravel and water as the storage medium are used for STES in many Danish district heating systems. Storage pits are covered with a layer of insulation and then soil, and are used for agriculture or other purposes. A system in Marstal, Denmark, includes a pit storage supplied with heat from a field of solar-thermal panels. It initially provided 20% of the year-round heat for the village and is being expanded to provide twice that. The world's largest pit store was commissioned in Vojens, Denmark, in 2015, and allows solar heat to provide 50% of the annual energy for the world's largest solar-enabled district heating system. In these Danish systems, a capital expenditure per capacity unit between €0.4 and €0.6 per kWh could be achieved.

Large-scale thermal storage with water. Large-scale STES water storage tanks can be built above ground, insulated, and then covered with soil.

Horizontal heat exchangers. For small installations, a heat exchanger of corrugated plastic pipe can be shallow-buried in a trench to create a STES.

Earth-bermed buildings. These store heat passively in the surrounding soil.

Salt hydrate technology. This technology achieves significantly higher storage densities than water-based heat storage. See Thermal energy storage: Salt hydrate technology.

Conferences and organizations
The International Energy Agency's Energy Conservation through Energy Storage (ECES) Programme has held triennial global energy conferences since 1981. The conferences originally focused exclusively on STES, but now that those technologies are mature, other topics such as phase change materials (PCM) and electrical energy storage are also being covered. Since 1985 each conference has had "stock" (for storage) at the end of its name, e.g. EcoStock, ThermaStock. They are held at various locations around the world. The most recent were InnoStock 2012 (the 12th International Conference on Thermal Energy Storage) in Lleida, Spain, and GreenStock 2015 in Beijing. EnerStock 2018 will be held in Adana, Turkey, in April 2018. The IEA-ECES programme continues the work of the earlier International Council for Thermal Energy Storage, which from 1978 to 1990 had a quarterly newsletter and was initially sponsored by the U.S. Department of Energy. The newsletter was initially called ATES Newsletter, and after BTES became a feasible technology it was changed to STES Newsletter.

Use of STES for small, passively heated buildings
Small passively heated buildings typically use the soil adjoining the building as a low-temperature seasonal heat store that, in the annual cycle, reaches a maximum temperature similar to the average annual air temperature, with the temperature drawn down for heating in colder months. Such systems are a feature of building design, as some simple but significant differences from 'traditional' buildings are necessary. At sufficient depth in the soil, the temperature is naturally stable within a narrow year-round range, provided the drawdown does not exceed the natural capacity for solar restoration of heat. Such storage systems operate within a narrow range of storage temperatures over the course of a year, as opposed to the other STES systems described above, for which large annual temperature differences are intended. Two basic passive solar building technologies were developed in the US during the 1970s and 1980s.
They use direct heat conduction to and from thermally isolated, moisture-protected soil as a seasonal storage method for space heating, with direct conduction as the heat return mechanism. In one method, "passive annual heat storage" (PAHS), the building's windows and other exterior surfaces capture solar heat, which is transferred by conduction through the floors, walls, and sometimes the roof into adjoining thermally buffered soil. When the interior spaces are cooler than the storage medium, heat is conducted back to the living space.

The other method, "annualized geothermal solar" (AGS), uses a separate solar collector to capture heat. The collected heat is delivered to a storage device (soil, gravel bed or water tank) either passively, by convection of the heat transfer medium (e.g. air or water), or actively, by pumping it. This method is usually implemented with a capacity designed for six months of heating.

A number of examples of the use of solar thermal storage from across the world include:
Suffolk One, a college in East Anglia, England, that uses a thermal collector of pipe buried in the bus turning area to collect solar energy, which is then stored in 18 boreholes for use in winter heating.
The Drake Landing Solar Community in Canada, which uses solar thermal collectors on the garage roofs of 52 homes, with the heat then stored in an array of deep boreholes. The ground can reach temperatures in excess of 70 °C, which is then used to heat the houses passively. The scheme has been running successfully since 2007.
In Brædstrup, Denmark, a field of solar thermal collectors is used to collect some 4,000,000 kWh/year, similarly stored in an array of deep boreholes.

Liquid engineering
Architect Matyas Gutai obtained an EU grant to construct a house in Hungary that uses extensive water-filled wall panels as heat collectors and reservoirs, with underground heat storage water tanks. The design uses microprocessor control.

Small buildings with internal STES water tanks
A number of homes and small apartment buildings have demonstrated combining a large internal water tank for heat storage with roof-mounted solar-thermal collectors. The achievable storage temperatures are sufficient to supply both domestic hot water and space heating. The first such house was MIT Solar House #1, in 1939. An eight-unit apartment building in Oberburg, Switzerland, was built in 1989 with three tanks that together store more heat than the building requires. Since 2011, that design has been replicated in new buildings. In Berlin, the "Zero Heating Energy House" was built in 1997 as part of the IEA Task 13 low energy housing demonstration project. It stores water inside a tank in the basement. A similar example was built in Ireland in 2009, as a prototype. The solar seasonal store consists of a water-filled tank installed in the ground and heavily insulated all around, to store heat from evacuated solar tubes during the year. The system was installed as an experiment to heat the world's first standardized pre-fabricated passive house, in Galway, Ireland. The aim was to find out whether this heat would be sufficient to eliminate the need for any electricity in the already highly efficient home during the winter months. Based on improvements in glazing, zero-heating buildings are now possible without seasonal energy storage.

Use of STES in greenhouses
STES is also used extensively for the heating of greenhouses. ATES is the kind of storage commonly in use for this application.
In summer, the greenhouse is cooled with ground water pumped from the "cold well" in the aquifer. The water is heated in the process and is returned to the "warm well" in the aquifer. When the greenhouse needs heat, such as to extend the growing season, water is withdrawn from the warm well, becomes chilled while serving its heating function, and is returned to the cold well. This is a very efficient system of free cooling, which uses only circulation pumps and no heat pumps.

Annualized geo-solar
Annualized geo-solar (AGS) enables passive solar heating in even cold, foggy north temperate areas. It uses the ground under or around a building as thermal mass to heat and cool the building. After a designed, conductive thermal lag of six months, the heat is returned to, or removed from, the inhabited spaces of the building. In hot climates, exposing the collector to the frigid night sky in winter can cool the building in summer. The six-month thermal lag is provided by about three meters (ten feet) of dirt. A six-meter-wide (20 ft) buried skirt of insulation around the building keeps rain and snowmelt out of the dirt, which is usually under the building. The dirt provides radiant heating and cooling through the floor or walls.

A thermal siphon moves the heat between the dirt and the solar collector. The solar collector may be a sheet-metal compartment in the roof, or a wide flat box on the side of a building or hill. The siphons may be made from plastic pipe and carry air. Using air prevents water leaks and water-caused corrosion. Plastic pipe does not corrode in damp earth, as metal ducts can.

AGS heating systems typically consist of:
a very well-insulated, energy-efficient, eco-friendly living space;
heat captured in the summer months from a sun-warmed sub-roof or attic space, a sunspace or greenhouse, a ground-based flat-plate thermosyphon collector, or another solar-heat collection device;
heat transported from the collection source into (typically) the earth mass under the living space for storage, this mass surrounded by a sub-surface perimeter "cape" or "umbrella" providing both insulation against easy heat loss back to the outdoor air and a barrier against moisture migration through the heat-storage mass;
a high-density floor whose thermal properties are designed to radiate heat back into the living space, but only after the proper sub-floor-insulation-regulated time lag;
a control scheme or system which activates (often PV-powered) fans and dampers when the warm-season air is sensed to be hotter in the collection area(s) than in the storage mass, or which allows the heat to be moved into the storage zone by passive convection (often using a solar chimney and thermally activated dampers).
Usually it requires several years for the storage earth-mass to fully preheat from the local at-depth soil temperature (which varies widely by region and site orientation) to an optimum autumn level at which it can provide up to 100% of the heating requirements of the living space through the winter. A rough estimate of the conduction lag through the soil cover is sketched below.

This technology continues to evolve, with a range of variations (including active-return devices) being explored. The listserve where this innovation is most often discussed is "Organic Architecture" at Yahoo. This system is almost exclusively deployed in northern Europe. One system has been built at Drake Landing in North America. A more recent system is a do-it-yourself energy-neutral home in progress in Collinsville, IL, that will rely solely on annualized solar for conditioning.
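The six-month lag through roughly three meters of soil quoted above can be sanity-checked against the textbook solution for an annual temperature wave in a semi-infinite solid: the wave decays as exp(−z/d) and lags in time by z/(d·ω), where d = √(2α/ω) is the damping depth, ω the annual angular frequency, and α the soil's thermal diffusivity. The diffusivities below are typical handbook ranges, not measurements from any particular installation; only fairly dry (low-diffusivity) soil approaches a six-month lag at 3 m, which is one reason the moisture-excluding insulation umbrella matters.

```python
import math

YEAR_S = 365.25 * 24 * 3600.0
OMEGA = 2.0 * math.pi / YEAR_S       # angular frequency of the annual wave

def annual_lag_days(z_m, alpha):
    """Phase lag (days) of the annual temperature wave at depth z_m in soil
    of thermal diffusivity alpha (m^2/s)."""
    d = math.sqrt(2.0 * alpha / OMEGA)   # damping depth
    return z_m / (d * OMEGA) / 86400.0

# Typical soil diffusivities span roughly 1e-7 (dry) to 1e-6 (wet) m^2/s.
for alpha in (1e-7, 2e-7, 5e-7):
    print(f"alpha = {alpha:.0e} m^2/s -> lag at 3 m: {annual_lag_days(3.0, alpha):5.0f} days")
```

With alpha = 1e-7 m^2/s the computed lag at 3 m is about 170 days, close to the six months assumed in AGS designs, while wetter soil cuts the lag to two or three months.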
See also Central solar heating District heating Geosolar Ice house (building) Ice pond List of energy storage projects Solar pond Solar thermal collector Thermal energy storage Zero energy building Zero heating building References External links DOE EERE Research Reports December 2005, Seasonal thermal store being fitted in an ENERGETIKhaus100 October 1998, Fujita Research report Earth Notes: Milk Tanker Thermal Store with Heat Pump Heliostats used for concentrating solar power (photos) Wofati Eco building with annualized thermal inertia Energy storage Energy conservation Energy recovery Sustainable energy Renewable energy Appropriate technology Geothermal energy Solar architecture Solar power
Seasonal thermal energy storage
[ "Engineering" ]
3,640
[ "Building engineering", "Civil engineering", "Architecture" ]
5,498,670
https://en.wikipedia.org/wiki/Power%20dividers%20and%20directional%20couplers
Power dividers (also power splitters and, when used in reverse, power combiners) and directional couplers are passive devices used mostly in the field of radio technology. They couple a defined amount of the electromagnetic power in a transmission line to a port enabling the signal to be used in another circuit. An essential feature of directional couplers is that they only couple power flowing in one direction. Power entering the output port is coupled to the isolated port but not to the coupled port. A directional coupler designed to split power equally between two ports is called a hybrid coupler. Directional couplers are most frequently constructed from two coupled transmission lines set close enough together such that energy passing through one is coupled to the other. This technique is favoured at the microwave frequencies where transmission line designs are commonly used to implement many circuit elements. However, lumped component devices are also possible at lower frequencies, such as the audio frequencies encountered in telephony. Also at microwave frequencies, particularly the higher bands, waveguide designs can be used. Many of these waveguide couplers correspond to one of the conducting transmission line designs, but there are also types that are unique to waveguide. Directional couplers and power dividers have many applications. These include providing a signal sample for measurement or monitoring, feedback, combining feeds to and from antennas, antenna beam forming, providing taps for cable distributed systems such as cable TV, and separating transmitted and received signals on telephone lines. Notation and symbols The symbols most often used for directional couplers are shown in figure 1. The symbol may have the coupling factor in dB marked on it. Directional couplers have four ports. Port 1 is the input port where power is applied. Port 3 is the coupled port where a portion of the power applied to port 1 appears. Port 2 is the transmitted port where the power from port 1 is outputted, less the portion that went to port 3. Directional couplers are frequently symmetrical so there also exists port 4, the isolated port. A portion of the power applied to port 2 will be coupled to port 4. However, the device is not normally used in this mode and port 4 is usually terminated with a matched load (typically 50 ohms). This termination can be internal to the device and port 4 is not accessible to the user. Effectively, this results in a 3-port device, hence the utility of the second symbol for directional couplers in figure 1. Symbols of the form $P_{ab}$ in this article have the meaning "parameter P at port a due to an input at port b". A symbol for power dividers is shown in figure 2. Power dividers and directional couplers are in all essentials the same class of device. Directional coupler tends to be used for 4-port devices that are only loosely coupled – that is, only a small fraction of the input power appears at the coupled port. Power divider is used for devices with tight coupling (commonly, a power divider will provide half the input power at each of its output ports – a 3 dB divider) and is usually considered a 3-port device. Parameters Common properties desired for all directional couplers are wide operational bandwidth, high directivity, and a good impedance match at all ports when the other ports are terminated in matched loads. Some of these, and other, general characteristics are discussed below. 
Coupling factor The coupling factor is defined as: $C_{3,1} = 10 \log \frac{P_3}{P_1}\ \mathrm{dB}$ where P1 is the input power at port 1 and P3 is the output power from the coupled port (see figure 1). The coupling factor represents the primary property of a directional coupler. Coupling factor is a negative quantity, it cannot exceed 0 dB for a passive device, and in practice does not exceed −3 dB since more than this would result in more power output from the coupled port than power from the transmitted port – in effect their roles would be reversed. Although a negative quantity, the minus sign is frequently dropped (but still implied) in running text and diagrams and a few authors go so far as to define it as a positive quantity. Coupling is not constant, but varies with frequency. While different designs may reduce the variance, a perfectly flat coupler theoretically cannot be built. Directional couplers are specified in terms of the coupling accuracy at the frequency band center. Loss The main line insertion loss from port 1 to port 2 (P1 – P2) is: Insertion loss: $L_{i(2,1)} = 10 \log \frac{P_2}{P_1}\ \mathrm{dB}$ Part of this loss is due to some power going to the coupled port and is called coupling loss and is given by: Coupling loss: $L_c = 10 \log \left(1 - 10^{\frac{C_{3,1}}{10}}\right)\ \mathrm{dB}$ The insertion loss of an ideal directional coupler will consist entirely of the coupling loss. In a real directional coupler, however, the insertion loss consists of a combination of coupling loss, dielectric loss, conductor loss, and VSWR loss. Depending on the frequency range, coupling loss becomes less significant for loose coupling, where the other losses constitute the majority of the total loss. The theoretical insertion loss (dB) vs coupling (dB) for a dissipationless coupler is shown in the graph of figure 3 and the table below. Isolation Isolation of a directional coupler can be defined as the difference in signal levels in dB between the input port and the isolated port when the two other ports are terminated by matched loads, or: Isolation: $I_{4,1} = 10 \log \frac{P_4}{P_1}\ \mathrm{dB}$ Isolation can also be defined between the two output ports. In this case, one of the output ports is used as the input; the other is considered the output port while the other two ports (input and isolated) are terminated by matched loads. Consequently: $I_{3,2} = 10 \log \frac{P_3}{P_2}\ \mathrm{dB}$ The isolation between the input and the isolated ports may be different from the isolation between the two output ports; for example, the isolation between ports 1 and 4 can differ from the isolation between ports 2 and 3. Isolation can be estimated from the coupling plus return loss. The isolation should be as high as possible. In actual couplers the isolated port is never completely isolated; some RF power will always be present. Waveguide directional couplers will have the best isolation. Directivity Directivity is directly related to isolation. It is defined as: Directivity: $D_{3,4} = 10 \log \frac{P_3}{P_4}\ \mathrm{dB}$ where P3 is the output power from the coupled port and P4 is the power output from the isolated port. The directivity should be as high as possible. The directivity is very high at the design frequency and is a more sensitive function of frequency because it depends on the cancellation of two wave components. Waveguide directional couplers will have the best directivity. Directivity is not directly measurable, and is calculated as the difference of the isolation and coupling measurements: $D_{3,4} = I_{4,1} - C_{3,1}$ when both are expressed as positive dB magnitudes; with the signed (negative) definitions given above, the equivalent formula is $D_{3,4} = C_{3,1} - I_{4,1}$. 
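These figures of merit are straightforward to compute from a set of port powers. The following Python sketch uses invented numbers for a notional 20 dB coupler purely to exercise the definitions above; no real device is being described:

```python
import math

# Illustrative port powers in watts for a notional 20 dB coupler;
# the values are made up to exercise the definitions above.
P1 = 1.0            # input power at port 1
P3 = 0.01           # coupled-port output (1% of input -> -20 dB coupling)
P4 = 1e-5           # isolated-port leakage
P2 = P1 - P3 - P4   # transmitted power, assuming a dissipationless coupler


def db(ratio: float) -> float:
    return 10 * math.log10(ratio)


coupling = db(P3 / P1)       # C_{3,1}: -20 dB
insertion = db(P2 / P1)      # main-line loss: close to 0 dB for loose coupling
isolation = db(P4 / P1)      # I_{4,1}: -50 dB
directivity = db(P3 / P4)    # D_{3,4}: +30 dB = coupling - isolation (signed)

print(f"coupling    {coupling:6.1f} dB")
print(f"insertion   {insertion:6.2f} dB")
print(f"isolation   {isolation:6.1f} dB")
print(f"directivity {directivity:6.1f} dB")
```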
S-parameters The S-matrix for an ideal (infinite isolation and perfectly matched) symmetrical directional coupler is given by $\mathbf{S} = \begin{pmatrix} 0 & \tau & \kappa & 0 \\ \tau & 0 & 0 & \kappa \\ \kappa & 0 & 0 & \tau \\ 0 & \kappa & \tau & 0 \end{pmatrix}$ where $\tau$ is the transmission coefficient and $\kappa$ is the coupling coefficient. In general, $\tau$ and $\kappa$ are complex, frequency dependent, numbers. The zeroes on the matrix main diagonal are a consequence of perfect matching – power input to any port is not reflected back to that same port. The zeroes on the matrix antidiagonal are a consequence of perfect isolation between the input and isolated port. For a passive lossless directional coupler, we must in addition have $|\tau|^2 + |\kappa|^2 = 1$, since the power entering the input port must all leave by one of the other two ports. Insertion loss is related to $\tau$ by $L_{i(2,1)} = 20 \log |\tau|\ \mathrm{dB}$; coupling factor is related to $\kappa$ by $C_{3,1} = 20 \log |\kappa|\ \mathrm{dB}$. Non-zero main diagonal entries are related to return loss, and non-zero antidiagonal entries are related to isolation by similar expressions. Some authors define the port numbers with ports 3 and 4 interchanged. This results in a scattering matrix that is no longer all-zeroes on the antidiagonal. Amplitude balance This terminology defines the power difference in dB between the two output ports of a hybrid. In an ideal hybrid circuit, the difference should be 0 dB. However, in a practical device the amplitude balance is frequency dependent and departs from the ideal difference. Phase balance The phase difference between the two output ports of a hybrid coupler should be 0°, 90°, or 180° depending on the type used. However, like amplitude balance, the phase difference is sensitive to the input frequency and typically will vary a few degrees. 
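The lossless condition above is equivalent to saying that the S-matrix is unitary. A short numerical check in Python, under the assumption (discussed later for coupled-line couplers) that $\tau$ is real and $\kappa$ imaginary; the coefficient values are illustrative:

```python
import numpy as np

# The ideal symmetrical directional coupler S-matrix from above, with an
# assumed real transmission coefficient t and imaginary coupling
# coefficient k chosen so that |t|^2 + |k|^2 = 1 (lossless).
t = np.sqrt(0.99)         # about -0.04 dB insertion loss
k = 1j * np.sqrt(0.01)    # -20 dB coupling

S = np.array([[0, t, k, 0],
              [t, 0, 0, k],
              [k, 0, 0, t],
              [0, k, t, 0]])

# A lossless network has a unitary S-matrix: S S^H = I.
print(np.allclose(S @ S.conj().T, np.eye(4)))  # True
```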
Transmission line types Directional couplers Coupled transmission lines The most common form of directional coupler is a pair of coupled transmission lines. They can be realised in a number of technologies including coaxial and the planar technologies (stripline and microstrip). An implementation in stripline is shown in figure 4 of a quarter-wavelength (λ/4) directional coupler. The power on the coupled line flows in the opposite direction to the power on the main line, hence the port arrangement is not the same as shown in figure 1, but the numbering remains the same. For this reason it is sometimes called a backward coupler. The main line is the section between ports 1 and 2 and the coupled line is the section between ports 3 and 4. Since the directional coupler is a linear device, the notations on figure 1 are arbitrary. Any port can be the input (an example is seen in figure 20), which will result in the directly connected port being the transmitted port, the adjacent port being the coupled port, and the diagonal port being the isolated port. On some directional couplers, the main line is designed for high power operation (large connectors), while the coupled port may use a small connector, such as an SMA connector. The internal load power rating may also limit operation on the coupled line. Accuracy of coupling factor depends on the dimensional tolerances for the spacing of the two coupled lines. For planar printed technologies this comes down to the resolution of the printing process which determines the minimum track width that can be produced and also puts a limit on how close the lines can be placed to each other. This becomes a problem when very tight coupling is required and couplers often use a different design. However, tightly coupled lines can be produced in air stripline which also permits manufacture by printed planar technology. In this design the two lines are printed on opposite sides of the dielectric rather than side by side. The coupling of the two lines across their width is much greater than the coupling when they are edge-on to each other. The λ/4 coupled-line design is good for coaxial and stripline implementations but does not work so well in the now popular microstrip format, although designs do exist. The reason for this is that microstrip is not a homogeneous medium – there are two different mediums above and below the transmission strip. This leads to transmission modes other than the usual TEM mode found in conductive circuits. The propagation velocities of even and odd modes are different leading to signal dispersion. A better solution for microstrip is a coupled line much shorter than λ/4, shown in figure 5, but this has the disadvantage of a coupling factor which rises noticeably with frequency. A variation of this design sometimes encountered has the coupled line a higher impedance than the main line such as shown in figure 6. This design is advantageous where the coupler is being fed to a detector for power monitoring. The higher impedance line results in a higher RF voltage for a given main line power making the work of the detector diode easier. The frequency range specified by manufacturers is that of the coupled line. The main line response is much wider: for instance a coupler specified for one frequency band might have a main line able to operate over a much wider one. The coupled response is periodic with frequency. For example, a λ/4 coupled-line coupler will have responses at nλ/4 where n is an odd integer. This periodic response becomes apparent when a short impulse on the main line is followed through the coupler. When the impulse on the main line reaches the coupled line, a signal of the same polarity is induced on the coupled line, similar to the response of an RC high-pass filter. This leads to two non-inverted pulses on the coupled line that travel in opposite directions to each other. When the pulse on the main line leaves the coupled line, an inverted signal is induced on the coupled line, triggering two inverted impulses that travel in opposite directions to each other. Both impulses on the coupled line that go in the same direction as the pulse on the main line are of opposite polarity. They cancel each other so there is no response at the exit of the coupled line in the forward direction. This is the decoupled port. The pulses on the coupled line that travel in the opposite direction to the pulse on the main line are also of opposite polarity to each other, but the second impulse is delayed by twice the delay of the parallel line. For a λ/4 coupled line the total delay length is λ/2 so the second signal is inverted and this gives a maximum response on the coupled port. A single λ/4 coupled section is good for bandwidths of less than an octave. To achieve greater bandwidths multiple λ/4 coupling sections are used. The design of such couplers proceeds in much the same way as the design of distributed-element filters. The sections of the coupler are treated as being sections of a filter, and by adjusting the coupling factor of each section the coupled port can be made to have any of the classic filter responses such as maximally flat (Butterworth filter), equal-ripple (Cauer filter), or a specified-ripple (Chebychev filter) response. 
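The periodic λ/4 behaviour described above can be reproduced numerically. This Python sketch evaluates the response of an ideal single-section coupled-line coupler; the closed-form expressions are a standard microwave-engineering result (found in common texts such as Pozar) assumed here rather than derived in this article, and the 20 dB coupling value is illustrative:

```python
import numpy as np

# Ideal single-section coupled-line coupler, standard textbook result:
#   S31 = j*C*sin(theta) / (sqrt(1-C^2)*cos(theta) + j*sin(theta))
#   S21 = sqrt(1-C^2)    / (sqrt(1-C^2)*cos(theta) + j*sin(theta))
# where C is the midband voltage coupling and theta the electrical length.
C = 10 ** (-20 / 20)   # -20 dB midband coupling as a voltage ratio

theta = np.linspace(0.01, 2 * np.pi - 0.01, 999)   # electrical length, rad
denom = np.sqrt(1 - C**2) * np.cos(theta) + 1j * np.sin(theta)
s31 = 1j * C * np.sin(theta) / denom               # coupled-port response
s21 = np.sqrt(1 - C**2) / denom                    # through-port response

coupled_db = 20 * np.log10(np.abs(s31))
imax = coupled_db.argmax()
print(f"peak coupling {coupled_db[imax]:.1f} dB "
      f"at theta = {np.degrees(theta[imax]):.0f} deg")     # ~ -20 dB at 90 deg
print(f"through response there {20 * np.log10(abs(s21[imax])):.3f} dB")
# The same peak repeats at 270 deg - the odd n*lambda/4 responses above.
```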
Ripple is the maximum variation in output of the coupled port in its passband, usually quoted as plus or minus a value in dB from the nominal coupling factor. It can be shown that coupled-line directional couplers have a purely real $\tau$ and a purely imaginary $\kappa$ at all frequencies. This leads to a simplification of the S-matrix and the result that the coupled port is always in quadrature phase (90°) with the output port. Some applications make use of this phase difference. Letting $\kappa = i|\kappa|$, the ideal case of lossless operation simplifies to $\tau^2 + |\kappa|^2 = 1$. Branch-line coupler The branch-line coupler consists of two parallel transmission lines physically coupled together with two or more branch lines between them. The branch lines are spaced λ/4 apart and represent sections of a multi-section filter design in the same way as the multiple sections of a coupled-line coupler except that here the coupling of each section is controlled with the impedance of the branch lines. The main and coupled lines are of the system impedance. The more sections there are in the coupler, the higher is the ratio of impedances of the branch lines. High impedance lines have narrow tracks and this usually limits the design to three sections in planar formats due to manufacturing limitations. A similar limitation applies for loose coupling factors; low coupling also requires narrow tracks. Coupled lines are a better choice when loose coupling is required, but branch-line couplers are good for tight coupling and can be used for hybrids. Branch-line couplers usually do not have such a wide bandwidth as coupled lines. This style of coupler is good for implementing in high-power, air dielectric, solid bar formats as the rigid structure is easy to mechanically support. Branch-line couplers can be used as crossovers as an alternative to air bridges, which in some applications cause an unacceptable amount of coupling between the lines being crossed. An ideal branch-line crossover theoretically has no coupling between the two paths through it. The design is a 3-branch coupler equivalent to two 90° hybrid couplers connected in cascade. The result is effectively a 0 dB coupler. It will cross over the inputs to the diagonally opposite outputs with a phase delay of 90° in both lines. Lange coupler The construction of the Lange coupler is similar to the interdigital filter with paralleled lines interleaved to achieve the coupling. It is used for strong couplings. 
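For the single-section 3 dB branch-line hybrid, the element values are a standard textbook result rather than something given in this article: series arms of $Z_0/\sqrt{2}$ and shunt branch arms of $Z_0$, all a quarter wavelength long. A trivial Python helper makes the arithmetic explicit:

```python
import math

# Classic single-section 3 dB branch-line (quadrature) hybrid values.
# These are standard textbook figures assumed here, not taken from the
# article itself.
def branch_line_3db(z0: float = 50.0) -> dict:
    return {
        "series_arm_ohms": round(z0 / math.sqrt(2), 1),   # 35.4 for 50 ohm
        "shunt_arm_ohms": z0,                             # 50.0
        "arm_length": "lambda/4 at the design frequency",
    }

print(branch_line_3db())
```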
Power dividers The earliest transmission line power dividers were simple T-junctions. These suffer from very poor isolation between the output ports – a large part of the power reflected back from port 2 finds its way into port 3. It can be shown that it is not theoretically possible to simultaneously match all three ports of a passive, lossless, reciprocal three-port, and poor isolation is unavoidable. It is, however, possible with four-ports and this is the fundamental reason why four-port devices are used to implement three-port power dividers: four-port devices can be designed so that power arriving at port 2 is split between port 1 and port 4 (which is terminated with a matching load) and none (in the ideal case) goes to port 3. The term hybrid coupler originally applied to 3 dB coupled-line directional couplers, that is, directional couplers in which the two outputs are each half the input power. This synonymously meant a quadrature coupler with outputs 90° out of phase. Now any matched 4-port with isolated arms and equal power division is called a hybrid or hybrid coupler. Other types can have different phase relationships. If 90°, it is a 90° hybrid, if 180°, a 180° hybrid and so on. In this article hybrid coupler without qualification means a coupled-line hybrid. Wilkinson power divider The Wilkinson power divider consists of two parallel uncoupled λ/4 transmission lines. The input is fed to both lines in parallel and the outputs are terminated with twice the system impedance bridged between them. The design can be realised in planar format but it has a more natural implementation in coax – in planar, the two lines have to be kept apart so that they do not couple but have to be brought together at their outputs so they can be terminated whereas in coax the lines can be run side-by-side relying on the coax outer conductors for screening. The Wilkinson power divider solves the matching problem of the simple T-junction: it has low VSWR at all ports and high isolation between output ports. The input and output impedances at each port are designed to be equal to the characteristic impedance of the microwave system. This is achieved by making the line impedance $\sqrt{2}$ times the system impedance – for a 50 Ω system the Wilkinson lines are approximately 70.7 Ω. Hybrid coupler Coupled-line directional couplers are described above. When the coupling is designed to be 3 dB it is called a hybrid coupler. The S-matrix for an ideal, symmetric hybrid coupler reduces to $\mathbf{S} = \frac{-1}{\sqrt{2}} \begin{pmatrix} 0 & 1 & i & 0 \\ 1 & 0 & 0 & i \\ i & 0 & 0 & 1 \\ 0 & i & 1 & 0 \end{pmatrix}$ The two output ports have a 90° phase difference ($-i$ to $-1$) and so this is a 90° hybrid. Hybrid ring coupler The hybrid ring coupler, also called the rat-race coupler, is a four-port directional coupler consisting of a 3λ/2 ring of transmission line with four lines at the intervals shown in figure 12. Power input at port 1 splits and travels both ways round the ring. At ports 2 and 3 the signal arrives in phase and adds whereas at port 4 it is out of phase and cancels. Ports 2 and 3 are in phase with each other, hence this is an example of a 0° hybrid. Figure 12 shows a planar implementation but this design can also be implemented in coax or waveguide. It is possible to produce a coupler with a coupling factor different from 3 dB by making each λ/4 section of the ring alternately low and high impedance, but for a 3 dB coupler the entire ring is made $\sqrt{2}$ times the port impedance – for a 50 Ω design the ring would be approximately 70.7 Ω. The S-matrix for this hybrid is given by $\mathbf{S} = \frac{-i}{\sqrt{2}} \begin{pmatrix} 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & -1 \\ 1 & 0 & 0 & 1 \\ 0 & -1 & 1 & 0 \end{pmatrix}$ The hybrid ring is not symmetric on its ports; choosing a different port as the input does not necessarily produce the same results. With port 1 or port 3 as the input the hybrid ring is a 0° hybrid as stated. However, using port 2 or port 4 as the input results in a 180° hybrid. This fact leads to another useful application of the hybrid ring: it can be used to produce sum (Σ) and difference (Δ) signals from two input signals as shown in figure 12. With inputs to ports 2 and 3, the Σ signal appears at port 1 and the Δ signal appears at port 4. 
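The sum-and-difference behaviour follows directly from the hybrid-ring S-matrix above. A small numerical sketch (ports 1–4 map to array indices 0–3; the input amplitudes are illustrative):

```python
import numpy as np

# Hybrid-ring (rat-race) S-matrix from above; ports 1..4 -> indices 0..3.
S = (-1j / np.sqrt(2)) * np.array([[0, 1, 1, 0],
                                   [1, 0, 0, -1],
                                   [1, 0, 0, 1],
                                   [0, -1, 1, 0]])

a = np.array([0, 1.0, 0.6, 0])   # illustrative inputs at ports 2 and 3
b = S @ a
print(round(abs(b[0]) * np.sqrt(2), 3))   # 1.6 -> sum signal at port 1
print(round(abs(b[3]) * np.sqrt(2), 3))   # 0.4 -> difference signal at port 4
```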
Multiple output dividers A typical power divider is shown in figure 13. Ideally, input power would be divided equally between the output ports. Dividers are made up of multiple couplers and, like couplers, may be reversed and used as multiplexers. The drawback is that for a four-channel multiplexer, the output consists of only 1/4 the power from each, and is relatively inefficient. The reason for this is that at each combiner half the input power goes to port 4 and is dissipated in the termination load. If the two inputs were coherent the phases could be so arranged that cancellation occurred at port 4 and then all the power would go to port 1. However, multiplexer inputs are usually from entirely independent sources and therefore not coherent. Lossless multiplexing can only be done with filter networks. Waveguide types Waveguide directional couplers Waveguide branch-line coupler The branch-line coupler described above can also be implemented in waveguide. Bethe-hole directional coupler One of the most common, and simplest, waveguide directional couplers is the Bethe-hole directional coupler. This consists of two parallel waveguides, one stacked on top of the other, with a hole between them. Some of the power from one guide is launched through the hole into the other. The Bethe-hole coupler is another example of a backward coupler. The concept of the Bethe-hole coupler can be extended by providing multiple holes. The holes are spaced λ/4 apart. The design of such couplers has parallels with the multiple-section coupled transmission lines. Using multiple holes allows the bandwidth to be extended by designing the sections as a Butterworth, Chebyshev, or some other filter class. The hole size is chosen to give the desired coupling for each section of the filter. Design criteria are to achieve a substantially flat coupling together with high directivity over the desired band. Riblet short-slot coupler The Riblet short-slot coupler is two waveguides side-by-side with the side-wall in common instead of the long side as in the Bethe-hole coupler. A slot is cut in the side-wall to allow coupling. This design is frequently used to produce a 3 dB coupler. Schwinger reversed-phase coupler The Schwinger reversed-phase coupler is another design using parallel waveguides, this time the long side of one is common with the short side-wall of the other. Two off-centre slots are cut between the waveguides, spaced λ/4 apart. The Schwinger is a backward coupler. This design has the advantage of a substantially flat directivity response and the disadvantage of a strongly frequency-dependent coupling, compared to the Bethe-hole coupler, which has little variation in coupling factor. Moreno crossed-guide coupler The Moreno crossed-guide coupler has two waveguides stacked one on top of the other like the Bethe-hole coupler, but at right angles to each other instead of parallel. Two off-centre holes, usually cross-shaped, are cut on the diagonal between the waveguides a set distance apart. The Moreno coupler is good for tight coupling applications. It is a compromise between the properties of the Bethe-hole and Schwinger couplers, with both coupling and directivity varying with frequency. Waveguide power dividers Waveguide hybrid ring The hybrid ring discussed above can also be implemented in waveguide. Magic tee Coherent power division was first accomplished by means of simple tee junctions. At microwave frequencies, waveguide tees have two possible forms – the E-plane and H-plane. These two junctions split power equally, but because of the different field configurations at the junction, the electric fields at the output arms are in phase for the H-plane tee and are 180° out of phase for the E-plane tee. The combination of these two tees to form a hybrid tee is known as the magic tee. The magic tee is a four-port component which can perform the vector sum (Σ) and difference (Δ) of two coherent microwave signals. 
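The multi-hole waveguide couplers described above can be viewed as small arrays: the forward-coupled waves from all holes add in phase, while the backward wave from each successive hole lags by twice the electrical hole spacing. The Python sketch below uses this standard array-factor picture, with binomial hole weights for a maximally flat response; it is an idealisation assumed for illustration, not a design tool:

```python
import numpy as np
from math import comb

# Idealised array-factor view of an N-hole waveguide coupler (standard
# textbook treatment, assumed here). Backward contributions from hole n
# lag by 2*beta*d*n; binomial weights give a maximally flat response.
N = 4
weights = [comb(N - 1, n) for n in range(N)]   # 1, 3, 3, 1


def backward_rejection_db(beta_d_deg: float) -> float:
    bd = np.radians(beta_d_deg)
    back = abs(sum(w * np.exp(-2j * bd * n) for n, w in enumerate(weights)))
    # The 1e-9 floor caps the result; ideally it is infinite at 90 deg.
    return 20 * np.log10(sum(weights) / max(back, 1e-9))


for deg in (60, 80, 90):   # hole spacing in electrical degrees
    print(f"beta*d = {deg:3d} deg -> directivity "
          f"{backward_rejection_db(deg):6.1f} dB")
# Rejection peaks where the spacing is a quarter guide wavelength (90 deg).
```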
Discrete element types Hybrid transformer The standard 3 dB hybrid transformer is shown in figure 16. Power at port 1 is split equally between ports 2 and 3, but in antiphase to each other. The hybrid transformer is therefore a 180° hybrid. The centre-tap is usually terminated internally but it is possible to bring it out as port 4; in which case the hybrid can be used as a sum and difference hybrid. However, port 4 presents a different impedance from the other ports and will require an additional transformer for impedance conversion if it is required to use this port at the same system impedance. Hybrid transformers are commonly used in telecommunications for 2-wire to 4-wire conversion. Telephone handsets include such a converter to convert the 2-wire line to the 4 wires from the earpiece and mouthpiece. Cross-connected transformers For lower frequencies a compact broadband implementation by means of RF transformers is possible. In figure 17 a circuit is shown which is meant for weak coupling and can be understood along these lines: a signal comes in on one line pair. One transformer reduces the voltage of the signal, the other reduces the current. Therefore, the impedance is matched. The same argument holds for every other direction of a signal through the coupler. The relative sign of the induced voltage and current determines the direction of the outgoing signal. The coupling is given by $C = 20 \log n\ \mathrm{dB}$ (the minus sign again implied), where n is the secondary-to-primary turns ratio. For a 3 dB coupling – that is, equal splitting of the signal between the transmitted port and the coupled port – $n = \sqrt{2}$, and the isolated port is terminated in twice the characteristic impedance – 100 Ω for a 50 Ω system. A power divider based on this circuit has its two outputs 180° out of phase with each other, compared to the λ/4 coupled lines, which have a 90° phase relationship. Resistive tee A simple tee circuit of resistors can be used as a power divider as shown in figure 18. This circuit can also be implemented as a delta circuit by applying the Y-Δ transform. The delta form uses resistors that are equal to the system impedance. This can be advantageous because precision resistors of the value of the system impedance are always available for most system nominal impedances. The tee circuit has the benefits of simplicity, low cost, and intrinsically wide bandwidth. It has two major drawbacks; first, the circuit will dissipate power since it is resistive: an equal split will result in 6 dB insertion loss instead of 3 dB. The second problem is that there is no directivity (0 dB), leading to very poor isolation between the output ports. The insertion loss is not such a problem for an unequal split of power: with a weakly coupled port 3, the insertion loss at port 2 can be made very small. Isolation can be improved at the expense of insertion loss at both output ports by replacing the output resistors with T pads. The isolation improvement is greater than the insertion loss added. 6 dB resistive bridge hybrid A true hybrid divider/coupler with, theoretically, infinite isolation and directivity can be made from a resistive bridge circuit. Like the tee circuit, the bridge has 6 dB insertion loss. It has the disadvantage that it cannot be used with unbalanced circuits without the addition of transformers; however, it is ideal for balanced telecommunication lines if the insertion loss is not an issue. The resistors in the bridge which represent ports are not usually part of the device (with the exception of port 4, which may well be left permanently terminated internally), these being provided by the line terminations. The device thus consists essentially of two resistors (plus the port 4 termination). 
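For the matched equal-split resistive tee described above, the textbook values are three arms of $Z_0/3$, with the output voltage falling to half the input – the source of the 6 dB figure quoted. A minimal Python sketch of the arithmetic (standard values, not from any datasheet):

```python
import math

# Matched equal-split resistive tee: three arms of Z0/3; each output
# voltage is half the input voltage, hence 6 dB per-port loss.
def resistive_tee(z0: float = 50.0):
    arm = z0 / 3.0
    loss_db = 20 * math.log10(0.5)
    return arm, loss_db

arm, loss = resistive_tee()
print(f"arm resistors {arm:.2f} ohm, per-port loss {loss:.1f} dB")
# -> arm resistors 16.67 ohm, per-port loss -6.0 dB
```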
Applications Monitoring The coupled output from the directional coupler can be used to monitor frequency and power level on the signal without interrupting the main power flow in the system (except for a power reduction – see figure 3). Making use of isolation If isolation is high, directional couplers are good for combining signals to feed a single line to a receiver for two-tone receiver tests. In figure 20, one signal enters port P3 and one enters port P2, while both exit port P1. The signal from port P3 to port P1 will experience a loss equal to the coupling factor, and the signal from port P2 to port P1 only the main-line insertion loss. The internal load on the isolated port will dissipate the signal losses from port P3 and port P2. If the isolators in figure 20 are neglected, the isolation measurement (port P2 to port P3) determines the amount of power from signal generator F2 that will be injected into signal generator F1. As the injection level increases, it may cause modulation of signal generator F1, or even injection phase locking. Because of the symmetry of the directional coupler, the reverse injection will happen with the same possible modulation problems of signal generator F2 by F1. Therefore, the isolators are used in figure 20 to effectively increase the isolation (or directivity) of the directional coupler. Consequently, the injection loss will be the isolation of the directional coupler plus the reverse isolation of the isolator. Hybrids Applications of the hybrid include monopulse comparators, mixers, power combiners, dividers, modulators, and phased-array radar antenna systems. Both in-phase devices (such as the Wilkinson divider) and quadrature (90°) hybrid couplers may be used for coherent power divider applications. An example of quadrature hybrids being used in a coherent power combiner application is given in the next section. An inexpensive version of the power divider is used in the home to divide cable TV or over-the-air TV signals to multiple TV sets and other devices. Multiport splitters with more than two output ports usually consist internally of a number of cascaded couplers. Domestic broadband internet service can be provided by cable TV companies (cable internet). The domestic user's internet cable modem is connected to one port of the splitter. Power combiners Since hybrid circuits are bi-directional, they can be used to coherently combine power as well as splitting it. In figure 21, an example is shown of a signal split up to feed multiple low-power amplifiers, then recombined to feed a single antenna with high power. The phases of the inputs to each power combiner are arranged such that the two inputs are 90° out of phase with each other. Since the coupled port of a hybrid combiner is 90° out of phase with the transmitted port, this causes the powers to add at the output of the combiner and to cancel at the isolated port: a representative example from figure 21 is shown in figure 22. Note that there is an additional fixed 90° phase shift to both ports at each combiner/divider which is not shown in the diagrams for simplicity. Applying in-phase power to both input ports would not get the desired result: the quadrature sum of the two inputs would appear at both output ports – that is, half the total power out of each. This approach allows the use of numerous less expensive and lower-power amplifiers in the circuitry instead of a single high-power TWT. 
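The combining behaviour of figure 22 can be checked against the ideal 90° hybrid S-matrix given earlier. Driving the two inputs in phase quadrature steers all the power to one output and none to the terminated isolated port; the amplitudes in this Python sketch are illustrative:

```python
import numpy as np

# Ideal 90-degree hybrid S-matrix from earlier; ports 1..4 -> indices 0..3.
S = (-1 / np.sqrt(2)) * np.array([[0, 1, 1j, 0],
                                  [1, 0, 0, 1j],
                                  [1j, 0, 0, 1],
                                  [0, 1j, 1, 0]])

# Two amplifier outputs of 1 W each; port 3 lags port 2 by 90 degrees.
a = np.array([0, 1.0, -1j * 1.0, 0])
b = S @ a
print(round(abs(b[0]) ** 2, 6))   # 2.0 -> both watts emerge at port 1
print(round(abs(b[3]) ** 2, 6))   # 0.0 -> nothing reaches the isolated port
```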
Yet another approach is to have each solid state amplifier (SSA) feed an antenna and let the power be combined in space or be used to feed a lens attached to an antenna. Phase difference The phase properties of a 90° hybrid coupler can be used to great advantage in microwave circuits. For example, in a balanced microwave amplifier the two input stages are fed through a hybrid coupler. The FET device normally has a very poor match and reflects much of the incident energy. However, since the devices are essentially identical the reflection coefficients from each device are equal. The reflected voltages from the FETs are in phase at the isolated port and 180° apart at the input port. Therefore, all of the reflected power from the FETs goes to the load at the isolated port and no power goes to the input port. This results in a good input match (low VSWR). If phase-matched lines are used for an antenna input to a 180° hybrid coupler as shown in figure 23, a null will occur directly between the antennas. To receive a signal in that position, one would have to either change the hybrid type or the line length. To reject a signal from a given direction, or create the difference pattern for a monopulse radar, this is a good approach. Phase-difference couplers can be used to create beam tilt in a VHF FM radio station, by delaying the phase to the lower elements of an antenna array. More generally, phase-difference couplers, together with fixed phase delays and antenna arrays, are used in beam-forming networks such as the Butler matrix, to create a radio beam in any prescribed direction. See also Star coupler Beam splitter References Bibliography Stephen J. Bigelow, Joseph J. Carr, Steve Winder, Understanding Telephone Electronics, Newnes, 2001. Geoff H. Bryant, Principles of Microwave Measurements, Institution of Electrical Engineers, 1993. Robert J. Chapuis, Amos E. Joel, 100 Years of Telephone Switching (1878–1978): Electronics, Computers, and Telephone Switching (1960–1985), IOS Press, 2003. Walter Y. Chen, Home Networking Basis, Prentice Hall Professional, 2003. R. Comitangelo, D. Minervini, B. Piovano, "Beam forming networks of optimum size and compactness for multibeam antennas at 900 MHz", IEEE Antennas and Propagation Society International Symposium 1997, vol. 4, pp. 2127–2130, 1997. Stephen A. Dyer, Survey of Instrumentation and Measurement, Wiley-IEEE, 2001. Kyōhei Fujimoto, Mobile Antenna Systems Handbook, Artech House, 2008. Preston Gralla, How the Internet Works, Que Publishing, 1998. Ian Hickman, Practical Radio-Frequency Handbook, Newnes, 2006. Apinya Innok, Peerapong Uthansakul, Monthippa Uthansakul, "Angular beamforming technique for MIMO beamforming system", International Journal of Antennas and Propagation, vol. 2012, iss. 11, December 2012. Thomas Koryu Ishii, Handbook of Microwave Technology: Components and Devices, Academic Press, 1995. Y. T. Lo, S. W. Lee, Antenna Handbook: Applications, Springer, 1993. Matthaei, George L.; Young, Leo and Jones, E. M. T., Microwave Filters, Impedance-Matching Networks, and Coupling Structures, McGraw-Hill, 1964. D. Morgan, A Handbook for EMC Testing and Measurement, IET, 1994. Antti V. Räisänen, Arto Lehto, Radio Engineering for Wireless Communication and Sensor Applications, Artech House, 2003. K. R. Reddy, S. B. Badami, V. Balasubramanian, Oscillations and Waves, Universities Press, 1994. Peter Vizmuller, RF Design Guide: Systems, Circuits, and Equations, Volume 1, Artech House, 1995. A. 
Franzen, Impulsantwort eines Leitungskopplers, CQ DL, vol. 7, pp. 28-31, 2020. Radio electronics Microwave technology Distributed element circuits
Power dividers and directional couplers
[ "Engineering" ]
7,229
[ "Radio electronics", "Electronic engineering", "Distributed element circuits" ]
4,116,487
https://en.wikipedia.org/wiki/PComb3H
pComb3H, a derivative of pComb3 optimized for expression of human fragments, is a phagemid used to express proteins such as zinc finger proteins and antibody fragments on phage pili for the purpose of phage display selection. For the purpose of phage production, it contains the bacterial ampicillin resistance gene (encoding β-lactamase), allowing the growth of only transformed bacteria. References Molecular biology Plasmids
PComb3H
[ "Chemistry", "Biology" ]
97
[ "Biochemistry", "Plasmids", "Bacteria", "Molecular biology" ]
4,116,762
https://en.wikipedia.org/wiki/Adenosine%20A1%20receptor
The adenosine A1 receptor (A1AR) is one member of the adenosine receptor group of G protein-coupled receptors with adenosine as endogenous ligand. Biochemistry A1 receptors are implicated in sleep promotion by inhibiting wake-promoting cholinergic neurons in the basal forebrain. A1 receptors are also present in smooth muscle throughout the vascular system. The adenosine A1 receptor has been found to be ubiquitous throughout the entire body. Signaling Activation of the adenosine A1 receptor by an agonist causes binding of Gi1/2/3 or Go protein. Binding of Gi1/2/3 causes an inhibition of adenylate cyclase and, therefore, a decrease in the cAMP concentration. An increase of the inositol trisphosphate/diacylglycerol concentration is caused by an activation of phospholipase C, whereas the elevated levels of arachidonic acid are mediated by DAG lipase, which cleaves DAG to form arachidonic acid. Several types of potassium channels are activated, but N-, P-, and Q-type calcium channels are inhibited. Effect This receptor has an inhibitory function on most of the tissues in which it rests. In the brain, it slows metabolic activity by a combination of actions. At the neuron's synapse, it reduces synaptic vesicle release. Ligands Caffeine, as well as theophylline, has been found to antagonize both A1 and A2A receptors in the brain. Agonists 2-Chloro-N(6)-cyclopentyladenosine (CCPA) N6-Cyclopentyladenosine N(6)-Cyclohexyladenosine Tecadenoson ((2R,3S,4R)-2-(hydroxymethyl)-5-(6-((R)-tetrahydrofuran-3-ylamino)-9H-purin-9-yl)-tetrahydrofuran-3,4-diol) Selodenoson ((2S,3S,4R)-5-(6-(cyclopentylamino)-9H-purin-9-yl)-N-ethyl-3,4-dihydroxytetrahydrofuran-2-carboxamide) Capadenoson (BAY68-4986) Benzyloxy-cyclopentyladenosine (BnOCPA), an A1R-selective agonist PAMs 2‑Amino-3-(4′-chlorobenzoyl)-4-substituted-5-arylethynyl thiophene (compound 4e) Antagonists Non-selective Caffeine Theophylline CGS-15943 Selective 8-Cyclopentyl-1,3-dimethylxanthine (CPX / 8-cyclopentyltheophylline) 8-Cyclopentyl-1,3-dipropylxanthine (DPCPX) 8-Phenyl-1,3-dipropylxanthine Bamifylline BG-9719 BG-9928 FK-453 FK-838 Rolofylline (KW-3902) N-0861 ISAM-CV202 In the heart In the heart, A1 receptors play roles in electrical pacing (chronotropy and dromotropy), fluid balance, local sympathetic regulation, and metabolism. When bound by adenosine, A1 receptors inhibit impulses generated in supraventricular tissue (SA node, AV node) and the Bundle of His/Purkinje system, leading to negative chronotropy (slowing of the heart rate). Specifically, A1 receptor activation leads to inactivation of the inwardly rectifying K+ current and inhibition of the inward Ca2+ current (ICa) and the 'funny' hyperpolarization-activated current (If). Adenosine agonism of A1ARs also inhibits release of norepinephrine from cardiac nerves. Norepinephrine is a positive chronotrope, inotrope, and dromotrope, through its agonism of β-adrenergic receptors on pacemaker cells and ventricular myocytes. Collectively, these mechanisms lead to a myocardial depressant effect by decreasing the conduction of electrical impulses and suppressing pacemaker cell function, resulting in a decrease in heart rate. This makes adenosine a useful medication for treating and diagnosing tachyarrhythmias, or excessively fast heart rates. This effect on the A1 receptor also explains why there is a brief moment of cardiac standstill when adenosine is administered as a rapid IV push during cardiac resuscitation. 
The rapid infusion causes a momentary myocardial stunning effect. In normal physiological states, this serves as a protective mechanism. However, in altered cardiac function, such as hypoperfusion caused by hypotension, heart attack, or cardiac arrest caused by nonperfusing bradycardias, adenosine has a negative effect on physiological functioning by preventing necessary compensatory increases in heart rate and blood pressure that attempt to maintain cerebral perfusion. Metabolically, A1AR activation by endogenous adenosine across the body reduces plasma glucose, lactate, and insulin levels; A2aR activation, however, increases glucose and lactate levels to an extent greater than the A1AR effect on glucose and lactate. Thus, intravascular administration of adenosine increases the amount of glucose and lactate available in the blood for cardiac myocytes. A1AR activation also partially inhibits glycolysis, slowing its rate to align with oxidative metabolism, which limits post-ischemic damage through reduced H+ generation. In the state of myocardial hypertrophy and remodeling, interstitial adenosine and the expression of the A1AR are both increased. After the transition to heart failure, however, overexpression of A1AR is no longer present. Excess A1AR expression can induce cardiomyopathy, cardiac dilatation, and cardiac hypertrophy. Cardiac failure may involve increased A1AR expression and decreased adenosine in physical models of cardiac overload and in dysfunction induced by TNFα. Heart failure often involves secretion of atrial natriuretic peptide to compensate for reduced renal perfusion and, thus, secretion of electrolytes. A1AR activation also increases secretion of atrial natriuretic peptide from atrial myocytes. References External links Adenosine receptors
Adenosine A1 receptor
[ "Chemistry" ]
1,425
[ "Adenosine receptors", "Signal transduction" ]
4,116,838
https://en.wikipedia.org/wiki/ETwinning
The eTwinning action is an initiative of the European Commission that aims to encourage European schools to collaborate using Information and Communication Technologies (ICT) by providing the necessary infrastructure (online tools, services, support). Teachers registered in the eTwinning action are enabled to form partnerships and develop collaborative, pedagogical school projects in any subject area, with the sole requirements being to employ ICT to develop their project and to collaborate with teachers from other European countries. Formation The project was founded in 2005 under the European Union's e-Learning programme and it has been integrated in the Lifelong Learning Programme since 2007. eTwinning is part of Erasmus+, the EU programme for education, training, and youth. History The eTwinning action was launched in January 2005. Its main objectives complied with the decision by the Barcelona European Council in March 2002 to promote school twinning as an opportunity for all students to learn and practice ICT skills and to promote awareness of the multicultural European model of society. More than 13,000 schools were involved in eTwinning within its first year. By 2008, over 50,000 teachers and 4,000 projects had been registered, and a new eTwinning platform was launched. As of January 2018, over 70,000 projects were running in classrooms across Europe. By 2021, more than 226,000 schools had taken part in this work. In early 2009, the eTwinning motto changed from "School partnerships in Europe" to "The community for schools in Europe". In 2022, eTwinning moved to a new platform. Participating countries Member States of the European Union are part of eTwinning: Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden and the Netherlands. Overseas territories and countries are also eligible. In addition, Albania, Bosnia and Herzegovina, North Macedonia, Iceland, Liechtenstein, Norway, Serbia and Turkey can also take part. Seven countries from the European neighbourhood (including Armenia, Azerbaijan, Georgia, Moldova and Ukraine) are also part of eTwinning via the eTwinning Plus scheme, as well as countries which are part of the Eastern Partnership, and Tunisia and Jordan (which are part of the Euro-Mediterranean Partnership, EUROMED). Operation The main concept behind eTwinning is that schools are paired with another school elsewhere in Europe and they collaboratively develop a project, also known as an eTwinning project. The two schools then communicate online (for example, by e-mail or video conferencing) to collaborate, share and learn from each other. eTwinning encourages and develops ICT skills as the main activities inherently use information technology. Being 'twinned' with a foreign school also encourages cross-cultural exchanges of knowledge, fosters students' intercultural awareness, and improves their communication skills. eTwinning projects can last from one week to several months, and can go on to create permanent relationships between schools. Primary and secondary schools within the European Union member states can participate, in addition to schools from Turkey, Norway and Iceland. In contrast with other European programmes, such as the Comenius programme, all communication is via the internet; therefore there is no need for grants. 
Along the same lines, face-to-face meetings between partner schools are not required, although they are not prohibited. European Schoolnet has been granted the role of Central Support Service (CSS) at the European level. eTwinning is also supported by a network of National Support Services. References Gilleran, A. (2007) eTwinning - A New Path for European Schools, eLearning Papers European Schoolnet (2007) Learning with eTwinning: A Handbook for Teachers 2007 European Schoolnet (2006) Learning with eTwinning European Schoolnet (2008) eTwinning: Adventures in language and culture Konstantinidis, A. (2012). Implementing Learning-Oriented Assessment in an eTwinning Online Course for Greek Teachers. MERLOT Journal of Online Learning and Teaching, 8(1), 45-62 External links The official portal for eTwinning (available in 28 languages) European Schoolnet German eTwinning website British Council eTwinning Greek eTwinning website eTwinning - Italy Spanish eTwinning website French eTwinning website Página Portuguesa do eTwinning Press Release for 2008 etwinning prizes Video clips eTwinning YouTube channel Education in the European Union Educational organizations based in Europe Educational projects Educational technology non-profits Information technology organizations based in Europe Information technology projects
ETwinning
[ "Technology", "Engineering" ]
942
[ "Information technology", "Information technology projects" ]
4,116,856
https://en.wikipedia.org/wiki/Adherence%20%28medicine%29
In medicine, patient compliance (also adherence, capacitance) describes the degree to which a patient correctly follows medical advice. Most commonly, it refers to medication or drug compliance, but it can also apply to other situations such as medical device use, self care, self-directed exercises, or therapy sessions. Both patient and health-care provider affect compliance, and a positive physician-patient relationship is the most important factor in improving compliance. Access to care plays a role in patient adherence, with greater wait times to access care contributing to greater absenteeism. The cost of prescription medication also plays a major role. Compliance can be confused with concordance, which is the process by which a patient and clinician make decisions together about treatment. Worldwide, non-compliance is a major obstacle to the effective delivery of health care. 2003 estimates from the World Health Organization indicated that only about 50% of patients with chronic diseases living in developed countries follow treatment recommendations, with particularly low rates of adherence to therapies for asthma, diabetes, and hypertension. Major barriers to compliance are thought to include the complexity of modern medication regimens, poor health literacy and not understanding treatment benefits, the occurrence of undiscussed side effects, poor treatment satisfaction, cost of prescription medicine, and poor communication or lack of trust between a patient and his or her health-care provider. Efforts to improve compliance have been aimed at simplifying medication packaging, providing effective medication reminders, improving patient education, and limiting the number of medications prescribed simultaneously. Studies show a great variation in terms of characteristics and effects of interventions to improve medicine adherence. It is still unclear how adherence can consistently be improved in order to promote clinically important effects. Terminology In medicine, compliance (synonymous with adherence, capacitance) describes the degree to which a patient correctly follows medical advice. Most commonly, it refers to medication or drug compliance, but it can also apply to medical device use, self care, self-directed exercises, or therapy sessions. Both patient and health-care provider affect compliance, and a positive physician-patient relationship is the most important factor in improving compliance. As of 2003, US health care professionals more commonly used the term "adherence" to a regimen rather than "compliance", because it has been thought to reflect better the diverse reasons for patients not following treatment directions in part or in full. Additionally, the term adherence includes the ability of the patient to take medications as prescribed by their physician with regards to the correct drug, dose, route, timing, and frequency. It has been noted that compliance may only refer to passively following orders. The term adherence is often used to imply a collaborative approach to decision-making and treatment between a patient and clinician. The term concordance has been used in the United Kingdom to involve a patient in the treatment process to improve compliance, and refers to a 2003 NHS initiative. In this context, the patient is informed about their condition and treatment options, involved in the decision as to which course of action to take, and partially responsible for monitoring and reporting back to the team. 
Informed intentional non-adherence is when the patient, after understanding the risks and benefits, chooses not to take the treatment. As of 2005, the preferred terminology remained a matter of debate. As of 2007, concordance has been used to refer specifically to patient adherence to a treatment regimen which the physician sets up collaboratively with the patient, to differentiate it from adherence to a physician-only prescribed treatment regimen. Despite the ongoing debate, adherence has been the preferred term for the World Health Organization, the American Pharmacists Association, and the U.S. National Institutes of Health Adherence Research Network. The Medical Subject Headings of the United States National Library of Medicine defines various terms with the words adherence and compliance. Patient Compliance and Medication Adherence are distinguished under the MeSH tree of Treatment Adherence and Compliance. Adherence factors An estimated half of those for whom treatment regimens are prescribed do not follow them as directed. Side effects Negative side effects of a medicine can influence adherence. Health literacy Cost and poor understanding of the directions for the treatment, referred to as 'health literacy', have been known to be major barriers to treatment adherence. There is robust evidence that education and physical health are correlated. Poor educational attainment is a key factor in the cycle of health inequalities. Educational qualifications help to determine an individual's position in the labour market, their level of income and therefore their access to resources. Literacy In 1999 one fifth of UK adults, nearly seven million people, had problems with basic skills, especially functional literacy and functional numeracy, described as: "The ability to read, write and speak in English, and to use mathematics at a level necessary to function at work and in society in general." This made it impossible for them to effectively take medication, read labels, follow drug regimes, and find out more. In 2003, 20% of adults in the UK had a long-standing illness or disability, and a national study for the UK Department of Health found more than one-third of people with poor or very poor health had literacy skills of Entry Level 3 or below. Low levels of literacy and numeracy were found to be associated with socio-economic deprivation. Adults in more deprived areas, such as the North East of England, performed at a lower level than those in less deprived areas such as the South East. Local authority tenants and those in poor health were particularly likely to lack basic skills. A 2002 analysis of over 100 UK local education authority areas found educational attainment at 15–16 years of age to be strongly associated with coronary heart disease and subsequent infant mortality. A study of the relationship of literacy to asthma knowledge revealed that 31% of asthma patients with the reading level of a ten-year-old knew they needed to see the doctor, even when they were not having an asthma attack, compared to 90% with a high school graduate reading level. Treatment cost In 2013 the US National Community Pharmacists Association sampled, over one month, 1,020 Americans above age 40 with an ongoing prescription to take medication for a chronic condition, and gave a grade of C+ on adherence. In 2009, non-adherence contributed to an estimated cost of $290 billion annually. In 2012, an increase in patients' share of medication costs was found to be associated with low adherence to medication. 
The United States is among the countries with the highest prices of prescription drugs, mainly attributed to the government's failure to negotiate lower prices with monopolies in the pharmaceutical industry, especially for brand-name drugs. In order to manage medication costs, many US patients on long-term therapies fail to fill their prescriptions, or skip or reduce doses. According to a Kaiser Family Foundation survey in 2015, about three quarters (73%) of the public think drug prices are unreasonable and blame pharmaceutical companies for setting prices so high. In the same report, half of the public reported that they are taking prescription drugs and a "quarter (25%) of those currently taking prescription medicine report they or a family member have not filled a prescription in the past 12 months due to cost, and 18 percent report cutting pills in half or skipping doses". In a 2009 comparison, only 8% of Canadian adults reported having skipped doses or not filled their prescriptions due to the cost of their prescribed medications. Age The elderly often have multiple health conditions, and around half of all NHS medicines are prescribed for people over retirement age, despite that group representing only about 20% of the UK population. The National Service Framework on the care of older people highlighted the importance of taking and effectively managing medicines in this population. However, elderly individuals may face challenges, including multiple medications with frequent dosing, and potentially decreased dexterity or cognitive functioning. Patient knowledge is a concern that has been observed. In 1999 Cline et al. identified several gaps in knowledge about medication in elderly patients discharged from hospital. Despite receiving written and verbal information, 27% of older people discharged after heart failure were classed as non-adherent within 30 days. Half the patients surveyed could not recall the dose of the medication that they were prescribed and nearly two-thirds did not know what time of day to take them. A 2001 study by Barat et al. evaluated the medical knowledge and factors of adherence in a population of 75-year-olds living at home. They found that 40% of elderly patients did not know the purpose of their regimen and only 20% knew the consequences of non-adherence. Comprehension, polypharmacy, living arrangement, multiple doctors, and use of compliance aids were correlated with adherence. In children with asthma, self-management compliance is critical and co-morbidities have been noted to affect outcomes; in 2013 it was suggested that electronic monitoring may help adherence. Ethnicity People of different ethnic backgrounds have unique adherence issues through literacy, physiology, culture or poverty. There are few published studies on adherence in medicine taking in ethnic minority communities. Ethnicity and culture influence some health-determining behaviour, such as participation in screening programmes and attendance at follow-up appointments. Prieto et al. emphasised the influence of ethnic and cultural factors on adherence. They pointed out that groups differ in their attitudes, values and beliefs about health and illness. This view could affect adherence, particularly with preventive treatments and medication for asymptomatic conditions. Additionally, some cultures fatalistically attribute their good or poor health to their god(s), and attach less importance to self-care than others. 
Measures of adherence may need to be modified for different ethnic or cultural groups. In some cases, it may be advisable to assess patients from a cultural perspective before making decisions about their individual treatment. Recent studies have shown that black patients and those with non-private insurance are more likely to be labeled as non-adherent. The increased risk is observed even in patients with a controlled A1c, and after controlling for other socioeconomic factors.

Prescription fill rates

Not all patients will fill the prescription at a pharmacy. In a 2010 U.S. study, 20–30% of prescriptions were never filled at the pharmacy. One reason people do not fill prescriptions is the cost of the medication: a US nationwide survey of 1,010 adults in 2001 found that 22% chose not to fill prescriptions because of the price, which is similar to the 20–30% overall rate of unfilled prescriptions. Other factors are doubting the need for medication, or preference for self-care measures other than medication. Convenience, side effects and lack of demonstrated benefit are also factors.

Medication Possession Ratio

Prescription medical claims records can be used to estimate medication adherence based on fill rate. Patients can be routinely defined as 'adherent patients' if the amount of medication furnished is at least 80%, based on days' supply of medication divided by the number of days the patient should be consuming the medication. This percentage is called the medication possession ratio (MPR). Work published in 2013 suggested that a medication possession ratio of 90% or above may be a better threshold for deeming consumption 'adherent'. Two forms of MPR can be calculated, fixed and variable, and calculating either is relatively straightforward: the variable MPR (VMPR) is the number of days' supply divided by the number of elapsed days, including the supply of the last prescription; for the fixed MPR (FMPR) the calculation is similar, but the denominator is the number of days in a year, whilst the numerator is constrained to be the number of days' supply within the year that the patient has been prescribed. (A short illustrative calculation is given at the end of this section.) For medication in tablet form it is relatively straightforward to calculate the number of days' supply from a prescription. Some medications are less straightforward, because a prescription of a given number of doses may correspond to a variable number of days' supply; for example, with preventative corticosteroid inhalers prescribed for asthma, the number of inhalations to be taken daily may vary between individuals based on the severity of the disease.

Contextual factors

Contextual factors, along with intrapersonal circumstances such as mental states, affect decisions, and can accurately predict decisions where most contextual information is identified. General compliance with recommendations to follow isolation is influenced by beliefs such as the belief that taking health precautions protects against infection, perceived vulnerability to catching COVID-19, and trust in the government. Mobility data show greater compliance with quarantine regulations in European regions where the level of trust in policymakers is high. In addition, perceived infectiousness of COVID-19 is a strong predictor of rule compliance, such that the more contagious people think COVID-19 is, the less willing they are to take social distancing measures, while a sense of duty and fear of the virus contribute to staying at home.
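Returning to the possession-ratio arithmetic defined above, a short worked sketch follows. The Python below is purely illustrative: the fill dates, the 80% adherence threshold, and the capping of the fixed-period numerator are assumptions chosen for the example, not features of any published standard.

```python
from datetime import date

# Hypothetical claims records: (fill date, days' supply), sorted by date.
fills = [(date(2023, 1, 1), 30), (date(2023, 2, 5), 30), (date(2023, 3, 20), 30)]

def variable_mpr(fills):
    """VMPR: total days' supply divided by the days elapsed from the first
    fill through the end of the last fill's supply."""
    first_date = fills[0][0]
    last_date, last_supply = fills[-1]
    elapsed_days = (last_date - first_date).days + last_supply
    return sum(supply for _, supply in fills) / elapsed_days

def fixed_mpr(fills, days_in_year=365):
    """FMPR: days' supply dispensed in the year divided by 365; the
    numerator is capped so the ratio cannot exceed 1."""
    total_supply = sum(supply for _, supply in fills)
    return min(total_supply, days_in_year) / days_in_year

vmpr = variable_mpr(fills)
label = "adherent" if vmpr >= 0.8 else "non-adherent"
print(f"VMPR = {vmpr:.2f} ({label} at the traditional 80% threshold)")
print(f"FMPR = {fixed_mpr(fills):.2f} (low here only because the example covers part of a year)")
```

Note how the variable form tracks the patient's own refill window, while the fixed form measures coverage against a full calendar year.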
People might not leave their homes because they trust the regulations to be effective, or because they place their trust in a higher power; individuals who trust others demonstrate more compliance than those who do not. Compliant individuals see protective measures as effective, while non-compliant people see them as problematic.

Course completion

Once started, patients seldom follow treatment regimens as directed, and seldom complete the course of treatment. In the case of hypertension, 50% of patients completely drop out of care within a year of diagnosis, and persistence with first-line single antihypertensive drugs is extremely low during the first year of treatment. As far as lipid-lowering treatment is concerned, only one third of patients are compliant with at least 90% of their treatment. Intensification of patient care interventions (e.g. electronic reminders, pharmacist-led interventions, healthcare professional education of patients) improves patient adherence rates to lipid-lowering medicines, as well as total cholesterol and LDL-cholesterol levels. The World Health Organization (WHO) estimated in 2003 that only 50% of people complete long-term therapy for chronic illnesses as prescribed, which puts patient health at risk. For example, in 2002 statin compliance dropped to between 25% and 40% after two years of treatment, with patients taking statins for what they perceive to be preventative reasons being unusually poor compliers. A wide variety of packaging approaches have been proposed to help patients complete prescribed treatments. These include formats that make the dosage regimen easier to remember, as well as different labels for increasing patient understanding of directions. For example, medications are sometimes packed with reminder systems for the day and/or time of the week to take the medicine. Some evidence shows that reminder packaging may improve clinical outcomes such as blood pressure. A not-for-profit organisation called the Healthcare Compliance Packaging Council of Europe (HCPC-Europe) was set up by the pharmaceutical industry and the packaging industry, together with representatives of European patient organisations. The mission of HCPC-Europe is to assist and to educate the healthcare sector in the improvement of patient compliance through the use of packaging solutions. A variety of packaging solutions have been developed by this collaboration.

World Health Organization barriers to adherence

The World Health Organization (WHO) groups barriers to medication adherence into five categories: health care team and system-related factors, social and economic factors, condition-related factors, therapy-related factors, and patient-related factors.

Improving adherence rates

Role of health care providers

Health care providers play a great role in improving adherence issues. Providers can improve patient interactions through motivational interviewing and active listening. Health care providers should work with patients to devise a plan that is meaningful for the patient's needs. A relationship that offers trust, cooperation, and mutual responsibility can greatly improve the connection between provider and patient for a positive impact.
The wording that health care professionals use when sharing health advice may have an impact on adherence and health behaviours; however, further research is needed to understand whether positive framing (e.g., "the chance of surviving is improved if you go for screening") or negative framing (e.g., "the chance of dying is higher if you do not go for screening") is more effective for specific conditions.

Technology

In 2012 it was predicted that as telemedicine technology improves, physicians will have better capabilities to remotely monitor patients in real time and to communicate recommendations and medication adjustments using personal mobile devices, such as smartphones, rather than waiting until the next office visit. Medication Event Monitoring Systems (MEMS), in the form of smart medicine bottle tops, smart pharmacy vials or smart blister packages, as used in clinical trials and other applications where exact compliance data are required, work without any patient input, recording the time and date the bottle or vial was accessed, or the medication removed from a blister package. The data can be read via proprietary readers or NFC-enabled devices such as smartphones or tablets. A 2009 study stated that such devices can help improve adherence. More recently, a 2016 scoping review suggested that, in comparison to MEMS, median medication adherence was grossly overestimated: by 17% using self-report, by 8% using pill counts and by 6% using rating scales as alternative methods for measuring medication adherence. The effectiveness of two-way email communication between health care professionals and their patients has not been adequately assessed.

Mobile phones

An estimated 5.15 billion people, equating to 67% of the global population, have a mobile device, and this number is growing. Mobile phones have been used in healthcare, fostering the term mHealth, and have also played a role in improving adherence to medication. For example, text messaging has been used to remind patients with chronic conditions such as asthma and hypertension to take their medication. Other examples include the use of smartphones for synchronous and asynchronous Video Observed Therapy (VOT) as a replacement for the currently resource-intensive standard of Directly Observed Therapy (DOT), recommended by the WHO for tuberculosis management. Other mHealth interventions for improving adherence to medication include smartphone applications, voice recognition in interactive phone calls and telepharmacy. Some results show that the use of mHealth improves adherence to medication and is cost-effective, though some reviews report mixed results. Studies show that using mHealth to improve adherence to medication is feasible and accepted by patients. Specific mobile applications might also support adherence. mHealth interventions have also been used alongside other telehealth interventions such as wearable wireless pill sensors, smart pillboxes and smart inhalers.

Forms of medication

Depot injections need to be taken less regularly than other forms of medication, and because a medical professional is involved in administering the drug, they can increase compliance. Depot preparations are used for contraception and for antipsychotic medication in the treatment of schizophrenia and bipolar disorder.

Coercion

Sometimes drugs are given involuntarily to ensure compliance.
This can occur if an individual has been involuntarily committed or is subject to an outpatient commitment order, under which failure to take medication will result in detention and involuntary administration of treatment. It can also occur if a patient is not deemed to have the mental capacity to consent to treatment in an informed way.

Health and disease management

A WHO study estimates that only 50% of patients with chronic diseases in developed countries follow treatment recommendations. Asthma non-compliance (28–70% worldwide) increases the risk of severe asthma attacks requiring preventable ER visits and hospitalisations. Compliance issues with asthma can be caused by a variety of reasons, including difficult inhaler use, side effects of medications, and cost of the treatment.

Cancer

200,000 new cases of cancer are diagnosed each year in the UK. One in three adults in the UK will develop cancer that can be life-threatening, and 120,000 people will be killed by their cancer each year, accounting for 25% of all deaths in the UK. However, while 90% of cancer pain can be effectively treated, only 40% of patients adhere to their medicines due to poor understanding. A 2016 systematic review found that a large proportion of patients struggle to take their oral antineoplastic medications as prescribed. This presents opportunities and challenges for patient education, reviewing and documenting treatment plans, and patient monitoring, especially with the increase in patients receiving cancer treatments at home. The reasons for non-adherence given by patients were as follows:
The poor quality of information available to them about their treatment
A lack of knowledge as to how to raise concerns whilst on medication
Concerns about unwanted effects
Issues about remembering to take medication
Partridge et al. (2002) identified evidence showing that adherence rates in cancer treatment are variable, and sometimes surprisingly poor; their findings were summarised in a table drawing partly on data from medication event monitoring systems (medication dispensers containing a microchip that records when the container is opened). In 1998, trials evaluating tamoxifen as a preventative agent showed dropout rates of around one third:
36% in the Royal Marsden Tamoxifen Chemoprevention Study of 1998
29% in the National Surgical Adjuvant Breast and Bowel Project of 1998
In March 1999, adherence in the International Breast Cancer Intervention Study, evaluating the effect of a daily dose of tamoxifen for five years in at-risk women aged 35–70 years, was:
90% after one year
83% after two years
74% after four years

Diabetes

Patients with diabetes are at high risk of developing coronary heart disease and usually have related conditions that make their treatment regimens even more complex, such as hypertension, obesity and depression, which are also characterised by poor rates of adherence. Diabetes non-compliance is 98% in the US and the principal cause of complications related to diabetes, including nerve damage and kidney failure. Among patients with type 2 diabetes, adherence was found in less than one third of those prescribed sulphonylureas and/or metformin; patients taking both drugs achieved only 13% adherence. Other factors that drive medicine adherence rates are perceived self-efficacy and risk assessment in managing diabetes symptoms, and decision making surrounding rigorous medication regimens.
Perceived control and self-efficacy not only significantly correlate with each other, but also with diabetes-distress psychological symptoms, and have been directly related to better medication adherence outcomes. Various external factors also impact diabetic patients' self-management behaviours, including health-related knowledge and beliefs, problem-solving skills, and self-regulatory skills, all of which affect perceived control over diabetic symptoms. Additionally, it is crucial to understand the decision-making processes that drive diabetics in their choices surrounding the risks of not adhering to their medication. While patient decision aids (PtDAs), sets of tools used to help individuals engage with their clinicians in making decisions about their healthcare options, have been useful in decreasing decisional conflict, improving transfer of diabetes treatment knowledge, and achieving greater risk perception for disease complications, their efficacy in medication adherence has been less substantial. The risk perception and decision-making processes surrounding diabetes medication adherence are therefore multi-faceted and complex, with socioeconomic implications as well. For example, immigrant health disparities in diabetic outcomes have been associated with a lower risk perception among foreign-born adults in the United States compared to their native-born counterparts, which leads to fewer of the protective lifestyle and treatment changes crucial for combatting diabetes. Additionally, variations in patients' perceptions of time (taking rigorous, costly medication in the present for abstract future benefits can conflict with patients' preferences for immediate over delayed gratification) may also have severe consequences for adherence, as diabetes medication often requires systematic, routine administration.

Hypertension

Hypertension non-compliance (93% in the US, 70% in the UK) is the main cause of uncontrolled hypertension-associated heart attack and stroke. In 1975, only about 50% of patients took at least 80% of their prescribed anti-hypertensive medications. As a result of poor compliance, 75% of patients with a diagnosis of hypertension do not achieve optimum blood-pressure control.

Mental illness

A 2003 review found that 41–59% of patients prescribed antipsychotics took the medication prescribed to them infrequently or not at all. Sometimes non-adherence is due to lack of insight, but psychotic disorders can be episodic, and antipsychotics are then used prophylactically to reduce the likelihood of relapse rather than to treat symptoms; in some cases individuals will have no further episodes despite not using antipsychotics. A 2006 review investigated the effects of compliance therapy for schizophrenia and found no clear evidence to suggest that it was beneficial for people with schizophrenia and related syndromes.

Rheumatoid arthritis

A longitudinal study has shown that adherence with treatment is about 60%. The predictors of adherence were found to be of a more psychological, communicative and logistical nature than sociodemographic or clinical.
The following factors were identified as independent predictors of adherence:
the type of treatment prescribed
agreement on treatment
having received information on treatment adaptation
clinician perception of patient trust

See also
Drug withdrawal
Patient participation

References

External links
Adherence to long-term therapies, a report from the World Health Organization
Technology report on NFC enabled smart medication packages

Medical terminology Clinical pharmacology Pharmacy Health care quality
Schottky defect
A Schottky defect is an excitation of the site occupations in a crystal lattice leading to point defects, named after Walter H. Schottky. In ionic crystals, this defect forms when oppositely charged ions leave their lattice sites and become incorporated, for instance, at the surface, creating oppositely charged vacancies. These vacancies are formed in stoichiometric units, to maintain an overall neutral charge in the ionic solid.

Definition

Schottky defects consist of unoccupied anion and cation sites in a stoichiometric ratio. For a simple ionic crystal of type A−B+, a Schottky defect consists of a single anion vacancy (A) and a single cation vacancy (B), written $V_{\mathrm{A}}^{\bullet} + V_{\mathrm{B}}'$ in Kröger–Vink notation. For a more general crystal with formula AxBy, a Schottky cluster is formed of x vacancies of A and y vacancies of B, so that the overall stoichiometry and charge neutrality are conserved. Conceptually, a Schottky defect is generated if the crystal is expanded by one unit cell whose a priori empty sites are filled by atoms that diffused out of the interior, thus creating vacancies in the crystal. Schottky defects are observed most frequently when there is a small difference in size between the cations and anions that make up a material.

Illustration

The chemical equations in Kröger–Vink notation for the formation of Schottky defects in TiO2 and BaTiO3 are:

$\varnothing \rightleftharpoons V_{\mathrm{Ti}}'''' + 2\,V_{\mathrm{O}}^{\bullet\bullet}$

$\varnothing \rightleftharpoons V_{\mathrm{Ba}}'' + V_{\mathrm{Ti}}'''' + 3\,V_{\mathrm{O}}^{\bullet\bullet}$

This can also be illustrated schematically with a two-dimensional diagram of a sodium chloride crystal lattice.

Bound and dilute defects

The vacancies that make up a Schottky defect have opposite charges, so they experience a mutually attractive Coulomb force, and at low temperature they may form bound clusters. The degree to which Schottky defects affect the lattice depends on temperature: at higher temperatures, multiple anion vacancies can be observed around a cation vacancy, and anion vacancies located near a cation vacancy hinder its displacement. Bound clusters are typically less mobile than their dilute counterparts, as multiple species need to move in a concerted motion for the whole cluster to migrate. This has important implications for numerous functional ceramics used in a wide range of applications, including ion conductors, solid oxide fuel cells and nuclear fuel.

Examples

This type of defect is typically observed in highly ionic compounds, highly coordinated compounds, and compounds where there is only a small difference in the sizes of the cations and anions of which the lattice is composed. Typical salts where Schottky disorder is observed are NaCl, KCl, KBr, CsCl and AgBr. For engineering applications, Schottky defects are important in oxides with the fluorite structure, such as CeO2, cubic ZrO2, UO2, ThO2 and PuO2.

Effect on density

Typically, the formation volume of a vacancy is positive: the lattice contraction due to the strains around the defect does not make up for the expansion of the crystal due to the additional number of sites. Thus, the density of the solid crystal is less than the theoretical density of the material.

See also
Frenkel defect
Wigner effect
Crystallographic defects

References
Kovalenko, M. A., and A. Ya. Kupryazhkin. "States of the Schottky Defect in Uranium Dioxide and Other Fluorite Type Crystals: Molecular Dynamics Study." Journal of Alloys and Compounds, vol. 645, 1 Oct. 2015, pp. 405–413, https://doi.org/10.1016/j.jallcom.2015.05.111.

Notes Crystallographic defects
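As a rough illustration of the temperature dependence of Schottky disorder and its effect on density, a minimal sketch follows. It assumes the standard Boltzmann expression for the equilibrium vacancy-pair fraction, n/N = exp(−ΔH/(2 kB T)), and an illustrative pair-formation enthalpy of about 2.4 eV often quoted for NaCl; both are background assumptions, not values taken from the article above.

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def schottky_pair_fraction(delta_h_ev: float, temperature_k: float) -> float:
    """Equilibrium fraction of Schottky vacancy pairs, n/N = exp(-dH/(2*kB*T)).
    The factor of 2 arises because each Schottky defect creates two vacancies."""
    return math.exp(-delta_h_ev / (2 * K_B_EV * temperature_k))

# ~2.4 eV is used here purely as an illustrative assumption for NaCl.
for temp in (300, 600, 900):
    frac = schottky_pair_fraction(2.4, temp)
    # Each pair removes one formula unit from the occupied lattice, so to
    # first order the density deficit is of the same order as this fraction.
    print(f"T = {temp:3d} K: vacancy-pair fraction ~ {frac:.2e}")
```

The steep growth of the fraction with temperature is why the density deficit, though negligible at room temperature, becomes measurable near the melting point.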
Rydberg ionization spectroscopy
Rydberg ionization spectroscopy is a spectroscopy technique in which multiple photons are absorbed by an atom, causing the removal of an electron to form an ion.

Resonance ionization spectroscopy

The ionization threshold energy of atoms and small molecules is typically larger than the photon energies that are most easily available experimentally. However, it can be possible to span this ionization threshold energy if the photon energy is resonant with an intermediate electronically excited state. While it is often possible to observe the lower Rydberg levels in conventional spectroscopy of atoms and small molecules, Rydberg states are even more important in laser ionization experiments. Laser spectroscopic experiments often involve ionization through a photon energy resonance at an intermediate level, with an unbound final electron state and an ionic core. On resonance for phototransitions permitted by selection rules, the intensity of the laser in combination with the excited state lifetime makes ionization an expected outcome. This RIS approach and its variations permit sensitive detection of specific species.

Low Rydberg levels and resonance enhanced multiphoton ionization

High photon intensity experiments can involve multiphoton processes with the absorption of integer multiples of the photon energy. In experiments that involve a multiphoton resonance, the intermediate is often a Rydberg state, and the final state is often an ion. The initial state of the system, photon energy, angular momentum and other selection rules can help in determining the nature of the intermediate state. This approach is exploited in resonance enhanced multiphoton ionization (REMPI) spectroscopy, whose development was pioneered by Compton and Johnson. An advantage of this spectroscopic technique is that the ions can be detected with almost complete efficiency and even resolved by mass. Additional information can be gained by examining the energy of the liberated photoelectron in these experiments.

Near-threshold Rydberg levels

The same approach that produces an ionization event can be used to access the dense manifold of near-threshold Rydberg states with laser experiments. These experiments often involve one laser operating at one wavelength to access the intermediate Rydberg state and a second laser to access the near-threshold Rydberg state region. Because of the photoabsorption selection rules, these Rydberg electrons are expected to be in highly elliptical angular momentum states. It is the Rydberg electrons excited to nearly circular angular momentum states that are expected to have the longest lifetimes. The conversion between a highly elliptical and a nearly circular near-threshold Rydberg state might happen in several ways, including through encounters with small stray electric fields.

Zero electron kinetic energy spectroscopy

Zero electron kinetic energy (ZEKE) spectroscopy was developed with the idea of collecting only the resonance ionization photoelectrons that have extremely low kinetic energy. The technique involves waiting for a period of time after a resonance ionization experiment and then pulsing an electric field to draw the lowest-energy photoelectrons into a detector. Typically, ZEKE experiments utilize two different tunable lasers. One laser photon energy is tuned to be resonant with the energy of an intermediate state (possibly resonant with an excited state at a multiphoton transition).
The other photon energy is tuned to be close to the ionization threshold energy. The technique worked extremely well and demonstrated energy resolution significantly better than the laser bandwidth. It turned out, however, that it was not photoelectrons that were being detected in ZEKE: the delay between the laser and the electric field pulse selects the longest-lived, most nearly circular Rydberg states closest to the energy of the ion core. The population distribution of surviving long-lived near-threshold Rydberg states is close to the laser energy bandwidth. The electric field pulse Stark-shifts the near-threshold Rydberg states and vibrational autoionization occurs. ZEKE has provided a significant advance in the study of the vibrational spectroscopy of molecular ions. Schlag, Peatman and Müller-Dethlefs originated ZEKE spectroscopy.

Mass analyzed threshold ionization

Mass analyzed threshold ionization (MATI) was developed with the idea of collecting the mass of the ions in a ZEKE experiment, offering a mass-resolution advantage over ZEKE. Because MATI also exploits vibrational autoionization of near-threshold Rydberg states, it can likewise offer a resolution comparable with the laser bandwidth. This information can be indispensable in understanding a variety of systems.

Photo-induced Rydberg ionization

Photo-induced Rydberg ionization (PIRI) was developed following REMPI experiments on electronic autoionization of low-lying Rydberg states of carbon dioxide. In REMPI photoelectron experiments, it was determined that a two-photon ionic-core photoabsorption process (followed by prompt electronic autoionization) could dominate direct single-photon absorption in the ionization of some Rydberg states of carbon dioxide. Such two-excited-electron systems had already been under study in atomic physics, but there the experiments involved high-order Rydberg states. PIRI works because electronic autoionization can dominate direct photoionization: the circularized near-threshold Rydberg state is more likely to undergo a core photoabsorption than to absorb a photon and ionize directly. PIRI extends the near-threshold spectroscopic techniques to allow access to the electronic states (including dissociative molecular states and other hard-to-study systems) as well as the vibrational states of molecular ions.

References
Spectroscopy
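The "dense manifold" of near-threshold states exploited by ZEKE, MATI and PIRI follows from the standard Rydberg formula; the expressions below are textbook background rather than results stated in the article above.

```latex
% Binding energy of a Rydberg level with principal quantum number n,
% quantum defect \delta_l and ionization limit E_\mathrm{ion}:
E_n = E_\mathrm{ion} - \frac{hcR}{(n - \delta_l)^2} .
% The spacing between adjacent levels therefore shrinks as
\Delta E_n \approx \frac{2hcR}{(n - \delta_l)^3} ,
% so the density of states diverges just below threshold: this is the
% dense manifold selectively field-ionized in ZEKE and MATI experiments.
```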
Weinstein conjecture
In mathematics, the Weinstein conjecture refers to a general existence problem for periodic orbits of Hamiltonian or Reeb vector flows. More specifically, the conjecture claims that on a compact contact manifold, its Reeb vector field should carry at least one periodic orbit. By definition, a level set of contact type admits a contact form obtained by contracting the Hamiltonian vector field into the symplectic form. In this case, the Hamiltonian flow is a Reeb vector field on that level set. It is a fact that any contact manifold (M,α) can be embedded into a canonical symplectic manifold, called the symplectization of M, such that M is a contact-type level set (of a canonically defined Hamiltonian) and the Reeb vector field is a Hamiltonian flow. That is, any contact manifold can be made to satisfy the requirements of the Weinstein conjecture. Since, as is trivial to show, any orbit of a Hamiltonian flow is contained in a level set, the Weinstein conjecture is a statement about contact manifolds. It has been known that any contact form is isotopic to a form that admits a closed Reeb orbit; for example, for any contact manifold there is a compatible open book decomposition, whose binding is a closed Reeb orbit. This is not enough to prove the Weinstein conjecture, though, because the Weinstein conjecture states that every contact form admits a closed Reeb orbit, while an open book determines a closed Reeb orbit for a form which is only isotopic to the given form. The conjecture was formulated in 1978 by Alan Weinstein. In several cases, the existence of a periodic orbit was known. For instance, Rabinowitz showed that on star-shaped level sets of a Hamiltonian function on a symplectic manifold, there were always periodic orbits (Weinstein independently proved the special case of convex level sets). Weinstein observed that the hypotheses of several such existence theorems could be subsumed in the condition that the level set be of contact type. (Weinstein's original conjecture included the condition that the first de Rham cohomology group of the level set is trivial; this hypothesis turned out to be unnecessary.) The Weinstein conjecture was first proved for hypersurfaces of contact type in $\mathbb{R}^{2n}$ in 1986 by Claude Viterbo, then extended to cotangent bundles by Hofer–Viterbo and to wider classes of aspherical manifolds by Floer–Hofer–Viterbo. The presence of holomorphic spheres was used by Hofer–Viterbo. All these cases dealt with the situation where the contact manifold is a contact submanifold of a symplectic manifold. A new approach without this assumption was discovered in dimension 3 by Hofer and is at the origin of contact homology. The Weinstein conjecture has now been proven for all closed 3-dimensional manifolds by Clifford Taubes. The proof uses a variant of Seiberg–Witten Floer homology and pursues a strategy analogous to Taubes' proof that the Seiberg–Witten and Gromov invariants are equivalent on a symplectic four-manifold. In particular, the proof provides a shortcut to the closely related program of proving the Weinstein conjecture by showing that the embedded contact homology of any contact three-manifold is nontrivial.

See also
Seifert conjecture

References

Further reading

Symplectic geometry Hamiltonian mechanics Conjectures Unsolved problems in geometry Contact geometry
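For readers unfamiliar with the objects in the statement, the standard definitions are recalled below; these are textbook background, not part of any particular proof discussed above.

```latex
% A contact form on a (2n+1)-dimensional manifold M is a 1-form \alpha
% satisfying the non-degeneracy condition
\alpha \wedge (d\alpha)^n \neq 0 .
% Its Reeb vector field R_\alpha is uniquely determined by the two conditions
\iota_{R_\alpha}\, d\alpha = 0 , \qquad \alpha(R_\alpha) = 1 .
% The Weinstein conjecture asserts that on a closed M, the flow
% \dot{x} = R_\alpha(x) has a closed orbit: x(T) = x(0) for some T > 0.
```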
Relativistic dynamics
For classical dynamics at relativistic speeds, see relativistic mechanics.

Relativistic dynamics refers to a combination of relativistic and quantum concepts to describe the relationships between the motion and properties of a relativistic system and the forces acting on the system. What distinguishes relativistic dynamics from other physical theories is the use of an invariant scalar evolution parameter to monitor the historical evolution of space-time events. In a scale-invariant theory, the strength of particle interactions does not depend on the energy of the particles involved. Twentieth-century experiments showed that the physical description of microscopic and submicroscopic objects moving at or near the speed of light raised questions about such fundamental concepts as space, time, mass, and energy. The theoretical description of the physical phenomena required the integration of concepts from relativity and quantum theory. Vladimir Fock was the first to propose an evolution parameter theory for describing relativistic quantum phenomena, but the evolution parameter theory introduced by Ernst Stueckelberg is more closely aligned with recent work. Evolution parameter theories were used by Feynman, Schwinger and others to formulate quantum field theory in the late 1940s and early 1950s. Silvan S. Schweber wrote a detailed historical exposition of Feynman's investigation of such a theory. A resurgence of interest in evolution parameter theories began in the 1970s with the work of Horwitz and Piron, and Fanchi and Collins.

Invariant evolution parameter concept

Some researchers view the evolution parameter as a mathematical artifact, while others view it as a physically measurable quantity. To understand the role of an evolution parameter and the fundamental difference between the standard theory and evolution parameter theories, it is necessary to review the concept of time. Time t played the role of a monotonically increasing evolution parameter in classical Newtonian mechanics, as in the force law F = dP/dt for a non-relativistic, classical object with momentum P. To Newton, time was an "arrow" that parameterized the direction of evolution of a system. Albert Einstein rejected the Newtonian concept and identified t as the fourth coordinate of a space-time four-vector. Einstein's view of time requires a physical equivalence between coordinate time and coordinate space; in this view, time should be a reversible coordinate in the same manner as space. Particles moving backward in time are often used to depict antiparticles in Feynman diagrams, but they are usually not thought of as literally moving backward in time; the convention is generally adopted to simplify notation. Some, however, regard them as genuinely moving backward in time and take this as evidence for time reversibility. The development of non-relativistic quantum mechanics in the early twentieth century preserved the Newtonian concept of time in the Schrödinger equation. The ability of non-relativistic quantum mechanics and special relativity to successfully describe observations motivated efforts to extend quantum concepts to the relativistic domain. Physicists had to decide what role time should play in relativistic quantum theory; the role of time was a key difference between the Einsteinian and Newtonian views of classical theory. Two hypotheses that were consistent with special relativity were possible:

Hypothesis I: Assume t = Einsteinian time and reject Newtonian time.
Hypothesis II: Introduce two temporal variables:
a coordinate time in the sense of Einstein
an invariant evolution parameter in the sense of Newton

Hypothesis I led to a relativistic probability conservation equation that is essentially a re-statement of the non-relativistic continuity equation. Time in the relativistic probability conservation equation is Einstein's time and is a consequence of implicitly adopting Hypothesis I. By adopting Hypothesis I, the standard paradigm has at its foundation a temporal paradox: motion relative to a single temporal variable must be reversible, even though the second law of thermodynamics establishes an "arrow of time" for evolving systems, including relativistic systems. Thus, even though Einstein's time is reversible in the standard theory, the evolution of a system is not time-reversal invariant. From the perspective of Hypothesis I, time must be both an irreversible arrow tied to entropy and a reversible coordinate in the Einsteinian sense. The development of relativistic dynamics is motivated in part by the concern that Hypothesis I was too restrictive. The problems associated with the standard formulation of relativistic quantum mechanics provide a clue to the validity of Hypothesis I. These problems included negative probabilities, hole theory, the Klein paradox, non-covariant expectation values, and so forth. Most of these problems were never solved; they were avoided when quantum field theory (QFT) was adopted as the standard paradigm. The QFT perspective, particularly its formulation by Schwinger, is a subset of the more general relativistic dynamics. Relativistic dynamics is based on Hypothesis II and employs two temporal variables: a coordinate time and an evolution parameter. The evolution parameter, or parameterized time, may be viewed as a physically measurable quantity, and a procedure has been presented for designing evolution parameter clocks. By recognizing the existence of a distinct parameterized time and a distinct coordinate time, the conflict between a universal direction of time and a time that may proceed as readily from future to past as from past to future is resolved. The distinction between parameterized time and coordinate time removes ambiguities in the properties associated with the two temporal concepts in relativistic dynamics.

See also
Ernst Stueckelberg

References

External links
Relativistic dynamics of stars near a supermassive black hole (2014)
International Association for Relativistic Dynamics (IARD)

Quantum mechanics Theory of relativity Theories
Refuse-derived fuel
Refuse-derived fuel (RDF) is a fuel produced from various types of waste, such as municipal solid waste (MSW), industrial waste or commercial waste. The World Business Council for Sustainable Development provides a definition: "Selected waste and by-products with recoverable calorific value can be used as fuels in a cement kiln, replacing a portion of conventional fossil fuels, like coal, if they meet strict specifications. Sometimes they can only be used after pre-processing to provide 'tailor-made' fuels for the cement process". RDF consists largely of the combustible components of such waste, such as non-recyclable plastics (not including PVC), paper, cardboard, labels, and other corrugated materials. These fractions are separated by different processing steps, such as screening, air classification, ballistic separation, separation of ferrous and non-ferrous materials, glass, stones and other foreign materials, and shredding into a uniform grain size; the material may also be pelletized in order to produce a homogeneous material which can be used as a substitute for fossil fuels in, for example, cement plants, lime plants and coal-fired power plants, or as a reduction agent in steel furnaces. If documented according to CEN/TC 343 it can be labeled as solid recovered fuel (SRF). Other descriptions of such fuels include:
secondary fuels
substitute fuels
"AF", as an abbreviation for alternative fuels
Ultimately, most of the designations are only general paraphrases for alternative fuels which are either waste-derived or biomass-derived. There is no universal exact classification or specification for such materials, and even legislative authorities have not yet established exact guidelines on the type and composition of alternative fuels. The first approaches towards classification or specification are found in Germany (Bundesgütegemeinschaft für Sekundärbrennstoffe) as well as at the European level (European Recovered Fuel Organisation). These approaches, initiated primarily by the producers of alternative fuels, follow a sound principle: only through exactly defined standardisation of the composition of such materials can both production and utilisation be uniform worldwide. Solid recovered fuel is a subset of RDF, distinguished by being produced to meet a standard such as CEN/343 ANAS. A comprehensive review is now available on SRF/RDF production, quality standards and thermal recovery, including statistics on European SRF quality.

History

In the 1950s, tyres were used for the first time as refuse-derived fuel in the cement industry. Continuous use of various waste-derived alternative fuels followed in the mid-1980s with "Brennstoff aus Müll" (BRAM), or fuel from waste, in the Westphalian cement industry in Germany. At that time, cost reduction through the replacement of fossil fuels was the priority, as considerable competitive pressure weighed on the industry. Since the eighties, the German Cement Works Association (Verein Deutscher Zementwerke e.V. (VDZ, Düsseldorf)) has documented the use of alternative fuels in the federal German cement industry: in 1987 less than 5% of fossil fuels were replaced by refuse-derived fuels, while by 2015 this had increased to almost 62%.
Refuse-derived fuels are used in a wide range of specialized waste-to-energy facilities, which use processed refuse-derived fuels with lower calorific values of 8–14 MJ/kg in grain sizes of up to 500 mm to produce electricity and thermal energy (heat or steam) for district heating systems or industrial uses.

Processing

Materials such as glass and metals are removed during the treatment processing, since they are non-combustible. The metal is removed using a magnet and the glass using mechanical screening. After that, an air knife is used to separate the light materials from the heavy ones. The light materials have the higher calorific value and form the final RDF; the heavy materials usually continue to a landfill. The residual material can be sold in its processed form (depending on the treatment process) as a plain mixture, or it may be compressed into pellet fuel, bricks or logs and used for other purposes, either stand-alone or in a recursive recycling process. RDF or SRF is the combustible sub-fraction of municipal solid waste and other similar solid waste, produced using a mix of mechanical and/or biological treatment methods, such as biodrying in mechanical-biological treatment (MBT) plants. During the production of RDF/SRF in MBT plants there are solid losses of otherwise combustible material, which generates a debate over whether the production and use of RDF/SRF is resource-efficient compared with traditional one-step combustion of residual MSW in incineration (energy-from-waste) plants. In the process of making RDF pellets from shredded SRF, drying is often required: typically, the moisture content needs to be reduced to below 20% to produce high-calorific, high-density RDF pellets. Drying RDF often requires a substantial amount of energy, so choosing an inexpensive heat source is preferable. The production of RDF may involve the following steps:
Bag splitting/shredding
Manual sorting (typically to remove inerts, PVC and/or other unwanted objects)
Size screening
Magnetic separation
Eddy current separation (non-magnetic metals)
Air classification (density separation)
Coarse shredding
Refining separation by infrared separation
Drying
Pelletizing
Mixing/homogenization
End markets

RDF can be used in a variety of ways to produce electricity or as a replacement for fossil fuels. It can be used alongside traditional sources of fuel in coal power plants. In Europe, RDF can be used in the cement kiln industry, where the strict air pollution control standards of the Waste Incineration Directive apply. The main limiting factor for RDF/SRF use in cement kilns is its total chlorine (Cl) content; the mean Cl content of average commercially manufactured SRF is 0.76% w/w on a dry basis (±0.14% w/w, 95% confidence). RDF can also be fed into plasma arc gasification modules and pyrolysis plants. Where the RDF is capable of being combusted cleanly, or in compliance with the Kyoto Protocol, RDF can provide a funding source where unused carbon credits are sold on the open market via a carbon exchange. However, the use of municipal waste contracts and the bankability of these solutions is still a relatively new concept, so RDF's financial advantage may be debatable. The European market for the production of RDF has grown fast due to the European landfill directive and the imposition of landfill taxes; refuse-derived fuel exports from the UK to Europe and beyond are expected to have reached 3.3 million tonnes in 2015, representing an increase of nearly 500,000 tonnes on the previous year.

Measurement of RDF and SRF properties: biogenic content

The biomass fraction of RDF and SRF has a monetary value under multiple greenhouse gas protocols, such as the European Union Emissions Trading Scheme and the Renewable Obligation Certificate program in the United Kingdom. Biomass is considered to be carbon-neutral, since the CO2 liberated from the combustion of biomass is recycled by plants. The combusted biomass fraction of RDF/SRF is used by stationary combustion operators to reduce their overall reported emissions. Several methods have been developed by the European CEN 343 working group to determine the biomass fraction of RDF/SRF. The initial two methods developed (CEN/TS 15440) were the manual sorting method and the selective dissolution method; a comparative assessment of these two methods is available. An alternative, but more expensive, method was developed using the principles of radiocarbon dating. A technical review (CEN/TR 15591:2007) outlining the carbon-14 method was published in 2007, and a technical standard of the carbon dating method (CEN/TS 15747:2008) was published in 2008. In the United States, there is already an equivalent carbon-14 method under the standard method ASTM D6866. Although carbon-14 dating can determine the biomass fraction of RDF/SRF, it cannot directly determine the biomass calorific value. Determining the calorific value is important for green certificate programs such as the Renewable Obligation Certificate program, which award certificates based on the energy produced from biomass. Several research papers, including one commissioned by the Renewable Energy Association in the UK, have been published that demonstrate how the carbon-14 result can be used to calculate the biomass calorific value (a rough sketch of this kind of calculation is given at the end of this article).

Quality assurance of RDF and SRF properties: representative laboratory sub-sampling

There are major challenges related to quality assurance, and especially to the accurate determination of RDF/SRF thermal recovery (combustion) properties, due to their inherently variable (heterogeneous) composition. Recent advances enable optimal sub-sampling schemes for getting from an RDF/SRF sample of, say, 1 kg to the grams or milligrams to be tested in analytical devices such as bomb calorimetry or TGA. With such solutions, representative sub-sampling can be secured, though less so for the chlorine content. New evidence suggests that the theory of sampling (ToS) may overestimate the processing effort needed to obtain a representative sub-sample.

Regional use

Campania

In 2009, in response to the Naples waste management issue in Campania, Italy, the Acerra incineration facility was completed at a cost of over €350 million. The incinerator burns 600,000 tons of waste per year. The energy produced from the facility is enough to power 200,000 households per year.

Iowa

The first full-scale waste-to-energy facility in the US was the Arnold O. Chantland Resource Recovery Plant, built in 1975 and located in Ames, Iowa. This plant also produces RDF that is sent to a local power plant for supplemental fuel.

Manchester

The city of Manchester, in the north west of England, is in the process of awarding a contract for the use of RDF which will be produced by proposed mechanical biological treatment facilities as part of a huge PFI contract. The Greater Manchester Waste Disposal Authority has recently announced there is significant market interest in initial bids for the use of RDF, which is projected to be produced in tonnages of up to 900,000 tonnes per annum.
Bollnäs

During spring 2008, Bollnäs Ovanåkers Renhållnings AB (BORAB) in Sweden started its new waste-to-energy plant. Municipal solid waste as well as industrial waste is turned into refuse-derived fuel. The 70,000–80,000 tonnes of RDF produced per annum are used to power the nearby BFB plant, which provides the citizens of Bollnäs with electricity and district heating.

Israel

In late March 2017, Israel launched its own RDF plant at the Hiriya Recycling Park, which will take in about 1,500 tonnes of household waste daily (around half a million tonnes of waste each year), with an estimated production of 500 tonnes of RDF daily. The plant is part of Israel's "diligent effort to improve and advance waste management in Israel."

United Arab Emirates

In October 2018, the UAE's Ministry of Climate Change and Environment signed a concession agreement with Emirates RDF (BESIX, Tech Group Eco Single Owner, Griffin Refineries) to develop and operate an RDF facility in the Emirate of Umm Al Quwain. The facility will receive 1,000 tons per day of household waste and convert the waste of 550,000 residents from the emirates of Ajman and Umm Al Quwain into RDF. The RDF will be used in cement factories to partially replace the traditional use of gas or coal.

See also
Biodrying
Cement kiln
Heavy metals
Isle of Wight gasification facility
Mechanical biological treatment
Mechanical heat treatment
Open burning of waste
Waste-to-energy

References

Incineration Mechanical biological treatment Waste treatment technology
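As referenced in the biogenic-content section above, a carbon-14 assay yields the biogenic fraction of a fuel's carbon, from which an energy share can be estimated. The Python below is a minimal sketch of such a calculation; the per-kilogram-of-carbon heating values and the example fraction are illustrative assumptions, not figures from the commissioned study.

```python
def biomass_energy_share(biogenic_carbon_fraction: float,
                         biogenic_mj_per_kg_c: float = 32.0,
                         fossil_mj_per_kg_c: float = 45.0) -> float:
    """Estimate the biomass share of a fuel's energy from the biogenic
    fraction of its carbon (as measured by carbon-14), assuming a
    characteristic heating value per kilogram of carbon for each pool.
    Both heating values are illustrative assumptions, not standard data."""
    bio = biogenic_carbon_fraction * biogenic_mj_per_kg_c
    fossil = (1.0 - biogenic_carbon_fraction) * fossil_mj_per_kg_c
    return bio / (bio + fossil)

# Example: a fuel whose carbon is measured as 60% biogenic. Because fossil
# carbon is more energy-dense, the energy share comes out below 60%.
print(f"biomass energy share ~ {biomass_energy_share(0.60):.0%}")
```

This illustrates why the carbon fraction alone cannot be reported as the renewable energy fraction: plastics carry more energy per unit carbon than biomass, so the energy-based share is systematically lower.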
Titration curve
Titrations are often recorded on graphs called titration curves, which generally plot the volume of the titrant as the independent variable and the pH of the solution as the dependent variable (because it changes depending on the composition of the two solutions). The equivalence point on the graph is where all of the starting solution (usually an acid) has been neutralized by the titrant (usually a base). It can be calculated precisely by finding the second derivative of the titration curve and computing the points of inflection (where the graph changes concavity); however, in most cases, simple visual inspection of the curve will suffice. In the example curve of oxalic acid titrated with NaOH, both equivalence points are visible, after roughly 15 and 30 mL of NaOH solution have been added. To calculate the logarithmic acid dissociation constant (pKa), one must find the volume at the half-equivalence point, that is, where half the amount of titrant has been added to form the next compound (here, sodium hydrogen oxalate, then disodium oxalate). Halfway between each equivalence point, at 7.5 mL and 22.5 mL, the pH observed was about 1.5 and 4, giving the two pKa values. In weak monoprotic acids, the point halfway between the beginning of the curve (before any titrant has been added) and the equivalence point is significant: at that point, the concentrations of the two species (the acid and conjugate base) are equal. Therefore, the Henderson–Hasselbalch equation,

$\mathrm{pH} = \mathrm{p}K_a + \log\frac{[\mathrm{A}^-]}{[\mathrm{HA}]}$,

reduces to $\mathrm{pH} = \mathrm{p}K_a$ when $[\mathrm{A}^-] = [\mathrm{HA}]$. One can therefore easily find the pKa of a weak monoprotic acid by reading off the pH at the point halfway between the beginning of the curve and the equivalence point. In the case of the sample curve, the acid dissociation constant Ka = 10^(−pKa) would be approximately 1.78×10^(−5) from visual inspection (the actual Ka2 is 1.7×10^(−5)). For polyprotic acids, calculating the acid dissociation constants is only marginally more difficult: the first acid dissociation constant can be calculated the same way as in a monoprotic acid, while the pKa corresponding to the second acid dissociation constant is the pH at the point halfway between the first equivalence point and the second equivalence point (and so on for acids that release more than two protons, such as phosphoric acid).

References
Titration
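The second-derivative method for locating the equivalence point can be sketched in a few lines of Python. The data below are made up for the example (a single equivalence point near 15 mL); a real curve would come from a pH meter log.

```python
import numpy as np

# Synthetic (made-up) titration data: titrant volume in mL, measured pH.
volume = np.array([0, 5, 10, 13, 14, 14.5, 15, 15.5, 16, 17, 20, 25], dtype=float)
ph = np.array([1.0, 1.3, 1.8, 2.5, 3.0, 3.5, 7.0, 10.5, 11.0, 11.5, 12.0, 12.3])

d1 = np.gradient(ph, volume)   # first derivative; handles uneven spacing
d2 = np.gradient(d1, volume)   # second derivative

# The equivalence point is the inflection, where d2 changes sign. Take the
# sign change closest to the steepest part of the curve.
changes = np.where(np.diff(np.sign(d2)) != 0)[0]
idx = changes[np.argmin(np.abs(changes - np.argmax(d1)))]
print(f"Equivalence point between {volume[idx]} mL and {volume[idx + 1]} mL")
```

With denser, noisier experimental data one would normally smooth the curve before differentiating, since numerical derivatives amplify noise.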
Redox titration
A redox titration is a type of titration based on a redox reaction between the analyte and titrant. It may involve the use of a redox indicator and/or a potentiometer. A common example of a redox titration is the treatment of a solution of iodine with a reducing agent to produce iodide, using a starch indicator to help detect the endpoint. Iodine (I2) can be reduced to iodide (I−) by, say, thiosulfate, and when all the iodine is consumed, the blue colour disappears. This is called an iodometric titration. Most often, the reduction of iodine to iodide is the last step in a series of reactions where the initial reactions convert an unknown amount of the solute (the substance being analyzed) to an equivalent amount of iodine, which may then be titrated. Sometimes other halogens (or haloalkanes) besides iodine are used in the intermediate reactions, because they are available in better measurable standard solutions and/or react more readily with the solute. The extra steps in iodometric titration may be worthwhile because the equivalence point, where the blue turns colourless, is more distinct than in some other analytical or volumetric methods.

The main redox titration types are:
{| class="wikitable"
|-
! Redox titration !! Titrant
|-
| Iodometry || Iodine (I2)
|-
| Bromatometry || Bromine (Br2)
|-
| Cerimetry || Cerium(IV) salts
|-
| Permanganometry || Potassium permanganate
|-
| Dichrometry || Potassium dichromate
|}

Sources

See also
Oxidizing agent
Reducing agent

Titration
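The endpoint arithmetic of the iodometric titration described above follows directly from the stoichiometry of the thiosulfate reaction. A minimal sketch, with assumed example values for the titrant concentration and volume:

```python
def iodine_mmol(thiosulfate_molarity: float, titrant_volume_ml: float) -> float:
    """Iodometry endpoint arithmetic for I2 + 2 S2O3(2-) -> 2 I(-) + S4O6(2-):
    each mole of iodine consumes two moles of thiosulfate."""
    thiosulfate_mmol = thiosulfate_molarity * titrant_volume_ml
    return thiosulfate_mmol / 2.0

# Assumed example: 18.40 mL of 0.1000 M thiosulfate to the starch endpoint
# implies 0.920 mmol of iodine was present in the sample.
print(f"{iodine_mmol(0.1000, 18.40):.3f} mmol I2")
```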
Department of Plant Sciences, University of Cambridge
The Department of Plant Sciences is a department of the University of Cambridge that conducts research and teaching in plant sciences. It was established in 1904, although the university has had a professor of botany since 1724.

Research

The department pursues three strategic research targets:
Global food security
Synthetic biology and biotechnology
Climate science and ecosystem conservation
See also the Sainsbury Laboratory Cambridge University.

Notable academic staff

Sir David Baulcombe, FRS, Regius Professor of Botany
Beverley Glover, Professor of Plant Systematics and Evolution, director of the Cambridge University Botanic Garden
Howard Griffiths, Professor of Plant Ecology
Julian Hibberd, Professor of Photosynthesis
Alison Smith, Professor of Plant Biochemistry and Head of Department

The department also has 66 members of faculty and postdoctoral researchers, 100 graduate students, 19 Biotechnology and Biological Sciences Research Council (BBSRC) Doctoral Training Program (DTP) PhD students, 20 Part II Tripos undergraduate students and 44 support staff.

History

The University of Cambridge has a long and distinguished history in botany, including work by John Ray and Stephen Hales in the 17th and 18th centuries, Charles Darwin's mentor John Stevens Henslow in the 19th century, and Frederick Blackman, Arthur Tansley and Harry Godwin in the 20th century.

Emeritus and alumni

More recently, the department has been home to:
John C. Gray, Emeritus Professor of Plant Molecular Biology since 2011
Thomas ap Rees, Professor of Botany
F. Ian Woodward, Lecturer and Fellow of Trinity Hall, Cambridge, before being appointed Professor of Plant Ecology at the University of Sheffield

References

Plant Sciences, Department of Biotechnology in the United Kingdom Cambridge Universities and colleges established in 1904 1904 establishments in England
MIRACL
MIRACL, or Mid-Infrared Advanced Chemical Laser, is a directed energy weapon developed by the US Navy. It is a deuterium fluoride laser, a type of chemical laser. The MIRACL laser first became operational in 1980. It can produce over a megawatt of output for up to 70 seconds, making it the most powerful continuous wave (CW) laser in the US. Its original goal was to be able to track and destroy anti-ship cruise missiles, but in later years it was used to test phenomenologies associated with national anti-ballistic and anti-satellite laser weapons. Originally tested at a contractor facility in California, by the late 1990s and early 2000s it was located at the former MAR-1 facility in the White Sands Missile Range in New Mexico. The beam size in the resonator is about wide. The beam is then reshaped to a square. Amid much controversy, in October 1997 MIRACL was tested against MSTI-3, a US Air Force satellite at the end of its original mission in orbit at a distance of . MIRACL failed during the test and was damaged, and the Pentagon claimed mixed results for other portions of the test. A second, lower-powered chemical laser was able to temporarily blind MSTI-3's sensors during the test.

References

Further reading

Chemical lasers Military lasers Directed-energy weapons of the United States Military equipment introduced in the 1980s
MIRACL
[ "Chemistry" ]
290
[ "Chemical reaction engineering", "Chemical lasers" ]
578,412
https://en.wikipedia.org/wiki/Acetone%20peroxide
Acetone peroxide (also called APEX and mother of Satan) is an organic peroxide and a primary explosive. It is produced by the reaction of acetone and hydrogen peroxide to yield a mixture of linear monomer and cyclic dimer, trimer, and tetramer forms. The monomer is dimethyldioxirane. The dimer is known as diacetone diperoxide (DADP). The trimer is known as triacetone triperoxide (TATP) or tri-cyclic acetone peroxide (TCAP). Acetone peroxide takes the form of a white crystalline powder with a distinctive bleach-like odor when impure, or a fruit-like smell when pure, and can explode powerfully if subjected to heat, friction, static electricity, concentrated sulfuric acid, strong UV radiation, or shock. Until about 2015, explosives detectors were not set to detect non-nitrogenous explosives, as most explosives in use before then were nitrogen-based. TATP, being nitrogen-free, has been used as the explosive of choice in several terrorist bomb attacks since 2001. History Acetone peroxide (specifically, triacetone triperoxide) was discovered in 1895 by the German chemist Richard Wolffenstein. Wolffenstein combined acetone and hydrogen peroxide, and then allowed the mixture to stand for a week at room temperature, during which time a small quantity of crystals precipitated, which had a melting point of . In 1899, Adolf von Baeyer and Victor Villiger described the first synthesis of the dimer and described the use of acids for the synthesis of both peroxides. Baeyer and Villiger prepared the dimer by combining potassium persulfate in diethyl ether with acetone, under cooling. After separating the ether layer, the product was purified and found to melt at . They found that the trimer could be prepared by adding hydrochloric acid to a chilled mixture of acetone and hydrogen peroxide. By using the depression of freezing points to determine the molecular weights of the compounds, they also determined that the form of acetone peroxide that they had prepared via potassium persulfate was a dimer, whereas the acetone peroxide that had been prepared via hydrochloric acid was a trimer, like Wolffenstein's compound. This methodology, and the various products obtained, were further investigated in the mid-20th century by Milas and Golubović. Chemistry The chemical name acetone peroxide is most commonly used to refer to the cyclic trimer, the product of a reaction between two precursors, hydrogen peroxide and acetone, in an acid-catalyzed nucleophilic addition, although monomeric and dimeric forms are also possible. Specifically, two dimers, one cyclic (C6H12O4) and one open chain (C6H14O4), as well as an open dihydroperoxide monomer (C3H8O4), can also be formed; under a particular set of conditions of reagent and acid catalyst concentration, the cyclic trimer is the primary product. Under neutral conditions, the reaction is reported to produce the monomeric organic peroxide. A tetrameric form has also been described, under different catalytic conditions, albeit not without disputes and controversy. The most common route for nearly pure TATP is H2O2/acetone/HCl in 1:1:0.25 molar ratios, using 30% hydrogen peroxide. This product contains very little or no DADP, with only very small traces of chlorinated compounds. Product containing a large fraction of DADP can be obtained from 50% H2O2 using large amounts of concentrated sulfuric acid as catalyst, or alternatively with 30% H2O2 and massive amounts of HCl as a catalyst. 
The product made by using hydrochloric acid is regarded as more stable than the one made using sulfuric acid. It is known that traces of sulfuric acid trapped inside the formed acetone peroxide crystals lead to instability. In fact, the trapped sulfuric acid can induce detonation at temperatures as low as . This is the most likely mechanism behind accidental explosions of acetone peroxide that occur during drying on heated surfaces. Organic peroxides in general are sensitive, dangerous explosives, and all forms of acetone peroxide are sensitive to initiation. TATP decomposes explosively; examination of the explosive decomposition of TATP at the very edge of the detonation front predicts "formation of acetone and ozone as the main decomposition products and not the intuitively expected oxidation products." Very little heat is created by the explosive decomposition of TATP at the very edge of the detonation front; the foregoing computational analysis suggests that TATP decomposition is an entropic explosion. However, this hypothesis has been challenged as not conforming to actual measurements. The claim of entropic explosion has been tied to the events just behind the detonation front. The authors of the 2004 Dubnikova et al. study confirm that a final redox reaction (combustion) of ozone, oxygen and reactive species into water, various oxides and hydrocarbons takes place within about 180 ps after the initial reaction, within about a micron of the detonation wave. Detonating crystals of TATP ultimately reach a temperature of and a pressure of 80 kbar. The final energy of detonation is about 2800 kJ/kg (measured in helium), enough to briefly raise the temperature of gaseous products to . The volume of gases at STP is 855 L/kg for TATP and 713 L/kg for DADP (measured in helium). The tetrameric form of acetone peroxide, prepared under neutral conditions using a tin catalyst in the presence of a chelator or general inhibitor of radical chemistry, is reported to be more chemically stable, although still a very dangerous primary explosive. Its synthesis has been disputed. Both TATP and DADP are prone to loss of mass via sublimation. DADP has a lower molecular weight and a higher vapor pressure, which means that DADP is more prone to sublimation than TATP. This can lead to dangerous crystal growth when the vapors deposit, if the crystals have been stored in a container with a threaded lid. This process of repeated sublimation and deposition also results in a change in crystal size via Ostwald ripening. Several methods can be used for trace analysis of TATP, including gas chromatography/mass spectrometry (GC/MS), high performance liquid chromatography/mass spectrometry (HPLC/MS), and HPLC with post-column derivatization. Acetone peroxide is soluble in toluene, chloroform, acetone, dichloromethane and methanol. Recrystallization of primary explosives may yield large crystals that detonate spontaneously due to internal strain. Industrial uses Ketone peroxides, including acetone peroxide and methyl ethyl ketone peroxide, find application as initiators for polymerization reactions, e.g., silicone or polyester resins, in the making of fiberglass-reinforced composites. For these uses, the peroxides are typically in the form of a dilute solution in an organic solvent; methyl ethyl ketone peroxide is more common for this purpose, as it is stable in storage. Acetone peroxide is used as a flour bleaching agent to bleach and "mature" flour. 
Acetone peroxides are unwanted by-products of some oxidation reactions, such as those used in phenol syntheses. Due to their explosive nature, their presence in chemical processes and chemical samples creates potentially hazardous situations. For example, triacetone peroxide is the major contaminant found in diisopropyl ether as a result of photochemical oxidation in air. Accidental occurrence at illicit MDMA laboratories is possible. Numerous methods are used to reduce their appearance, including shifting the pH to more alkaline, adjusting the reaction temperature, or adding inhibitors of their production. Use in improvised explosive devices TATP has been used in bomb and suicide attacks and in improvised explosive devices, including the London bombings on 7 July 2005, where four suicide bombers killed 52 people and injured more than 700. It was one of the explosives used by the "shoe bomber" Richard Reid in his 2001 failed shoe bomb attempt and was used by the suicide bombers in the November 2015 Paris attacks, 2016 Brussels bombings, Manchester Arena bombing, June 2017 Brussels attack, Parsons Green bombing, the Surabaya bombings, and the 2019 Sri Lanka Easter bombings. Hong Kong police claim to have found of TATP among weapons and protest materials in July 2019, when mass protests were taking place against a proposed law allowing extradition to mainland China. TATP shockwave overpressure is 70% of that for TNT, and the positive phase impulse is 55% of the TNT equivalent. TATP at 0.4 g/cm3 has about one-third of the brisance of TNT (1.2 g/cm3) as measured by the Hess test. TATP is attractive to terrorists because it is easily prepared from readily available retail ingredients, such as hair bleach and nail polish remover. It was also able to evade detection because it is one of the few high explosives that do not contain nitrogen, and could therefore pass undetected through standard explosive detection scanners, which were hitherto designed to detect nitrogenous explosives. By 2016, explosives detectors had been modified to be able to detect TATP, and new types were developed. Legislative measures to limit the sale of hydrogen peroxide concentrated to 12% or higher have been enacted in the European Union. A key disadvantage is the high susceptibility of TATP to accidental detonation, causing injuries and deaths among illegal bomb-makers, which has led to TATP being referred to as the "Mother of Satan". TATP was found in the accidental explosion that preceded the 2017 terrorist attacks in Barcelona and surrounding areas. Large-scale TATP synthesis is often betrayed by excessive bleach-like or fruity smells, which can penetrate clothes and hair in quite noticeable amounts; this was reported in the 2016 Brussels bombings. References External links Explosive chemicals Ketals Organic peroxides Organic peroxide explosives Oxygen heterocycles Radical initiators
Acetone peroxide
[ "Chemistry", "Materials_science" ]
2,149
[ "Ketals", "Radical initiators", "Functional groups", "Organic compounds", "Polymer chemistry", "Reagents for organic chemistry", "Explosive chemicals", "Organic peroxide explosives", "Organic peroxides" ]
578,693
https://en.wikipedia.org/wiki/Programmable%20Array%20Logic
Programmable Array Logic (PAL) is a family of programmable logic device semiconductors used to implement logic functions in digital circuits that was introduced by Monolithic Memories, Inc. (MMI) in March 1978. MMI obtained a registered trademark on the term PAL for use in "Programmable Semiconductor Logic Circuits". The trademark is currently held by Lattice Semiconductor. PAL devices consisted of a small PROM (programmable read-only memory) core and additional output logic used to implement particular desired logic functions with few components. Using specialized machines, PAL devices were "field-programmable". PALs were available in several variants: "One-time programmable" (OTP) devices could not be updated and reused after initial programming. (MMI also offered a similar family called HAL, or "hard array logic", which were like PAL devices except that they were mask-programmed at the factory.) UV erasable versions (PALCxxxxx, e.g. the PALC22V10) had a quartz window over the chip die and could be erased for re-use with an ultraviolet light source, just like an EPROM. Later versions (PALCExxx, e.g. the PALCE22V10) were flash erasable devices. In most applications, electrically erasable GALs are now deployed as pin-compatible direct replacements for one-time programmable PALs. History Before PALs were introduced, designers of digital logic circuits would use small-scale integration (SSI) components, such as those in the 7400 series TTL (transistor-transistor logic) family; the 7400 family included a variety of logic building blocks, such as gates (NOT, NAND, NOR, AND, OR), multiplexers (MUXes) and demultiplexers (DEMUXes), flip-flops (D-type, JK, etc.) and others. One PAL device would typically replace dozens of such "discrete" logic packages, so the SSI business declined as the PAL business took off. PALs were used advantageously in many products, such as minicomputers, as documented in Tracy Kidder's best-selling book The Soul of a New Machine. PALs were not the first commercial programmable logic devices; Signetics had been selling its field programmable logic array (FPLA) since 1975. These devices were completely unfamiliar to most circuit designers and were perceived to be too difficult to use. The FPLA had a relatively slow maximum operating speed (due to having both programmable-AND and programmable-OR arrays), was expensive, and had a poor reputation for testability. Another factor limiting the acceptance of the FPLA was the large package, a 600-mil (0.6", or 15.24 mm) wide 28-pin dual in-line package (DIP). The project to create the PAL device was managed by John Birkner and the actual PAL circuit was designed by H. T. Chua. In a previous job (at mini-computer manufacturer Computer Automation), Birkner had developed a 16-bit processor using 80 standard logic devices. His experience with standard logic led him to believe that user-programmable devices would be more attractive if the devices were designed to replace standard logic. This meant that the package sizes had to be more typical of the existing devices, and the speeds had to be improved. MMI intended PALs to be a relatively low-cost (sub-$3) part. However, the company initially had severe manufacturing yield problems and had to sell the devices for over $50. This threatened the viability of the PAL as a commercial product, and MMI was forced to license the product line to National Semiconductor. PALs were later "second sourced" by Texas Instruments and Advanced Micro Devices. 
Process technologies Early PALs were 20-pin DIP components fabricated in silicon using bipolar transistor technology with one-time programmable (OTP) titanium-tungsten programming fuses. Later devices were manufactured by Cypress, Lattice Semiconductor and Advanced Micro Devices using CMOS technology. The original 20- and 24-pin PALs were denoted by MMI as medium-scale integration (MSI) devices. PAL architecture The PAL architecture consists of two main components: a logic plane and output logic macrocells. Programmable logic plane The programmable logic plane is a programmable read-only memory (PROM) array that allows the signals present on the device pins, or the logical complements of those signals, to be routed to output logic macrocells. PAL devices have arrays of transistor cells arranged in a "fixed-OR, programmable-AND" plane used to implement "sum-of-products" binary logic equations for each of the outputs in terms of the inputs and either synchronous or asynchronous feedback from the outputs. Output logic The early 20-pin PALs had 10 inputs and 8 outputs. The outputs were active low and could be registered or combinational. Members of the PAL family were available with various output structures called "output logic macrocells" or OLMCs. Prior to the introduction of the "V" (for "variable") series, the types of OLMCs available in each PAL were fixed at the time of manufacture. (The PAL16L8 had 8 combinational outputs, and the PAL16R8 had 8 registered outputs. The PAL16R6 had 6 registered and 2 combinational outputs, while the PAL16R4 had 4 of each.) Each output could have up to 8 product terms (effectively AND gates); however, the combinational outputs used one of the terms to control a bidirectional output buffer. There were other combinations that had fewer outputs with more product terms per output and were available with active high outputs ("H" series). The "X" series of devices had an XOR gate before the register. There were also similar 24-pin versions of these PALs. This fixed output structure often frustrated designers attempting to optimize the utility of PAL devices because output structures of different types were often required by their applications. (For example, one could not get 5 registered outputs with 3 active high combinational outputs.) So, in June 1983 AMD introduced the 22V10, a 24-pin device with 10 output logic macrocells. Each macrocell could be configured by the user to be combinational or registered, active high or active low. The number of product terms allocated to an output varied from 8 to 16. This one device could replace all of the 24-pin fixed function PAL devices. Members of the PAL "V" ("variable") series included the PAL16V8, PAL20V8 and PAL22V10. Programming PALs PALs were programmed electrically using binary patterns (as JEDEC ASCII/hexadecimal files) and a special electronic programming system available from either the manufacturer or a third party, such as DATA I/O. In addition to single-unit device programmers, device feeders and gang programmers were often used when more than just a few PALs needed to be programmed. (For large volumes, electrical programming costs could be eliminated by having the manufacturer fabricate a custom metal mask used to program the customers' patterns at the time of manufacture; MMI used the term "hard array logic" (HAL) to refer to devices programmed in this way.) 
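Before turning to the languages used to program these devices, it may help to see the "fixed-OR, programmable-AND" structure concretely. The following is a minimal illustrative sketch in Python, not any vendor's actual tool or fuse-map format; the signal names, term counts and the pal_output helper are invented for this example, and a real device encodes each literal as an intact or blown fuse in a JEDEC file.

```python
# Minimal sketch of a PAL-style "programmable-AND, fixed-OR" plane.
# Illustrative only: input and product-term counts here are arbitrary,
# not those of any specific PAL device.

def pal_output(inputs, product_terms):
    """Evaluate one sum-of-products output.

    inputs: dict mapping signal name -> bool.
    product_terms: list of AND terms; each term is a list of
    (name, polarity) pairs, i.e. the AND of possibly inverted inputs.
    Including a literal in a term corresponds to an intact fuse.
    """
    def term_value(term):
        return all(inputs[name] == polarity for name, polarity in term)
    # Fixed-OR plane: the output is the OR of all programmed AND terms.
    return any(term_value(t) for t in product_terms)

# Example: F = (A AND NOT B) OR (B AND C), written as two product terms.
terms = [[("A", True), ("B", False)],
         [("B", True), ("C", True)]]
print(pal_output({"A": True, "B": False, "C": False}, terms))  # True
print(pal_output({"A": False, "B": True, "C": True}, terms))   # True
print(pal_output({"A": False, "B": False, "C": True}, terms))  # False
```

Each registered output of a real PAL would feed such a sum-of-products result into a flip-flop; the combinational outputs of the 16L8-style parts are essentially this function with one term reserved for the output-enable buffer.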
Programming languages (by chronological order of appearance) Though some engineers programmed PAL devices by manually editing files containing the binary fuse pattern data, most opted to design their logic using a hardware description language (HDL) such as Data I/O's ABEL, Logical Devices' CUPL, or MMI's PALASM. These were computer-aided design (CAD) programs (a field now referred to as "electronic design automation") which translated (or "compiled") the designers' logic equations into binary fuse map files used to program (and often test) each device. PALASM The PALASM (from "PAL assembler") language was developed by John Birkner in the early 1980s and the PALASM compiler was written by MMI in FORTRAN IV on an IBM 370/168. MMI made the source code available to users at no cost. By 1983, MMI customers ran versions on the DEC PDP-11, Data General NOVA, Hewlett-Packard HP 2100, MDS800 and others. It was used to express Boolean equations for the output pins in a text file, which was then converted to the 'fuse map' file for the programming system using a vendor-supplied program; later the option of translation from schematics became common, and later still, 'fuse maps' could be 'synthesized' from an HDL such as Verilog. CUPL Assisted Technology released CUPL (Compiler for Universal Programmable Logic) in September 1983. The software was always referred to as CUPL, never by the expanded acronym. It was the first commercial design tool that supported multiple PLD families. The initial release was for the IBM PC and MS-DOS, but it was written in the C programming language so it could be ported to additional platforms. Assisted Technology was acquired by Personal CAD Systems (P-CAD) in July 1985. In 1986, P-CAD's schematic capture package could be used as a front end for CUPL. CUPL was later acquired by Logical Devices and is now owned by Altium. CUPL is currently available as an integrated development package for Microsoft Windows. Atmel released WinCUPL, its own design software for all Atmel SPLDs and CPLDs, at no cost. Atmel was acquired by Microchip in 2016. ABEL Data I/O Corporation released ABEL in April 1984. The development team was Michael Holley, Mike Mraz, Gerrit Barrere, Walter Bright, Bjorn Freeman-Benson, Kyu Lee, David Pellerin, Mary Bailey, Daniel Burrier and Charles Olivier. Data I/O spun off the ABEL product line into an electronic design automation company called Synario Design Systems and then sold Synario to MINC Inc in 1997. MINC focused on developing FPGA design tools but closed its doors in 1998, and Xilinx acquired some of MINC's assets, including the ABEL language and tool set. ABEL then became part of the Xilinx Webpack tool suite. Device programmers Popular device programmers included Data I/O Corporation's Model 60A Logic Programmer and Model 2900. One of the first PAL programmers was the Structured Design SD20/24, which had the PALASM software built in and required only a CRT terminal to enter the equations and view the fuse plots. After fusing, the outputs of the PAL could be verified if test vectors were entered in the source file. Successors After MMI succeeded with the 20-pin PAL parts introduced circa 1978, AMD introduced the 24-pin 22V10 PAL with additional features. After buying out MMI (circa 1987), AMD spun off a consolidated operation as Vantis, and that business was acquired by Lattice Semiconductor in 1999. 
Altera introduced the EP300 (the first CMOS PAL) in 1983 and later moved into the FPGA business. Lattice Semiconductor introduced the generic array logic (GAL) family in 1985, with functional equivalents of the "V" series PALs that used reprogrammable logic planes based on EEPROM (electrically erasable programmable read-only memory) technology. National Semiconductor was a second source for GAL parts. AMD introduced a similar family called PALCE. In general, one GAL part can function as any of several PAL devices of a similar family. For example, the 16V8 GAL is able to replace the 16L8, 16H8, 16H6, 16H4, 16H2 and 16R8 PALs (and many others besides). ICT (International CMOS Technology) introduced the PEEL 18CV8 in 1986. The 20-pin CMOS EEPROM part could be used in place of any of the registered-output bipolar PALs and used much less power. Larger-scale programmable logic devices were introduced by Atmel, Lattice Semiconductor, and others. These devices extended the PAL architecture by including multiple logic planes and/or burying logic macrocells within the logic plane(s). The term complex programmable logic device (CPLD) was introduced to differentiate these devices from their PAL and GAL predecessors, which were then sometimes referred to as simple programmable logic devices (SPLDs). Another large programmable logic device is the field-programmable gate array (FPGA). FPGAs are currently made by Intel (which acquired Altera), Xilinx (which was acquired by AMD), and other semiconductor manufacturers. See also Combinational logic Other types of programmable logic devices: Field-programmable gate array (FPGA) Programmable logic array (PLA) Programmable logic device (PLD) Complex programmable logic device (CPLD) Erasable programmable logic device (EPLD) Field programmable logic array (Signetics FPLA) Current and former makers of programmable logic devices: Actel Advanced Micro Devices (PAL, PALCE) Altera (Flex, Max) Atmel Cypress Semiconductor Intel Lattice Semiconductor (GAL) Microchip Technology (FPGA, SPLD, CPLD) National Semiconductor (GAL) QuickLogic Corp. Signetics (FPLA) Texas Instruments Xilinx Current and former makers of PAL device programmers: Data I/O Corporation References Further reading Books Programmable Logic Designer's Guide; Roger Alford; Sams Publishing; 1989. (archive) PAL Programmable Logic Handbook; 4ed; Monolithic Memories; 1985. (archive) Databooks Bipolar LSI 1984 Databook; 5ed; Monolithic Memories; 1984. (archive) Specifications Standard Data Transfer Format Between Data Preparation System and Programmable Logic Device Programmer; JEDEC Standard JESD3-C; JEDEC; June 1994. Electronic design automation Gate arrays
Programmable Array Logic
[ "Technology", "Engineering" ]
2,925
[ "Computer engineering", "Gate arrays" ]
579,026
https://en.wikipedia.org/wiki/Gravitational%20potential
In classical mechanics, the gravitational potential is a scalar potential associating with each point in space the work (energy transferred) per unit mass that would be needed to move an object to that point from a fixed reference point in the conservative gravitational field. It is analogous to the electric potential, with mass playing the role of charge. The reference point, where the potential is zero, is by convention infinitely far away from any mass, resulting in a negative potential at any finite distance. The analogy holds because both associated fields exert conservative forces. Mathematically, the gravitational potential is also known as the Newtonian potential and is fundamental in the study of potential theory. It may also be used for solving the electrostatic and magnetostatic fields generated by uniformly charged or polarized ellipsoidal bodies. Potential energy The gravitational potential (V) at a location is the gravitational potential energy (U) at that location per unit mass: V = U/m, where m is the mass of the object. Potential energy is equal (in magnitude, but negative) to the work done by the gravitational field moving a body to its given position in space from infinity. If the body has a mass of 1 kilogram, then the potential energy to be assigned to that body is equal to the gravitational potential. So the potential can be interpreted as the negative of the work done by the gravitational field moving a unit mass in from infinity. In some situations, the equations can be simplified by assuming a field that is nearly independent of position. For instance, in a region close to the surface of the Earth, the gravitational acceleration, g, can be considered constant. In that case, the difference in potential energy from one height to another is, to a good approximation, linearly related to the difference in height: ΔU ≈ mg Δh. Mathematical form The gravitational potential V at a distance x from a point mass of mass M can be defined as the work W that needs to be done by an external agent to bring a unit mass in from infinity to that point: V(x) = W/m = (1/m) ∫ F · dx (the integral taken from infinity to x) = −GM/x, where G is the gravitational constant, and F is the gravitational force. The product GM is the standard gravitational parameter and is often known to higher precision than G or M separately. The potential has units of energy per mass, e.g., J/kg in the MKS system. By convention, it is always negative where it is defined, and as x tends to infinity, it approaches zero. The gravitational field, and thus the acceleration of a small body in the space around the massive object, is the negative gradient of the gravitational potential. Thus the negative of a negative gradient yields positive acceleration toward a massive object. Because the potential has no angular components, its gradient is ∇V = (GM/x³) x = (GM/x²) x̂, where x is a vector of length x pointing from the point mass toward the small body and x̂ is a unit vector pointing from the point mass toward the small body. The magnitude of the acceleration therefore follows an inverse square law: ‖a‖ = GM/x². The potential associated with a mass distribution is the superposition of the potentials of point masses. If the mass distribution is a finite collection of point masses, and if the point masses are located at the points x1, ..., xn and have masses m1, ..., mn, then the potential of the distribution at the point x is V(x) = −Σᵢ G mᵢ/|x − xᵢ|. If the mass distribution is given as a mass measure dm on three-dimensional Euclidean space R3, then the potential is the convolution of −G/|x| with dm. 
In good cases this equals the integral V(x) = −∫ G/|x − r| dm(r), where |x − r| is the distance between the points x and r. If there is a function ρ(r) representing the density of the distribution at r, so that dm(r) = ρ(r) dv(r), where dv(r) is the Euclidean volume element, then the gravitational potential is the volume integral V(x) = −∫ G ρ(r)/|x − r| dv(r). If V is a potential function coming from a continuous mass distribution ρ(r), then ρ can be recovered using the Laplace operator Δ: ρ(x) = (1/(4πG)) ΔV(x). This holds pointwise whenever ρ is continuous and is zero outside of a bounded set. In general, the mass measure dm can be recovered in the same way if the Laplace operator is taken in the sense of distributions. As a consequence, the gravitational potential satisfies Poisson's equation, ΔV = 4πGρ. See also Green's function for the three-variable Laplace equation and Newtonian potential. The integral may be expressed in terms of known transcendental functions for all ellipsoidal shapes, including the symmetrical and degenerate ones. These include the sphere, where the three semi-axes are equal; the oblate (see reference ellipsoid) and prolate spheroids, where two semi-axes are equal; the degenerate ones where one semi-axis is infinite (the elliptical and circular cylinder); and the unbounded sheet where two semi-axes are infinite. All these shapes are widely used in the applications of the gravitational potential integral (apart from the constant G, with ρ being a constant charge density) to electromagnetism. Spherical symmetry A spherically symmetric mass distribution behaves to an observer completely outside the distribution as though all of the mass was concentrated at the center, and thus effectively as a point mass, by the shell theorem. On the surface of the earth, the acceleration is given by so-called standard gravity g, approximately 9.8 m/s², although this value varies slightly with latitude and altitude. The magnitude of the acceleration is a little larger at the poles than at the equator because Earth is an oblate spheroid. Within a spherically symmetric mass distribution, it is possible to solve Poisson's equation in spherical coordinates. Within a uniform spherical body of radius R, density ρ, and mass m, the gravitational force g inside the sphere varies linearly with distance r from the center, giving the gravitational potential inside the sphere V(r) = (2/3)πGρ(r² − 3R²) = Gm(r² − 3R²)/(2R³), which differentiably connects to the potential function for the outside of the sphere (see the figure at the top). General relativity In general relativity, the gravitational potential is replaced by the metric tensor. When the gravitational field is weak and the sources are moving very slowly compared to light-speed, general relativity reduces to Newtonian gravity, and the metric tensor can be expanded in terms of the gravitational potential. Multipole expansion The potential at a point x is given by V(x) = −∫ G/|x − r| dm(r). The potential can be expanded in a series of Legendre polynomials. Represent the points x and r as position vectors relative to the center of mass. The denominator in the integral is expressed as the square root of the square to give V(x) = −∫ G/√(|x|² − 2 x·r + |r|²) dm(r) = −(G/|x|) ∫ (1 − 2(r/|x|) cos θ + (r/|x|)²)^(−1/2) dm(r), where, in the last integral, r = |r| and θ is the angle between x and r. (See "mathematical form".) The integrand can be expanded as a Taylor series in Z = r/|x|, by explicit calculation of the coefficients. A less laborious way of achieving the same result is by using the generalized binomial theorem. The resulting series is the generating function for the Legendre polynomials: (1 − 2XZ + Z²)^(−1/2) = Σₙ Zⁿ Pₙ(X), valid for |X| ≤ 1 and |Z| < 1. The coefficients Pₙ are the Legendre polynomials of degree n. 
Therefore, the Taylor coefficients of the integrand are given by the Legendre polynomials in X = cos θ. So the potential can be expanded in a series that is convergent for positions x such that r < |x| for all mass elements of the system (i.e., outside a sphere, centered at the center of mass, that encloses the system): V(x) = −(G/|x|) ∫ Σₙ (r/|x|)ⁿ Pₙ(cos θ) dm(r). The integral ∫ r cos θ dm is the component of the center of mass in the x̂ direction; this vanishes because the vector x emanates from the center of mass, so the n = 1 term drops out. Bringing the integral under the sign of the summation then gives V(x) = −GM/|x| − (G/|x|) ∫ Σₙ₌₂ (r/|x|)ⁿ Pₙ(cos θ) dm(r). This shows that elongation of the body causes a lower potential in the direction of elongation, and a higher potential in perpendicular directions, compared to the potential due to a spherical mass, if we compare cases with the same distance to the center of mass. (If we compare cases with the same distance to the surface, the opposite is true.) Numerical values The absolute value of gravitational potential at a number of locations with regard to the gravitation of the Earth, the Sun, and the Milky Way is given in the following table; i.e. an object at Earth's surface would need 60 MJ/kg to "leave" Earth's gravity field, another 900 MJ/kg to also leave the Sun's gravity field and more than 130 GJ/kg to leave the gravity field of the Milky Way. (As a quick consistency check on the first figure, using standard values: |V| = GM/R ≈ (6.674×10⁻¹¹ m³ kg⁻¹ s⁻² × 5.97×10²⁴ kg)/(6.37×10⁶ m) ≈ 6.3×10⁷ J/kg, i.e. roughly 60 MJ/kg.) The potential is half the square of the escape velocity. Compare the gravity at these locations. See also Applications of Legendre polynomials in physics Standard gravitational parameter (GM) Geoid Geopotential Geopotential model Notes References Energy (physics) Gravity Potentials Scalar physical quantities
Gravitational potential
[ "Physics", "Mathematics" ]
1,734
[ "Scalar physical quantities", "Physical quantities", "Quantity", "Energy (physics)", "Wikipedia categories named after physical quantities" ]
579,219
https://en.wikipedia.org/wiki/%CE%92-Carotene
β-Carotene (beta-carotene) is an organic, strongly colored red-orange pigment abundant in fungi, plants, and fruits. It is a member of the carotenes, which are terpenoids (isoprenoids), synthesized biochemically from eight isoprene units and thus having 40 carbons. Dietary β-carotene is a provitamin A compound, converting in the body to retinol (vitamin A). Foods particularly rich in it include carrots, pumpkin, spinach, and sweet potato. It is used as a dietary supplement and may be prescribed to treat erythropoietic protoporphyria, an inherited condition of sunlight sensitivity. β-Carotene is the most common carotenoid in plants. When used as a food coloring, it has the E number E160a. The structure was deduced in 1930. Isolation of β-carotene from fruits abundant in carotenoids is commonly done using column chromatography. It is industrially extracted from richer sources such as the alga Dunaliella salina. The separation of β-carotene from the mixture of other carotenoids is based on polarity: β-carotene is a non-polar compound, so it is separated with a non-polar solvent such as hexane. Being highly conjugated, it is deeply colored, and as a hydrocarbon lacking functional groups, it is lipophilic. Provitamin A activity Plant carotenoids are the primary dietary source of provitamin A worldwide, with β-carotene as the best-known provitamin A carotenoid. Others include α-carotene and β-cryptoxanthin. Carotenoid absorption is restricted to the duodenum of the small intestine. One molecule of β-carotene can be cleaved by the intestinal enzyme β,β-carotene 15,15'-monooxygenase into two molecules of vitamin A. Absorption, metabolism and excretion As part of the digestive process, food-sourced carotenoids must be separated from plant cells and incorporated into lipid-containing micelles to be bioaccessible to intestinal enterocytes. If already extracted (or synthetic) and then presented in an oil-filled dietary supplement capsule, there is greater bioavailability compared to that from foods. At the enterocyte cell wall, β-carotene is taken up by the membrane transporter protein scavenger receptor class B, type 1 (SCARB1). Absorbed β-carotene is then either incorporated as such into chylomicrons or first converted to retinal and then retinol, bound to retinol binding protein 2, before being incorporated into chylomicrons. The conversion process consists of one molecule of β-carotene cleaved by the enzyme beta-carotene 15,15'-dioxygenase, which is encoded by the BCO1 gene, into two molecules of retinal. When plasma retinol is in the normal range, the gene expression for SCARB1 and BCO1 is suppressed, creating a feedback loop that suppresses β-carotene absorption and conversion. The majority of chylomicrons are taken up by the liver, then secreted into the blood repackaged into low density lipoproteins (LDLs). From these circulating lipoproteins and the chylomicrons that bypassed the liver, β-carotene is taken into cells via the receptor SCARB1. Human tissues differ in expression of SCARB1, and hence in β-carotene content. Examples expressed as ng/g, wet weight: liver=479, lung=226, prostate=163 and skin=26. Once taken up by peripheral tissue cells, the major usage of absorbed β-carotene is as a precursor to retinal via symmetric cleavage by the enzyme beta-carotene 15,15'-dioxygenase, which is encoded by the BCO1 gene. A lesser amount is metabolized by the mitochondrial enzyme beta-carotene 9',10'-dioxygenase, which is encoded by the BCO2 gene. 
The products of this asymmetric cleavage are two beta-ionone molecules and rosafluene. BCO2 appears to be involved in preventing excessive accumulation of carotenoids; a BCO2 defect in chickens results in yellow skin color due to accumulation in subcutaneous fat. Conversion factors For quantifying dietary vitamin A intake, β-carotene may be converted using either the newer retinol activity equivalents (RAE) or the older international unit (IU). Retinol activity equivalents (RAEs) Since 2001, the US Institute of Medicine uses retinol activity equivalents (RAE) for their Dietary Reference Intakes, defined as follows: 1 μg RAE = 1 μg retinol from food or supplements 1 μg RAE = 2 μg all-trans-β-carotene from supplements 1 μg RAE = 12 μg of all-trans-β-carotene from food 1 μg RAE = 24 μg α-carotene or β-cryptoxanthin from food RAE accounts for carotenoids' variable absorption and conversion to vitamin A by humans better than the older retinol equivalent (RE), which it replaces (1 μg RE = 1 μg retinol, 6 μg β-carotene, or 12 μg α-carotene or β-cryptoxanthin). RE was developed in 1967 by the United Nations Food and Agriculture Organization/World Health Organization (FAO/WHO). International Units Another older unit of vitamin A activity is the international unit (IU). Like the retinol equivalent, the international unit does not reflect carotenoids' variable absorption and conversion to vitamin A by humans as well as the more modern retinol activity equivalent does. Food and supplement labels still generally use IU, but IU can be converted to the more useful retinol activity equivalent as follows: 1 μg RAE = 3.33 IU retinol 1 IU retinol = 0.3 μg RAE 1 IU β-carotene from supplements = 0.3 μg RAE 1 IU β-carotene from food = 0.05 μg RAE 1 IU α-carotene or β-cryptoxanthin from food = 0.025 μg RAE Dietary sources The average daily intake of β-carotene is in the range 2–7 mg, as estimated from a pooled analysis of 500,000 women living in the US, Canada, and some European countries. Beta-carotene is found in many foods and is sold as a dietary supplement. β-Carotene contributes to the orange color of many different fruits and vegetables. Vietnamese gac (Momordica cochinchinensis Spreng.) and crude palm oil are particularly rich sources, as are yellow and orange fruits, such as cantaloupe, mangoes, pumpkin, and papayas, and orange root vegetables such as carrots and sweet potatoes. The color of β-carotene is masked by chlorophyll in green leaf vegetables such as spinach, kale, sweet potato leaves, and sweet gourd leaves. The U.S. Department of Agriculture lists foods high in β-carotene content. No dietary requirement Government and non-government organizations have not set a dietary requirement for β-carotene. Side effects Excess β-carotene is predominantly stored in the fat tissues of the body. The most common side effect of excessive β-carotene consumption is carotenodermia, a physically harmless condition that presents as a conspicuous orange skin tint arising from deposition of the carotenoid in the outermost layer of the epidermis. Carotenosis Carotenoderma, also referred to as carotenemia, is a benign and reversible medical condition in which an excess of dietary carotenoids results in orange discoloration of the outermost skin layer. It is associated with a high blood β-carotene value. This can occur after a month or two of consumption of beta-carotene-rich foods, such as carrots, carrot juice, tangerine juice, mangos, or, in Africa, red palm oil. β-Carotene dietary supplements can have the same effect. 
The discoloration extends to palms and soles of feet, but not to the white of the eye, which helps distinguish the condition from jaundice. Carotenodermia is reversible upon cessation of excessive intake. Consumption of greater than 30 mg/day for a prolonged period has been confirmed as leading to carotenemia. No risk for hypervitaminosis A As described under absorption and metabolism above, when plasma retinol is in the normal range the expression of SCARB1 (the transporter that takes up β-carotene) and BCO1 (the enzyme that cleaves it to retinal) is suppressed, creating a feedback loop that limits β-carotene absorption and conversion. Because of these two mechanisms, high intake will not lead to hypervitaminosis A. Drug interactions β-Carotene can interact with medication used for lowering cholesterol; taking them together can lower the effectiveness of these medications and is considered only a moderate interaction. Bile acid sequestrants and proton-pump inhibitors can decrease absorption of β-carotene. Consuming alcohol with β-carotene can decrease its ability to convert to retinol and could possibly result in hepatotoxicity. β-Carotene and lung cancer in smokers Chronic high-dose β-carotene supplementation increases the probability of lung cancer in smokers, while its natural vitamer, retinol, increases lung cancer in smokers and nonsmokers. The effect is specific to supplementation dose, as no lung damage has been detected in those who are exposed to cigarette smoke and who ingest a physiological dose of β-carotene (6 mg), in contrast to a high pharmacological dose (30 mg). Increases in lung cancer have been attributed to the tendency of β-carotene to oxidize, yet based on the pharmacokinetics of β-carotene absorption and transport through the intestine, and the lack of specific β-carotene transporters, it is unlikely that β-carotene reaches the lungs of smokers in sufficient quantities. Additional research is required to understand the link between the increased risk of cancer and all-cause mortality following β-carotene supplementation. Additionally, supplemental, high-dose β-carotene may increase the risk of prostate cancer, intracerebral hemorrhage, and cardiovascular and total mortality irrespective of smoking status. Industrial sources β-Carotene is industrially made either by total synthesis or by extraction from biological sources such as vegetables, microalgae (especially Dunaliella salina), and genetically-engineered microbes. The synthetic path is low-cost and high-yield. Research Medical authorities generally recommend obtaining beta-carotene from food rather than dietary supplements. A 2013 meta-analysis of randomized controlled trials concluded that high-dosage (≥9.6 mg/day) beta-carotene supplementation is associated with a 6% increase in the risk of all-cause mortality, while low-dosage (<9.6 mg/day) supplementation does not have a significant effect on mortality. Research is insufficient to determine whether a minimum level of beta-carotene consumption is necessary for human health and to identify what problems might arise from insufficient beta-carotene intake. 
However, a 2018 meta-analysis, mostly of prospective cohort studies, found that both dietary and circulating beta-carotene are associated with a lower risk of all-cause mortality. The highest circulating beta-carotene category, compared to the lowest, correlated with a 37% reduction in the risk of all-cause mortality, while the highest dietary beta-carotene intake category, compared to the lowest, was linked to an 18% decrease in the risk of all-cause mortality. Macular degeneration Age-related macular degeneration (AMD) represents the leading cause of irreversible blindness in elderly people. AMD is an oxidative-stress-related retinal disease that affects the macula, causing progressive loss of central vision. β-Carotene content is confirmed in human retinal pigment epithelium. Reviews reported mixed results for observational studies, with some reporting that diets higher in β-carotene correlated with a decreased risk of AMD whereas others reported no benefit. Reviews reported that for intervention trials using only β-carotene, there was no change to the risk of developing AMD. Cancer A meta-analysis concluded that supplementation with β-carotene does not appear to decrease the risk of cancer overall, nor specific cancers including: pancreatic, colorectal, prostate, breast, melanoma, or skin cancer generally. High levels of β-carotene may increase the risk of lung cancer in current and former smokers. Results are not clear for thyroid cancer. Cataract A Cochrane review looked at supplementation of β-carotene, vitamin C, and vitamin E, independently and combined, in people to examine differences in risk of cataract, cataract extraction, progression of cataract, and slowing the loss of visual acuity. These studies found no evidence of any protective effects afforded by β-carotene supplementation on preventing and slowing age-related cataract. A second meta-analysis compiled data from studies that measured diet-derived serum beta-carotene and reported a not statistically significant 10% decrease in cataract risk. Erythropoietic protoporphyria High doses of β-carotene (up to 180 mg per day) may be used as a treatment for erythropoietic protoporphyria, a rare inherited disorder of sunlight sensitivity, without toxic effects. Food drying Foods rich in carotenoid dyes show discoloration upon drying. This is due to thermal degradation of carotenoids, possibly via isomerization and oxidation reactions. See also Sunless tanning with beta-carotene Vitamin A Retinol Carotenoids References Carotenoids Vitamin A Hydrocarbons Tetraterpenes Cyclohexenes
Β-Carotene
[ "Chemistry", "Biology" ]
3,169
[ "Hydrocarbons", "Biomarkers", "Vitamin A", "Carotenoids", "Organic compounds", "Biomolecules" ]
579,390
https://en.wikipedia.org/wiki/Gene%20prediction
In computational biology, gene prediction or gene finding refers to the process of identifying the regions of genomic DNA that encode genes. This includes protein-coding genes as well as RNA genes, but may also include prediction of other functional elements such as regulatory regions. Gene finding is one of the first and most important steps in understanding the genome of a species once it has been sequenced. In its earliest days, "gene finding" was based on painstaking experimentation on living cells and organisms. Statistical analysis of the rates of homologous recombination of several different genes could determine their order on a certain chromosome, and information from many such experiments could be combined to create a genetic map specifying the rough location of known genes relative to each other. Today, with comprehensive genome sequence and powerful computational resources at the disposal of the research community, gene finding has been redefined as a largely computational problem. Determining that a sequence is functional should be distinguished from determining the function of the gene or its product. Predicting the function of a gene and confirming that the gene prediction is accurate still demands in vivo experimentation through gene knockout and other assays, although frontiers of bioinformatics research are making it increasingly possible to predict the function of a gene based on its sequence alone. Gene prediction is one of the key steps in genome annotation, following sequence assembly, the filtering of non-coding regions and repeat masking. Gene prediction is closely related to the so-called 'target search problem' investigating how DNA-binding proteins (transcription factors) locate specific binding sites within the genome. Many aspects of structural gene prediction are based on current understanding of underlying biochemical processes in the cell such as gene transcription, translation, protein–protein interactions and regulation processes, which are subjects of active research in the various omics fields such as transcriptomics, proteomics, metabolomics, and more generally structural and functional genomics. Empirical methods In empirical (similarity, homology or evidence-based) gene finding systems, the target genome is searched for sequences that are similar to extrinsic evidence in the form of known expressed sequence tags, messenger RNA (mRNA), protein products, and homologous or orthologous sequences. Given an mRNA sequence, it is trivial to derive a unique genomic DNA sequence from which it had to have been transcribed. Given a protein sequence, a family of possible coding DNA sequences can be derived by reverse translation of the genetic code. Once candidate DNA sequences have been determined, it is a relatively straightforward algorithmic problem to efficiently search a target genome for matches. Given a sequence, local alignment algorithms such as BLAST, FASTA and Smith-Waterman look for regions of similarity between the target sequence and possible candidate matches. Matches can be complete or partial, and exact or inexact. The success of this approach is limited by the contents and accuracy of the sequence database. A high degree of similarity to a known messenger RNA or protein product is strong evidence that a region of a target genome is a protein-coding gene. However, to apply this approach systematically requires extensive sequencing of mRNA and protein products. 
Not only is this expensive, but in complex organisms, only a subset of all genes in the organism's genome are expressed at any given time, meaning that extrinsic evidence for many genes is not readily accessible in any single cell culture. Thus, to collect extrinsic evidence for most or all of the genes in a complex organism requires the study of many hundreds or thousands of cell types, which presents further difficulties. For example, some human genes may be expressed only during development as an embryo or fetus, which might be difficult to study for ethical reasons. Despite these difficulties, extensive transcript and protein sequence databases have been generated for human as well as other important model organisms in biology, such as mice and yeast. For example, the RefSeq database contains transcript and protein sequence from many different species, and the Ensembl system comprehensively maps this evidence to human and several other genomes. It is, however, likely that these databases are both incomplete and contain small but significant amounts of erroneous data. New high-throughput transcriptome sequencing technologies such as RNA-Seq and ChIP-sequencing open opportunities for incorporating additional extrinsic evidence into gene prediction and validation, and allow a structurally richer and more accurate alternative to previous methods of measuring gene expression such as expressed sequence tags or DNA microarrays. Major challenges in gene prediction include dealing with sequencing errors in raw DNA data, dependence on the quality of the sequence assembly, handling short reads, frameshift mutations, overlapping genes and incomplete genes. In prokaryotes, it is essential to consider horizontal gene transfer when searching for gene sequence homology. An additional important factor, underused in current gene detection tools, is the existence of gene clusters (operons), functioning units of DNA containing several genes under the control of a single promoter, in both prokaryotes and eukaryotes. Most popular gene detectors treat each gene in isolation, independent of others, which is not biologically accurate. Ab initio methods Ab initio gene prediction is an intrinsic method based on gene content and signal detection. Because of the inherent expense and difficulty in obtaining extrinsic evidence for many genes, it is also necessary to resort to ab initio gene finding, in which the genomic DNA sequence alone is systematically searched for certain tell-tale signs of protein-coding genes. These signs can be broadly categorized as either signals, specific sequences that indicate the presence of a gene nearby, or content, statistical properties of the protein-coding sequence itself. Ab initio gene finding might be more accurately characterized as gene prediction, since extrinsic evidence is generally required to conclusively establish that a putative gene is functional. In the genomes of prokaryotes, genes have specific and relatively well-understood promoter sequences (signals), such as the Pribnow box and transcription factor binding sites, which are easy to systematically identify. Also, the sequence coding for a protein occurs as one contiguous open reading frame (ORF), which is typically many hundreds or thousands of base pairs long. The statistics of stop codons are such that even finding an open reading frame of this length is a fairly informative sign. 
(Since 3 of the 64 possible codons in the genetic code are stop codons, one would expect a stop codon approximately every 20–25 codons, or 60–75 base pairs, in a random sequence.) Furthermore, protein-coding DNA has certain periodicities and other statistical properties that are easy to detect in a sequence of this length. These characteristics make prokaryotic gene finding relatively straightforward, and well-designed systems are able to achieve high levels of accuracy. Ab initio gene finding in eukaryotes, especially complex organisms like humans, is considerably more challenging for several reasons. First, the promoter and other regulatory signals in these genomes are more complex and less well-understood than in prokaryotes, making them more difficult to reliably recognize. Two classic examples of signals identified by eukaryotic gene finders are CpG islands and binding sites for a poly(A) tail. Second, splicing mechanisms employed by eukaryotic cells mean that a particular protein-coding sequence in the genome is divided into several parts (exons), separated by non-coding sequences (introns). (Splice sites are themselves another signal that eukaryotic gene finders are often designed to identify.) A typical protein-coding gene in humans might be divided into a dozen exons, each less than two hundred base pairs in length, and some as short as twenty to thirty. It is therefore much more difficult to detect periodicities and other known content properties of protein-coding DNA in eukaryotes. Advanced gene finders for both prokaryotic and eukaryotic genomes typically use complex probabilistic models, such as hidden Markov models (HMMs), to combine information from a variety of different signal and content measurements. The GLIMMER system is a widely used and highly accurate gene finder for prokaryotes. GeneMark is another popular approach. Eukaryotic ab initio gene finders, by comparison, have achieved only limited success; notable examples are the GENSCAN and geneid programs. The GeneMark-ES and SNAP gene finders are GHMM-based like GENSCAN. They attempt to address problems related to using a gene finder on a genome sequence that it was not trained against. A few recent approaches like mSplicer, CONTRAST, or mGene also use machine learning techniques like support vector machines for successful gene prediction. They build a discriminative model using hidden Markov support vector machines or conditional random fields to learn an accurate gene prediction scoring function. Ab initio methods have been benchmarked, with some approaching 100% sensitivity; however, as sensitivity increases, accuracy suffers as a result of increased false positives. Other signals Among the derived signals used for prediction are sub-sequence statistics such as k-mer statistics, isochore or compositional-domain GC composition/uniformity/entropy, sequence and frame length, intron/exon/donor/acceptor/promoter and ribosomal binding site vocabulary, fractal dimension, Fourier transform of pseudo-number-coded DNA, Z-curve parameters and certain run features. It has been suggested that signals other than those directly detectable in sequences may improve gene prediction. For example, the role of secondary structure in the identification of regulatory motifs has been reported. In addition, it has been suggested that RNA secondary structure prediction helps splice site prediction. 
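The open-reading-frame signal described above is simple enough to sketch directly. The following minimal Python example is illustrative only, not the algorithm of GLIMMER or any other named tool: it scans the forward strand of a plain A/C/G/T string for long ATG-to-stop frames, and the length threshold and random test sequence are arbitrary choices for demonstration.

```python
# Minimal ORF scanner illustrating the simplest ab initio "signal":
# long stretches without a stop codon. Forward strand only, for brevity;
# a real tool would also scan the reverse complement and score content.

STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_codons=100):
    """Yield (start, end, frame) for ATG..stop ORFs of >= min_codons codons."""
    seq = seq.upper()
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == "ATG":
                start = i
            elif start is not None and codon in STOPS:
                if (i - start) // 3 >= min_codons:
                    yield (start, i + 3, frame)
                start = None

# In a random sequence a stop codon appears roughly every 64/3 ~ 21 codons,
# so ORFs hundreds of codons long are statistically informative.
import random
random.seed(0)
genome = "".join(random.choice("ACGT") for _ in range(100_000))
print(list(find_orfs(genome, min_codons=60)))  # long ORFs are rare by chance
```

Running this on random DNA typically returns few or no hits, which is exactly why a long uninterrupted reading frame in a real prokaryotic genome is informative.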
Neural networks Artificial neural networks are computational models that excel at machine learning and pattern recognition. Neural networks must be trained with example data before being able to generalise to experimental data, and tested against benchmark data. Neural networks are able to come up with approximate solutions to problems that are hard to solve algorithmically, provided there is sufficient training data. When applied to gene prediction, neural networks can be used alongside other ab initio methods to predict or identify biological features such as splice sites. One approach involves using a sliding window, which traverses the sequence data in an overlapping manner. The output at each position is a score based on whether the network thinks the window contains a donor splice site or an acceptor splice site. Larger windows offer more accuracy but also require more computational power. A neural network is an example of a signal sensor, as its goal is to identify a functional site in the genome. Combined approaches Programs such as Maker combine extrinsic and ab initio approaches by mapping protein and EST data to the genome to validate ab initio predictions. Augustus, which may be used as part of the Maker pipeline, can also incorporate hints in the form of EST alignments or protein profiles to increase the accuracy of the gene prediction. Comparative genomics approaches As the entire genomes of many different species are sequenced, a promising direction in current research on gene finding is a comparative genomics approach. This is based on the principle that the forces of natural selection cause genes and other functional elements to undergo mutation at a slower rate than the rest of the genome, since mutations in functional elements are more likely to negatively impact the organism than mutations elsewhere. Genes can thus be detected by comparing the genomes of related species to detect this evolutionary pressure for conservation. This approach was first applied to the mouse and human genomes, using programs such as SLAM, SGP, TWINSCAN/N-SCAN, and CONTRAST. Multiple informants TWINSCAN examined only human-mouse synteny to look for orthologous genes. Programs such as N-SCAN and CONTRAST allowed the incorporation of alignments from multiple organisms, or, in the case of N-SCAN, a single alternate organism to the target. The use of multiple informants can lead to significant improvements in accuracy. CONTRAST is composed of two elements. The first is a smaller classifier, identifying donor splice sites and acceptor splice sites as well as start and stop codons. The second element involves constructing a full model using machine learning. Breaking the problem in two means that smaller, targeted data sets can be used to train the classifiers, and the classifiers can operate independently and be trained with smaller windows. The full model can use the independent classifiers, and not have to waste computational time or model complexity re-classifying intron-exon boundaries. The paper introducing CONTRAST proposes that its method (and those of TWINSCAN, etc.) be classified as de novo gene assembly using alternate genomes, distinct from ab initio prediction, which uses only the target genome rather than 'informant' genomes. Comparative gene finding can also be used to project high-quality annotations from one genome to another. Notable examples include Projector, GeneWise, GeneMapper and GeMoMa. Such techniques now play a central role in the annotation of all genomes. 
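To make the sliding-window scoring idea from the neural-network discussion above concrete, here is a toy Python sketch. A position weight matrix with invented values stands in for a trained network, so neither the matrix, the window length, nor the threshold comes from any published classifier; only the overall scheme (slide a window, score each position, report high scorers) mirrors the described approach.

```python
import math

# Sliding-window donor-site scorer. A trained neural network would emit a
# score per window; here a toy log-odds position weight matrix (PWM) over
# a 4-base window stands in for it. PWM values are invented for illustration.

PWM = [  # per-position base probabilities under the "motif" model
    {"A": 0.3, "C": 0.2, "G": 0.3, "T": 0.2},
    {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},  # strong G (GT consensus)
    {"A": 0.1, "C": 0.1, "G": 0.1, "T": 0.7},  # strong T
    {"A": 0.5, "C": 0.1, "G": 0.2, "T": 0.2},
]
BACKGROUND = 0.25  # uniform background base frequency

def window_score(window):
    """Log-odds of the window under the motif model vs. the background."""
    return sum(math.log(PWM[i][b] / BACKGROUND) for i, b in enumerate(window))

def scan(seq, threshold=2.0):
    """Slide a window along seq; report positions scoring above threshold."""
    w = len(PWM)
    return [(i, round(window_score(seq[i:i + w]), 2))
            for i in range(len(seq) - w + 1)
            if window_score(seq[i:i + w]) > threshold]

print(scan("ACGGTAAGTACGTC"))  # candidate donor-like sites
```

As the article notes, larger windows (and richer models than a PWM) buy accuracy at the cost of computation; the trade-off is visible even in this sketch, since score quality depends entirely on how much context the window captures.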
Pseudogene prediction Pseudogenes are close relatives of genes, sharing very high sequence homology, but being unable to code for the same protein product. Whilst once dismissed as byproducts of gene sequencing, increasingly, as regulatory roles are being uncovered, they are becoming predictive targets in their own right. Pseudogene prediction utilises existing sequence similarity and ab initio methods, whilst adding additional filtering and methods of identifying pseudogene characteristics. Sequence similarity methods can be customised for pseudogene prediction using additional filtering to find candidate pseudogenes. This could use disablement detection, which looks for nonsense or frameshift mutations that would truncate or collapse an otherwise functional coding sequence (a minimal sketch of such a check appears below). Additionally, translating DNA into protein sequences can be more effective than just straight DNA homology. Content sensors can be filtered according to the differences in statistical properties between pseudogenes and genes, such as a reduced count of CpG islands in pseudogenes, or the differences in G-C content between pseudogenes and their neighbours. Signal sensors also can be honed to pseudogenes, looking for the absence of introns or polyadenine tails. Metagenomic gene prediction Metagenomics is the study of genetic material recovered from the environment, resulting in sequence information from a pool of organisms. Predicting genes is useful for comparative metagenomics. Metagenomics tools also fall into the basic categories of using either sequence similarity approaches (MEGAN4) or ab initio techniques (GLIMMER-MG). Glimmer-MG is an extension to GLIMMER that relies mostly on an ab initio approach for gene finding, using training sets from related organisms. The prediction strategy is augmented by classification and clustering of gene data sets prior to applying ab initio gene prediction methods. The data is clustered by species. This classification method leverages techniques from metagenomic phylogenetic classification. Examples of software for this purpose are Phymm, which uses interpolated Markov models, and PhymmBL, which integrates BLAST into the classification routines. MEGAN4 uses a sequence similarity approach, using local alignment against databases of known sequences, but also attempts to classify using additional information on functional roles, biological pathways and enzymes. As in single organism gene prediction, sequence similarity approaches are limited by the size of the database. FragGeneScan and MetaGeneAnnotator are popular gene prediction programs based on hidden Markov models. These predictors account for sequencing errors and partial genes, and work for short reads. Another fast and accurate tool for gene prediction in metagenomes is MetaGeneMark. This tool is used by the DOE Joint Genome Institute to annotate IMG/M, the largest metagenome collection to date.
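As referenced above under pseudogene prediction, a minimal sketch of disablement detection: given a putative coding sequence, flag frameshifts (a length that breaks the reading frame) and premature in-frame stop codons (nonsense mutations). The reporting format and the toy input are illustrative.

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def disablements(cds):
    """Return strings describing disablements found in a candidate
    coding sequence: frame-breaking length or premature stop codons."""
    problems = []
    if len(cds) % 3 != 0:
        problems.append("length not a multiple of 3 (possible frameshift)")
    usable = len(cds) - len(cds) % 3
    codons = [cds[i:i + 3].upper() for i in range(0, usable, 3)]
    for idx, codon in enumerate(codons[:-1]):   # ignore the terminal stop
        if codon in STOP_CODONS:
            problems.append(f"premature stop {codon} at codon {idx + 1}")
    return problems

print(disablements("ATGAAATAGGGC" * 3))  # reports the in-frame TAG stops
```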
See also List of gene prediction software Phylogenetic footprinting Protein function prediction Protein structure prediction Protein–protein interaction prediction Pseudogene (database) Sequence mining Sequence similarity (homology) References External links Augustus FGENESH GeMoMa - Homology-based gene prediction based on amino acid and intron position conservation as well as RNA-Seq data geneid, SGP2 Glimmer, GlimmerHMM GenomeThreader ChemGenome GeneMark Gismo mGene StarORF - A multi-platform and web tool for predicting ORFs and obtaining reverse complement sequence Maker - A portable and easily configurable genome annotation pipeline Bioinformatics Mathematical and theoretical biology Markov models
Gene prediction
[ "Mathematics", "Engineering", "Biology" ]
3,397
[ "Bioinformatics", "Applied mathematics", "Biological engineering", "Mathematical and theoretical biology" ]
579,400
https://en.wikipedia.org/wiki/Levinthal%27s%20paradox
Levinthal's paradox is a thought experiment in the field of computational protein structure prediction; protein folding seeks a stable energy configuration. An algorithmic search through all possible conformations to identify the minimum energy configuration (the native state) would take an immense duration; however, in reality protein folding happens very quickly, even in the case of the most complex structures, suggesting that the transitions are somehow guided into a stable state through an uneven energy landscape. History In 1969, Cyrus Levinthal noted that, because of the very large number of degrees of freedom in an unfolded polypeptide chain, the molecule has an astronomical number of possible conformations. An estimate of 10^300 was made in one of his papers (often incorrectly cited as the 1968 paper). For example, a polypeptide of 100 residues will have 200 different phi and psi bond angles, two within each residue. If each of these bond angles can be in one of three stable conformations, the protein may misfold into a maximum of 3^200 different conformations (including any possible folding redundancy), not even considering the peptide linkages between each residue or the conformations of the side-chains. Therefore, if a protein were to attain its correctly folded configuration by sequentially sampling all the possible conformations, it would require a time longer than the age of the universe to arrive at its correct native conformation. This is true even if conformations are sampled at rapid (nanosecond or picosecond) rates. The "paradox" is that most small proteins fold spontaneously on a millisecond or even microsecond time scale. The solution to this paradox has been established by computational approaches to protein structure prediction. Levinthal himself was aware that proteins fold spontaneously and on short timescales. He suggested that the paradox can be resolved if "protein folding is sped up and guided by the rapid formation of local interactions which then determine the further folding of the peptide; this suggests local amino acid sequences which form stable interactions and serve as nucleation points in the folding process". Indeed, the protein folding intermediates and the partially folded transition states were experimentally detected, which explains the fast protein folding. This is also described as protein folding directed within funnel-like energy landscapes. Some computational approaches to protein structure prediction have sought to identify and simulate the mechanism of protein folding. Levinthal also suggested that the native structure might have a higher energy, if the lowest energy was not kinetically accessible. An analogy is a rock tumbling down a hillside that lodges in a gully rather than reaching the base. Levinthal's paradox was cited on the first page of the Scientific Background to the 2024 Nobel Prize in Chemistry (awarded to David Baker, Demis Hassabis, and John M. Jumper for computational protein design and protein structure prediction) by way of demonstrating the sheer scale of the problem given the astronomical number of permutations. Suggested explanations According to Edward Trifonov and Igor Berezovsky, proteins fold by subunits (modules) of the size of 25–30 amino acids.
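The combinatorial scale behind the paradox is easy to reproduce; the sketch below redoes the back-of-the-envelope arithmetic from the passage above, with a round picosecond sampling rate and a rough age of the universe as illustrative figures.

```python
# 3 stable conformations for each of the ~200 backbone bond angles
# of a 100-residue polypeptide:
conformations = 3 ** 200                 # about 2.7e95

sampling_rate = 1e12                     # conformations per second (one per ps)
seconds_needed = conformations / sampling_rate
age_of_universe = 4.3e17                 # seconds, roughly 13.8 billion years

print(f"{conformations:.2e} conformations")
print(f"{seconds_needed / age_of_universe:.1e} universe ages to try them all")
```

Even at a picosecond per conformation, the exhaustive search would take on the order of 10^65 ages of the universe, which is the force of Levinthal's observation.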
See also Chaperone proteins that assist other proteins in folding or unfolding Folding funnel Anfinsen's dogma References External links http://www-wales.ch.cam.ac.uk/~mark/levinthal/levinthal.html https://www.wired.com/wired/archive/9.07/blue_pr.html https://web.archive.org/web/20041011182039/http://www.sdsc.edu/~nair/levinthal.html Eponymous paradoxes Protein structure Physical paradoxes Thought experiments
Levinthal's paradox
[ "Chemistry" ]
749
[ "Protein structure", "Structural biology" ]
579,414
https://en.wikipedia.org/wiki/Drug%20design
Drug design, often referred to as rational drug design or simply rational design, is the inventive process of finding new medications based on the knowledge of a biological target. The drug is most commonly an organic small molecule that activates or inhibits the function of a biomolecule such as a protein, which in turn results in a therapeutic benefit to the patient. In the most basic sense, drug design involves the design of molecules that are complementary in shape and charge to the biomolecular target with which they interact and therefore will bind to it. Drug design frequently but not necessarily relies on computer modeling techniques. This type of modeling is sometimes referred to as computer-aided drug design. Finally, drug design that relies on the knowledge of the three-dimensional structure of the biomolecular target is known as structure-based drug design. In addition to small molecules, biopharmaceuticals including peptides and especially therapeutic antibodies are an increasingly important class of drugs, and computational methods for improving the affinity, selectivity, and stability of these protein-based therapeutics have also been developed. Definition The phrase "drug design" is similar to ligand design (i.e., design of a molecule that will bind tightly to its target). Although design techniques for prediction of binding affinity are reasonably successful, there are many other properties, such as bioavailability, metabolic half-life, and side effects, that first must be optimized before a ligand can become a safe and effective drug. These other characteristics are often difficult to predict with rational design techniques. Due to high attrition rates, especially during clinical phases of drug development, more attention is being focused early in the drug design process on selecting candidate drugs whose physicochemical properties are predicted to result in fewer complications during development and hence more likely to lead to an approved, marketed drug. Furthermore, in vitro experiments complemented with computational methods are increasingly used in early drug discovery to select compounds with more favorable ADME (absorption, distribution, metabolism, and excretion) and toxicological profiles. Drug targets A biomolecular target (most commonly a protein or a nucleic acid) is a key molecule involved in a particular metabolic or signaling pathway that is associated with a specific disease condition or pathology, or with the infectivity or survival of a microbial pathogen. Potential drug targets are not necessarily disease causing but must by definition be disease modifying. In some cases, small molecules will be designed to enhance or inhibit the target function in the specific disease-modifying pathway. Small molecules (for example receptor agonists, antagonists, inverse agonists, or modulators; enzyme activators or inhibitors; or ion channel openers or blockers) will be designed that are complementary to the binding site of the target. Small molecules (drugs) can be designed so as not to affect any other important "off-target" molecules (often referred to as antitargets), since drug interactions with off-target molecules may lead to undesirable side effects. Due to similarities in binding sites, closely related targets identified through sequence homology have the highest chance of cross reactivity and hence the highest side effect potential.
Most commonly, drugs are organic small molecules produced through chemical synthesis, but biopolymer-based drugs (also known as biopharmaceuticals) produced through biological processes are becoming increasingly common. In addition, mRNA-based gene silencing technologies may have therapeutic applications. For example, nanomedicines based on mRNA can streamline and expedite the drug development process, enabling transient and localized expression of immunostimulatory molecules. In vitro transcribed (IVT) mRNA allows for delivery to various accessible cell types via the blood or alternative pathways. The use of IVT mRNA serves to convey specific genetic information into a person's cells, with the primary objective of preventing or altering a particular disease. Drug discovery Phenotypic drug discovery Phenotypic drug discovery is a traditional drug discovery method, also known as forward pharmacology or classical pharmacology. It uses the process of phenotypic screening on collections of synthetic small molecules, natural products, or extracts within chemical libraries to pinpoint substances exhibiting beneficial therapeutic effects. In this method, the in vivo or in vitro functional activity of a substance (such as a crude extract or natural product) is discovered first, and target identification is performed afterwards. Phenotypic discovery uses a practical and target-independent approach to generate initial leads, aiming to discover pharmacologically active compounds and therapeutics that operate through novel drug mechanisms. This method allows the exploration of disease phenotypes to find potential treatments for conditions with unknown, complex, or multifactorial origins, where the understanding of molecular targets is insufficient for effective intervention. Rational drug discovery Rational drug design (also called reverse pharmacology) begins with a hypothesis that modulation of a specific biological target may have therapeutic value. In order for a biomolecule to be selected as a drug target, two essential pieces of information are required. The first is evidence that modulation of the target will be disease modifying. This knowledge may come from, for example, disease linkage studies that show an association between mutations in the biological target and certain disease states. The second is that the target is capable of binding to a small molecule and that its activity can be modulated by the small molecule. Once a suitable target has been identified, the target is normally cloned, produced, and purified. The purified protein is then used to establish a screening assay. In addition, the three-dimensional structure of the target may be determined. The search for small molecules that bind to the target begins by screening libraries of potential drug compounds. This may be done by using the screening assay (a "wet screen"). In addition, if the structure of the target is available, a virtual screen of candidate drugs may be performed. Ideally, the candidate drug compounds should be "drug-like", that is, they should possess properties that are predicted to lead to oral bioavailability, adequate chemical and metabolic stability, and minimal toxic effects. Several methods are available to estimate druglikeness, such as Lipinski's Rule of Five and a range of scoring methods such as lipophilic efficiency. Several methods for predicting drug metabolism have also been proposed in the scientific literature.
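As an illustration of the druglikeness filters mentioned above, a minimal rule-of-five check; the property keys and the common convention of tolerating a single violation are assumptions for illustration, and in practice the inputs would be computed by a cheminformatics toolkit rather than typed in by hand.

```python
def passes_lipinski(props):
    """Lipinski's Rule of Five on precomputed molecular properties."""
    violations = [
        props["mol_weight"] > 500,    # daltons
        props["logp"] > 5,            # octanol-water partition coefficient
        props["h_bond_donors"] > 5,
        props["h_bond_acceptors"] > 10,
    ]
    return sum(violations) <= 1       # at most one violation tolerated

example = {"mol_weight": 349.8, "logp": 2.9,
           "h_bond_donors": 1, "h_bond_acceptors": 5}
print(passes_lipinski(example))       # True
```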
Due to the large number of drug properties that must be simultaneously optimized during the design process, multi-objective optimization techniques are sometimes employed. Finally, because of the limitations in the current methods for prediction of activity, drug design is still very much reliant on serendipity and bounded rationality. Computer-aided drug design The most fundamental goal in drug design is to predict whether a given molecule will bind to a target and, if so, how strongly. Molecular mechanics or molecular dynamics is most often used to estimate the strength of the intermolecular interaction between the small molecule and its biological target. These methods are also used to predict the conformation of the small molecule and to model conformational changes in the target that may occur when the small molecule binds to it. Semi-empirical, ab initio quantum chemistry methods, or density functional theory are often used to provide optimized parameters for the molecular mechanics calculations and also provide an estimate of the electronic properties (electrostatic potential, polarizability, etc.) of the drug candidate that will influence binding affinity. Molecular mechanics methods may also be used to provide semi-quantitative prediction of the binding affinity. Also, knowledge-based scoring functions may be used to provide binding affinity estimates. These methods use linear regression, machine learning, neural nets or other statistical techniques to derive predictive binding affinity equations by fitting experimental affinities to computationally derived interaction energies between the small molecule and the target. Ideally, the computational method will be able to predict affinity before a compound is synthesized, and hence in theory only one compound needs to be synthesized, saving enormous time and cost. The reality is that present computational methods are imperfect and provide, at best, only qualitatively accurate estimates of affinity. In practice, it still requires several iterations of design, synthesis, and testing before an optimal drug is discovered. Computational methods have accelerated discovery by reducing the number of iterations required and have often provided novel structures. Computer-aided drug design may be used at any of the following stages of drug discovery: hit identification using virtual screening (structure- or ligand-based design); hit-to-lead optimization of affinity and selectivity (structure-based design, QSAR, etc.); and lead optimization of other pharmaceutical properties while maintaining affinity. In order to overcome the insufficient prediction of binding affinity calculated by recent scoring functions, protein-ligand interaction and compound 3D structure information are used for analysis. For structure-based drug design, several post-screening analyses focusing on protein-ligand interaction have been developed for improving enrichment and effectively mining potential candidates. In consensus scoring, candidates are selected by the vote of multiple scoring functions; this may lose the relationship between protein-ligand structural information and the scoring criterion (a minimal sketch of rank-by-vote consensus scoring follows at the end of this section). In cluster analysis, candidates are represented and clustered according to protein-ligand 3D information; this requires a meaningful representation of protein-ligand interactions. Types There are two major types of drug design. The first is referred to as ligand-based drug design and the second, structure-based drug design.
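As referenced under post-screening analyses above, a minimal sketch of rank-by-vote consensus scoring: each scoring function ranks all candidates, and candidates are ordered by average rank, which sidesteps the incomparable scales of raw scores. The callables stand in for real docking scorers; nothing here is specific to any named program.

```python
def consensus_rank(candidates, scoring_functions):
    """Order candidates by their average rank across several scorers.
    Each scorer maps a candidate to a number (higher is better)."""
    average_rank = {c: 0.0 for c in candidates}
    for score in scoring_functions:
        ranked = sorted(candidates, key=score, reverse=True)
        for rank, c in enumerate(ranked):
            average_rank[c] += rank / len(scoring_functions)
    return sorted(candidates, key=lambda c: average_rank[c])

# Toy usage with two fake scorers over three candidate names:
scorers = [lambda c: len(c), lambda c: c.count("a")]
print(consensus_rank(["abc", "aaaa", "xy"], scorers))
```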
Ligand-based Ligand-based drug design (or indirect drug design) relies on knowledge of other molecules that bind to the biological target of interest. These other molecules may be used to derive a pharmacophore model that defines the minimum necessary structural characteristics a molecule must possess in order to bind to the target. A model of the biological target may be built based on the knowledge of what binds to it, and this model in turn may be used to design new molecular entities that interact with the target. Alternatively, a quantitative structure-activity relationship (QSAR), in which calculated properties of molecules are correlated with their experimentally determined biological activity, may be derived. These QSAR relationships in turn may be used to predict the activity of new analogs. Structure-based Structure-based drug design (or direct drug design) relies on knowledge of the three-dimensional structure of the biological target obtained through methods such as x-ray crystallography or NMR spectroscopy. If an experimental structure of a target is not available, it may be possible to create a homology model of the target based on the experimental structure of a related protein. Using the structure of the biological target, candidate drugs that are predicted to bind with high affinity and selectivity to the target may be designed using interactive graphics and the intuition of a medicinal chemist. Alternatively, various automated computational procedures may be used to suggest new drug candidates. Current methods for structure-based drug design can be divided roughly into three main categories. The first method is identification of new ligands for a given receptor by searching large databases of 3D structures of small molecules to find those fitting the binding pocket of the receptor using fast approximate docking programs. This method is known as virtual screening. A second category is de novo design of new ligands. In this method, ligand molecules are built up within the constraints of the binding pocket by assembling small pieces in a stepwise manner. These pieces can be either individual atoms or molecular fragments. The key advantage of such a method is that novel structures, not contained in any database, can be suggested. A third method is the optimization of known ligands by evaluating proposed analogs within the binding cavity. Binding site identification Binding site identification is the first step in structure-based design. If the structure of the target or a sufficiently similar homolog is determined in the presence of a bound ligand, then the ligand should be observable in the structure, in which case location of the binding site is trivial. However, there may be unoccupied allosteric binding sites that may be of interest. Furthermore, it may be that only apoprotein (protein without ligand) structures are available, and the reliable identification of unoccupied sites that have the potential to bind ligands with high affinity is non-trivial. In brief, binding site identification usually relies on identification of concave surfaces on the protein that can accommodate drug-sized molecules and that also possess appropriate "hot spots" (hydrophobic surfaces, hydrogen bonding sites, etc.) that drive ligand binding. Scoring functions Structure-based drug design attempts to use the structure of proteins as a basis for designing new ligands by applying the principles of molecular recognition.
Selective high affinity binding to the target is generally desirable, since it leads to more efficacious drugs with fewer side effects. Thus, one of the most important principles for designing or obtaining potential new ligands is to predict the binding affinity of a certain ligand to its target (and known antitargets) and use the predicted affinity as a criterion for selection. One early general-purpose empirical scoring function to describe the binding energy of ligands to receptors was developed by Böhm. This empirical scoring function took the form ΔG_bind = ΔG_0 + ΔG_hb Σ(h-bonds) f(ΔR, Δα) + ΔG_ionic Σ(ionic interactions) f(ΔR, Δα) + ΔG_lipo |A_lipo| + ΔG_rot N_rot, where: ΔG_0 – empirically derived offset that in part corresponds to the overall loss of translational and rotational entropy of the ligand upon binding; ΔG_hb – contribution from hydrogen bonding; ΔG_ionic – contribution from ionic interactions; ΔG_lipo – contribution from lipophilic interactions, where |A_lipo| is the surface area of lipophilic contact between the ligand and receptor; ΔG_rot – entropy penalty due to freezing a rotatable bond in the ligand upon binding. A more general thermodynamic "master" equation is as follows: ΔG_bind = ΔG_desolvation + ΔG_motion + ΔG_configuration + ΔG_interaction, where: desolvation – enthalpic penalty for removing the ligand from solvent; motion – entropic penalty for reducing the degrees of freedom when a ligand binds to its receptor; configuration – conformational strain energy required to put the ligand in its "active" conformation; interaction – enthalpic gain for "resolvating" the ligand with its receptor. The basic idea is that the overall binding free energy can be decomposed into independent components that are known to be important for the binding process. Each component reflects a certain kind of free energy alteration during the binding process between a ligand and its target receptor. The master equation is the linear combination of these components. According to the Gibbs free energy equation, ΔG_bind = RT ln K_d, the relation between the dissociation equilibrium constant, K_d, and the components of free energy can be built. Various computational methods are used to estimate each of the components of the master equation. For example, the change in polar surface area upon ligand binding can be used to estimate the desolvation energy. The number of rotatable bonds frozen upon ligand binding is proportional to the motion term. The configurational or strain energy can be estimated using molecular mechanics calculations. Finally, the interaction energy can be estimated using methods such as the change in non-polar surface, statistically derived potentials of mean force, the number of hydrogen bonds formed, etc. In practice, the components of the master equation are fit to experimental data using multiple linear regression. This can be done with a diverse training set including many types of ligands and receptors to produce a less accurate but more general "global" model, or with a more restricted set of ligands and receptors to produce a more accurate but less general "local" model. Examples A particular example of rational drug design involves the use of three-dimensional information about biomolecules obtained from such techniques as X-ray crystallography and NMR spectroscopy. Computer-aided drug design in particular becomes much more tractable when there is a high-resolution structure of a target protein bound to a potent ligand. This approach to drug discovery is sometimes referred to as structure-based drug design. The first unequivocal example of the application of structure-based drug design leading to an approved drug is the carbonic anhydrase inhibitor dorzolamide, which was approved in 1995.
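Returning to the empirical scoring function described above, the sketch below evaluates a Böhm-style linear combination from simple interaction counts. The coefficient defaults are placeholders with plausible signs and rough magnitudes, not Böhm's fitted constants, and the ideal-geometry penalty factors f(ΔR, Δα) are taken as 1.

```python
def bohm_style_score(n_hbonds, n_ionic, lipo_area, n_rot_frozen,
                     dG0=5.4, dG_hb=-4.7, dG_ionic=-8.3,
                     dG_lipo=-0.17, dG_rot=1.4):
    """Böhm-style empirical binding score (kJ/mol-like units).
    Negative totals indicate favorable predicted binding."""
    return (dG0                        # entropy offset for binding
            + dG_hb * n_hbonds         # hydrogen-bond term
            + dG_ionic * n_ionic       # ionic-interaction term
            + dG_lipo * lipo_area      # lipophilic contact area (Å^2)
            + dG_rot * n_rot_frozen)   # frozen rotatable-bond penalty

print(bohm_style_score(n_hbonds=3, n_ionic=1, lipo_area=90.0, n_rot_frozen=4))
```

In a real application the five coefficients would be refit by multiple linear regression against measured affinities, exactly as the master-equation discussion above describes.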
Another case study in rational drug design is imatinib, a tyrosine kinase inhibitor designed specifically for the bcr-abl fusion protein that is characteristic of Philadelphia chromosome-positive leukemias (chronic myelogenous leukemia and occasionally acute lymphocytic leukemia). Imatinib is substantially different from previous drugs for cancer, as most agents of chemotherapy simply target rapidly dividing cells, not differentiating between cancer cells and other tissues. Additional examples include: many of the atypical antipsychotics; cimetidine, the prototypical H2-receptor antagonist from which the later members of the class were developed; selective COX-2 inhibitor NSAIDs; enfuvirtide, a peptide HIV entry inhibitor; nonbenzodiazepines like zolpidem and zopiclone; raltegravir, an HIV integrase inhibitor; SSRIs (selective serotonin reuptake inhibitors), a class of antidepressants; and zanamivir, an antiviral drug. Drug screening Types of drug screening include phenotypic screening, high-throughput screening, and virtual screening. Phenotypic screening is characterized by the process of screening drugs using cellular or animal disease models to identify compounds that alter the phenotype and produce beneficial disease-related effects. Emerging technologies in high-throughput screening substantially enhance processing speed and decrease the required detection volume. Virtual screening is completed by computer, enabling a large number of molecules to be screened with a short cycle time and low cost. Virtual screening uses a range of computational methods that empower chemists to reduce extensive virtual libraries into more manageable sizes. Case studies 5-HT3 antagonists Acetylcholine receptor agonists Angiotensin receptor antagonists Bcr-Abl tyrosine-kinase inhibitors Cannabinoid receptor antagonists CCR5 receptor antagonists Cyclooxygenase 2 inhibitors Dipeptidyl peptidase-4 inhibitors HIV protease inhibitors NK1 receptor antagonists Non-nucleoside reverse transcriptase inhibitors Nucleoside and nucleotide reverse transcriptase inhibitors PDE5 inhibitors Proton pump inhibitors Renin inhibitors Triptans TRPV1 antagonists c-Met inhibitors Criticism It has been argued that the highly rigid and focused nature of rational drug design suppresses serendipity in drug discovery. See also Bioisostere Bioinformatics Cheminformatics Drug development Drug discovery List of pharmaceutical companies Medicinal chemistry Molecular design software Molecular modification Retrometabolic drug design References External links Drug Design Org: https://www.drugdesign.org/chapters/drug-design/ Design of experiments Drug discovery Medicinal chemistry
Drug design
[ "Chemistry", "Biology" ]
3,758
[ "Life sciences industry", "Drug discovery", "nan", "Medicinal chemistry", "Biochemistry" ]
579,645
https://en.wikipedia.org/wiki/Strontium%20titanate
Strontium titanate is an oxide of strontium and titanium with the chemical formula SrTiO3. At room temperature, it is a centrosymmetric paraelectric material with a perovskite structure. At low temperatures it approaches a ferroelectric phase transition with a very large dielectric constant of ~10^4 but remains paraelectric down to the lowest temperatures measured as a result of quantum fluctuations, making it a quantum paraelectric. It was long thought to be a wholly artificial material, until 1982 when its natural counterpart, discovered in Siberia and named tausonite, was recognised by the IMA. Tausonite remains an extremely rare mineral in nature, occurring as very tiny crystals. Its most important application has been in its synthesized form, wherein it is occasionally encountered as a diamond simulant, in precision optics, in varistors, and in advanced ceramics. The name tausonite was given in honour of Lev Vladimirovich Tauson (1917–1989), a Russian geochemist. Disused trade names for the synthetic product include strontium mesotitanate, Diagem, and Marvelite. This product is currently being marketed for its use in jewelry under the name Fabulite. Other than its type locality of the Murun Massif in the Sakha Republic, natural tausonite is also found in Cerro Sarambi, Concepción department, Paraguay; and along the Kotaki River of Honshū, Japan. Properties SrTiO3 has an indirect band gap of 3.25 eV and a direct gap of 3.75 eV, in the typical range of semiconductors. Synthetic strontium titanate has a very large dielectric constant (300) at room temperature and low electric field. It has a specific resistivity of over 10^9 Ω·cm for very pure crystals. It is also used in high-voltage capacitors. Introducing mobile charge carriers by doping leads to Fermi-liquid metallic behavior already at very low charge carrier densities. At high electron densities strontium titanate becomes superconducting below 0.35 K and was the first insulator and oxide discovered to be superconductive. Strontium titanate is both much denser (specific gravity 4.88 for natural, 5.13 for synthetic) and much softer (Mohs hardness 5.5 for synthetic, 6–6.5 for natural) than diamond. Its crystal system is cubic and its refractive index (2.410, as measured by sodium light, 589.3 nm) is nearly identical to that of diamond (at 2.417), but the dispersion (the optical property responsible for the "fire" of cut gemstones) of strontium titanate is 4.3 times that of diamond, at 0.190 (B–G interval). This results in a shocking display of fire compared to diamond and diamond simulants such as YAG, GAG, GGG, cubic zirconia, and moissanite. Synthetics are usually transparent and colourless, but can be doped with certain rare earth or transition metals to give reds, yellows, browns, and blues. Natural tausonite is usually translucent to opaque, in shades of reddish brown, dark red, or grey. Both have an adamantine (diamond-like) lustre. Strontium titanate is considered extremely brittle with a conchoidal fracture; natural material is cubic or octahedral in habit and streaks brown. Through a hand-held (direct vision) spectroscope, doped synthetics will exhibit a rich absorption spectrum typical of doped stones. Synthetic material has a melting point of ca. 2080 °C (3776 °F) and is readily attacked by hydrofluoric acid. Under extremely low oxygen partial pressure, strontium titanate decomposes via incongruent sublimation of strontium well below the melting temperature.
At temperatures lower than 105 K, its cubic structure transforms to tetragonal. Its monocrystals can be used as optical windows and high-quality sputter deposition targets. SrTiO3 is an excellent substrate for epitaxial growth of high-temperature superconductors and many oxide-based thin films. It is particularly well known as the substrate for the growth of the lanthanum aluminate-strontium titanate interface. Doping strontium titanate with niobium makes it electrically conductive, making it one of the only conductive commercially available single crystal substrates for the growth of perovskite oxides. Its bulk lattice parameter of 3.905 Å makes it suitable as the substrate for the growth of many other oxides, including the rare-earth manganites, titanates, lanthanum aluminate (LaAlO3), strontium ruthenate (SrRuO3) and many others. Oxygen vacancies are fairly common in SrTiO3 crystals and thin films. Oxygen vacancies induce free electrons in the conduction band of the material, making it more conductive and opaque. These vacancies can be caused by exposure to reducing conditions, such as high vacuum at elevated temperatures. High-quality epitaxial SrTiO3 layers can also be grown on silicon without forming silicon dioxide, thereby making SrTiO3 an alternative gate dielectric material. This also enables the integration of other thin film perovskite oxides onto silicon. SrTiO3 can change its properties when it is exposed to light. These changes depend on the temperature and the defects in the material. SrTiO3 has been shown to possess persistent photoconductivity, where exposing the crystal to light will increase its electrical conductivity by over two orders of magnitude. After the light is turned off, the enhanced conductivity persists for several days, with negligible decay. At low temperatures, the main effects of light are electronic, meaning that they involve the creation, movement, and recombination of electrons and holes (positive charges) in the material. These effects include photoconductivity, photoluminescence, photovoltage, and photochromism. They are influenced by the defect chemistry of SrTiO3, which determines the energy levels, band gap, carrier concentration, and mobility of the material. At high temperatures (>200 °C), the main effects of light are photoionic, meaning that they involve the migration of oxygen vacancies (negative ions) in the material. These vacancies are the main ionic defects in SrTiO3, and they can alter the electronic structure, defect chemistry, and surface properties of the material. These effects include photoinduced phase transitions, photoinduced oxygen exchange, and photoinduced surface reconstruction. They are influenced by the oxygen pressure, the crystal structure, and the doping level of SrTiO3. Owing to its significant ionic and electronic conduction, SrTiO3 is a promising candidate for use as a mixed conductor. Synthesis Synthetic strontium titanate was one of several titanates patented during the late 1940s and early 1950s; other titanates included barium titanate and calcium titanate. Research was conducted primarily at the National Lead Company (later renamed NL Industries) in the United States, by Leon Merker and Langtry E. Lynd. Merker and Lynd first patented the growth process on February 10, 1953; a number of refinements were subsequently patented over the next four years, such as modifications to the feed powder and additions of colouring dopants.
A modification to the basic Verneuil process (also known as flame-fusion) is the favoured method of growth. An inverted oxy-hydrogen blowpipe is used, with feed powder mixed with oxygen carefully fed through the blowpipe in the typical fashion, but with the addition of a third pipe to deliver oxygen, creating a tricone burner. The extra oxygen is required for successful formation of strontium titanate, which would otherwise fail to oxidize completely because of the titanium component. The ratio is ca. 1.5 volumes of hydrogen for each volume of oxygen. The highly purified feed powder is derived by first producing titanyl double oxalate salt (SrTiO(C2O4)2) by reacting strontium chloride (SrCl2) and oxalic acid ((COOH)2) with titanium tetrachloride (TiCl4). The salt is washed to eliminate chloride, heated to 1000 °C in order to produce a free-flowing granular powder of the required composition, and is then ground and sieved to ensure all particles are between 0.2 and 0.5 micrometres in size. The feed powder falls through the oxyhydrogen flame, melts, and lands on a rotating and slowly descending pedestal below. The height of the pedestal is constantly adjusted to keep its top at the optimal position below the flame, and over a number of hours the molten powder cools and crystallises to form a single pedunculated pear or boule crystal. This boule is usually no larger than 2.5 centimetres in diameter and 10 centimetres long; it is an opaque black to begin with, requiring further annealing in an oxidizing atmosphere in order to make the crystal colourless and to relieve strain. This is done at over 1000 °C for 12 hours. Thin films of SrTiO3 can be grown epitaxially by various methods, including pulsed laser deposition, molecular beam epitaxy, RF sputtering and atomic layer deposition. As in most thin films, different growth methods can result in significantly different defect and impurity densities and crystalline quality, resulting in a large variation of the electronic and optical properties. Use as a diamond simulant Its cubic structure and high dispersion once made synthetic strontium titanate a prime candidate for simulating diamond, and for a time large quantities of it were manufactured for this sole purpose. Strontium titanate was in competition with synthetic rutile ("titania") at the time, and had the advantage of lacking the unfortunate yellow tinge and strong birefringence inherent to the latter material. While it was softer, it was significantly closer to diamond in likeness. Eventually, however, both would fall into disuse, being eclipsed by the creation of "better" simulants: first by yttrium aluminium garnet (YAG), followed shortly after by gadolinium gallium garnet (GGG), and finally by the (to date) ultimate simulant in terms of diamond-likeness and cost-effectiveness, cubic zirconia. Despite being outmoded, strontium titanate is still manufactured and periodically encountered in jewellery. It is one of the most costly of diamond simulants, and due to its rarity collectors may pay a premium for large (i.e., >2 carat (400 mg)) specimens. As a diamond simulant, strontium titanate is most deceptive when mingled with melée (i.e., <0.20 carat (40 mg)) stones and when it is used as the base material for a composite or doublet stone (with, e.g., synthetic corundum as the crown or top of the stone).
Under the microscope, gemmologists distinguish strontium titanate from diamond by the former's softness—manifested by surface abrasions—and excess dispersion (to the trained eye), and occasional gas bubbles which are remnants of synthesis. Doublets can be detected by a join line at the girdle ("waist" of the stone) and flattened air bubbles or glue visible within the stone at the point of bonding. Use in radioisotope thermoelectric generators Due to its high melting point and insolubility in water, strontium titanate has been used as a strontium-90-containing material in radioisotope thermoelectric generators (RTGs), such as the US Sentinel and Soviet Beta-M series. As strontium-90 has a high fission product yield and is easily extracted from spent nuclear fuel, Sr-90 based RTGs can in principle be produced cheaper than those based on plutonium-238 or other radionuclides which have to be produced in dedicated facilities. However, due to the lower power density (~0.45W thermal per gram of Strontium-90-Titanate) and half life, space based applications, which put a particular premium on low weight, high reliability and longevity prefer Plutonium-238. Terrestrial off-grid applications of RTGs meanwhile have been largely phased out due to concern over orphan sources and the decreasing price and increasing availability of solar panels, small wind turbines, chemical battery storage and other off-grid power solutions. Use in solid oxide fuel cells Strontium titanate's mixed conductivity has attracted attention for use in solid oxide fuel cells (SOFCs). It demonstrates both electronic and ionic conductivity which is useful for SOFC electrodes because there is an exchange of gas and oxygen ions in the material and electrons on both sides of the cell. H2 + O^2- -> H2O + 2e- (anode) 1/2O2 + 2e- -> O^2- (cathode) Strontium titanate is doped with different materials for use on different sides of a fuel cell. On the fuel side (anode), where the first reaction occurs, it is often doped with lanthanum to form lanthanum-doped strontium titanate (LST). In this case, the A-site, or position in the unit cell where strontium usually sits, is sometimes filled by lanthanum instead, this causes the material to exhibit n-type semiconductor properties, including electronic conductivity. It also shows oxygen ion conduction due to the perovskite structure tolerance for oxygen vacancies. This material has a thermal coefficient of expansion similar to that of the common electrolyte yttria-stabilized zirconia (YSZ), chemical stability during the reactions which occur at fuel cell electrodes, and electronic conductivity of up to 360 S/cm under SOFC operating conditions. Another key advantage of these LST is that it shows a resistance to sulfur poisoning, which is an issue with the currently used nickel - ceramic (cermet) anodes. Another related compound is strontium titanium ferrite (STF) which is used as a cathode (oxygen-side) material in SOFCs. This material also shows mixed ionic and electronic conductivity which is important as it means the reduction reaction which happens at the cathode can occur over a wider area. Building on this material by adding cobalt on the B-site (replacing titanium) as well as iron, we have the material STFC, or cobalt-substituted STF, which shows remarkable stability as a cathode material as well as lower polarization resistance than other common cathode materials such as lanthanum strontium cobalt ferrite. 
These cathodes also have the advantage of not containing rare earth metals, which makes them cheaper than many of the alternatives. See also Calcium copper titanate References External links An electron micrograph of strontium titanate, as artwork entitled "Strontium" at the DeYoung Museum in San Francisco Titanates Strontium compounds Gemstones Ceramic materials Transition metal oxides Diamond simulants Perovskites
Strontium titanate
[ "Physics", "Engineering" ]
3,216
[ "Materials", "Ceramic materials", "Gemstones", "Ceramic engineering", "Matter" ]
580,069
https://en.wikipedia.org/wiki/Gay-Lussac%27s%20law
Gay-Lussac's law usually refers to Joseph-Louis Gay-Lussac's law of combining volumes of gases, discovered in 1808 and published in 1809. However, it sometimes refers to the proportionality of the volume of a gas to its absolute temperature at constant pressure. The latter law was published by Gay-Lussac in 1802, but in the article in which he described his work, he cited earlier unpublished work from the 1780s by Jacques Charles. Consequently, the volume-temperature proportionality is usually known as Charles's Law. Law of combining volumes The law of combining volumes states that when gases chemically react together, they do so in amounts by volume which bear small whole-number ratios (the volumes calculated at the same temperature and pressure). The ratio between the volumes of the reactant gases and the gaseous products can be expressed in simple whole numbers. For example, Gay-Lussac found that two volumes of hydrogen react with one volume of oxygen to form two volumes of gaseous water. Expressed concretely, 100 mL of hydrogen combine with 50 mL of oxygen to give 100 mL of water vapor: Hydrogen (100 mL) + Oxygen (50 mL) = Water (100 mL). Thus, the volumes of hydrogen and oxygen which combine (i.e., 100 mL and 50 mL) bear a simple ratio of 2:1, as also is the case for the ratio of product water vapor to reactant oxygen. Based on Gay-Lussac's results, Amedeo Avogadro hypothesized in 1811 that, at the same temperature and pressure, equal volumes of gases (of whatever kind) contain equal numbers of molecules (Avogadro's law). He pointed out that if this hypothesis is true, then the previously stated result, 2 volumes of hydrogen + 1 volume of oxygen = 2 volumes of gaseous water, could also be expressed as 2 molecules of hydrogen + 1 molecule of oxygen = 2 molecules of water. The law of combining volumes of gases was announced publicly by Joseph Louis Gay-Lussac on the last day of 1808, and published in 1809. Since there was no direct evidence for Avogadro's molecular theory, very few chemists adopted Avogadro's hypothesis as generally valid until the Italian chemist Stanislao Cannizzaro argued convincingly for it during the First International Chemical Congress in 1860. Pressure-temperature law In the 17th century Guillaume Amontons discovered a regular relationship between the pressure and temperature of a gas at constant volume. Some introductory physics textbooks still define the pressure-temperature relationship as Gay-Lussac's law. Gay-Lussac primarily investigated the relationship between volume and temperature and published it in 1802, but his work did cover some comparison between pressure and temperature. Given the relative technology available to both men, Amontons could only work with air as a gas, whereas Gay-Lussac was able to experiment with multiple types of common gases, such as oxygen, nitrogen, and hydrogen. Volume-temperature law Regarding the volume-temperature relationship, Gay-Lussac attributed his findings to Jacques Charles because he used much of Charles's unpublished data from 1787; hence, the law became known as Charles's law or the Law of Charles and Gay-Lussac. Amontons's, Charles's, and Boyle's laws together form the combined gas law. These three gas laws in combination with Avogadro's law can be generalized by the ideal gas law. Gay-Lussac used the formula ΔV/V = αΔT to define the rate of expansion α for gases.
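The expansion relation just given lends itself to a quick numerical illustration: for V(T) = V0(1 + αT) with T in °C, the volume extrapolates to zero at T = −1/α, an estimate of absolute zero. The figures below anticipate the historical values quoted just after this sketch.

```python
# Gay-Lussac's measured relative expansion of air between 0 °C and 100 °C:
delta_v_over_v = 0.3750
alpha = delta_v_over_v / 100.0          # rate of expansion, per °C

print(f"alpha = 1/{1 / alpha:.2f} per °C")              # 1/266.67 per °C
print(f"estimated absolute zero: {-1 / alpha:.2f} °C")  # about -266.67 °C
# The modern value is -273.15 °C.
```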
For air, he found a relative expansion ΔV/V = 37.50% between 0 °C and 100 °C, and obtained a value of α = (37.50%)/(100 °C) = 1/(266.66 °C), which indicated that the value of absolute zero was approximately 266.66 °C below 0 °C. The value of the rate of expansion α is approximately the same for all gases, and this is also sometimes referred to as Gay-Lussac's Law. See the introduction to this article, and Charles's Law. See also References Further reading Gas laws
Gay-Lussac's law
[ "Chemistry" ]
887
[ "Gas laws" ]
580,145
https://en.wikipedia.org/wiki/List%20of%20mathematical%20topics%20in%20relativity
This is a list of mathematical topics in relativity, by Wikipedia page. Special relativity Foundational issues principle of relativity speed of light faster-than-light biquaternion conjugate diameters four-vector four-acceleration four-force four-gradient four-momentum four-velocity hyperbolic orthogonality hyperboloid model light-like Lorentz covariance Lorentz group Lorentz transformation Lorentz–FitzGerald contraction hypothesis Minkowski diagram Minkowski space Poincaré group proper length proper time rapidity relativistic wave equations relativistic mass split-complex number unit hyperbola world line General relativity black holes no-hair theorem Hawking radiation Hawking temperature Black hole entropy charged black hole rotating black hole micro black hole Schwarzschild black hole Schwarzschild metric Schwarzschild radius Reissner–Nordström black hole Immirzi parameter closed timelike curve cosmic censorship hypothesis chronology protection conjecture Einstein–Cartan theory Einstein's field equation geodesic gravitational redshift Penrose–Hawking singularity theorems Pseudo-Riemannian manifold stress–energy tensor worm hole Cosmology anti-de Sitter space Ashtekar variables Batalin–Vilkovisky formalism Big Bang Cauchy horizon cosmic inflation cosmic microwave background cosmic variance cosmological constant dark energy dark matter de Sitter space Friedmann–Lemaître–Robertson–Walker metric horizon problem large-scale structure of the cosmos Randall–Sundrum model warped geometry Weyl curvature hypothesis Relativity Mathematics
List of mathematical topics in relativity
[ "Physics" ]
308
[ "Theory of relativity" ]
580,264
https://en.wikipedia.org/wiki/Barbier%27s%20theorem
In geometry, Barbier's theorem states that every curve of constant width has perimeter π times its width, regardless of its precise shape. This theorem was first published by Joseph-Émile Barbier in 1860. Examples The most familiar examples of curves of constant width are the circle and the Reuleaux triangle. For a circle, the width is the same as the diameter; a circle of width w has perimeter πw. A Reuleaux triangle of width w consists of three arcs of circles of radius w. Each of these arcs has central angle π/3, so the perimeter of the Reuleaux triangle of width w is equal to half the perimeter of a circle of radius w and therefore is equal to πw. A similar analysis of other simple examples such as Reuleaux polygons gives the same answer. Proofs One proof of the theorem uses the properties of Minkowski sums. If K is a body of constant width w, then the Minkowski sum of K and its 180° rotation is a disk with radius w and perimeter 2πw. The Minkowski sum acts linearly on the perimeters of convex bodies, so the perimeter of K must be half the perimeter of this disk, which is πw as the theorem states. Alternatively, the theorem follows immediately from the Crofton formula in integral geometry, according to which the length of any curve equals the measure of the set of lines that cross the curve, multiplied by their numbers of crossings. Any two curves that have the same constant width are crossed by sets of lines with the same measure, and therefore they have the same length. Historically, Crofton derived his formula later than, and independently of, Barbier's theorem. An elementary probabilistic proof of the theorem can be found at Buffon's noodle. Higher dimensions The analogue of Barbier's theorem for surfaces of constant width is false. In particular, the unit sphere has surface area 4π ≈ 12.566, while the surface of revolution of a Reuleaux triangle with the same constant width has surface area 8π − 4π^2/3 ≈ 11.973. Instead, Barbier's theorem generalizes to bodies of constant brightness, three-dimensional convex sets for which every two-dimensional projection has the same area. These all have the same surface area as a sphere of the same projected area. And in general, if K is a convex subset of R^n for which every (n−1)-dimensional projection has the area of the unit ball in R^(n−1), then the surface area of K is equal to that of the unit sphere in R^n. This follows from the general form of the Crofton formula. See also Blaschke–Lebesgue theorem and isoperimetric inequality, bounding the areas of curves of constant width References Theorems in plane geometry Pi Length Constant width
Barbier's theorem
[ "Physics", "Mathematics" ]
548
[ "Scalar physical quantities", "Physical quantities", "Distance", "Quantity", "Size", "Theorems in plane geometry", "Length", "Theorems in geometry", "Wikipedia categories named after physical quantities", "Pi" ]
580,384
https://en.wikipedia.org/wiki/Hopf%20fibration
In differential topology, the Hopf fibration (also known as the Hopf bundle or Hopf map) describes a 3-sphere (a hypersphere in four-dimensional space) in terms of circles and an ordinary sphere. Discovered by Heinz Hopf in 1931, it is an influential early example of a fiber bundle. Technically, Hopf found a many-to-one continuous function (or "map") from the 3-sphere onto the 2-sphere such that each distinct point of the 2-sphere is mapped from a distinct great circle of the 3-sphere. Thus the 3-sphere is composed of fibers, where each fiber is a circle, one for each point of the 2-sphere. This fiber bundle structure is denoted S^1 ↪ S^3 → S^2, meaning that the fiber space S^1 (a circle) is embedded in the total space S^3 (the 3-sphere), and p: S^3 → S^2 (Hopf's map) projects S^3 onto the base space S^2 (the ordinary 2-sphere). The Hopf fibration, like any fiber bundle, has the important property that it is locally a product space. However it is not a trivial fiber bundle, i.e., S^3 is not globally a product of S^2 and S^1 although locally it is indistinguishable from it. This has many implications: for example the existence of this bundle shows that the higher homotopy groups of spheres are not trivial in general. It also provides a basic example of a principal bundle, by identifying the fiber with the circle group. Stereographic projection of the Hopf fibration induces a remarkable structure on R^3, in which all of 3-dimensional space, except for the z-axis, is filled with nested tori made of linking Villarceau circles. Here each fiber projects to a circle in space (one of which is a line, thought of as a "circle through infinity"). Each torus is the stereographic projection of the inverse image of a circle of latitude of the 2-sphere. (Topologically, a torus is the product of two circles.) These tori are illustrated in the images at right. When R^3 is compressed to the boundary of a ball, some geometric structure is lost although the topological structure is retained (see Topology and geometry). The loops are homeomorphic to circles, although they are not geometric circles. There are numerous generalizations of the Hopf fibration. The unit sphere in complex coordinate space C^(n+1) fibers naturally over the complex projective space CP^n with circles as fibers, and there are also real, quaternionic, and octonionic versions of these fibrations. In particular, the Hopf fibration belongs to a family of four fiber bundles in which the total space, base space, and fiber space are all spheres: S^0 ↪ S^1 → S^1, S^1 ↪ S^3 → S^2, S^3 ↪ S^7 → S^4, and S^7 ↪ S^15 → S^8. By Adams's theorem such fibrations can occur only in these dimensions. Definition and construction For any natural number n, an n-dimensional sphere, or n-sphere, can be defined as the set of points in an (n+1)-dimensional space which are a fixed distance from a central point. For concreteness, the central point can be taken to be the origin, and the distance of the points on the sphere from this origin can be assumed to be a unit length. With this convention, the n-sphere, S^n, consists of the points (x1, x2, …, x(n+1)) in R^(n+1) with x1^2 + x2^2 + ⋯ + x(n+1)^2 = 1. For example, the 3-sphere consists of the points (x1, x2, x3, x4) in R^4 with x1^2 + x2^2 + x3^2 + x4^2 = 1. The Hopf fibration p: S^3 → S^2 of the 3-sphere over the 2-sphere can be defined in several ways. Direct construction Identify R^4 with C^2 and R^3 with C × R (where C denotes the complex numbers) by writing (x1, x2, x3, x4) as (z0, z1) = (x1 + i x2, x3 + i x4), and (x1, x2, x3) as (z, x) = (x1 + i x2, x3). Thus S^3 is identified with the subset of all (z0, z1) in C^2 such that |z0|^2 + |z1|^2 = 1, and S^2 is identified with the subset of all (z, x) in C × R such that |z|^2 + x^2 = 1. (Here, for a complex number z = x + iy, |z|^2 = z z* = x^2 + y^2, where the star denotes the complex conjugate.)
Then the Hopf fibration is defined by p(z0, z1) = (2 z0 z1*, |z0|^2 − |z1|^2). The first component is a complex number, whereas the second component is real. Any point on the 3-sphere must have the property that |z0|^2 + |z1|^2 = 1. If that is so, then p(z0, z1) lies on the unit 2-sphere in C × R, as may be shown by adding the squares of the absolute values of the complex and real components of p: |2 z0 z1*|^2 + (|z0|^2 − |z1|^2)^2 = (|z0|^2 + |z1|^2)^2 = 1. Furthermore, if two points on the 3-sphere map to the same point on the 2-sphere, i.e., if p(z0, z1) = p(w0, w1), then (w0, w1) must equal (λ z0, λ z1) for some complex number λ with |λ|^2 = 1. The converse is also true; any two points on the 3-sphere that differ by a common complex factor λ map to the same point on the 2-sphere. These conclusions follow, because the complex factor λ cancels with its complex conjugate λ* in both parts of p: in the complex component 2 z0 z1* and in the real component |z0|^2 − |z1|^2. Since the set of complex numbers λ with |λ|^2 = 1 form the unit circle in the complex plane, it follows that for each point m in S^2, the inverse image p^(−1)(m) is a circle, i.e., p^(−1)(m) ≅ S^1. Thus the 3-sphere is realized as a disjoint union of these circular fibers. A direct parametrization of the 3-sphere employing the Hopf map is as follows: z0 = e^(i(ξ1+ξ2)/2) sin η, z1 = e^(i(ξ2−ξ1)/2) cos η, or in Euclidean R^4: x1 = cos((ξ1+ξ2)/2) sin η, x2 = sin((ξ1+ξ2)/2) sin η, x3 = cos((ξ2−ξ1)/2) cos η, x4 = sin((ξ2−ξ1)/2) cos η. Here η runs over the range from 0 to π/2, ξ1 runs over the range from 0 to 2π, and ξ2 can take any value from 0 to 4π. Every value of η, except 0 and π/2 which specify circles, specifies a separate flat torus in the 3-sphere, and one round trip (0 to 4π) of either ξ1 or ξ2 causes you to make one full circle of both limbs of the torus. A mapping of the above parametrization to the 2-sphere is as follows, with points on the circles parametrized by ξ2: (z, x) = (sin(2η) e^(i ξ1), −cos(2η)). Geometric interpretation using the complex projective line A geometric interpretation of the fibration may be obtained using the complex projective line, CP^1, which is defined to be the set of all complex one-dimensional subspaces of C^2. Equivalently, CP^1 is the quotient of C^2 ∖ {0} by the equivalence relation which identifies (z0, z1) with (λ z0, λ z1) for any nonzero complex number λ. On any complex line in C^2 there is a circle of unit norm, and so the restriction of the quotient map to the points of unit norm is a fibration of S^3 over CP^1. CP^1 is diffeomorphic to a 2-sphere: indeed it can be identified with the Riemann sphere C∞ = C ∪ {∞}, which is the one point compactification of C (obtained by adding a point at infinity). The formula given for p above defines an explicit diffeomorphism between the complex projective line and the ordinary 2-sphere in 3-dimensional space. Alternatively, the point (z0, z1) can be mapped to the ratio z1/z0 in the Riemann sphere C∞. Fiber bundle structure The Hopf fibration defines a fiber bundle, with bundle projection p: S^3 → S^2. This means that it has a "local product structure", in the sense that every point of the 2-sphere has some neighborhood U whose inverse image in the 3-sphere can be identified with the product of U and a circle: p^(−1)(U) ≅ U × S^1. Such a fibration is said to be locally trivial. For the Hopf fibration, it is enough to remove a single point m from S^2 and the corresponding circle p^(−1)(m) from S^3; thus one can take U = S^2 ∖ {m}, and any point in S^2 has a neighborhood of this form. Geometric interpretation using rotations Another geometric interpretation of the Hopf fibration can be obtained by considering rotations of the 2-sphere in ordinary 3-dimensional space. The rotation group SO(3) has a double cover, the spin group Spin(3), diffeomorphic to the 3-sphere. The spin group acts transitively on S^2 by rotations. The stabilizer of a point is isomorphic to the circle group; its elements are angles of rotation leaving the given point unmoved, all sharing the axis connecting that point to the sphere's center. It follows easily that the 3-sphere is a principal circle bundle over the 2-sphere, and this is the Hopf fibration.
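A quick numerical check of the direct construction above: the Hopf map sends unit vectors of C^2 onto the unit sphere in C × R, and multiplying (z0, z1) by a unit complex number λ leaves the image unchanged, exhibiting the circle fibers. The random test point and seed are arbitrary.

```python
import numpy as np

def hopf(z0, z1):
    """Hopf map p(z0, z1) = (2 z0 conj(z1), |z0|^2 - |z1|^2)."""
    return 2 * z0 * np.conj(z1), abs(z0) ** 2 - abs(z1) ** 2

rng = np.random.default_rng(0)
v = rng.normal(size=4)
v /= np.linalg.norm(v)                       # random point on the 3-sphere
z0, z1 = complex(v[0], v[1]), complex(v[2], v[3])

z, x = hopf(z0, z1)
print(abs(z) ** 2 + x ** 2)                  # ~1.0: image lies on the 2-sphere

lam = np.exp(0.7j)                           # |lam| = 1
print(hopf(lam * z0, lam * z1))              # same image: fibers are circles
```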
To make this more explicit, there are two approaches: the group can either be identified with the group Sp(1) of unit quaternions, or with the special unitary group SU(2). In the first approach, a vector in is interpreted as a quaternion by writing The -sphere is then identified with the versors, the quaternions of unit norm, those for which , where , which is equal to for as above. On the other hand, a vector in can be interpreted as a pure quaternion Then, as is well-known since , the mapping is a rotation in : indeed it is clearly an isometry, since , and it is not hard to check that it preserves orientation. In fact, this identifies the group of versors with the group of rotations of , modulo the fact that the versors and determine the same rotation. As noted above, the rotations act transitively on , and the set of versors which fix a given right versor have the form , where and are real numbers with . This is a circle subgroup. For concreteness, one can take , and then the Hopf fibration can be defined as the map sending a versor . All the quaternions , where is one of the circle of versors that fix , get mapped to the same thing (which happens to be one of the two rotations rotating to the same place as does). Another way to look at this fibration is that every versor ω moves the plane spanned by to a new plane spanned by . Any quaternion , where is one of the circle of versors that fix , will have the same effect. We put all these into one fibre, and the fibres can be mapped one-to-one to the -sphere of rotations which is the range of . This approach is related to the direct construction by identifying a quaternion with the matrix: This identifies the group of versors with , and the imaginary quaternions with the skew-hermitian matrices (isomorphic to ). Explicit formulae The rotation induced by a unit quaternion is given explicitly by the orthogonal matrix Here we find an explicit real formula for the bundle projection by noting that the fixed unit vector along the axis, , rotates to another unit vector, which is a continuous function of . That is, the image of is the point on the -sphere where it sends the unit vector along the axis. The fiber for a given point on consists of all those unit quaternions that send the unit vector there. We can also write an explicit formula for the fiber over a point in . Multiplication of unit quaternions produces composition of rotations, and is a rotation by around the axis. As varies, this sweeps out a great circle of , our prototypical fiber. So long as the base point, , is not the antipode, , the quaternion will send to . Thus the fiber of is given by quaternions of the form , which are the points Since multiplication by acts as a rotation of quaternion space, the fiber is not merely a topological circle, it is a geometric circle. The final fiber, for , can be given by defining to equal , producing which completes the bundle. But note that this one-to-one mapping between and is not continuous on this circle, reflecting the fact that is not topologically equivalent to . Thus, a simple way of visualizing the Hopf fibration is as follows. Any point on the -sphere is equivalent to a quaternion, which in turn is equivalent to a particular rotation of a Cartesian coordinate frame in three dimensions. The set of all possible quaternions produces the set of all possible rotations, which moves the tip of one unit vector of such a coordinate frame (say, the vector) to all possible points on a unit -sphere. 
However, fixing the tip of the vector does not specify the rotation fully; a further rotation is possible about the axis. Thus, the -sphere is mapped onto the -sphere, plus a single rotation. The rotation can be represented using the Euler angles θ, φ, and ψ. The Hopf mapping maps the rotation to the point on the 2-sphere given by θ and φ, and the associated circle is parametrized by ψ. Note that when θ = π the Euler angles φ and ψ are not well defined individually, so we do not have a one-to-one mapping (or a one-to-two mapping) between the 3-torus of (θ, φ, ψ) and S3. Fluid mechanics If the Hopf fibration is treated as a vector field in 3 dimensional space then there is a solution to the (compressible, non-viscous) Navier–Stokes equations of fluid dynamics in which the fluid flows along the circles of the projection of the Hopf fibration in 3 dimensional space. The size of the velocities, the density and the pressure can be chosen at each point to satisfy the equations. All these quantities fall to zero going away from the centre. If a is the distance to the inner ring, the velocities, pressure and density fields are given by: for arbitrary constants and . Similar patterns of fields are found as soliton solutions of magnetohydrodynamics: Generalizations The Hopf construction, viewed as a fiber bundle p: S3 → CP1, admits several generalizations, which are also often known as Hopf fibrations. First, one can replace the projective line by an n-dimensional projective space. Second, one can replace the complex numbers by any (real) division algebra, including (for n = 1) the octonions. Real Hopf fibrations A real version of the Hopf fibration is obtained by regarding the circle S1 as a subset of R2 in the usual way and by identifying antipodal points. This gives a fiber bundle S1 → RP1 over the real projective line with fiber S0 = {1, −1}. Just as CP1 is diffeomorphic to a sphere, RP1 is diffeomorphic to a circle. More generally, the n-sphere Sn fibers over real projective space RPn with fiber S0. Complex Hopf fibrations The Hopf construction gives circle bundles p : S2n+1 → CPn over complex projective space. This is actually the restriction of the tautological line bundle over CPn to the unit sphere in Cn+1. Quaternionic Hopf fibrations Similarly, one can regard S4n+3 as lying in Hn+1 (quaternionic n-space) and factor out by unit quaternion (= S3) multiplication to get the quaternionic projective space HPn. In particular, since S4 = HP1, there is a bundle S7 → S4 with fiber S3. Octonionic Hopf fibrations A similar construction with the octonions yields a bundle S15 → S8 with fiber S7. But the sphere S31 does not fiber over S16 with fiber S15. One can regard S8 as the octonionic projective line OP1. Although one can also define an octonionic projective plane OP2, the sphere S23 does not fiber over OP2 with fiber S7. Fibrations between spheres Sometimes the term "Hopf fibration" is restricted to the fibrations between spheres obtained above, which are S1 → S1 with fiber S0 S3 → S2 with fiber S1 S7 → S4 with fiber S3 S15 → S8 with fiber S7 As a consequence of Adams's theorem, fiber bundles with spheres as total space, base space, and fiber can occur only in these dimensions. Fiber bundles with similar properties, but different from the Hopf fibrations, were used by John Milnor to construct exotic spheres. Geometry and applications The Hopf fibration has many implications, some purely attractive, others deeper. 
For example, stereographic projection S3 → R3 induces a remarkable structure in R3, which in turn illuminates the topology of the bundle . Stereographic projection preserves circles and maps the Hopf fibers to geometrically perfect circles in R3 which fill space. Here there is one exception: the Hopf circle containing the projection point maps to a straight line in R3 — a "circle through infinity". The fibers over a circle of latitude on S2 form a torus in S3 (topologically, a torus is the product of two circles) and these project to nested toruses in R3 which also fill space. The individual fibers map to linking Villarceau circles on these tori, with the exception of the circle through the projection point and the one through its opposite point: the former maps to a straight line, the latter to a unit circle perpendicular to, and centered on, this line, which may be viewed as a degenerate torus whose minor radius has shrunken to zero. Every other fiber image encircles the line as well, and so, by symmetry, each circle is linked through every circle, both in R3 and in S3. Two such linking circles form a Hopf link in R3 Hopf proved that the Hopf map has Hopf invariant 1, and therefore is not null-homotopic. In fact it generates the homotopy group π3(S2) and has infinite order. In quantum mechanics, the Riemann sphere is known as the Bloch sphere, and the Hopf fibration describes the topological structure of a quantum mechanical two-level system or qubit. Similarly, the topology of a pair of entangled two-level systems is given by the Hopf fibration . Moreover, the Hopf fibration is equivalent to the fiber bundle structure of the Dirac monopole. Hopf fibration also found applications in robotics, where it was used to generate uniform samples on SO(3) for the probabilistic roadmap algorithm in motion planning. It also found application in the automatic control of quadrotors. See also Villarceau circles Notes References ; reprinted as article 20 in . External links Dimensions Math Chapters 7 and 8 illustrate the Hopf fibration with animated computer graphics. An Elementary Introduction to the Hopf Fibration by David W. Lyons (PDF) YouTube animation showing dynamic mapping of points on the 2-sphere to circles in the 3-sphere, by Professor Niles Johnson. YouTube animation of the construction of the 120-cell By Gian Marco Todesco shows the Hopf fibration of the 120-cell. Video of one 30-cell ring of the 600-cell from http://page.math.tu-berlin.de/~gunn/. Interactive visualization of the mapping of points on the 2-sphere to circles in the 3-sphere Algebraic topology Geometric topology Differential geometry Fiber bundles Homotopy theory
Hopf fibration
[ "Mathematics" ]
3,861
[ "Fields of abstract algebra", "Topology", "Algebraic topology", "Geometric topology" ]
581,005
https://en.wikipedia.org/wiki/Implicit%20function%20theorem
In multivariable calculus, the implicit function theorem is a tool that allows relations to be converted to functions of several real variables. It does so by representing the relation as the graph of a function. There may not be a single function whose graph can represent the entire relation, but there may be such a function on a restriction of the domain of the relation. The implicit function theorem gives a sufficient condition to ensure that there is such a function. More precisely, given a system of equations (often abbreviated into ), the theorem states that, under a mild condition on the partial derivatives (with respect to each ) at a point, the variables are differentiable functions of the in some neighborhood of the point. As these functions generally cannot be expressed in closed form, they are implicitly defined by the equations, and this motivated the name of the theorem. In other words, under a mild condition on the partial derivatives, the set of zeros of a system of equations is locally the graph of a function. History Augustin-Louis Cauchy (1789–1857) is credited with the first rigorous form of the implicit function theorem. Ulisse Dini (1845–1918) generalized the real-variable version of the implicit function theorem to the context of functions of any number of real variables. First example If we define the function , then the equation cuts out the unit circle as the level set . There is no way to represent the unit circle as the graph of a function of one variable because for each choice of , there are two choices of y, namely . However, it is possible to represent part of the circle as the graph of a function of one variable. If we let for , then the graph of provides the upper half of the circle. Similarly, if , then the graph of gives the lower half of the circle. The purpose of the implicit function theorem is to tell us that functions like and almost always exist, even in situations where we cannot write down explicit formulas. It guarantees that and are differentiable, and it even works in situations where we do not have a formula for . Definitions Let be a continuously differentiable function. We think of as the Cartesian product and we write a point of this product as Starting from the given function , our goal is to construct a function whose graph is precisely the set of all such that . As noted above, this may not always be possible. We will therefore fix a point which satisfies , and we will ask for a that works near the point . In other words, we want an open set containing , an open set containing , and a function such that the graph of satisfies the relation on , and that no other points within do so. In symbols, To state the implicit function theorem, we need the Jacobian matrix of , which is the matrix of the partial derivatives of . Abbreviating to , the Jacobian matrix is where is the matrix of partial derivatives in the variables and is the matrix of partial derivatives in the variables . The implicit function theorem says that if is an invertible matrix, then there are , , and as desired. Writing all the hypotheses together gives the following statement. Statement of the theorem Let be a continuously differentiable function, and let have coordinates . Fix a point with , where is the zero vector. 
If the Jacobian matrix (this is the right-hand panel of the Jacobian matrix shown in the previous section): is invertible, then there exists an open set containing such that there exists a unique function such that and Moreover, is continuously differentiable and, denoting the left-hand panel of the Jacobian matrix shown in the previous section as: the Jacobian matrix of partial derivatives of in is given by the matrix product: Higher derivatives If, moreover, is analytic or continuously differentiable times in a neighborhood of , then one may choose in order that the same holds true for inside . In the analytic case, this is called the analytic implicit function theorem. Proof for 2D case Suppose is a continuously differentiable function defining a curve . Let be a point on the curve. The statement of the theorem above can be rewritten for this simple case as follows: Proof. Since is differentiable we write the differential of through partial derivatives: Since we are restricted to movement on the curve and by assumption around the point (since is continuous at and ). Therefore we have a first-order ordinary differential equation: Now we are looking for a solution to this ODE in an open interval around the point for which, at every point in it, . Since is continuously differentiable and from the assumption we have From this we know that is continuous and bounded on both ends. From here we know that is Lipschitz continuous in both and . Therefore, by Cauchy-Lipschitz theorem, there exists unique that is the solution to the given ODE with the initial conditions. Q.E.D. The circle example Let us go back to the example of the unit circle. In this case n = m = 1 and . The matrix of partial derivatives is just a 1 × 2 matrix, given by Thus, here, the in the statement of the theorem is just the number ; the linear map defined by it is invertible if and only if . By the implicit function theorem we see that we can locally write the circle in the form for all points where . For we run into trouble, as noted before. The implicit function theorem may still be applied to these two points, by writing as a function of , that is, ; now the graph of the function will be , since where we have , and the conditions to locally express the function in this form are satisfied. The implicit derivative of y with respect to x, and that of x with respect to y, can be found by totally differentiating the implicit function and equating to 0: giving and Application: change of coordinates Suppose we have an -dimensional space, parametrised by a set of coordinates . We can introduce a new coordinate system by supplying m functions each being continuously differentiable. These functions allow us to calculate the new coordinates of a point, given the point's old coordinates using . One might want to verify if the opposite is possible: given coordinates , can we 'go back' and calculate the same point's original coordinates ? The implicit function theorem will provide an answer to this question. The (new and old) coordinates are related by f = 0, with Now the Jacobian matrix of f at a certain point (a, b) [ where ] is given by where Im denotes the m × m identity matrix, and is the matrix of partial derivatives, evaluated at (a, b). (In the above, these blocks were denoted by X and Y. As it happens, in this particular application of the theorem, neither matrix depends on a.) The implicit function theorem now states that we can locally express as a function of if J is invertible. 
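Before turning to the invertibility condition in more detail, here is a numerical companion to the circle example from earlier in this section: a minimal sketch (assuming Python with NumPy) checking the implicit derivative dy/dx = −x/y against a finite-difference slope of the explicit local solution.

```python
import numpy as np

# f(x, y) = x^2 + y^2 - 1 cuts out the unit circle.
# Near a point with y0 > 0 the theorem gives y = g(x) = sqrt(1 - x^2).
x0 = 0.6
y0 = np.sqrt(1 - x0 ** 2)

g = lambda x: np.sqrt(1 - x ** 2)
h = 1e-6
slope_numeric = (g(x0 + h) - g(x0 - h)) / (2 * h)  # finite difference
slope_implicit = -x0 / y0                          # -f_x / f_y

print(slope_numeric, slope_implicit)  # both ~ -0.75
```

The agreement of the two slopes at (0.6, 0.8) is exactly what the theorem guarantees wherever the partial derivative 2y is nonzero.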
Demanding J is invertible is equivalent to det J ≠ 0, thus we see that we can go back from the primed to the unprimed coordinates if the determinant of the Jacobian J is non-zero. This statement is also known as the inverse function theorem. Example: polar coordinates As a simple application of the above, consider the plane, parametrised by polar coordinates . We can go to a new coordinate system (cartesian coordinates) by defining functions and . This makes it possible given any point to find corresponding Cartesian coordinates . When can we go back and convert Cartesian into polar coordinates? By the previous example, it is sufficient to have , with Since , conversion back to polar coordinates is possible if . So it remains to check the case . It is easy to see that in case , our coordinate transformation is not invertible: at the origin, the value of θ is not well-defined. Generalizations Banach space version Based on the inverse function theorem in Banach spaces, it is possible to extend the implicit function theorem to Banach space valued mappings. Let X, Y, Z be Banach spaces. Let the mapping be continuously Fréchet differentiable. If , , and is a Banach space isomorphism from Y onto Z, then there exist neighbourhoods U of x0 and V of y0 and a Fréchet differentiable function g : U → V such that f(x, g(x)) = 0 and f(x, y) = 0 if and only if y = g(x), for all . Implicit functions from non-differentiable functions Various forms of the implicit function theorem exist for the case when the function f is not differentiable. It is standard that local strict monotonicity suffices in one dimension. The following more general form was proven by Kumagai based on an observation by Jittorntrum. Consider a continuous function such that . If there exist open neighbourhoods and of x0 and y0, respectively, such that, for all y in B, is locally one-to-one, then there exist open neighbourhoods and of x0 and y0, such that, for all , the equation f(x, y) = 0 has a unique solution where g is a continuous function from B0 into A0. Collapsing manifolds Perelman’s collapsing theorem for 3-manifolds, the capstone of his proof of Thurston's geometrization conjecture, can be understood as an extension of the implicit function theorem. See also Inverse function theorem Constant rank theorem: Both the implicit function theorem and the inverse function theorem can be seen as special cases of the constant rank theorem. Notes References Further reading Articles containing proofs Mathematical identities Theorems in calculus Theorems in real analysis
Implicit function theorem
[ "Mathematics" ]
1,978
[ "Theorems in mathematical analysis", "Theorems in calculus", "Calculus", "Theorems in real analysis", "Mathematical problems", "Articles containing proofs", "Mathematical identities", "Mathematical theorems", "Algebra" ]
581,124
https://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93Rao%20bound
In estimation theory and statistics, the Cramér–Rao bound (CRB) relates to estimation of a deterministic (fixed, though unknown) parameter. The result is named in honor of Harald Cramér and Calyampudi Radhakrishna Rao, but has also been derived independently by Maurice Fréchet, Georges Darmois, and by Alexander Aitken and Harold Silverstone. It is also known as Fréchet-Cramér–Rao or Fréchet-Darmois-Cramér-Rao lower bound. It states that the precision of any unbiased estimator is at most the Fisher information; or (equivalently) the reciprocal of the Fisher information is a lower bound on its variance. An unbiased estimator that achieves this bound is said to be (fully) efficient. Such a solution achieves the lowest possible mean squared error among all unbiased methods, and is, therefore, the minimum variance unbiased (MVU) estimator. However, in some cases, no unbiased technique exists which achieves the bound. This may occur either if for any unbiased estimator, there exists another with a strictly smaller variance, or if an MVU estimator exists, but its variance is strictly greater than the inverse of the Fisher information. The Cramér–Rao bound can also be used to bound the variance of estimators of given bias. In some cases, a biased approach can result in both a variance and a mean squared error that are the unbiased Cramér–Rao lower bound; see estimator bias. Significant progress over the Cramér–Rao lower bound was proposed by Anil Kumar Bhattacharyya through a series of works, called Bhattacharyya bound. Statement The Cramér–Rao bound is stated in this section for several increasingly general cases, beginning with the case in which the parameter is a scalar and its estimator is unbiased. All versions of the bound require certain regularity conditions, which hold for most well-behaved distributions. These conditions are listed later in this section. Scalar unbiased case Suppose is an unknown deterministic parameter that is to be estimated from independent observations (measurements) of , each from a distribution according to some probability density function . The variance of any unbiased estimator of is then bounded by the reciprocal of the Fisher information : where the Fisher information is defined by and is the natural logarithm of the likelihood function for a single sample and denotes the expected value with respect to the density of . If not indicated, in what follows, the expectation is taken with respect to . If is twice differentiable and certain regularity conditions hold, then the Fisher information can also be defined as follows: The efficiency of an unbiased estimator measures how close this estimator's variance comes to this lower bound; estimator efficiency is defined as or the minimum possible variance for an unbiased estimator divided by its actual variance. The Cramér–Rao lower bound thus gives . General scalar case A more general form of the bound can be obtained by considering a biased estimator , whose expectation is not but a function of this parameter, say, . Hence is not generally equal to 0. In this case, the bound is given by where is the derivative of (by ), and is the Fisher information defined above. Bound on the variance of biased estimators Apart from being a bound on estimators of functions of the parameter, this approach can be used to derive a bound on the variance of biased estimators with a given bias, as follows. Consider an estimator with bias , and let . 
By the result above, any unbiased estimator whose expectation is has variance greater than or equal to . Thus, any estimator whose bias is given by a function satisfies The unbiased version of the bound is a special case of this result, with . It's trivial to have a small variance − an "estimator" that is constant has a variance of zero. But from the above equation, we find that the mean squared error of a biased estimator is bounded by using the standard decomposition of the MSE. Note, however, that if this bound might be less than the unbiased Cramér–Rao bound . For instance, in the example of estimating variance below, . Multivariate case Extending the Cramér–Rao bound to multiple parameters, define a parameter column vector with probability density function which satisfies the two regularity conditions below. The Fisher information matrix is a matrix with element defined as Let be an estimator of any vector function of parameters, , and denote its expectation vector by . The Cramér–Rao bound then states that the covariance matrix of satisfies , where The matrix inequality is understood to mean that the matrix is positive semidefinite, and is the Jacobian matrix whose element is given by . If is an unbiased estimator of (i.e., ), then the Cramér–Rao bound reduces to If it is inconvenient to compute the inverse of the Fisher information matrix, then one can simply take the reciprocal of the corresponding diagonal element to find a (possibly loose) lower bound. Regularity conditions The bound relies on two weak regularity conditions on the probability density function, , and the estimator : The Fisher information is always defined; equivalently, for all such that , exists, and is finite. The operations of integration with respect to and differentiation with respect to can be interchanged in the expectation of ; that is, whenever the right-hand side is finite. This condition can often be confirmed by using the fact that integration and differentiation can be swapped when either of the following cases hold: The function has bounded support in , and the bounds do not depend on ; The function has infinite support, is continuously differentiable, and the integral converges uniformly for all . Proof Proof for the general case based on the Chapman–Robbins bound Proof based on. A standalone proof for the general scalar case For the general scalar case: Assume that is an estimator with expectation (based on the observations ), i.e. that . The goal is to prove that, for all , Let be a random variable with probability density function . Here is a statistic, which is used as an estimator for . Define as the score: where the chain rule is used in the final equality above. Then the expectation of , written , is zero. This is because: where the integral and partial derivative have been interchanged (justified by the second regularity condition). If we consider the covariance of and , we have , because . Expanding this expression we have again because the integration and differentiation operations commute (second condition). The Cauchy–Schwarz inequality shows that therefore which proves the proposition. Examples Multivariate normal distribution For the case of a d-variate normal distribution the Fisher information matrix has elements where "tr" is the trace. For example, let be a sample of independent observations with unknown mean and known variance . 
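Before completing this example, a small Monte Carlo sketch (assuming Python with NumPy; the values n = 100 and σ² = 4 are purely illustrative) shows the sample mean attaining the bound σ²/n that is derived next:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2, theta = 100, 4.0, 1.5  # illustrative values
trials = 200_000

# Sample mean as an unbiased estimator of the unknown mean theta.
samples = rng.normal(theta, np.sqrt(sigma2), size=(trials, n))
estimates = samples.mean(axis=1)

print(estimates.var())  # ~0.04 empirically
print(sigma2 / n)       # 0.04, the Cramer-Rao bound sigma^2 / n
```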
Then the Fisher information is a scalar given by and so the Cramér–Rao bound is Normal variance with known mean Suppose X is a normally distributed random variable with known mean and unknown variance . Consider the following statistic: Then T is unbiased for , as . What is the variance of T? (the second equality follows directly from the definition of variance). The first term is the fourth moment about the mean and has value ; the second is the square of the variance, or . Thus Now, what is the Fisher information in the sample? Recall that the score is defined as where is the likelihood function. Thus in this case, where the second equality is from elementary calculus. Thus, the information in a single observation is just minus the expectation of the derivative of , or Thus the information in a sample of independent observations is just times this, or The Cramér–Rao bound states that In this case, the inequality is saturated (equality is achieved), showing that the estimator is efficient. However, we can achieve a lower mean squared error using a biased estimator. The estimator obviously has a smaller variance, which is in fact Its bias is so its mean squared error is which is less than what unbiased estimators can achieve according to the Cramér–Rao bound. When the mean is not known, the minimum mean squared error estimate of the variance of a sample from Gaussian distribution is achieved by dividing by , rather than or . See also Chapman–Robbins bound Kullback's inequality Brascamp–Lieb inequality Lehmann–Scheffé theorem References and notes Further reading . Chapter 3. . Section 3.1.3. Posterior uncertainty, asymptotic law and Cramér-Rao bound, Structural Control and Health Monitoring 25(1851):e2113 DOI: 10.1002/stc.2113 External links FandPLimitTool a GUI-based software to calculate the Fisher information and Cramér-Rao lower bound with application to single-molecule microscopy. Articles containing proofs Statistical inequalities Estimation theory
Cramér–Rao bound
[ "Mathematics" ]
1,879
[ "Articles containing proofs", "Theorems in statistics", "Statistical inequalities", "Inequalities (mathematics)" ]
581,417
https://en.wikipedia.org/wiki/Table%20of%20standard%20reduction%20potentials%20for%20half-reactions%20important%20in%20biochemistry
The values below are standard apparent reduction potentials for electro-biochemical half-reactions measured at 25 °C, 1 atmosphere and a pH of 7 in aqueous solution. The actual physiological potential depends on the ratio of the reduced (Red) and oxidized (Ox) forms according to the Nernst equation and the thermal voltage. When an oxidizer (Ox) accepts a number z of electrons (e−) to be converted in its reduced form (Red), the half-reaction is expressed as: Ox + z e− → Red The reaction quotient (Qr) is the ratio of the chemical activity (ai) of the reduced form (the reductant, aRed) to the activity of the oxidized form (the oxidant, aOx). It is equal to the ratio of their concentrations (Ci) only if the system is sufficiently diluted and the activity coefficients (γi) are close to unity (ai = γi Ci): Qr = aRed / aOx ≈ CRed / COx The Nernst equation is a function of Qr and can be written as follows: E = E° − (RT / zF) ln Qr At chemical equilibrium, the reaction quotient of the product activity (aRed) by the reagent activity (aOx) is equal to the equilibrium constant (K) of the half-reaction and in the absence of driving force (ΔG = 0) the potential (E) also becomes null. The numerically simplified form of the Nernst equation at 25 °C is expressed as: E = E° − (0.05916 V / z) log10 Qr Where E° is the standard reduction potential of the half-reaction expressed versus the standard reduction potential of hydrogen. For standard conditions in electrochemistry (T = 25 °C, P = 1 atm and all concentrations being fixed at 1 mol/L, or 1 M) the standard reduction potential of hydrogen is fixed at zero by convention as it serves as reference. The standard hydrogen electrode (SHE), with [H+] = 1 M, works thus at a pH = 0. At pH = 7, when [H+] = 10−7 M, the reduction potential of H+ differs from zero because it depends on pH. Solving the Nernst equation for the half-reaction of reduction of two protons into hydrogen gas (2 H+ + 2 e− → H2) gives: E = −0.05916 × pH = −0.414 V at pH = 7 In biochemistry and in biological fluids, at pH = 7, it is thus important to note that the reduction potential of the protons (H+) into hydrogen gas is no longer zero as with the standard hydrogen electrode (SHE) at 1 M (pH = 0) in classical electrochemistry, but that E = −0.414 V versus the standard hydrogen electrode (SHE). The same also applies for the reduction potential of oxygen: For O2 + 4 H+ + 4 e− → 2 H2O, E° = 1.229 V, so, applying the Nernst equation for pH = 7 gives: E = 1.229 − (0.05916 × 7) = +0.815 V For obtaining the values of the reduction potential at pH = 7 for the redox reactions relevant for biological systems, the same kind of conversion exercise is done using the corresponding Nernst equation expressed as a function of pH. The conversion is simple, but care must be taken not to inadvertently mix reduction potential converted at pH = 7 with other data directly taken from tables referring to SHE (pH = 0). Expression of the Nernst equation as a function of pH The E and pH of a solution are related by the Nernst equation as commonly represented by a Pourbaix diagram. For a half cell equation, conventionally written as a reduction reaction (i.e., electrons accepted by an oxidant on the left side): a A + b B + h H+ + z e− ⇌ c C + d D The half-cell standard reduction potential is given by E° = −ΔG° / (zF) where ΔG° is the standard Gibbs free energy change, z is the number of electrons involved, and F is Faraday's constant. The Nernst equation relates pH and E: E = E° − (0.05916 / z) log10( {C}c {D}d / ({A}a {B}b) ) − (0.05916 h / z) pH where curly braces { } indicate activities, and exponents are shown in the conventional manner. This equation is the equation of a straight line for E as a function of pH with a slope of −0.05916 (h/z) volt (pH has no units). This equation predicts lower E at higher pH values. This is observed for the reduction of O2 into H2O, or OH−, and for reduction of H+ into H2. 
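The pH conversion just described is compact enough to verify directly. A minimal sketch (assuming Python with NumPy, and unit activities of H2 and O2):

```python
import numpy as np

# Nernst slope at 25 C: (R*T/F) * ln(10) ~ 0.05916 V per pH unit.
R, T, F = 8.314, 298.15, 96485.0
slope = R * T / F * np.log(10)

# 2 H+ + 2 e- -> H2 : E0 = 0 V vs SHE, so E here depends only on pH.
E_H2 = 0.0 - slope * 7        # ~ -0.414 V at pH 7

# O2 + 4 H+ + 4 e- -> 2 H2O : E0 = 1.229 V vs SHE, h/z = 1.
E_O2 = 1.229 - slope * 7      # ~ +0.815 V at pH 7

print(round(slope, 5), round(E_H2, 3), round(E_O2, 3))
# 0.05916 -0.414 0.815
```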
Formal standard reduction potential combined with the pH dependency To obtain the reduction potential as a function of the measured concentrations of the redox-active species in solution, it is necessary to express the activities as a function of the concentrations. Given that the chemical activity denoted here by { } is the product of the activity coefficient γ by the concentration denoted by [ ]: ai = γi·Ci, here expressed as {X} = γx [X] and {X}x = (γx)x [X]x and replacing the logarithm of a product by the sum of the logarithms (i.e., log (a·b) = log a + log b), the log of the reaction quotient () (without {H+} already isolated apart in the last term as h pH) expressed here above with activities { } becomes: It allows to reorganize the Nernst equation as: Where is the formal standard potential independent of pH including the activity coefficients. Combining directly with the last term depending on pH gives: For a pH = 7: So, It is therefore important to know to what exact definition does refer the value of a reduction potential for a given biochemical redox process reported at pH = 7, and to correctly understand the relationship used. Is it simply: calculated at pH 7 (with or without corrections for the activity coefficients), , a formal standard reduction potential including the activity coefficients but no pH calculations, or, is it, , an apparent formal standard reduction potential at pH 7 in given conditions and also depending on the ratio . This requires thus to dispose of a clear definition of the considered reduction potential, and of a sufficiently detailed description of the conditions in which it is valid, along with a complete expression of the corresponding Nernst equation. Were also the reported values only derived from thermodynamic calculations, or determined from experimental measurements and under what specific conditions? Without being able to correctly answering these questions, mixing data from different sources without appropriate conversion can lead to errors and confusion. Determination of the formal standard reduction potential when 1 The formal standard reduction potential can be defined as the measured reduction potential of the half-reaction at unity concentration ratio of the oxidized and reduced species (i.e., when 1) under given conditions. Indeed: as, , when , , when , because , and that the term is included in . The formal reduction potential makes possible to more simply work with molar or molal concentrations in place of activities. Because molar and molal concentrations were once referred as formal concentrations, it could explain the origin of the adjective formal in the expression formal potential. The formal potential is thus the reversible potential of an electrode at equilibrium immersed in a solution where reactants and products are at unit concentration. If any small incremental change of potential causes a change in the direction of the reaction, i.e. from reduction to oxidation or vice versa, the system is close to equilibrium, reversible and is at its formal potential. When the formal potential is measured under standard conditions (i.e. the activity of each dissolved species is 1 mol/L, T = 298.15 K = 25 °C = 77 °F, = 1 bar) it becomes de facto a standard potential. According to Brown and Swift (1949), "A formal potential is defined as the potential of a half-cell, measured against the standard hydrogen electrode, when the total concentration of each oxidation state is one formal". 
The activity coefficients and are included in the formal potential , and because they depend on experimental conditions such as temperature, ionic strength, and pH, cannot be referred as an immuable standard potential but needs to be systematically determined for each specific set of experimental conditions. Formal reduction potentials are applied to simplify results interpretations and calculations of a considered system. Their relationship with the standard reduction potentials must be clearly expressed to avoid any confusion. Main factors affecting the formal (or apparent) standard reduction potentials The main factor affecting the formal (or apparent) reduction potentials in biochemical or biological processes is the pH. To determine approximate values of formal reduction potentials, neglecting in a first approach changes in activity coefficients due to ionic strength, the Nernst equation has to be applied taking care to first express the relationship as a function of pH. The second factor to be considered are the values of the concentrations taken into account in the Nernst equation. To define a formal reduction potential for a biochemical reaction, the pH value, the concentrations values and the hypotheses made on the activity coefficients must always be clearly indicated. When using, or comparing, several formal (or apparent) reduction potentials they must also be internally consistent. Problems may occur when mixing different sources of data using different conventions or approximations (i.e., with different underlying hypotheses). When working at the frontier between inorganic and biological processes (e.g., when comparing abiotic and biotic processes in geochemistry when microbial activity could also be at work in the system), care must be taken not to inadvertently directly mix standard reduction potentials ( versus SHE, pH = 0) with formal (or apparent) reduction potentials ( at pH = 7). Definitions must be clearly expressed and carefully controlled, especially if the sources of data are different and arise from different fields (e.g., picking and directly mixing data from classical electrochemistry textbooks ( versus SHE, pH = 0) and microbiology textbooks ( at pH = 7) without paying attention to the conventions on which they are based). Example in biochemistry For example, in a two electrons couple like : the reduction potential becomes ~ 30 mV (or more exactly, 59.16 mV/2 = 29.6 mV) more positive for every power of ten increase in the ratio of the oxidised to the reduced form. Some important apparent potentials used in biochemistry See also Nernst equation Electron bifurcation Pourbaix diagram Reduction potential Dependency of reduction potential on pH Standard electrode potential Standard reduction potential Standard reduction potential (data page) Standard state References Bibliography Electrochemistry Bio-electrochemistry Microbiology Biochemistry Standard reduction potentials for half-reactions important in biochemistry Electrochemical potentials Thermodynamics databases Biochemistry databases
Table of standard reduction potentials for half-reactions important in biochemistry
[ "Physics", "Chemistry", "Biology" ]
2,091
[ "Electrochemical potentials", "Biochemistry databases", "Electrochemistry", "Thermodynamics", "nan", "Biochemistry", "Thermodynamics databases" ]
2,171,486
https://en.wikipedia.org/wiki/Ponderomotive%20force
In physics, a ponderomotive force is a nonlinear force that a charged particle experiences in an inhomogeneous oscillating electromagnetic field. It causes the particle to move towards the area of the weaker field strength, rather than oscillating around an initial point as happens in a homogeneous field. This occurs because the particle sees a greater magnitude of force during the half of the oscillation period while it is in the area with the stronger field. The net force during its period in the weaker area in the second half of the oscillation does not offset the net force of the first half, and so over a complete cycle this makes the particle move towards the area of lesser force. The ponderomotive force Fp is expressed by Fp = −(e2 / (4mω2)) ∇(E2), which has units of newtons (in SI units) and where e is the electrical charge of the particle, m is its mass, ω is the angular frequency of oscillation of the field, and E is the amplitude of the electric field. At low enough amplitudes the magnetic field exerts very little force. This equation means that a charged particle in an inhomogeneous oscillating field not only oscillates at the frequency ω of the field, but is also accelerated by Fp toward the weak field direction. This is a rare case in which the direction of the force does not depend on whether the particle is positively or negatively charged. Etymology The term ponderomotive comes from the Latin ponder- (meaning weight) and the English motive (having to do with motion). Derivation The derivation of the ponderomotive force expression proceeds as follows. Consider a particle under the action of a non-uniform electric field oscillating at frequency ω in the x-direction. The equation of motion is given by: m ẍ = e E(x) cos(ωt), neglecting the effect of the associated oscillating magnetic field. If the length scale of variation of E(x) is large enough, then the particle trajectory can be divided into a slow time (secular) motion and a fast time (micro)motion: x = x0 + x1, where x0 is the slow drift motion and x1 represents fast oscillations. Now, let us also assume that |x1| is small compared with the length scale over which E varies. Under this assumption, we can use Taylor expansion on the force equation about x0, to get: m(ẍ0 + ẍ1) = e[E(x0) + x1 E′(x0)] cos(ωt), and because x1 is small, the term x1 E′(x0) can be neglected at leading order, so m ẍ1 = e E(x0) cos(ωt). On the time scale on which x1 oscillates, x0 is essentially a constant. Thus, the above can be integrated to get: x1 = −(e / (mω2)) E(x0) cos(ωt). Substituting this in the force equation and averaging over the fast timescale, we get, m ẍ0 = −(e2 / (4mω2)) d(E2)/dx0. Thus, we have obtained an expression for the drift motion of a charged particle under the effect of a non-uniform oscillating field. Time averaged density Instead of a single charged particle, there could be a gas of charged particles confined by the action of such a force. Such a gas of charged particles is called plasma. The distribution function and density of the plasma will fluctuate at the applied oscillating frequency and to obtain an exact solution, we need to solve the Vlasov Equation. But, it is usually assumed that the time averaged density of the plasma can be directly obtained from the expression for the drift motion of individual charged particles: n(r) = n0 exp(−ΦP / (kB T)), where ΦP is the ponderomotive potential and is given by ΦP = (e2 / (4mω2)) E2. Generalized ponderomotive force Instead of just an oscillating field, a permanent field could also be present. In such a situation, the force equation of a charged particle becomes: m ẍ = e[E0(x) + E(x) cos(ωt)]. To solve the above equation, we can make a similar assumption as we did for the case when no permanent field was present. 
This gives a generalized expression for the drift motion of the particle: Applications The idea of a ponderomotive description of particles under the action of a time-varying field has applications in areas like: High harmonic generation Plasma acceleration of particles Plasma propulsion engine especially the Electrodeless plasma thruster Quadrupole ion trap Terahertz time-domain spectroscopy as a source of high energy THz radiation in laser-induced air plasmas The quadrupole ion trap uses a linear function along its principal axes. This gives rise to a harmonic oscillator in the secular motion with the so-called trapping frequency , where are the charge and mass of the ion, the peak amplitude and the frequency of the radiofrequency (rf) trapping field, and the ion-to-electrode distance respectively. Note that a larger rf frequency lowers the trapping frequency. The ponderomotive force also plays an important role in laser induced plasmas as a major density lowering factor. Often, however, the assumed slow-time independency of is too restrictive, an example being the ultra-short, intense laser pulse-plasma(target) interaction. Here a new ponderomotive effect comes into play, the ponderomotive memory effect. The result is a weakening of the ponderomotive force and the generation of wake fields and ponderomotive streamers. In this case the fast-time averaged density becomes for a Maxwellian plasma: , where and . References General Citations Journals Electrodynamics Force
Ponderomotive force
[ "Physics", "Mathematics" ]
1,011
[ "Force", "Physical quantities", "Quantity", "Mass", "Classical mechanics", "Electrodynamics", "Wikipedia categories named after physical quantities", "Matter", "Dynamical systems" ]
2,172,915
https://en.wikipedia.org/wiki/Scattering%20channel
In scattering theory, a scattering channel is a quantum state of the colliding system before or after the collision (t → ±∞). The Hilbert space spanned by the states before the collision (in states) is equal to the space spanned by the states after the collision (out states); both are Fock spaces if there is a mass gap. This is the reason why the S matrix, which maps the in states onto the out states, must be unitary. Scattering channels are also called scattering asymptotes. The Møller operators map the scattering channels onto the corresponding states which are solutions of the Schrödinger equation taking the interaction Hamiltonian into account. The Møller operators are isometric. See also LSZ formalism Scattering
Scattering channel
[ "Physics", "Chemistry", "Materials_science" ]
150
[ "Scattering stubs", "Quantum mechanics", "Scattering", "Particle physics", "Condensed matter physics", "Nuclear physics", "Quantum physics stubs" ]
2,174,011
https://en.wikipedia.org/wiki/Avoided%20crossing
In quantum physics and quantum chemistry, an avoided crossing (AC, sometimes called intended crossing, non-crossing or anticrossing) is the phenomenon where two eigenvalues of a Hermitian matrix representing a quantum observable and depending on N continuous real parameters cannot become equal in value ("cross") except on a manifold of dimension N − 3. The phenomenon is also known as the von Neumann–Wigner theorem. In the case of a diatomic molecule (with one parameter, namely the bond length), this means that the eigenvalues cannot cross at all. In the case of a triatomic molecule, this means that the eigenvalues can coincide only at a single point (see conical intersection). This is particularly important in quantum chemistry. In the Born–Oppenheimer approximation, the electronic molecular Hamiltonian is diagonalized on a set of distinct molecular geometries (the obtained eigenvalues are the values of the adiabatic potential energy surfaces). The geometries for which the potential energy surfaces avoid crossing are the locus where the Born–Oppenheimer approximation fails. Avoided crossings also occur in the resonance frequencies of undamped mechanical systems, where the stiffness and mass matrices are real symmetric. There the resonance frequencies are the square root of the generalized eigenvalues. In two-state systems Emergence Study of a two-level system is of vital importance in quantum mechanics because it embodies simplifications of many physically realizable systems. The effect of perturbation on a two-state system Hamiltonian is manifested through avoided crossings in the plot of individual energy versus energy difference curve of the eigenstates. The two-state Hamiltonian can be written as H = [[E1, 0], [0, E2]]. The eigenvalues of which are E1 and E2 and the eigenvectors, (1, 0)T and (0, 1)T. These two eigenvectors designate the two states of the system. If the system is prepared in either of the states it would remain in that state. If E1 happens to be equal to E2 there will be a twofold degeneracy in the Hamiltonian. In that case any superposition of the degenerate eigenstates is evidently another eigenstate of the Hamiltonian. Hence the system prepared in any state will remain in that forever. However, when subjected to an external perturbation, the matrix elements of the Hamiltonian change. For the sake of simplicity we consider a perturbation with only off diagonal elements. Since the overall Hamiltonian must be Hermitian we may simply write the new Hamiltonian H′ = H + P = [[E1, W], [W*, E2]], where P is the perturbation with zero diagonal terms. The fact that P is Hermitian fixes its off-diagonal components. The modified eigenstates can be found by diagonalising the modified Hamiltonian. It turns out that the new eigenvalues E+ and E− are E± = (E1 + E2)/2 ± √( ((E1 − E2)/2)2 + |W|2 ). If a graph is plotted varying the energy difference E1 − E2 along the horizontal axis and E+ or E− along the vertical, we find two branches of a hyperbola (as shown in the figure). The curve asymptotically approaches the original unperturbed energy levels. Analyzing the curves it becomes evident that even if the original states were degenerate (i.e. E1 = E2) the new energy states are no longer equal. However, if W is set to zero we may find at E1 = E2 that E+ = E−, and the levels cross. Thus with the effect of the perturbation these level crossings are avoided. Quantum resonance The immediate impact of avoided level crossing in a degenerate two state system is the emergence of a lowered energy eigenstate. 
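A minimal numerical sketch of this two-level spectrum (assuming Python with NumPy; the value W = 0.1 is illustrative) shows the hyperbola and its gap of 2|W| at zero detuning:

```python
import numpy as np

W = 0.1  # illustrative off-diagonal perturbation strength

# Sweep the detuning E1 - E2 and diagonalize the perturbed Hamiltonian.
for d in (-0.5, -0.1, 0.0, 0.1, 0.5):
    H = np.array([[d / 2, W],
                  [W, -d / 2]])
    E_minus, E_plus = np.linalg.eigvalsh(H)  # ascending eigenvalues
    print(f"detuning {d:+.1f}: E- = {E_minus:+.3f}, E+ = {E_plus:+.3f}")

# At zero detuning the levels do not meet: the gap is 2*W = 0.2.
```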
The effective lowering of energy always corresponds to increasing stability. (see: Energy minimization) Bond resonance in organic molecules exemplifies the occurrence of such avoided crossings. To describe these cases we may note that the non-diagonal elements in an erstwhile diagonalised Hamiltonian not only modify the energy eigenvalues but also superpose the old eigenstates into the new ones. These effects are more prominent if the original Hamiltonian had degeneracy. This superposition of eigenstates to attain more stability is precisely the phenomenon of chemical bond resonance. Our earlier treatment started by denoting the eigenvectors (1, 0)T and (0, 1)T as the matrix representation of eigenstates |1⟩ and |2⟩ of a two-state system. Using bra–ket notation the matrix elements of H′ are actually the terms H′ij = ⟨i|H′|j⟩ with i, j ∈ {1, 2}, where H′11 = H′22 = E due to the degeneracy of the unperturbed Hamiltonian and the off-diagonal perturbations are H′12 = W and H′21 = W*. The new eigenstates |+⟩ and |−⟩ can be found by solving the eigenvalue equations H′|+⟩ = E+|+⟩ and H′|−⟩ = E−|−⟩. From simple calculations it can be shown that E± = E ± |W|, and that for real W the new eigenstates are the symmetric and antisymmetric combinations (|1⟩ ± |2⟩)/√2. It is evident that both of the new eigenstates are superpositions of the original degenerate eigenstates and one of the eigenvalues (here E−) is less than the original unperturbed eigenenergy. So the corresponding stable system will naturally mix up the former unperturbed eigenstates to minimize its energy. In the example of benzene the experimental evidence of probable bond structures gives rise to two different eigenstates, |1⟩ and |2⟩. The symmetry of these two structures mandates that ⟨1|H|1⟩ = ⟨2|H|2⟩ (= E). However it turns out that the two-state Hamiltonian of benzene is not diagonal. The off-diagonal elements result in a lowering of energy and the benzene molecule stabilizes in a structure which is a superposition of these symmetric ones with energy E − |W|. For any general two-state system avoided level crossing repels the eigenstates |+⟩ and |−⟩ such that it requires more energy for the system to achieve the higher energy configuration. Resonances in avoided crossing In molecules, the nonadiabatic couplings between two adiabatic potentials build the AC region. Because they are not in the bound state region of the adiabatic potentials, the rovibronic resonances in the AC region of two-coupled potentials are very special and usually do not play important roles in the scatterings. Exemplified in particle scattering, resonances in the AC region are comprehensively investigated. The effects of resonances in the AC region on the scattering cross sections strongly depend on the nonadiabatic couplings of the system: they can be very significant, appearing as sharp peaks, or inconspicuous, buried in the background. More importantly, a simple quantity proposed by Zhu and Nakamura to classify the coupling strength of nonadiabatic interactions can be well applied to quantitatively estimate the importance of resonances in the AC region. General avoided crossing theorem The above illustration of avoided crossing however is a very specific case. From a generalised view point the phenomenon of avoided crossing is actually controlled by the parameters behind the perturbation. For the most general perturbation affecting a two-dimensional subspace of the Hamiltonian H, we may write the effective Hamiltonian matrix in that subspace as H = [[H11, H12], [H12, H22]]. Here the elements of the state vectors were chosen to be real so that all the matrix elements become real. Now the eigenvalues of the system for this subspace are given by E± = (H11 + H22)/2 ± √( ((H11 − H22)/2)2 + H122 ). The terms under the square root are squared real numbers. 
So for these two levels to cross we simultaneously require H11 = H22 and H12 = 0. Now if the perturbation has N parameters λ1, λ2, …, λN we may in general vary these numbers to satisfy these two equations. If we choose the values of λ1 to λN−1 then both of the equations above have one single free parameter λN. In general it is not possible to find one λN such that both of the equations are satisfied. However, if we allow another parameter λN−1 to be free, both of these two equations will now be controlled by the same two parameters {λN−1, λN}. And generally there will be two such values of them for which the equations will be simultaneously satisfied. So with N distinct parameters, N − 2 parameters can always be chosen arbitrarily and still we can find two such λ's such that there would be crossing of energy eigenvalues. In other words, the values of E+ and E− would be the same for N − 2 freely varying co-ordinates (while the rest of the two co-ordinates are fixed from the condition equations). Geometrically the eigenvalue equations describe a surface in N-dimensional space. Since their intersection is parametrized by N − 2 coordinates, we may formally state that for N continuous real parameters controlling the perturbed Hamiltonian, the levels (or surfaces) can only cross at a manifold of dimension N − 2. However the symmetry of the Hamiltonian has a role to play in the dimensionality. If the original Hamiltonian has asymmetric states (so that H12 = 0), the off-diagonal terms vanish automatically to ensure hermiticity. This allows us to get rid of the equation H12 = 0. Now from similar arguments as posed above, it is straightforward that for an asymmetrical Hamiltonian, the intersection of energy surfaces takes place in a manifold of dimension N − 1. In polyatomic molecules In an N-atomic polyatomic molecule there are 3N−6 vibrational coordinates (3N−5 for a linear molecule) that enter into the electronic Hamiltonian as parameters. For a diatomic molecule there is only one such coordinate, the bond length r. Thus, due to the avoided crossing theorem, in a diatomic molecule we cannot have level crossings between electronic states of the same symmetry. However, for a polyatomic molecule there is more than one geometry parameter in the electronic Hamiltonian and level crossings between electronic states of the same symmetry are not avoided. See also Geometric phase Christopher Longuet-Higgins Conical intersection Vibronic coupling Adiabatic theorem Bond hardening Bond softening Landau–Zener formula Level repulsion References Sources Quantum mechanics Quantum chemistry
Avoided crossing
[ "Physics", "Chemistry" ]
1,923
[ "Quantum chemistry", "Theoretical physics", "Quantum mechanics", "Theoretical chemistry", " molecular", "Atomic", " and optical physics" ]
2,174,453
https://en.wikipedia.org/wiki/Lotus%20effect
The lotus effect refers to self-cleaning properties that are a result of ultrahydrophobicity as exhibited by the leaves of Nelumbo, the lotus flower. Dirt particles are picked up by water droplets due to the micro- and nanoscopic architecture on the surface, which minimizes the droplet's adhesion to that surface. Ultrahydrophobicity and self-cleaning properties are also found in other plants, such as Tropaeolum (nasturtium), Opuntia (prickly pear), Alchemilla, cane, and also on the wings of certain insects. The phenomenon of ultrahydrophobicity was first studied by Dettre and Johnson in 1964 using rough hydrophobic surfaces. Their work developed a theoretical model based on experiments with glass beads coated with paraffin or PTFE telomer. The self-cleaning property of ultrahydrophobic micro-nanostructured surfaces was studied by Wilhelm Barthlott and Ehler in 1977, who described such self-cleaning and ultrahydrophobic properties for the first time as the "lotus effect"; perfluoroalkyl and perfluoropolyether ultrahydrophobic materials were developed by Brown in 1986 for handling chemical and biological fluids. Other biotechnical applications have emerged since the 1990s. Functional principle The high surface tension of water causes droplets to assume a nearly spherical shape, since a sphere has minimal surface area, and this shape therefore minimizes the solid-liquid surface energy. On contact of liquid with a surface, adhesion forces result in wetting of the surface. Either complete or incomplete wetting may occur depending on the structure of the surface and the fluid tension of the droplet. The cause of self-cleaning properties is the hydrophobic water-repellent double structure of the surface. This enables the contact area and the adhesion force between surface and droplet to be significantly reduced, resulting in a self-cleaning process. This hierarchical double structure is formed out of a characteristic epidermis (its outermost layer called the cuticle) and the covering waxes. The epidermis of the lotus plant possesses papillae 10 μm to 20 μm in height and 10 μm to 15 μm in width on which the so-called epicuticular waxes are imposed. These superimposed waxes are hydrophobic and form the second layer of the double structure. This system regenerates. This biochemical property is responsible for the functioning of the water repellency of the surface. The hydrophobicity of a surface can be measured by its contact angle. The higher the contact angle the higher the hydrophobicity of a surface. Surfaces with a contact angle < 90° are referred to as hydrophilic and those with an angle >90° as hydrophobic. Some plants show contact angles up to 160° and are called ultrahydrophobic, meaning that only 2–3% of the surface of a droplet (of typical size) is in contact. Plants with a double structured surface like the lotus can reach a contact angle of 170°, whereby the droplet's contact area is only 0.6%. All this leads to a self-cleaning effect. Dirt particles with an extremely reduced contact area are picked up by water droplets and are thus easily cleaned off the surface. If a water droplet rolls across such a contaminated surface the adhesion between the dirt particle, irrespective of its chemistry, and the droplet is higher than between the particle and the surface. This cleaning effect has been demonstrated on common materials such as stainless steel when a superhydrophobic surface is produced. 
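One way to make the contact-area figures above concrete is a simple geometric model (an idealization assumed here, not part of the original measurements) that treats the droplet as a spherical cap with contact angle θ, for which the contact fraction is (1 + cos θ)/2. A minimal sketch (assuming Python with NumPy):

```python
import numpy as np

def contact_fraction(theta_deg):
    """Contact (base) area divided by the droplet's free surface area
    for a spherical-cap droplet with contact angle theta:
    (1 + cos(theta)) / 2."""
    t = np.radians(theta_deg)
    return (1 + np.cos(t)) / 2

for theta in (90, 160, 170):
    print(theta, f"{contact_fraction(theta):.1%}")
# 90 -> 50.0%, 160 -> 3.0%, 170 -> 0.8%
```

The model reproduces the roughly 2–3% contact quoted above for a 160° contact angle, and gives well under 1% near 170°, in line with the 0.6% figure cited for the lotus.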
As this self-cleaning effect is based on the high surface tension of water, it does not work with organic solvents. Therefore, the hydrophobicity of a surface is no protection against graffiti. This effect is of great importance for plants as a protection against pathogens like fungi or algae growth, and also for animals like butterflies, dragonflies and other insects not able to cleanse all their body parts. Another positive effect of self-cleaning is the prevention of the contamination of light-exposed areas of a plant surface that would otherwise reduce photosynthesis. Technical application When it was discovered that the self-cleaning qualities of ultrahydrophobic surfaces come from physical-chemical properties at the microscopic to nanoscopic scale rather than from the specific chemical properties of the leaf surface, the possibility arose of using this effect in man-made surfaces, by mimicking nature in a general way rather than a specific one. Some nanotechnologists have developed treatments, coatings, paints, roof tiles, fabrics and other surfaces that can stay dry and clean themselves by replicating in a technical manner the self-cleaning properties of plants such as the lotus. This can usually be achieved using special fluorochemical or silicone treatments on structured surfaces or with compositions containing micro-scale particulates. In addition to chemical surface treatments, which can wear away over time, metals have been sculpted with femtosecond pulse lasers to produce the lotus effect. The materials are uniformly black at any angle, which combined with the self-cleaning properties might produce very low-maintenance solar thermal energy collectors, while the high durability of the metals could be used for self-cleaning latrines to reduce disease transmission. Further applications have been marketed, such as self-cleaning glasses installed in the sensors of traffic control units on German autobahns, developed by a cooperation partner (Ferro GmbH). The Swiss companies HeiQ and Schoeller Textil have developed stain-resistant textiles under the brand names "HeiQ Eco Dry" and "NanoSphere" respectively. In October 2005, tests by the Hohenstein Research Institute showed that clothes treated with NanoSphere technology allowed tomato sauce, coffee and red wine to be easily washed away, even after a few washes. Another possible application is self-cleaning awnings, tarpaulins and sails, which otherwise quickly become dirty and difficult to clean. Superhydrophobic coatings applied to microwave antennas can significantly reduce rain fade and the buildup of ice and snow. "Easy to clean" products in advertisements are often mistakenly conflated with the self-cleaning properties of hydrophobic or ultrahydrophobic surfaces. Patterned ultrahydrophobic surfaces also show promise for "lab-on-a-chip" microfluidic devices and can greatly improve surface-based bioanalysis. Superhydrophobic or hydrophobic properties have been used in dew harvesting, or the funneling of water to a basin for use in irrigation. The Groasis Waterboxx has a lid with a microscopic pyramidal structure, based on these ultrahydrophobic properties, that funnels condensation and rainwater into a basin for release to a growing plant's roots. Research history Although the self-cleaning phenomenon of the lotus was possibly known in Asia long before (reference to the lotus effect is found in the Bhagavad Gita), its mechanism was explained only in the early 1970s, after the introduction of the scanning electron microscope.
Studies were performed with leaves of Tropaeolum and lotus (Nelumbo). Similar to the lotus effect, a recent study has revealed honeycomb-like micro-structures on the taro leaf, which make the leaf superhydrophobic; the contact angle measured on the taro leaf in that study was around 148°. See also Biomimetics Petal effect Salvinia effect References External links Project Group lotus effect - Nees Institut for biodiversity of plants Friedrich-Wilhelm University of Bonn Scientific American article: "Self-Cleaning Materials: Lotus Leaf-Inspired Nanotechnology" Nanotechnology Surface science
Lotus effect
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,578
[ "Nanotechnology", "Condensed matter physics", "Surface science", "Materials science" ]
2,175,118
https://en.wikipedia.org/wiki/Refugium%20%28population%20biology%29
In biology, a refugium (plural: refugia) is a location which supports an isolated or relict population of a once more widespread species. This isolation (allopatry) can be due to climatic changes, geography, or human activities such as deforestation and overhunting. Present examples of refugial animal species are the mountain gorilla, isolated to specific mountains in central Africa, and the Australian sea lion, isolated to specific breeding beaches along the south-west coast of Australia because humans took so many of their number as game. This resulting isolation, in many cases, can be seen as only a temporary state; however, some refugia may be longstanding, thereby having many endemic species, not found elsewhere, which survive as relict populations. The Indo-Pacific Warm Pool has been proposed to be a longstanding refugium, based on the discovery of the "living fossil" of a marine dinoflagellate called Dapsilidinium pastielsii, currently found in the Indo-Pacific Warm Pool only. For plants, anthropogenic climate change propels scientific interest in identifying refugial species that were isolated into small or disjunct ranges during glacial episodes of the Pleistocene, yet whose ability to expand their ranges during the warmth of interglacial periods (such as the Holocene) was apparently limited or precluded by topographic, streamflow, or habitat barriers—or by the extinction of coevolved animal dispersers. The concern is that ongoing warming trends will expose them to extirpation or extinction in the decades ahead. In anthropology, the term refugia often refers specifically to Last Glacial Maximum refugia, where some ancestral human populations may have been forced back to glacial refugia (similar small isolated pockets on the face of the continental ice sheets) during the last glacial period. Going from west to east, suggested examples include the Franco-Cantabrian region (in northern Iberia), the Italian and Balkan peninsulas, the Ukrainian LGM refuge, and the Bering Land Bridge. Archaeological and genetic data suggest that the source populations of Paleolithic humans survived the glacial maxima (including the Last Glacial Maximum) in sparsely wooded areas and dispersed through areas of high primary productivity while avoiding dense forest cover. Glacial refugia, where human populations found refuge during the last glacial period, may have played a crucial role in shaping the emergence and diversification of the language families that exist in the world today. More recently, the term refugia has been used to refer to areas that could offer relative climate stability in the face of modern climate change. Speciation As an example of a local refugia study, Jürgen Haffer first proposed the concept of refugia to explain the biological diversity of bird populations in the Amazonian river basin. Haffer suggested that climatic change in the late Pleistocene led to reduced reservoirs of habitable forests in which populations became allopatric. Over time, that led to speciation: populations of the same species that found themselves in different refugia evolved differently, creating parapatric sister-species. As the Pleistocene ended, the arid conditions gave way to the present humid rainforest environment, reconnecting the refugia.
Theoretically, current biogeographical patterns can be used to infer past refugia: if several unrelated species follow concurrent range patterns, the area may have been a refugium. Moreover, the current distribution of species with narrow ecological requirements tends to be associated with the spatial position of glacial refugia. Simple environment examples of temperature One can provide a simple explanation of refugia involving core temperatures and exposure to sunlight. In the northern hemisphere, north-facing sites on hills or mountains, and places at higher elevations, count as cold sites. The reverse are sun- or heat-exposed, lower-elevation, south-facing sites: hot sites. (The opposite directions apply in the southern hemisphere.) Each site becomes a refugium, one as a "cold-surviving refugium" and the other as a "hot-surviving refugium". Canyons with deep hidden areas (the opposite of hillsides, mountains, mesas, etc. or other exposed areas) lead to these separate types of refugia. A concept not often referenced is that of "sweepstakes colonization": a dramatic ecological event occurs, for example a meteor strike, with global, multiyear effects. The sweepstake-winning species happens to already be living in a fortunate site, and its environment is rendered even more advantageous, as opposed to the "losing" species, which immediately fails to reproduce. Past climate change refugia Ecological understanding and geographic identification of climate refugia that remained significant strongholds for plant and animal survival during the extremes of past cooling and warming episodes largely pertain to the Quaternary glaciation cycles during the past several million years, especially in the Northern Hemisphere. A number of defining characteristics of past refugia are prevalent, including "an area where distinct genetic lineages have persisted through a series of Tertiary or Quaternary climate fluctuations owing to special, buffering environmental characteristics", "a geographical region that a species inhabits during the period of a glacial/interglacial cycle that represents the species' maximum contraction in geographical range," and "areas where local populations of a species can persist through periods of unfavorable regional climate." Future climate change refugia In systematic conservation planning, the term refugium has been used to define areas that could be used in protected area development to protect species from climate change. The term has been used alternatively to refer to areas with stable habitats or stable climates. More specifically, the term in situ refugium is used to refer to areas that will allow species that exist in an area to remain there even as conditions change, whereas ex situ refugium refers to an area into which species distributions can move in response to climate change. Sites that offer in situ refugia are also called resilient sites, in which species will continue to have what they need to survive even as the climate changes. Using downscaled climate models, one study found that areas near the coast are predicted to experience less overall warming than areas toward the interior of the US state of Washington. Other research has found that old-growth forests are particularly insulated from climatic changes due to cooling effects from evapotranspiration and their ability to retain moisture. The same study found that such effects in the Pacific Northwest would create important refugia for bird species.
A review of refugia-focused conservation strategy in the Klamath-Siskiyou Ecoregion found that, in addition to old-growth forest, the northern aspects of hillslopes and deep gorges would provide relatively cool areas for wildlife, and seeps or bogs surrounded by mature and old-growth forests would continue to supply moisture even as water availability decreases. Beginning in 2010, the concept of geodiversity (a term used previously in efforts to preserve scientifically important geological features) entered the literature of conservation biology as a potential way to identify climate change refugia and as a surrogate (in other words, a proxy used when planning for protected areas) for biodiversity. While the language to describe this mode of conservation planning had not fully developed until recently, the use of geophysical diversity in conservation planning goes back at least as far as the work by Hunter and others in 1988, and Richard Cowling and his colleagues in South Africa also used "spatial features" as surrogates for ecological processes in establishing conservation areas in the late 1990s and early 2000s. The most recent efforts have used the idea of land facets (also referred to as geophysical settings, enduring features, or geophysical stages), which are unique combinations of topographical features (such as slope steepness, slope direction, and elevation) and soil composition, to quantify physical features. The density of these facets, in turn, is used as a measure of geodiversity. Because geodiversity has been shown to be correlated with biodiversity, even as species move in response to climate change, protected areas with high geodiversity may continue to protect biodiversity as niches get filled by the influx of species from neighboring areas. Highly geodiverse protected areas may also allow for the movement of species within the area from one land facet or elevation to another. Conservation scientists, however, emphasize that the use of refugia to plan for climate change is not a substitute for fine-scale (more localized) and traditional approaches to conservation, as individual species and ecosystems will need to be protected where they exist in the present. They also emphasize that responding to climate change in conservation is not a substitute for actually limiting the causes of climate change. See also Notes References Biogeography Biomes Habitat Population ecology
Refugium (population biology)
[ "Biology" ]
1,854
[ "Biogeography" ]
14,463,498
https://en.wikipedia.org/wiki/Eilenberg%E2%80%93Mazur%20swindle
In mathematics, the Eilenberg–Mazur swindle, named after Samuel Eilenberg and Barry Mazur, is a method of proof that involves paradoxical properties of infinite sums. In geometric topology it was introduced by Barry Mazur and is often called the Mazur swindle. In algebra it was introduced by Samuel Eilenberg and is known as the Eilenberg swindle or Eilenberg telescope (see telescoping sum). The Eilenberg–Mazur swindle is similar to the following well-known joke "proof" that 1 = 0: 1 = 1 + (−1 + 1) + (−1 + 1) + ... = 1 − 1 + 1 − 1 + ... = (1 − 1) + (1 − 1) + ... = 0 This "proof" is not valid as a claim about real numbers because Grandi's series 1 − 1 + 1 − 1 + ... does not converge, but the analogous argument can be used in some contexts where there is some sort of "addition" defined on some objects for which infinite sums do make sense, to show that if A + B = 0 then A = B = 0. Mazur swindle In geometric topology the addition used in the swindle is usually the connected sum of knots or manifolds. Example: A typical application of the Mazur swindle in geometric topology is the proof that the sum of two non-trivial knots A and B is non-trivial. For knots it is possible to take infinite sums by making the knots smaller and smaller, so if A + B is trivial then so is A (and B by a similar argument). The infinite sum of knots is usually a wild knot, not a tame knot. See the references for more geometric examples. Example: The oriented n-manifolds have an addition operation given by connected sum, with 0 the n-sphere. If A + B is the n-sphere, then A + B + A + B + ... is Euclidean space, so the Mazur swindle shows that the connected sum of A and Euclidean space is Euclidean space, which shows that A is the 1-point compactification of Euclidean space and therefore A is homeomorphic to the n-sphere. (This does not show in the case of smooth manifolds that A is diffeomorphic to the n-sphere, and in some dimensions, such as 7, there are examples of exotic spheres A with inverses that are not diffeomorphic to the standard n-sphere.) Eilenberg swindle In algebra the addition used in the swindle is usually the direct sum of modules over a ring. Example: A typical application of the Eilenberg swindle in algebra is the proof that if A is a projective module over a ring R then there is a free module F with A ⊕ F ≅ F. To see this, choose a module B such that A ⊕ B is free, which can be done as A is projective, and put F = B ⊕ A ⊕ B ⊕ A ⊕ B ⊕ ⋯, so that A ⊕ F = A ⊕ (B ⊕ A) ⊕ (B ⊕ A) ⊕ ⋯ = (A ⊕ B) ⊕ (A ⊕ B) ⊕ ⋯ ≅ F. Example: Finitely generated free modules over commutative rings R have a well-defined natural number as their dimension which is additive under direct sums, and are isomorphic if and only if they have the same dimension. This is false for some noncommutative rings, and a counterexample can be constructed using the Eilenberg swindle as follows. Let X be an abelian group such that X ≅ X ⊕ X (for example the direct sum of an infinite number of copies of any nonzero abelian group), and let R be the ring of endomorphisms of X. Then the left R-module R is isomorphic to the left R-module R ⊕ R. Example: If A and B are any groups then the Eilenberg swindle can be used to construct a ring R such that the group rings R[A] and R[B] are isomorphic rings: take R to be the group ring of the restricted direct product of infinitely many copies of A ⨯ B. Other examples The proof of the Cantor–Bernstein–Schroeder theorem might be seen as an antecedent of the Eilenberg–Mazur swindle. In fact, the ideas are quite similar.
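Before turning to the set-theoretic version in detail, it may help to record the general cancellation argument that all the examples above instantiate. The derivation below is our own summary, not text from the article; it assumes an addition that is commutative, associative, and admits well-defined infinite sums, and the same argument with the roles of A and B exchanged gives B = 0.

```latex
\begin{align*}
A &= A + 0 + 0 + \cdots \\
  &= A + (B + A) + (B + A) + \cdots && \text{since } B + A = A + B = 0 \\
  &= (A + B) + (A + B) + \cdots && \text{by reassociating the infinite sum} \\
  &= 0 + 0 + \cdots = 0.
\end{align*}
```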
If there are injections of sets from X to Y and from Y to X, this means that formally we have X = Y + A and Y = B + X for some sets A and B, where + means disjoint union and = means there is a bijection between two sets. Expanding the former with the latter, X = X + A + B. In this bijection, let Z consist of those elements of the left hand side that correspond to an element of X on the right hand side. This bijection then expands to the bijection X = A + B + A + B + ⋯ + Z. Substituting the right hand side for X in Y = B + X gives the bijection Y = B + A + B + A + ⋯ + Z. Switching every adjacent pair B + A yields Y = A + B + A + B + ⋯ + Z. Composing the bijection for X with the inverse of the bijection for Y then yields X = Y. This argument depended on the bijections A + B = B + A and A + (B + C) = (A + B) + C, as well as the well-definedness of infinite disjoint union. Notes References External links Exposition by Terence Tao on Mazur's swindle in topology Knot theory Module theory
Eilenberg–Mazur swindle
[ "Mathematics" ]
1,178
[ "Fields of abstract algebra", "Module theory" ]
14,463,750
https://en.wikipedia.org/wiki/Stress%20field
A stress field is the distribution of internal forces in a body that balance a given set of external forces. Stress fields are widely used in fluid dynamics and materials science. One can picture a stress field as the stress created by adding an extra half-plane of atoms to a crystal, i.e. an edge dislocation. The bonds are clearly stretched around the location of the dislocation, and this stretching causes the stress field to form. Atomic bonds farther and farther away from the dislocation centre are less and less stretched, which is why the stress field dissipates as the distance from the dislocation centre increases. Each dislocation within the material has a stress field associated with it. The creation of these stress fields is a result of the material trying to dissipate the mechanical energy being exerted on it. By convention, these dislocations are labelled as either positive or negative depending on whether the stress field of the dislocation is mostly compressive or tensile. By modelling dislocations and their stress fields as either positive (compressive field) or negative (tensile field) charges, we can understand how dislocations interact with each other in the lattice. If two like fields come into contact with one another, they will repel one another. On the other hand, if two opposing fields come into contact with one another, they will be attracted to one another. These two interactions both strengthen the material, in different ways. If two like-charged fields come into contact and are confined to a particular region, extra force is needed to overcome the repulsion before the dislocations can move past one another. If two oppositely charged fields come into contact with one another, they will merge to form a jog. A jog can be modelled as a potential well that traps dislocations; thus, extra force is needed to pull the dislocations apart. Since dislocation motion is the primary mechanism behind plastic deformation, increasing the stress required to move dislocations directly increases the yield strength of the material. The theory of stress fields can be applied to various strengthening mechanisms for materials. Stress fields can be created by adding different-sized atoms to the lattice (solute strengthening). If a smaller atom is added to the lattice, a tensile stress field is created: the atomic bonds are longer due to the smaller radius of the solute atom. Similarly, if a larger atom is added to the lattice, a compressive stress field is created: the atomic bonds are shorter due to the larger radius of the solute atom. The stress fields created by adding solute atoms form the basis of the material strengthening process that occurs in alloys. Further reading Arno Zang, Ove Stephansson, Stress Field of the Earth's Crust, Springer, 2010. Chapter 1, Introduction, page 1 Classical mechanics Materials science
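As an illustration of the dislocation stress field described above, the sketch below evaluates the standard isotropic linear-elasticity solution for a single edge dislocation (the textbook result found in, e.g., Hull and Bacon, Introduction to Dislocations). It is our own non-authoritative example: the shear modulus, Poisson's ratio, and Burgers vector values are illustrative placeholders, not figures taken from the article.

```python
import math

# Textbook isotropic-elasticity stress field of an edge dislocation lying
# along z with Burgers vector b along x. The field is compressive on one
# side of the slip plane and tensile on the other, which is the
# "positive/negative charge" picture used in the article.
G = 26e9       # shear modulus, Pa (illustrative value, roughly aluminium)
nu = 0.33      # Poisson's ratio (illustrative)
b = 2.86e-10   # Burgers vector magnitude, m (illustrative)

D = G * b / (2.0 * math.pi * (1.0 - nu))

def edge_dislocation_stress(x: float, y: float):
    """Return (sigma_xx, sigma_yy, sigma_xy) in Pa at position (x, y) metres
    from the dislocation line; diverges at the core (x = y = 0)."""
    r2 = x * x + y * y
    sxx = -D * y * (3 * x * x + y * y) / r2 ** 2
    syy = D * y * (x * x - y * y) / r2 ** 2
    sxy = D * x * (x * x - y * y) / r2 ** 2
    return sxx, syy, sxy

# The field decays as 1/r, matching the prose: bonds farther from the
# dislocation centre are less and less stretched.
for r in (1e-9, 1e-8, 1e-7):
    sxx, _, _ = edge_dislocation_stress(0.0, r)
    print(f"r = {r:.0e} m: sigma_xx = {sxx:.3e} Pa")
```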
Stress field
[ "Physics", "Materials_science", "Engineering" ]
582
[ "Applied and interdisciplinary physics", "Classical mechanics", "Materials science", "Mechanics", "nan" ]
14,464,469
https://en.wikipedia.org/wiki/Outline%20of%20black%20holes
The following outline is provided as an overview of and topical guide to black holes: Black hole – mathematically defined region of spacetime exhibiting such a strong gravitational pull that no particle or electromagnetic radiation can escape from inside it. The theory of general relativity predicts that a sufficiently compact mass can deform spacetime to form a black hole. The boundary of the region from which no escape is possible is called the event horizon. Although crossing the event horizon has enormous effect on the fate of the object crossing it, it appears to have no locally detectable features. In many ways a black hole acts like an ideal black body, as it reflects no light. Moreover, quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is on the order of billionths of a kelvin for black holes of stellar mass, making it essentially impossible to observe. What type of thing is a black hole? A black hole can be described as all of the following: Astronomical object Black body Collapsed star Types of black holes Schwarzschild metric – In Einstein's theory of general relativity, the Schwarzschild solution, named after Karl Schwarzschild, describes the gravitational field outside a spherical, uncharged, non-rotating mass such as a star, planet, or black hole. Rotating black hole – black hole that possesses spin angular momentum. Charged black hole – black hole that possesses electric charge. Virtual black hole – black hole that exists temporarily as a result of a quantum fluctuation of spacetime. Types of black holes, by size Micro black hole – predicted as tiny black holes, also called quantum mechanical black holes, or mini black holes, for which quantum mechanical effects play an important role. These could potentially have arisen as primordial black holes. Extremal black hole – black hole with the minimal possible mass that can be compatible with a given charge and angular momentum. Black hole electron – if there were a black hole with the same mass and charge as an electron, it would share many of the properties of the electron including the magnetic moment and Compton wavelength. Stellar black hole – black hole formed by the gravitational collapse of a massive star. They have masses ranging from about three to several tens of solar masses. Intermediate-mass black hole – black hole whose mass is significantly more than stellar black holes yet far less than supermassive black holes. Supermassive black hole – largest type of black hole in a galaxy, on the order of hundreds of thousands to billions of solar masses. Quasar – very energetic and distant active galactic nucleus. Active galactic nucleus – compact region at the centre of a galaxy that has a much higher than normal luminosity over at least some portion, and possibly all, of the electromagnetic spectrum. Blazar – very compact quasar associated with a presumed supermassive black hole at the center of an active, giant elliptical galaxy. Specific black holes List of black holes – incomplete list of black holes organized by size; some items in this list are galaxies or star clusters that are believed to be organized around a black hole. Black hole exploration Rossi X-ray Timing Explorer – satellite that observes the time structure of astronomical X-ray sources, named after Bruno Rossi. 
Formation of black holes Stellar evolution – process by which a star undergoes a sequence of radical changes during its lifetime. Gravitational collapse – inward fall of a body due to the influence of its own gravity. Neutron star – type of stellar remnant that can result from the gravitational collapse of a massive star during a Type II, Type Ib or Type Ic supernova event. Compact star – white dwarfs, neutron stars, other exotic dense stars, and black holes. Quark star – hypothetical type of exotic star composed of quark matter, or strange matter. Exotic star – compact star composed of something other than electrons, protons, and neutrons balanced against gravitational collapse by degeneracy pressure or other quantum properties. Tolman–Oppenheimer–Volkoff limit – upper bound to the mass of stars composed of neutron-degenerate matter. White dwarf – also called a degenerate dwarf, is a small star composed mostly of electron-degenerate matter. Supernova – stellar explosion that is more energetic than a nova. Hypernova – also known as a Type Ic Supernova, refers to an immensely large star that collapses at the end of its lifespan. Gamma-ray burst – flashes of gamma rays associated with extremely energetic explosions that have been observed in distant galaxies. Properties of black holes Accretion disk – structure (often a circumstellar disk) formed by diffuse material in orbital motion around a massive central body, typically a star. Accretion disks of black holes radiate in the X-ray part of the spectrum. Black hole thermodynamics – area of study that seeks to reconcile the laws of thermodynamics with the existence of black hole event horizons. Schwarzschild radius – distance from the center of an object such that, if all the mass of the object were compressed within that sphere, the escape speed from the surface would equal the speed of light. M–sigma relation – empirical correlation between the stellar velocity dispersion of a galaxy bulge and the mass M of the supermassive black hole at its center. Event horizon – boundary in spacetime beyond which events cannot affect an outside observer. Quasi-periodic oscillation – manner in which the X-ray light from an astronomical object flickers about certain frequencies. Photon sphere – spherical region of space where gravity is strong enough that photons are forced to travel in orbits. Ergosphere – region located outside a rotating black hole. Hawking radiation – black-body radiation that is predicted to be emitted by black holes, due to quantum effects near the event horizon. Penrose process – process theorised by Roger Penrose wherein energy can be extracted from a rotating black hole. Bondi accretion – spherical accretion onto an object. Spaghettification – vertical stretching and horizontal compression of objects into long thin shapes in a very strong gravitational field, caused by extreme tidal forces. Gravitational lens – distribution of matter between a distant source and an observer that is capable of bending the light from the source as it travels towards the observer.
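Two of the properties listed above reduce to one-line formulas: the Schwarzschild radius r_s = 2GM/c² and the Hawking temperature T = ħc³/(8πGMk_B), which is inversely proportional to the mass. The sketch below is our own illustration (constant values are approximate SI figures, not taken from the outline) and numerically checks the claim that a stellar-mass black hole's Hawking temperature is of the order of billionths of a kelvin.

```python
import math

# Physical constants (SI, approximate)
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
hbar = 1.055e-34    # reduced Planck constant, J s
k_B = 1.381e-23     # Boltzmann constant, J/K
M_sun = 1.989e30    # solar mass, kg

def schwarzschild_radius(mass_kg: float) -> float:
    """Event-horizon radius of a non-rotating, uncharged black hole."""
    return 2.0 * G * mass_kg / c ** 2

def hawking_temperature(mass_kg: float) -> float:
    """Black-body temperature of Hawking radiation, inversely
    proportional to the mass as stated in the outline."""
    return hbar * c ** 3 / (8.0 * math.pi * G * mass_kg * k_B)

for m in (M_sun, 10 * M_sun):
    print(f"M = {m / M_sun:4.0f} M_sun: "
          f"r_s = {schwarzschild_radius(m) / 1e3:6.2f} km, "
          f"T_H = {hawking_temperature(m):.2e} K")
# A 1 M_sun black hole gives r_s ~ 2.95 km and T_H ~ 6e-8 K,
# i.e. tens of billionths of a kelvin -- essentially unobservable.
```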
History of black holes History of black holes Timeline of black hole physics – Timeline of black hole physics John Michell – geologist who first proposed the idea of "dark stars" in 1783 Dark star Pierre-Simon Laplace – early mathematical theorist (1796) of the idea of black holes Albert Einstein – in 1915, arrived at the theory of general relativity Karl Schwarzschild – described the gravitational field of a point mass in 1915 Subrahmanyan Chandrasekhar – in 1931, using special relativity, postulated that a non-rotating body of electron-degenerate matter above a certain limiting mass (now called the Chandrasekhar limit at 1.4 solar masses) has no stable solutions. Tolman–Oppenheimer–Volkoff limit, a higher limit than the Chandrasekhar limit above which neutron stars would certainly collapse further, was predicted in 1939, along with a description of the mechanism by which a black hole could be produced. David Finkelstein – identified the Schwarzschild surface as an event horizon Roy Kerr – in 1963, found the exact solution for a rotating black hole Stephen Hawking and Roger Penrose showed in the late 1960s that global singularities can occur and that black holes are not a mathematical artifact. Cygnus X-1, discovered in 1964, was the first astrophysical object commonly accepted to be a black hole after further observations in the early 1970s. James Bardeen and Jacob Bekenstein formulated black hole thermodynamics alongside Hawking and others in the early 1970s. Hawking predicted Hawking radiation in 1974 as a consequence of black hole thermodynamics. The LIGO Scientific Collaboration announced the first detection of a black hole merger via gravitational wave observations on February 11, 2016. The Event Horizon Telescope observed the supermassive black hole in Messier 87's galactic center in 2017, leading to the first direct image of a black hole being published on April 10, 2019. Models of black holes Gravitational singularity – or spacetime singularity, is a location where the quantities that are used to measure the gravitational field become infinite in a way that does not depend on the coordinate system. Penrose–Hawking singularity theorems – set of results in general relativity which attempt to answer the question of when gravitation produces singularities. Primordial black hole – hypothetical type of black hole that is formed not by the gravitational collapse of a large star but by the extreme density of matter present during the universe's early expansion. Gravastar – object hypothesized in astrophysics as an alternative to the black hole theory, by Pawel Mazur and Emil Mottola. Dark star (Newtonian mechanics) – theoretical object compatible with Newtonian mechanics that, due to its large mass, has a surface escape velocity that equals or exceeds the speed of light. Dark-energy star Black star (semiclassical gravity) – gravitational object composed of matter. Magnetospheric eternally collapsing object – proposed alternatives to black holes advocated by Darryl Leiter and Stanley Robertson. Fuzzball (string theory) – theorized by some superstring theory scientists to be the true quantum description of black holes. White hole – hypothetical region of spacetime which cannot be entered from the outside, but from which matter and light have the ability to escape. Naked singularity – gravitational singularity without an event horizon.
Ring singularity – describes the gravitational singularity of a rotating black hole, or Kerr black hole, which is shaped like a ring. Immirzi parameter – numerical coefficient appearing in loop quantum gravity, a nonperturbative theory of quantum gravity. Membrane paradigm – useful "toy model" method or "engineering approach" for visualising and calculating the effects predicted by quantum mechanics for the exterior physics of black holes, without using quantum-mechanical principles or calculations. Kugelblitz (astrophysics) – concentration of light so intense that it forms an event horizon and becomes self-trapped: according to general relativity, if enough radiation is aimed into a region, the concentration of energy can warp spacetime enough for the region to become a black hole. Wormhole – hypothetical topological feature of spacetime that would be, fundamentally, a "shortcut" through spacetime. Quasi-star – hypothetical type of extremely massive star that may have existed very early in the history of the Universe. Black hole neural network Issues pertaining to black holes No-hair theorem – postulates that all black hole solutions of the Einstein–Maxwell equations of gravitation and electromagnetism in general relativity can be completely characterized by only three externally observable classical parameters: mass, electric charge, and angular momentum. Black hole information paradox – results from the combination of quantum mechanics and general relativity. Cosmic censorship hypothesis – two mathematical conjectures about the structure of singularities arising in general relativity. Nonsingular black hole models – mathematical theory of black holes that avoids certain theoretical problems with the standard black hole model, including information loss and the unobservable nature of the black hole event horizon. Holographic principle – property of quantum gravity and string theories which states that the description of a volume of space can be thought of as encoded on a boundary to the region—preferably a light-like boundary like a gravitational horizon. Black hole complementarity – conjectured solution to the black hole information paradox, proposed by Leonard Susskind and Gerard 't Hooft. Black hole metrics Schwarzschild metric – describes the gravitational field outside a spherical, uncharged, non-rotating mass such as a star, planet, or black hole. Kerr metric – describes the geometry of empty spacetime around an uncharged, rotating black hole (axially symmetric with an event horizon which is topologically a sphere) Reissner–Nordström metric – static solution to the Einstein–Maxwell field equations, which corresponds to the gravitational field of a charged, non-rotating, spherically symmetric body of mass M. Kerr–Newman metric – solution of the Einstein–Maxwell equations in general relativity, describing the spacetime geometry in the region surrounding a charged, rotating mass. Astronomical objects including a black hole Hypercompact stellar system – dense cluster of stars around a supermassive black hole that has been ejected from the centre of its host galaxy. Persons influential in black hole research Stephen Hawking Jacob Bekenstein - for the foundation of black hole thermodynamics and the elucidation of the relation between entropy and the area of a black hole's event horizon. Karl Schwarzschild - found a solution to the equations of general relativity that characterizes a black hole. J.
Robert Oppenheimer - for the discovery of the Tolman–Oppenheimer–Volkoff limit and his work with Hartland Snyder showing how a black hole could develop. Roger Penrose - showed, alongside Hawking, that global singularities can exist. Albert Einstein - arrived at the theory of general relativity; published a paper in 1939 arguing black holes cannot actually exist. See also Outline of astronomy Outline of space science References External links Stanford Encyclopedia of Philosophy: "Singularities and Black Holes" by Erik Curiel and Peter Bokulich. Black Holes: Gravity's Relentless Pull—Interactive multimedia Web site about the physics and astronomy of black holes from the Space Telescope Science Institute Frequently Asked Questions (FAQs) on Black Holes "Schwarzschild Geometry" Videos 16-year-long study tracks stars orbiting Milky Way black hole Movie of Black Hole Candidate from Max Planck Institute Nature.com 2015-04-20 3D simulations of colliding black holes Computer visualisation of the signal detected by LIGO Two Black Holes Merge into One (based upon the signal GW150914) Black holes
Outline of black holes
[ "Physics", "Astronomy" ]
2,872
[ "Black holes", "Physical phenomena", "Physical quantities", "Galaxies", "Unsolved problems in physics", "Astrophysics", "Density", "Theory of relativity", "Stellar phenomena", "Astronomical objects" ]
14,464,979
https://en.wikipedia.org/wiki/Outline%20of%20cell%20biology
The following outline is provided as an overview of and topical guide to cell biology: Cell biology – A branch of biology that includes the study of cells regarding their physiological properties, structure, and function; the organelles they contain; interactions with their environment; and their life cycle, division, and death. This is done both on a microscopic and molecular level. Cell biology research extends to both the great diversity of single-celled organisms like bacteria and the complex specialized cells in multicellular organisms like humans. Formerly, the field was called cytology (from Greek κύτος, kytos, "a hollow"; and -λογία, -logia). A branch of science Cell biology can be described as all of the following: Branch of science – A systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe. Branch of natural science – The branch of science concerned with the description, prediction, and understanding of natural phenomena based on observational and empirical evidence. Validity, accuracy, and social mechanisms ensuring quality control, such as peer review and repeatability of findings, are among the criteria and methods used for this purpose. Branch of biology – The study of life and living organisms, including their structure, function, growth, evolution, distribution, and taxonomy. Academic discipline – Focused study in one academic field or profession. A discipline incorporates expertise, people, projects, communities, challenges, studies, inquiry, and research areas that are strongly associated with a given discipline. Essence of cell biology Cell – The structural and functional unit of all known living organisms. It is the smallest unit of an organism that is classified as living, and is also known as the building block of life. Cell comes from the Latin cellula, meaning "a small room". Robert Hooke first coined the term in his book, Micrographia, where he compared the structure of cork cells viewed through his microscope to that of the small rooms (or monks' "cells") of a monastery. Cell theory – The scientific theory which states that all organisms are composed of one or more cells. Vital functions of an organism occur within cells. All cells come from preexisting cells and contain the hereditary information necessary for regulating cell functions and for transmitting information to the next generation of cells. Cell biology – (formerly cytology) The study of cells. Cell division – The process of one parent cell separating into two or more daughter cells. Endosymbiotic theory – The evolutionary theory that certain eukaryotic organelles originated as separate prokaryotic organisms which were taken inside the cell as endosymbionts. Cellular respiration – The metabolic reactions and processes that take place in a cell or across the cell membrane to convert biochemical energy from fuel molecules into adenosine triphosphate (ATP) and then release the cell's waste products. Lipid bilayer – A membrane composed of two layers of lipid molecules (usually phospholipids). The lipid bilayer is a critical component of the cell membrane. Aspects of cells Homeostasis – The property of either an open system or a closed system, especially a living organism, that regulates its internal environment so as to maintain a stable, constant condition. Life – A condition of growth through metabolism, reproduction, and the power of adaptation to environment through changes originating internally.
Microscopic – The scale of objects, like cells, that are too small to be seen easily by the naked eye and which require a lens or microscope to see them clearly. Unicellular – Organisms which are composed of only one cell. Multicellular – Organisms consisting of more than one cell and having differentiated cells that perform specialized functions. Tissues – A collection of interconnected cells that perform a similar function within an organism. Cellular differentiation – A concept in developmental biology whereby less specialized cells become a more specialized cell type in multicellular organisms. Types of cells Cell type – Distinct morphological or functional form of cell. When a cell switches state from one cell type to another, it undergoes cellular differentiation. There are at least several hundred distinct cell types in the adult human body. By organism Eukaryote – Organisms whose cells are organized into complex structures enclosed within membranes, including plants, animals, fungi, and protists. Animal cell – Eukaryotic cells belonging to kingdom Animalia, characteristically having no cell wall or chloroplasts. Plant cell – Eukaryotic cells belonging to kingdom Plantae and having chloroplasts, cellulose cell walls, and large central vacuoles. Fungal hypha – The basic cellular unit of organisms in kingdom Fungi. Typically tubular, multinucleated, and with a chitinous cell wall. Protist – A highly variable kingdom of eukaryotic organisms which are mostly unicellular and not plants, animals, or fungi. Prokaryote – A group of organisms whose cells lack a membrane-bound cell nucleus, or any other membrane-bound organelles, including bacteria. Bacterial cell – A prokaryotic cell belonging to the mostly unicellular Domain Bacteria. Archaeal cell – A cell belonging to the prokaryotic and single-celled microorganisms in Domain Archaea. By function Gamete – A haploid reproductive cell. Sperm and ova are gametes. Gametes fuse with another gamete during fertilization (conception) in organisms that reproduce sexually. Sperm – Male reproductive cell (a gamete). Ovum – Female reproductive cell (a gamete). Zygote – A cell that is the result of fertilization (the fusing of two gametes). Egg – The zygote of most birds and reptiles, resulting from fertilization of the ovum. The largest existing single cells currently known are (fertilized) eggs. Meristematic cell – Undifferentiated plant cells analogous to animal stem cells. Stem cell – Undifferentiated cells found in most multi-cellular organisms which are capable of retaining the ability to reinvigorate themselves through mitotic cell division and can differentiate into a diverse range of specialized cell types. Germ cell – Gametes and gonocytes. Germ cells should not be confused with "germs" (pathogens). Somatic cell – Any cells forming the body of an organism, as opposed to germline cells. General cellular anatomy Cellular compartment – All closed parts within a cell whose lumen is usually surrounded by a single or double lipid layer membrane. Organelles – A specialized subunit within a cell that has a specific function, and is separately enclosed within its own lipid membrane, or traditionally any subcellular functional unit. Organelles Endomembrane system Endoplasmic reticulum – An organelle composed of an interconnected network of tubules, vesicles and cisternae. Membrane-bound polyribosome – Polyribosomes that are attached to a cell's endoplasmic reticulum.
Smooth endoplasmic reticulum – A section of endoplasmic reticulum to which ribosomes are not attached. It has functions in several metabolic processes, including synthesis of lipids, metabolism of carbohydrates, regulation of calcium concentration, drug detoxification, and attachment of receptors on cell membrane proteins. Rough endoplasmic reticulum – A section of the endoplasmic reticulum to which ribosomes, the protein-manufacturing organelles, are attached, giving it a "rough" appearance (hence its name). Its primary function is the synthesis of enzymes and other proteins. Vesicle – A relatively small intracellular, membrane-enclosed sac that stores or transports substances. Golgi apparatus – A eukaryotic organelle that processes and packages macromolecules such as proteins and lipids that are synthesized by the cell. Nuclear envelope – The double lipid bilayer membrane which surrounds the genetic material and nucleolus in eukaryotic cells. The nuclear membrane consists of two lipid bilayers: Inner nuclear membrane Outer nuclear membrane Perinuclear space – Space between the nuclear membranes, a region contiguous with the lumen (inside) of the endoplasmic reticulum. The nuclear membrane has many small holes called nuclear pores that allow material to move in and out of the nucleus. Lysosomes – A membrane-bound cell organelle found in most animal cells (they are absent in red blood cells). Structurally and chemically, they are spherical vesicles containing hydrolytic enzymes capable of breaking down virtually all kinds of biomolecules, including proteins, nucleic acids, carbohydrates, lipids, and cellular debris. Lysosomes act as the waste disposal system of the cell by digesting unwanted materials in the cytoplasm, both from outside of the cell and obsolete components inside the cell. For this function they are popularly referred to as "suicide bags" or "suicide sacs" of the cell. Endosomes – A membrane-bounded compartment inside eukaryotic cells. It is a compartment of the endocytic membrane transport pathway from the plasma membrane to the lysosome. Endosomes represent a major sorting compartment of the endomembrane system in cells. Cell nucleus – A membrane-enclosed organelle found in most eukaryotic cells. It contains most of the cell's genetic material, organized as multiple long linear DNA molecules in complex with a large variety of proteins, such as histones, to form chromosomes. Nucleoplasm – Viscous fluid inside the nuclear envelope, similar to cytoplasm. Nucleolus – Where ribosomes are assembled from proteins and RNA. Chromatin – All DNA and its associated proteins in the nucleus. Chromosome – A single DNA molecule with attached proteins. Energy creators Mitochondrion – A membrane-enclosed organelle found in most eukaryotic cells. Often called "cellular power plants", mitochondria generate most of a cell's supply of adenosine triphosphate (ATP), the cell's main source of energy. Chloroplast – An organelle found in plant cells and eukaryotic algae that conducts photosynthesis. Centrosome – The main microtubule organizing center of animal cells as well as a regulator of cell-cycle progression. Lysosome – The organelle that contains digestive enzymes (acid hydrolases). Lysosomes digest excess or worn-out organelles, food particles, and engulfed viruses or bacteria. Peroxisome – A ubiquitous organelle in eukaryotes that participates in the metabolism of fatty acids and other metabolites.
Peroxisomes have enzymes that rid the cell of toxic peroxides. Ribosome – It is a large and complex molecular machine, found within all living cells, that serves as the site of biological protein synthesis (translation). Ribosomes build proteins from the genetic instructions held within messenger RNA. Symbiosome – A temporary organelle that houses a nitrogen-fixing endosymbiont. Vacuole – Membrane-bound compartments within some eukaryotic cells that can serve a variety of secretory, excretory, and storage functions. Structures Cell membrane – (also called the plasma membrane, plasmalemma or "phospholipid bilayer") A semipermeable lipid bilayer found in all cells; it contains a wide array of functional macromolecules. Cell wall – A fairly rigid layer surrounding a cell, located external to the cell membrane, which provides the cell with structural support and protection, and acts as a filtering mechanism. Centriole – A barrel-shaped microtubule structure found in most eukaryotic cells other than those of plants and fungi. Cluster of differentiation – Cell surface molecules, initially found on white blood cells but present on almost any kind of cell in the body, providing targets for immunophenotyping of cells. Physiologically, CD molecules can act in numerous ways, often acting as receptors or ligands (the molecule that activates a receptor) important to the cell. A signal cascade is usually initiated, altering the behavior of the cell (see cell signaling). Cytoskeleton – A cellular "scaffolding" or "skeleton" contained within the cytoplasm that is composed of three types of fibers: microfilaments, intermediate filaments, and microtubules. Cytoplasm – A gelatinous, semi-transparent fluid that fills most cells; it includes all cytosol, organelles and cytoplasmic inclusions. Cytosol – The internal fluid of the cell, where a portion of cell metabolism occurs. Inclusions – Chemical substances found suspended directly in the cytosol. Photosystem – They are functional and structural units of protein complexes involved in photosynthesis that together carry out the primary photochemistry of photosynthesis: the absorption of light and the transfer of energy and electrons. They are found in the thylakoid membranes of plants, algae and cyanobacteria (in plants and algae these are located in the chloroplasts), or in the cytoplasmic membrane of photosynthetic bacteria. Plasmid – An extrachromosomal DNA molecule, separate from the chromosomal DNA and capable of replicating independently; it is typically ring-shaped and found in bacteria. Spindle fiber – The structure that separates the chromosomes into the daughter cells during cell division. Stroma – The colorless fluid surrounding the grana within the chloroplast. Within the stroma are the grana (stacks of thylakoids), the sub-organelles where photosynthesis is begun before the chemical changes are completed in the stroma. Thylakoid membrane – It is the site of the light-dependent reactions of photosynthesis, with the photosynthetic pigments embedded directly in the membrane. Molecules DNA – Deoxyribonucleic acid (DNA) is a nucleic acid that contains the genetic instructions used in the development and functioning of all known living organisms and some viruses. DNA helicase DNA polymerase DNA ligase RNA – Ribonucleic acid is a nucleic acid made from a long chain of nucleotides; in a cell it is typically transcribed from DNA.
RNA polymerase mRNA rRNA tRNA Proteins – Biochemical compounds consisting of one or more polypeptides typically folded into a globular or fibrous form, facilitating a biological function. List of proteins Enzymes – Proteins that catalyze (i.e. accelerate) the rates of specific chemical reactions within cells. Pigments Chlorophyll – It is a term used for several closely related green pigments found in cyanobacteria and the chloroplasts of algae and plants. Chlorophyll is an extremely important biomolecule, critical in photosynthesis, which allows plants to absorb energy from light. Carotenoid – They are organic pigments that are found in the chloroplasts and chromoplasts of plants and some other photosynthetic organisms, including some bacteria and some fungi. Carotenoids can be produced from fats and other basic organic metabolic building blocks by all these organisms. There are over 600 known carotenoids; they are split into two classes, xanthophylls (which contain oxygen) and carotenes (which are purely hydrocarbons, and contain no oxygen). Biological activity of cells Cellular metabolism Cellular respiration – Glycolysis – The foundational process of both aerobic and anaerobic respiration, glycolysis is the archetype of universal metabolic processes known and occurring (with variations) in many types of cells in nearly all organisms. Pyruvate dehydrogenase – Enzyme in the eponymous complex linking glycolysis and the subsequent citric acid cycle. Citric acid cycle – Also known as the Krebs cycle, an important aerobic metabolic pathway. Electron transport chain – A biochemical process that couples electron carriers (such as NADH and FADH2) to biochemical reactions that produce adenosine triphosphate (ATP), a major energy intermediate in living organisms. It typically occurs across a cellular membrane. Photosynthesis – The conversion of light energy into chemical energy by living organisms. Light-dependent reactions – A series of biochemical reactions driven by light that take place across the thylakoid membrane to provide for the Calvin cycle reactions. Calvin cycle – A series of anabolic biochemical reactions that takes place in the stroma of chloroplasts in photosynthetic organisms. It is one of the light-independent reactions or dark reactions. Electron transport chain – A biochemical process that couples electron carriers (such as NADH and FADH2) to biochemical reactions that produce adenosine triphosphate (ATP), a major energy intermediate in living organisms. It typically occurs across a cellular membrane. Metabolic pathway – A series of chemical reactions occurring within a cell which ultimately leads to sequestering of energy. Alcoholic fermentation – The anaerobic metabolic process by which sugars such as glucose, fructose, and sucrose are converted into cellular energy, producing ethanol and carbon dioxide as metabolic waste products. Lactic acid fermentation – An anaerobic metabolic process by which sugars such as glucose, fructose, and sucrose are converted into cellular energy and the metabolic waste product lactic acid. Chemosynthesis – The biological conversion of one or more carbon molecules (usually carbon dioxide or methane) and nutrients into organic matter using the oxidation of inorganic molecules (e.g. hydrogen gas, hydrogen sulfide) or methane as a source of energy, rather than sunlight, as in photosynthesis.
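For orientation, the aerobic respiration and photosynthesis pathways listed above can be summarized by their overall balanced equations. This is a standard textbook summary added here, not text from the outline; the yield of roughly 30–32 ATP per glucose is an approximate modern estimate.

```latex
\begin{align*}
\text{cellular respiration:}\quad & \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \longrightarrow 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} + \text{energy}\ (\approx 30\text{--}32\ \mathrm{ATP}) \\
\text{photosynthesis:}\quad & 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} + \text{light energy} \longrightarrow \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
\end{align*}
```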
Important molecules: ADP – Adenosine diphosphate (ADP) (Adenosine pyrophosphate (APP)) is an important organic compound in metabolism and is essential to the flow of energy in living cells. A molecule of ADP consists of three important structural components: a sugar backbone attached to a molecule of adenine and two phosphate groups bonded to the 5′ carbon atom of ribose. ATP – A multifunctional nucleotide that is most important as a "molecular currency" of intracellular energy transfer. NADH – A coenzyme found in all living cells which serves as an important electron carrier in metabolic processes. Pyruvate – It is the "energy-molecule" output of the metabolism of glucose known as glycolysis. Glucose – An important simple sugar used by cells as a source of energy and as a metabolic intermediate. Glucose is one of the main products of photosynthesis and starts cellular respiration in both prokaryotes and eukaryotes. Cellular reproduction Cell cycle – The series of events that take place in a eukaryotic cell leading to its replication. Interphase – The stages of the cell cycle that prepare the cell for division. Mitosis – In eukaryotes, the process of division of the nucleus and genetic material. Prophase – The stage of mitosis in which the chromatin condenses into a highly ordered structure called chromosomes and the nuclear membrane begins to break up. Metaphase – The stage of mitosis in which condensed chromosomes, carrying genetic information, align in the middle of the cell before being separated into each of the two daughter cells. Anaphase – The stage of mitosis when chromatids (identical copies of chromosomes) separate as they are pulled towards opposite poles within the cell. Telophase – The stage of mitosis when the nucleus reforms and chromosomes unravel into longer chromatin structures for reentry into interphase. Cytokinesis – The process cells use to divide their cytoplasm and organelles. Meiosis – The process of cell division used to create gametes in sexually reproducing eukaryotes. Chromosomal crossover – (or crossing over) It is the exchange of genetic material between homologous chromosomes that results in recombinant chromosomes during sexual reproduction. It is one of the final phases of genetic recombination, which occurs in the pachytene stage of prophase I of meiosis during a process called synapsis. Binary fission – The process of cell division used by prokaryotes. Transcription and Translation Transcription – Fundamental process of gene expression, through which a DNA segment is turned into a functional unit of RNA. Translation – It is the process in which cellular ribosomes create proteins. mRNA rRNA tRNA Introns Exons Miscellaneous cellular processes Cell transport Osmosis – The diffusion of water through a cell wall or membrane or any partially permeable barrier from a solution of low solute concentration to a solution with high solute concentration. Passive transport – Movement of molecules into and out of cells without the input of cellular energy. Active transport – Movement of molecules into and out of cells with the input of cellular energy. Bulk transport Endocytosis – It is a form of active transport in which a cell transports molecules (such as proteins) into the cell by engulfing them in an energy-using process. Exocytosis – It is a form of active transport in which a cell transports molecules (such as proteins) out of the cell by expelling them.
Phagocytosis – The process a cell uses when engulfing solid particles into the cell membrane to form an internal phagosome, or "food vacuole." Tonicity – This is a measure of the effective osmotic pressure gradient (as defined by the water potential of the two solutions) of two solutions separated by a semipermeable membrane. Programmed cell death – The death of a cell in any form, mediated by an intracellular program (ex. apoptosis or autophagy). Apoptosis – A series of biochemical events leading to a characteristic cell morphology and death, which is not caused by damage to the cell. Autophagy – The process whereby cells "eat" their own internal components or microbial invaders. Cell senescence – The phenomenon where normal diploid differentiated cells lose the ability to divide after about 50 cell divisions. Cell signaling – Regulation of cell behavior by signals from outside. Cell adhesion – Holding together cells and tissues. Motility and Cell migration – The various means for a cell to move, guided by cues in its environment. Cytoplasmic streaming – Flowing of cytoplasm in eukaryotic cells. DNA repair – The process used by cells to fix damaged DNA sections. Applied cell biology concepts Cell therapy – The process of introducing new cells into a tissue in order to treat a disease. Cloning – Processes used to create copies of DNA fragments (molecular cloning), cells (cell cloning), or organisms. Cell disruption – A method or process for releasing biological molecules from inside a cell. Laboratory procedures Bacterial conjugation – Transfer of genetic material between bacterial cells by direct cell-to-cell contact or by a bridge-like connection between two cells. Conjugation is a convenient means for transferring genetic material to a variety of targets. In laboratories, successful transfers have been reported from bacteria to yeast, plants, mammalian cells and isolated mammalian mitochondria. Cell culture – The process by which cells are grown under controlled conditions, generally outside of their natural environment. In practice, the term "cell culture" now refers to the culturing of cells derived from multi-cellular eukaryotes, especially animal cells. Cell disruption, and cell unroofing – Methods for releasing molecules from cells. Cell fractionation – Separation of homogeneous sets from a larger population of cells. Cell incubator – The device used to grow and maintain microbiological cultures or cell cultures. The incubator maintains optimal temperature, humidity and other conditions such as the carbon dioxide () and oxygen content of the atmosphere inside. Cyto-Stain – Commercially available mix of staining dyes for polychromatic staining in histology. Fluorescent-activated cell sorting – Specialized type of flow cytometry. It provides a method for sorting a heterogeneous mixture of biological cells into two or more containers, one cell at a time, based upon the specific light scattering and fluorescent characteristics of each cell. Spinning – Using a special bioreactor which features an impeller, stirrer or similar device to agitate the contents (usually a mixture of cells, medium and products like proteins that can be harvested). History of cell biology See also Cell biologists below History of cell biology – is intertwined with the history of biochemistry and the history of molecular biology. 
Other articles pertaining to the history of cell biology include: History of cell theory, embryology and germ theory History of biochemistry, microbiology, and molecular biology History of the optical microscope Timeline of microscope technology Cell biologists Past Karl August Möbius – In 1884 first observed the structures that would later be called "organelles". Bengt Lidforss – Coined the word "organells" which later became "organelle". Robert Hooke – Coined the word "cell" after looking at cork under a microscope. Anton van Leeuwenhoek – First observed microscopic single celled organisms in apparently clean water. Hans Adolf Krebs – Discovered the citric acid cycle in 1937. Konstantin Mereschkowski – Russian botanist who in 1905 described the Theory of Endosymbiosis. Edmund Beecher Wilson – Known as America's first cellular biologist, discovered the sex chromosome arrangement in humans. Albert Claude – Shared the Nobel Prize in 1974 "for describing the structure and function of organelles in biological cells" Theodor Boveri – In 1888 identified the centrosome and described it as the 'special organ of cell division.' Peter D. Mitchell – British biochemist who was awarded the 1978 Nobel Prize for Chemistry for his discovery of the chemiosmotic mechanism of ATP synthesis. Lynn Margulis – An American biologist best known for her theory on the origin of eukaryotic organelles, and her contributions and support of the endosymbiotic theory. Current Günter Blobel – An American biologist who won a Nobel Prize for protein targeting in cells. Peter Agre – An American chemist who won a Nobel Prize for discovering cellular aquaporins. Christian de Duve – Shared the Nobel Prize in 1974 "for describing the structure and function of organelles in biological cells" George Emil Palade – Shared the Nobel Prize in 1974 "for describing the structure and function of organelles in biological cells.” Ira Mellman – An American cell biologist who discovered endosomes. Paul Nurse – Shared a 2001 Nobel Prize for discoveries regarding cell cycle regulation by cyclin and cyclin dependent kinases. Leland H. Hartwell – Shared a 2001 Nobel Prize for discoveries regarding cell cycle regulation by cyclin and cyclin dependent kinases. R. Timothy Hunt – Shared a 2001 Nobel Prize for discoveries regarding cell cycle regulation by cyclin and cyclin dependent kinases. Closely allied sciences Cytopathology – A branch of pathology that studies and diagnoses diseases on the cellular level. The most common use of cytopathology is the Pap smear, used to detect cervical cancer at an early treatable stage. Genetics – The science of heredity and variation in living organisms. Biochemistry – The study of the chemical processes in living organisms. It deals with the structure and function of cellular components, such as proteins, carbohydrates, lipids, nucleic acids, and other biomolecules. Cytochemistry – The biochemistry of cells, especially that of the macromolecules responsible for cell structure and function. Molecular biology – The study of biology at a molecular level, including the various systems of a cell, including the interactions between DNA, RNA and protein biosynthesis and learning how these interactions are regulated. Developmental biology – The study of the process by which organisms grow and develop, including the genetic control of cell growth, differentiation and "morphogenesis", which is the process that gives rise to tissues, organs and anatomy. 
Microbiology – The study of microorganisms, which are unicellular or cell-cluster microscopic organisms, as well as viruses. Cellular microbiology – A discipline bridging microbiology and cell biology. See also Outline of biology Further reading Young, John K. (2010). Introduction to Cell Biology. References Cell biology Biology-related lists
Outline of cell biology
[ "Biology" ]
5,941
[ "Cell biology" ]
14,465,804
https://en.wikipedia.org/wiki/Papoose%20board
In the medical field, a papoose board is a temporary medical stabilization board used to limit a patient's freedom of movement to decrease the risk of injury while allowing safe completion of treatment. The term papoose board refers to a brand name. It is most commonly used during dental work, venipuncture, and other medical procedures. It is also sometimes used during medical emergencies to keep an individual from moving when total sedation is not possible. It is usually used on patients as a means of temporarily and safely limiting movement and is generally more effective than holding the person down. It is mostly used on young patients and patients with special needs. A papoose board is a cushioned board with fabric Velcro straps that can be used to help limit a patient's movement and hold them steady during the medical procedure. Sometimes oral, IV or gas sedation such as nitrous oxide will be used to calm the patient prior to or during use. Using a papoose board to temporarily and safely limit movement is often preferable to medical sedation, which presents serious potential risks, including death. As a result, restraint is preferred by some parents as an alternative to sedation, behavior management/anxiety reduction techniques, better pain management or a low-risk anxiolytic such as nitrous oxide. Informed consent from a parent or guardian is usually required before a papoose board can be used. If assent from the child is required, then in most cases, the papoose board would be prohibited, as it is unlikely that a child would agree to restraint and not struggle. In some countries (for example, the U.K.), the papoose board is banned and its use considered a serious breach of ethics. Use of papoose boards in dentistry The American Academy of Pediatric Dentistry approves of partial or complete stabilization of the patient in cases when it is necessary to protect the patient, practitioner, staff, or parent from injury while providing dental care. As of 2004, 85 percent of dental programs across the U.S. taught protective stabilization as an acceptable behavioral management practice. In 2004, The Colorado Springs Gazette reported that, according to Colorado state records, the dental chain Small Smiles Dental Centers had used papoose boards almost 7,000 times in one 18-month period. Michael and Edward DeRose, two of the owners of Small Smiles, said that they used papoose boards so that they could perform dental work on larger numbers of children more rapidly. Small Smiles dentists from other states learned the papoose board method in Colorado and began practicing the method in other states. As a result, a committee appointed by the Colorado Board of Dental Examiners helped establish a new Colorado state law forbidding the use of papoose boards on children unless a dentist has exhausted other possibilities for controlling a child's behavior; if the dentist does use a papoose board, he or she must document why in the patient's record. Controversies In some countries, the papoose board is banned and considered a serious breach of ethical practice. Although the papoose board is discussed as a behavior management technique, it is simply a restraint technique, and an ethically questionable one, since it prevents any behavior from occurring that could otherwise be managed with recognized behavioral and anxiety reduction techniques. Origins Papoose boards were originally a wood-and-leather device used by many Native American tribes to swaddle their infants and children. 
Papoose boards, also known as cradle boards, are still in use in many places. References Medical equipment
Papoose board
[ "Biology" ]
722
[ "Medical equipment", "Medical technology" ]
14,465,985
https://en.wikipedia.org/wiki/Dipropylcyclopentylxanthine
8-Cyclopentyl-1,3-dipropylxanthine (DPCPX, PD-116,948) is a drug which acts as a potent and selective antagonist for the adenosine A1 receptor. It has high selectivity for A1 over other adenosine receptor subtypes, but as with other xanthine derivatives DPCPX also acts as a phosphodiesterase inhibitor, and is almost as potent as rolipram at inhibiting PDE4. It has been used to study the function of the adenosine A1 receptor in animals, which has been found to be involved in several important functions such as regulation of breathing and activity in various regions of the brain, and DPCPX has also been shown to produce behavioural effects such as increasing the hallucinogen-appropriate responding produced by the 5-HT2A agonist DOI, and the dopamine release induced by MDMA, as well as having interactions with a range of anticonvulsant drugs. See also DMPX CPX Xanthine References Adenosine receptor antagonists Phosphodiesterase inhibitors Xanthines Cyclopentanes Propyl compounds
Dipropylcyclopentylxanthine
[ "Chemistry" ]
253
[ "Alkaloids by chemical classification", "Xanthines" ]
14,466,835
https://en.wikipedia.org/wiki/Photon%20Factory
The Photon Factory (PF) is a synchrotron located at KEK, in Tsukuba, Japan, about fifty kilometres from Tokyo. History The Photon Factory turned on its synchrotron for the first time in 1982, becoming the first light source accelerator in Japan to produce X-rays. In 1997, it joined the Institute of Materials Structure Science (IMSS), a Japanese-run international particle physics organization based at KEK. The current head of the Photon Factory is N. Igarashi. Research and design There are two major facilities: the Photon Factory itself, which is a 2.5 GeV synchrotron with a beam current of around 450 mA, and the PF-AR 'Advanced Ring for Pulsed X-Rays', which is a 6.5 GeV machine running in a single-bunch mode with a beam current of around 60 mA. It operates with a pulse width of about 100 picoseconds. The Photon Factory’s photon accelerator is one of the Institute of Materials Structure Science’s four quantum beams used for particle physics research. Its macromolecular crystallography beam is used substantially for Japan's structural genomics project. More recently, the Photon Factory has partnered with the Saha Institute and the Jawaharlal Nehru Centre in India to create the Indian Beam, which is open to Indian particle and nuclear physicists to use for experiments in powder diffraction, scattering, and reflectivity. References External links Photon Factory - KEK IMSS website Synchrotron radiation facilities
Photon Factory
[ "Physics", "Materials_science" ]
316
[ "Particle physics stubs", "Materials testing", "Particle physics", "Synchrotron radiation facilities" ]
14,467,558
https://en.wikipedia.org/wiki/Surface%20force
Surface force, denoted f_s, is the force that acts across an internal or external surface element in a material body. Normal forces and shear forces between objects are types of surface force. All cohesive forces and contact forces between objects are considered as surface forces. Surface force can be decomposed into two perpendicular components: normal forces and shear forces. A normal force acts normally over an area and a shear force acts tangentially over an area. Equations for surface force Surface force due to pressure: f_s = p · A, where f_s = force, p = pressure, and A = area on which a uniform pressure acts. Examples Pressure-related surface force Since pressure is force per unit area, a uniform pressure acting over an area produces a surface force equal to their product; for example, a pressure of 10 Pa acting over an area of 2 m^2 will produce a surface force of 20 N. See also Body force Contact force References Classical mechanics Fluid dynamics Force
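The pressure relation and the normal/shear decomposition above lend themselves to a short numerical illustration. The following Python sketch is not part of the original article: the function names and example vectors are invented for illustration, and the decomposition simply projects a force (or traction) vector onto an assumed unit surface normal.

```python
import numpy as np

def surface_force_from_pressure(p, area):
    """Surface force magnitude from a uniform pressure p (Pa) over an area (m^2): f_s = p * A."""
    return p * area

def decompose_traction(traction, normal):
    """Split a surface force (or traction) vector into its normal and shear
    (tangential) components relative to a surface's normal vector."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)            # normalize, in case the normal is not unit length
    t = np.asarray(traction, dtype=float)
    normal_part = np.dot(t, n) * n       # projection onto the normal direction
    shear_part = t - normal_part         # remainder lies in the surface plane
    return normal_part, shear_part

print(surface_force_from_pressure(10.0, 2.0))              # 20.0 (newtons)
print(decompose_traction([1.0, 0.0, 2.0], [0.0, 0.0, 1.0]))
# -> normal part [0. 0. 2.], shear part [1. 0. 0.]
```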
Surface force
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
159
[ "Fluid dynamics stubs", "Force", "Physical quantities", "Chemical engineering", "Quantity", "Classical mechanics stubs", "Mass", "Classical mechanics", "Mechanics", "Piping", "Wikipedia categories named after physical quantities", "Matter", "Fluid dynamics" ]
14,468,359
https://en.wikipedia.org/wiki/Verification%20%28spaceflight%29
Verification in the field of space systems engineering covers two verification processes: Qualification and Acceptance Overview In the field of spaceflight, verification standards are developed by the DoD, NASA and the ECSS, among others. Large aerospace corporations may also develop their own internal standards. These standards exist in order to specify requirements for the verification of a space system product, such as: the fundamental concepts of the verification process the criteria for defining the verification strategy and the rules, organization, and process for the implementation of the verification program Verification, or qualification, is one of the main reasons that costs for space systems are high. All data must be documented and remain accessible for potential later failure analyses. In the past, that approach was executed down to the piece-part level (resistors, switches, etc.), whereas nowadays costs are reduced by using "CAM (Commercial, Avionics, Military) equipment" for non-safety-relevant units. Qualification and Acceptance Qualification is the formal proof that the design meets all requirements of the specification and the parameters agreed in the Interface Control Documents (ICDs), with adequate margin, including tolerances due to manufacturing imperfections, wear-out within the specified lifetime, faults, etc. The end of the qualification process is the approval signature of the customer on the Certificate of Qualification (CoQ), or Qualification Description Document (QDD), agreeing that all the requirements are met by the product to be delivered under the terms of a contract. Acceptance is the formal proof that the product identified is free of workmanship defects and meets preset performance requirements with adequate margin. Acceptance is based on the preceding qualification by reference to the design / manufacturing documentation used. The end of the acceptance process is the approval signature of the customer on the Certificate of Acceptance (CoA), or QDD, agreeing that all the requirements are met by the product to be delivered under the terms of a contract. There are five generally accepted Qualification methods: Analysis Test Inspection Demonstration Similarity (although Similarity is a form of Analysis, in most space applications it is recommended to highlight it as its own category) Being qualified means demonstrating with margin that the design, and the implementation of the design, meet the intended preset requirements. There are many different Qualification strategies for reaching the same goal. The classical strategy consists of designing hardware (or software) to qualification requirements (including margin), testing dedicated hardware (or software) to qualification requirements to verify the design, followed by acceptance testing of flight hardware to screen for workmanship defects. There are other strategies as well, the Proto-Qualification strategy for instance. Proto-Qualification consists of testing the first flight hardware to Proto-Qualification requirements to verify the design, and testing subsequent flight hardware to acceptance levels to screen for workmanship defects. This first Proto-Qualification unit is flight-worthy. There are three generally accepted Acceptance methods: Test Inspection Demonstration If a deviation from the qualified item is detected (higher tolerances, scratches, etc.), a Non-Conformance is to be processed; an Analysis might be required to justify that the item can be used despite this deviation. 
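The method assignments and closure tracking described above are often managed as a verification cross-reference matrix. The following Python sketch is illustrative only, with an invented requirement identifier and a deliberately simplified data model; it is not drawn from any of the standards cited here.

```python
from dataclasses import dataclass, field

# The five accepted qualification methods named in the text; acceptance
# uses only the subset {Test, Inspection, Demonstration}.
QUALIFICATION_METHODS = {"Analysis", "Test", "Inspection", "Demonstration", "Similarity"}
ACCEPTANCE_METHODS = {"Test", "Inspection", "Demonstration"}

@dataclass
class Requirement:
    req_id: str                                # hypothetical identifier, e.g. "THERM-012"
    text: str
    methods: set = field(default_factory=set)  # planned verification methods
    closed: bool = False                       # set once evidence is approved

def open_items(requirements, allowed_methods):
    """Return requirements that plan a disallowed method or are still open."""
    return [r for r in requirements
            if not r.methods <= allowed_methods or not r.closed]

reqs = [
    Requirement("THERM-012", "Survive -40 to +85 C", {"Test"}, closed=True),
    Requirement("STRUC-003", "First eigenfrequency above 140 Hz", {"Analysis", "Test"}),
]
print([r.req_id for r in open_items(reqs, QUALIFICATION_METHODS)])  # ['STRUC-003']
```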
See also Spacecraft System engineering References Further reading ECSS-E-ST-10-02: Verification (European Space Standard) DoD, MIL-STD-1540D: Product Verification Requirements for Launch, Upper Stage, and Space Vehicles Spaceflight concepts Systems engineering
Verification (spaceflight)
[ "Engineering" ]
665
[ "Systems engineering" ]
14,469,114
https://en.wikipedia.org/wiki/Rhenium%E2%80%93osmium%20dating
Rhenium–osmium dating is a form of radiometric dating based on the beta decay of the isotope 187Re to 187Os. This normally occurs with a half-life of 41.6 × 10^9 y, but studies using fully ionised 187Re atoms have found that this can decrease to only 33 y. Both rhenium and osmium are strongly siderophilic (iron loving), while Re is also chalcophilic (sulfur loving), making it useful in dating sulfide ores such as gold and Cu–Ni deposits. This dating method is based on an isochron calculated from isotopic ratios measured using N-TIMS (Negative Thermal Ionization Mass Spectrometry). Rhenium–osmium isochron Rhenium–osmium dating is carried out by the isochron dating method. Isochrons are created by analysing several samples believed to have formed at the same time from a common source. The Re–Os isochron plots the ratio of radiogenic 187Os to non-radiogenic 188Os against the ratio of the parent isotope 187Re to the non-radiogenic isotope 188Os. The stable and relatively abundant osmium isotope 188Os is used to normalize the radiogenic isotope in the isochron. The Re–Os isochron is defined by the following equation: 187Os/188Os = (187Os/188Os)initial + (187Re/188Os) × (e^(λt) − 1), where: t is the age of the sample, λ is the decay constant of 187Re, and (e^(λt) − 1) is the slope of the isochron, which defines the age of the system. A good example of an application of the Re–Os isochron method is a study on the dating of a gold deposit in the Witwatersrand mining camp, South Africa. Rhenium–osmium isotopic evolution Rhenium and osmium were strongly refractory and siderophile during the initial accretion of the Earth, which caused both elements to preferentially enter the Earth's core. Thus the two elements should be depleted in the silicate Earth, yet the 187Os/188Os ratio of the mantle is chondritic. The reason for this apparent contradiction is the change in behavior between Re and Os in partial melt events. Re tends to enter the melt phase (incompatible) while Os remains in the solid residue (compatible). This causes high ratios of Re/Os in oceanic crust (which is derived from partial melting of mantle) and low ratios of Re/Os in the lower mantle. In this regard, the Re–Os system is extremely helpful for studying the geochemical evolution of mantle rocks and for defining the chronology of mantle differentiation. Peridotite xenoliths, which are thought to sample the upper mantle, sometimes contain supra-chondritic Os-isotopic ratios. This is thought to be evidence of subducted ancient high-Re/Os basaltic crust being recycled back into the mantle. This combination of radiogenic (187Os that was created by decay of 187Re) and nonradiogenic melts helps to support the theory of at least two Os-isotopic reservoirs in the mantle. The volume of both these reservoirs is thought to be around 5–10% of the whole mantle. The first reservoir is characterized by depletion in Re and in proxies for melt fertility (such as concentrations of elements like Ca and Al). The second reservoir is chondritic in composition. Direct measurement of the age of continental crust through Re–Os dating is difficult. Infiltration of xenoliths by their commonly Re-rich host magma alters the true elemental Re/Os ratios. Instead, determining model ages can be done in two ways: "Re depletion" model ages or the "melting age" model. The former finds the minimum age of the extraction event assuming the elemental Re/Os ratio equals 0 (komatiite residues have Re/Os of 0, so this is assuming a xenolith was extracted from a near-komatiite melt). 
The latter gives the age of the melting event inferred from the point when a melt proxy like Al2O3 is equal to 0 (ancient subcontinental lithosphere has weight percentages of CaO and Al2O3 ranging from 0 to 2%). Pt–Re–Os systematics The radioactive decay of 190Pt to 186Os has a half-life of 4.83(3) × 10^11 years (which is longer than the age of the universe, so it is essentially stable). However, in-situ 187Os/188Os and 186Os/188Os of modern plume-related magmas show simultaneous enrichment, which implies a source that is supra-chondritic in Pt/Os and Re/Os. Since both parental isotopes have extremely long half-lives, the Os-isotope rich reservoir must be very old to allow enough time for the daughter isotopes to form. These observations are interpreted to support the theory that the Archean subducted crust contributed Os-isotope rich melts back into the mantle. References Radiometric dating Rhenium Osmium
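To make the isochron arithmetic above concrete, here is a minimal Python sketch: fit the measured 187Os/188Os versus 187Re/188Os ratios with a straight line and convert the slope into an age via t = ln(slope + 1)/λ. The data points are synthetic, invented purely for illustration, and the fit uses an ordinary least-squares line rather than the error-weighted regressions used in practice.

```python
import numpy as np

LAMBDA_RE187 = np.log(2) / 41.6e9   # decay constant of 187Re (1/yr), from the half-life above

def isochron_age(re_os, os_os):
    """Fit 187Os/188Os against 187Re/188Os for a suite of cogenetic samples
    and convert the isochron slope into an age: t = ln(slope + 1) / lambda."""
    slope, intercept = np.polyfit(re_os, os_os, 1)
    age = np.log(slope + 1.0) / LAMBDA_RE187
    return age, intercept            # intercept = initial 187Os/188Os

# Illustrative synthetic measurements, not real data:
re_os = np.array([10.0, 25.0, 40.0, 55.0])
true_slope = np.exp(LAMBDA_RE187 * 2.7e9) - 1.0   # simulate a 2.7 Gyr deposit
os_os = 0.12 + true_slope * re_os
age, initial = isochron_age(re_os, os_os)
print(f"age = {age/1e9:.2f} Gyr, initial 187Os/188Os = {initial:.3f}")
```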
Rhenium–osmium dating
[ "Chemistry" ]
1,050
[ "Radiometric dating", "Radioactivity" ]
14,469,299
https://en.wikipedia.org/wiki/Autonomous%20convergence%20theorem
In mathematics, an autonomous convergence theorem is one of a family of related theorems which specify conditions guaranteeing global asymptotic stability of a continuous autonomous dynamical system. History The Markus–Yamabe conjecture was formulated as an attempt to give conditions for global stability of continuous dynamical systems in two dimensions. However, the Markus–Yamabe conjecture does not hold for dimensions higher than two, a problem which autonomous convergence theorems attempt to address. The first autonomous convergence theorem was constructed by Russell Smith. This theorem was later refined by Michael Li and James Muldowney. An example autonomous convergence theorem A comparatively simple autonomous convergence theorem is as follows: Let x be a vector in some space X, evolving according to an autonomous differential equation dx/dt = f(x). Suppose that X is convex and forward invariant under f, and that there exists a fixed point x̂ such that f(x̂) = 0. If there exists a logarithmic norm μ such that the Jacobian J(x) satisfies μ(J(x)) < 0 for all values of x, then x̂ is the only fixed point, and it is globally asymptotically stable. This autonomous convergence theorem is very closely related to the Banach fixed-point theorem. How autonomous convergence works Note: this is an intuitive description of how autonomous convergence theorems guarantee stability, not a strictly mathematical description. The key point in the example theorem given above is the existence of a negative logarithmic norm, which is derived from a vector norm. The vector norm effectively measures the distance between points in the vector space on which the differential equation is defined, and the negative logarithmic norm means that distances between points, as measured by the corresponding vector norm, are decreasing with time under the action of f. So long as the trajectories of all points in the phase space are bounded, all trajectories must therefore eventually converge to the same point. The autonomous convergence theorems by Russell Smith, Michael Li and James Muldowney work in a similar manner, but they rely on showing that the area of two-dimensional shapes in phase space decreases with time. This means that no periodic orbits can exist, as all closed loops must shrink to a point. If the system is bounded, then according to Pugh's closing lemma there can be no chaotic behaviour either, so all trajectories must eventually reach an equilibrium. Michael Li has also developed an extended autonomous convergence theorem which is applicable to dynamical systems containing an invariant manifold. Notes Stability theory Fixed points (mathematics) Theorems in dynamical systems
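For the Euclidean norm, the logarithmic norm of a matrix J is the largest eigenvalue of its symmetric part, (J + J^T)/2, which makes the theorem's hypothesis easy to probe numerically. The Python sketch below checks it by sampling; the two-dimensional system used is invented for illustration, and sampling of course only suggests, rather than proves, that the condition holds everywhere.

```python
import numpy as np

def log_norm_2(J):
    """Logarithmic norm induced by the Euclidean norm:
    mu_2(J) = largest eigenvalue of the symmetric part (J + J^T)/2."""
    sym = 0.5 * (J + J.T)
    return np.max(np.linalg.eigvalsh(sym))

# A made-up contracting system: dx/dt = (-x1 + 0.1*sin(x2), -x2 + 0.1*sin(x1)).
def jacobian(x):
    a, b = x
    return np.array([[-1.0, 0.1 * np.cos(b)],
                     [0.1 * np.cos(a), -1.0]])

# Sample the state space; if mu_2(J(x)) < 0 at every sampled point, the
# theorem's hypothesis is numerically plausible, and the fixed point at the
# origin should be globally asymptotically stable on the sampled region.
worst = max(log_norm_2(jacobian(x)) for x in np.random.uniform(-5, 5, size=(1000, 2)))
print(f"max sampled mu_2 = {worst:.3f}")   # stays at or below -0.9 for this system
```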
Autonomous convergence theorem
[ "Mathematics" ]
500
[ "Theorems in dynamical systems", "Mathematical theorems", "Mathematical analysis", "Fixed points (mathematics)", "Stability theory", "Topology", "Mathematical problems", "Dynamical systems" ]
14,470,771
https://en.wikipedia.org/wiki/Light%20effects%20on%20circadian%20rhythm
Light effects on circadian rhythm are the response of circadian rhythms to light. Most animals and other organisms have a biological clock that synchronizes their physiology and behaviour with the daily changes in the environment. The physiological changes that follow these clocks are known as circadian rhythms. Because the endogenous period of these rhythms is approximately but not exactly 24 hours, these rhythms must be reset by external cues to synchronize with the daily cycles in the environment. This process is called entrainment. One of the most important cues to entrain circadian rhythms is light. Mechanism Light first passes into a mammal's system through the retina, then takes one of two paths: either it is collected by rod cells and cone cells and relayed to the retinal ganglion cells (RGCs), or it is directly collected by these RGCs. The RGCs use the photopigment melanopsin to absorb the light energy. Specifically, the class of RGCs discussed here is referred to as "intrinsically photosensitive", which means they are themselves sensitive to light. There are five known types of intrinsically photosensitive retinal ganglion cells (ipRGCs): M1, M2, M3, M4, and M5. Each of these ipRGC types has a different melanopsin content and photosensitivity. These connect to amacrine cells in the inner plexiform layer of the retina. Ultimately, via the retinohypothalamic tract (RHT), the suprachiasmatic nucleus (SCN) of the hypothalamus receives light information from these ipRGCs. The ipRGCs serve a different function than rods and cones: even when isolated from the other components of the retina, ipRGCs maintain their photosensitivity, and as a result they can be sensitive to different ranges of the light spectrum. Additionally, ipRGC firing patterns may respond to light conditions as low as 1 lux, whereas previous research indicated 2500 lux was required to suppress melatonin production. Circadian and other behavioral responses have been shown to be more sensitive at lower wavelengths than the photopic luminous efficiency function that is based on sensitivity to cone receptors. The core region of the SCN houses the majority of light-sensitive neurons. From here, signals are transmitted via a nerve connection with the pineal gland that regulates various hormones in the human body. There are specific genes that determine the regulation of circadian rhythm in conjunction with light. When light activates NMDA receptors in the SCN, CLOCK gene expression in that region is altered and the SCN is reset; this is how entrainment occurs. Genes also involved with entrainment include PER1 and PER2. Some important structures directly impacted by the light–sleep relationship are the superior colliculus-pretectal area and the ventrolateral pre-optic nucleus. The progressive yellowing of the crystalline lens with age reduces the amount of short-wavelength light reaching the retina and may contribute to circadian alterations observed in older adulthood. Effects Primary All of the mechanisms of light-affected entrainment are not yet fully known; however, numerous studies have demonstrated the effectiveness of light entrainment to the day/night cycle. Studies have shown that the timing of exposure to light influences entrainment, as seen on the phase response curve for light for a given species. In diurnal (day-active) species, exposure to light soon after wakening advances the circadian rhythm, whereas exposure before sleeping delays the rhythm. 
An advance means that the individual will tend to wake up earlier on the following day(s). A delay, caused by light exposure before sleeping, means that the individual will tend to wake up later on the following day(s). The hormones cortisol and melatonin are affected by the signals light sends through the body's nervous system. These hormones help regulate blood sugar to give the body the appropriate amount of energy that is required throughout the day. Cortisol levels are high upon waking and gradually decrease over the course of the day; melatonin levels are high when the body is entering and exiting sleep and are very low over the course of waking hours. The earth's natural light-dark cycle is the basis for the release of these hormones. The length of light exposure influences entrainment. Longer exposures have a greater effect than shorter exposures. Consistent light exposure has a greater effect than intermittent exposure. In rats, constant light eventually disrupts the cycle to the point that memory and stress coping may be impaired. The intensity and the wavelength of light influence entrainment. Dim light can affect entrainment relative to darkness. Brighter light is more effective than dim light. In humans, lower-intensity short-wavelength (blue/violet) light appears to be as effective as higher-intensity white light. Exposure to monochromatic light at wavelengths of 460 nm and 550 nm yielded decreased sleepiness in the 460 nm condition, tested against the 550 nm group and a control group. Additionally, in the same study, researchers testing thermoregulation and heart rate found a significantly increased heart rate under 460 nm light over the course of a 1.5-hour exposure period. In a study on the effect of lighting intensity on delta waves, a measure of sleepiness, high levels of lighting (1700 lux) showed lower levels of delta waves, measured through an EEG, than low levels of lighting (450 lux). This shows that lighting intensity is directly correlated with alertness in an office environment. Humans are sensitive to light with a short wavelength. Specifically, melanopsin is sensitive to blue light with a wavelength of approximately 480 nm. The effect this wavelength of light has on melanopsin leads to physiological responses such as the suppression of melatonin production, increased alertness, and alterations to the circadian rhythm. Secondary While light has direct effects on circadian rhythm, there are indirect effects seen across studies. Seasonal affective disorder creates a model in which decreased day length during autumn and winter increases depressive symptoms. A shift in the circadian phase response curve creates a connection between the amount of light in a day (day length) and depressive symptoms in this disorder. Light seems to have therapeutic antidepressant effects when an organism is exposed to it at appropriate times during the circadian rhythm, regulating the sleep-wake cycle. In addition to mood, learning and memory become impaired when the circadian system shifts due to light stimuli, which can be seen in studies modeling jet lag and shift work situations. Frontal and parietal lobe areas involved in working memory have been implicated in melanopsin responses to light information. "In 2007, the International Agency for Research on Cancer classified shift work with circadian disruption or chronodisruption as a probable human carcinogen." 
Exposure to light during the hours of melatonin production reduces melatonin production. Melatonin has been shown to mitigate the growth of tumors in rats. When melatonin production was suppressed over the course of the night, rats showed increased rates of tumors over a four-week period. Artificial light at night causing circadian disruption additionally impacts sex steroid production. Increased levels of progestogens and androgens were found in night shift workers as compared to "working hour" workers. The proper exposure to light has become an accepted way to alleviate some of the effects of seasonal affective disorder (SAD). In addition, exposure to light in the morning has been shown to assist Alzheimer's patients in regulating their waking patterns. In response to light exposure, alertness levels can increase as a result of suppression of melatonin secretion. A linear relationship has been found between the alerting effects of light and activation in the posterior hypothalamus. Disruption of circadian rhythm as a result of light also produces changes in metabolism. Measured lighting for rating systems Historically, light was measured in the units of luminous intensity (candelas), luminance (candelas/m^2) and illuminance (lumens/m^2). After the discovery of ipRGCs in 2002, additional units of light measurement have been researched in order to better estimate the impact of different parts of the light spectrum on the various photoreceptors. However, due to the variability in sensitivity between rods, cones and ipRGCs, and the variability between the different ipRGC types, a single unit does not perfectly reflect the effects of light on the human body. The currently accepted unit is equivalent melanopic lux, a calculated melanopic ratio multiplied by the measured value in lux. The melanopic ratio is determined taking into account the source type of light and the melanopic illuminance values for the eye's photopigments. The source of light, the unit used to measure illuminance, and the value of illuminance inform the spectral power distribution. This is used to calculate the photopic illuminance and the melanopic lux for the five photopigments of the human eye, weighted based on the optical density of each photopigment. The WELL Building Standard was designed for "advancing health and well-being in buildings globally". Part of the standard is the implementation of Credit 54: Circadian Lighting Design. Specific thresholds for different office areas are designated in order to achieve credits. Light is measured at 1.2 m above the finished floor for all areas. Work areas must have a value of at least 200 equivalent melanopic lux present at 75% or more of workstations between the hours of 09:00 and 13:00 for each day of the year when daylight is incorporated into calculations. If daylight is not taken into account, all workstations require lighting at a value of 150 equivalent melanopic lux or greater. In living environments, which are bedrooms, bathrooms and rooms with windows, at least one fixture must provide a melanopic lux value of at least 200 during the day and a melanopic lux value of less than 50 during the night, measured 0.76 m above the finished floor. Breakrooms require an average melanopic lux of 250. Learning areas require either that lighting models, which may incorporate daylighting, provide an equivalent melanopic lux of 125 for at least 75% of desks for at least four hours per day, or that ambient lights maintain the standard lux recommendations set forth in Table 3 of IES-ANSI RP-3-13. 
The WELL Building Standard additionally provides direction for circadian emulation in multi-family residences. In order to more accurately replicate natural cycles, lighting users must be able to set a wake time and a bedtime. An equivalent melanopic lux of 250 must be maintained during the period of the day between the indicated wake time and two hours before the indicated bedtime. An equivalent melanopic lux of 50 or less is required for the period of the day spanning from two hours before the indicated bedtime through the wake time. In addition, at the indicated wake time, melanopic lux should increase from 0 to 250 over the course of at least 15 minutes. Other factors Although many researchers consider light to be the strongest cue for entrainment, it is not the only factor acting on circadian rhythms. Other factors may enhance or decrease the effectiveness of entrainment. For instance, exercise and other physical activity, when coupled with light exposure, result in a somewhat stronger entrainment response. Other factors such as music and properly timed administration of the neurohormone melatonin have shown similar effects. Numerous other factors affect entrainment as well. These include feeding schedules, temperature, pharmacology, locomotor stimuli, social interaction, sexual stimuli and stress. Circadian-based effects have also been found on the visual perception of discomfort glare. Discomfort produced by a given glare source is not perceived evenly across the day: as the day progresses, people tend to become more tolerant of the same levels of discomfort glare (i.e., people are more sensitive to discomfort glare in the morning compared to later in the day). Further studies on chronotype show that early chronotypes can tolerate more discomfort glare in the morning compared to late chronotypes. See also Chronobiology Circadian advantage Circadian clock Circadian oscillator Circadian rhythm disorders Electronic media and sleep Light therapy Scotobiology References Circadian rhythm Circadian Health effects by subject
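The equivalent-melanopic-lux arithmetic described above (EML = measured photopic lux multiplied by the source's melanopic ratio) is simple enough to sketch in code. In the Python sketch below, the melanopic ratios are illustrative placeholders rather than published coefficients, and the work-area check is only a paraphrase of the WELL Credit 54 thresholds quoted above.

```python
# Illustrative melanopic ratios; real values depend on the measured
# spectral power distribution of the source, not on a lookup table.
MELANOPIC_RATIO = {
    "daylight_like": 1.0,   # hypothetical blue-rich source
    "warm_led": 0.45,       # hypothetical warm white source
}

def equivalent_melanopic_lux(lux: float, source: str) -> float:
    """EML = photopic lux x melanopic ratio of the light source."""
    return lux * MELANOPIC_RATIO[source]

def meets_well_work_area(eml_values, daylight_incorporated: bool) -> bool:
    """WELL Credit 54 work-area check as summarized above: >= 200 EML at 75%
    or more of workstations (daylight included), else >= 150 EML at all."""
    if daylight_incorporated:
        passing = sum(1 for e in eml_values if e >= 200)
        return passing >= 0.75 * len(eml_values)
    return all(e >= 150 for e in eml_values)

desks = [equivalent_melanopic_lux(lx, "warm_led") for lx in (500, 480, 320, 290)]
print(desks, meets_well_work_area(desks, daylight_incorporated=False))  # fails: two desks < 150
```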
Light effects on circadian rhythm
[ "Physics", "Biology" ]
2,549
[ "Physical phenomena", "Behavior", "Spectrum (physical sciences)", "Electromagnetic spectrum", "Waves", "Circadian rhythm", "Light", "Sleep" ]
14,473,878
https://en.wikipedia.org/wiki/Human%E2%80%93computer%20information%20retrieval
Human–computer information retrieval (HCIR) is the study and engineering of information retrieval techniques that bring human intelligence into the search process. It combines the fields of human-computer interaction (HCI) and information retrieval (IR) and creates systems that improve search by taking into account the human context, or through a multi-step search process that provides the opportunity for human feedback. History The term human–computer information retrieval was coined by Gary Marchionini in a series of lectures delivered between 2004 and 2006. Marchionini's main thesis is that "HCIR aims to empower people to explore large-scale information bases but demands that people also take responsibility for this control by expending cognitive and physical energy." In 1996 and 1998, a pair of workshops at the University of Glasgow on information retrieval and human–computer interaction sought to address the overlap between these two fields. Marchionini notes the impact of the World Wide Web and the sudden increase in information literacy – changes that were only embryonic in the late 1990s. A few workshops have focused on the intersection of IR and HCI. The Workshop on Exploratory Search, initiated by the University of Maryland Human-Computer Interaction Lab in 2005, alternates between the Association for Computing Machinery Special Interest Group on Information Retrieval (SIGIR) and Special Interest Group on Computer-Human Interaction (CHI) conferences. Also in 2005, the European Science Foundation held an Exploratory Workshop on Information Retrieval in Context. Then, the first Workshop on Human Computer Information Retrieval was held in 2007 at the Massachusetts Institute of Technology. Description HCIR includes various aspects of IR and HCI. These include exploratory search, in which users generally combine querying and browsing strategies to foster learning and investigation; information retrieval in context (i.e., taking into account aspects of the user or environment that are typically not reflected in a query); and interactive information retrieval, which Peter Ingwersen defines as "the interactive communication processes that occur during the retrieval of information by involving all the major participants in information retrieval (IR), i.e. the user, the intermediary, and the IR system." A key concern of HCIR is that IR systems intended for human users be implemented and evaluated in a way that reflects the needs of those users. Most modern IR systems employ a ranked retrieval model, in which the documents are scored based on the probability of the document's relevance to the query. In this model, the system only presents the top-ranked documents to the user. These systems are typically evaluated based on their mean average precision over a set of benchmark queries from organizations like the Text Retrieval Conference (TREC). Because of its emphasis on using human intelligence in the information retrieval process, HCIR requires different evaluation models – ones that combine evaluation of the IR and HCI components of the system. A key area of research in HCIR involves the evaluation of these systems. Early work on interactive information retrieval, such as Juergen Koenemann and Nicholas J. Belkin's 1996 study of different levels of interaction for automatic query reformulation, leverages the standard IR measures of precision and recall but applies them to the results of multiple iterations of user interaction, rather than to a single query response. 
Other HCIR research, such as Pia Borlund's IIR evaluation model, applies a methodology more reminiscent of HCI, focusing on the characteristics of users, the details of experimental design, etc. Goals HCIR researchers have put forth the following goals towards a system where the user has more control in determining relevant results. Systems should: no longer only deliver the relevant documents, but also provide semantic information along with those documents; increase user responsibility as well as control, that is, require human intellectual effort; have flexible architectures so they may evolve and adapt to increasingly more demanding and knowledgeable user bases; aim to be part of an information ecology of personal and shared memories and tools rather than discrete standalone services; support the entire information life cycle (from creation to preservation) rather than only the dissemination or use phase; support tuning by end users and especially by information professionals who add value to information resources; and be engaging and fun to use. In short, information retrieval systems are expected to operate in the way that good libraries do. Systems should help users to bridge the gap between data or information (in the very narrow, granular sense of these terms) and knowledge (processed data or information that provides the context necessary to inform the next iteration of an information seeking process). That is, good libraries provide both the information a patron needs as well as a partner in the learning process — the information professional — to navigate that information, make sense of it, preserve it, and turn it into knowledge (which in turn creates new, more informed information needs). Techniques The techniques associated with HCIR emphasize representations of information that use human intelligence to lead the user to relevant results. These techniques also strive to allow users to explore and digest the dataset without penalty, i.e., without expending unnecessary costs of time, mouse clicks, or context shift. Many search engines have features that incorporate HCIR techniques. Spelling suggestions and automatic query reformulation provide mechanisms for suggesting potential search paths that can lead the user to relevant results. These suggestions are presented to the user, putting control of selection and interpretation in the user's hands. Faceted search enables users to navigate information hierarchically, going from a category to its sub-categories, but choosing the order in which the categories are presented. This contrasts with traditional taxonomies in which the hierarchy of categories is fixed and unchanging. Faceted navigation, like taxonomic navigation, guides users by showing them available categories (or facets), but does not require them to browse through a hierarchy that may not precisely suit their needs or way of thinking. Lookahead provides a general approach to penalty-free exploration. For example, various web applications employ AJAX to automatically complete query terms and suggest popular searches. Another common example of lookahead is the way in which search engines annotate results with summary information about those results, including both static information (e.g., metadata about the objects) and "snippets" of document text that are most pertinent to the words in the search query. Relevance feedback allows users to guide an IR system by indicating whether particular results are more or less relevant. 
Summarization and analytics help users digest the results that come back from the query. Summarization here is intended to encompass any means of aggregating or compressing the query results into a more human-consumable form. Faceted search, described above, is one such form of summarization. Another is clustering, which analyzes a set of documents by grouping similar or co-occurring documents or terms. Clustering allows the results to be partitioned into groups of related documents. For example, a search for "java" might return clusters for Java (programming language), Java (island), or Java (coffee). Visual representation of data is also considered a key aspect of HCIR. The representation of summarization or analytics may be displayed as tables, charts, or summaries of aggregated data. Other kinds of information visualization that allow users access to summary views of search results include tag clouds and treemapping. Related areas Exploratory video search Information foraging References External links Information retrieval genres Human–computer interaction
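Faceted navigation and result summarization, as described above, reduce to a small amount of counting and filtering over a structured result set. The Python sketch below is purely illustrative: the document collection and its facet fields are invented, and a production system would compute these counts inside the search index rather than in application code.

```python
from collections import Counter

# Toy collection; "format" and "decade" are invented facet fields.
docs = [
    {"title": "Query reformulation study", "format": "paper", "decade": "1990s"},
    {"title": "Exploratory search survey",  "format": "paper", "decade": "2000s"},
    {"title": "Faceted browsing demo",      "format": "demo",  "decade": "2000s"},
]

def facet_counts(results, facet):
    """Summarize the current result set along one facet; the counts are what
    a faceted UI shows the user as available choices."""
    return Counter(d[facet] for d in results)

def narrow(results, facet, value):
    """The user picks a facet value; the result set shrinks, penalty-free,
    and can be widened again simply by dropping the selection."""
    return [d for d in results if d[facet] == value]

print(facet_counts(docs, "decade"))                       # Counter({'2000s': 2, '1990s': 1})
print([d["title"] for d in narrow(docs, "format", "paper")])
```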
Human–computer information retrieval
[ "Engineering" ]
1,515
[ "Human–computer interaction", "Human–machine interaction" ]
13,301,401
https://en.wikipedia.org/wiki/United%20States%20Radium%20Corporation
The United States Radium Corporation was a company, most notorious for its operations between 1917 and 1926 in Orange, New Jersey, in the United States, that led to stronger worker protection laws. After initial success in developing a glow-in-the-dark radioactive paint, the company was subject to several lawsuits in the late 1920s in the wake of severe illnesses and deaths of workers (the Radium Girls) who had ingested radioactive material. The workers had been told that the paint was harmless. During World War I and World War II, the company produced luminous watches and gauges for the United States Army for use by soldiers. U.S. Radium workers, especially the women who painted the dials of watches and other instruments with luminous paint, suffered serious radioactive contamination. Lawyer Edward Markley was in charge of defending the company in these cases. History The company was founded in 1914 in New York City by Dr. Sabin Arnold von Sochocky and Dr. George S. Willis, as the Radium Luminous Material Corporation. The company produced uranium from carnotite ore and eventually moved into the business of producing radioluminescent paint, and then to the application of that paint. Over the next several years, it opened facilities in Newark, Jersey City, and Orange. In August 1921, von Sochocky was forced from the presidency, and the company was renamed the United States Radium Corporation; Arthur Roeder became the president of the company. In Orange, where radium was extracted from 1917 to 1926, the U.S. Radium facility processed half a ton of ore per day. The ore was obtained from "Undark mines" in Paradox Valley, Colorado and in Utah. A notable employee from 1921 to 1923 was Victor Francis Hess, who would later receive the Nobel Prize in Physics. The company's luminescent paint, marketed as Undark, was a mixture of radium and zinc sulfide, the radiation causing the sulfide to fluoresce. During World War I, demand for dials, watches, and aircraft instruments painted with Undark surged, and the company expanded operations considerably. The delicate task of painting watch and gauge faces was done mostly by young women, who were instructed to maintain a fine tip on their paintbrushes by licking them. At the time, the dangers of radiation were not well understood. Around 1920, a similar radium dial business, known as the Radium Dial Company, a division of the Standard Chemical Company, opened in Chicago. It soon moved its dial painting operation to Ottawa, Illinois to be closer to its major customer, the Westclox Clock Company. Several workers died, and the health risks associated with radium were allegedly known, but this company continued dial painting operations until 1940. U.S. Radium's management and scientists took precautions such as masks, gloves, and screens for themselves, but did not similarly equip the workers. Unbeknownst to the women, the paint was highly radioactive and therefore carcinogenic. The ingestion of the paint by the women, brought about by licking the brushes, resulted in a condition called radium jaw (radium necrosis), a painful swelling and porosity of the upper and lower jaws that ultimately caused many of their deaths. This led to litigation against U.S. Radium by the so-called Radium Girls, starting with former dial painter Marguerite Carlough in 1925. That case was eventually settled in 1926, and several more suits were brought against the company in 1927 by Grace Fryer and Katherine Schaub. The company did not stop the hand painting of dials until 1947. 
The company struggled after World War I: the loss of military contracts sharply reduced demand for luminescent paint and dials, and in 1922, high-grade ore was discovered in Katanga, driving all U.S. suppliers out of business except U.S. Radium and the Standard Chemical Company. U.S. Radium consolidated its operations in Manhattan in 1927, leasing out the Orange plant and selling off other property. But demand for luminescent products surged again during World War II; by 1942, it employed as many as 1,000 workers, and in 1944 was reported to have radium mining, processing, and application facilities in Bloomsburg, Pennsylvania; Bernardsville, New Jersey; Whippany, New Jersey; and North Hollywood, California as well as New York City. In 1945 the Office of Strategic Services enlisted the company's help for tests of a psychological-warfare scheme to release foxes with glowing paint in Japan. After the war came another period of retrenchment. Not only did military supply contracts end, but luminous dial manufacturing shifted to promethium-147 and tritium. Also, radium mining in Canada ceased in 1954, driving up supply costs. In that year, the company consolidated its operations at facilities in Morristown, New Jersey and South Centre Township east of Bloomsburg, Pennsylvania. In Bloomsburg, it continued to produce items with luminescent paint using radium, strontium-90 and cesium-137 such as watch dials, instrument gauge faces, deck markers, and paint. It ceased radium processing altogether in 1968, spinning off those operations as Nuclear Radiation Development Corporation, LLC, based in Grand Island, New York. The following year, a new facility at the Bloomsburg plant opened for the manufacturing of "tritiated metal foils and tritium activated self-luminous light tubes," and the company switched focus to the manufacture of glow-in-the-dark exit and aircraft signs using tritium. Starting in 1979, the company underwent an extensive reorganization. A new corporation, Metreal, Inc., was created to hold the assets of the Bloomsburg plant. Manufacturing operations were subsequently moved into new wholly owned subsidiary corporations: Safety Light Corporation, USR Chemical Products, USR Lighting, USR Metals, and U.S. Natural Resources. Finally, in May 1980, U.S. Radium created a new holding company, USR Industries, Inc., and merged itself into it. The Safety Light Corporation, in turn, was sold to its management and spun off as an independent entity in 1982. Tritium-illuminated signs were marketed under the name Isolite, which also became the name of new subsidiary to market and distribute Safety Light Corporation's products. In 2005, the Nuclear Regulatory Commission declined to renew the licenses for the Bloomsburg facility, and shortly thereafter the EPA added the Bloomsburg facility to the National Priorities List for remediation through Superfund. All tritium operations at the plant ceased by the end of 2007. Immediate aftermath The chief medical examiner of Essex County, New Jersey, Harrison Stanford Martland, MD, published a report in 1925 that identified the radioactive material the women had ingested as the cause of their bone disease and aplastic anemia, and ultimately death. Illness and death resulting from ingestion of radium paint and the subsequent legal action taken by the women forced closure of the company's Orange facility in 1927. 
The case was settled out of court in 1928, but not before a substantial number of the litigants were seriously ill or had died from bone cancer and other radiation-related illnesses. The company, it was alleged, deliberately delayed settling litigation, leading to further deaths. In November 1928, Dr. von Sochocky, the inventor of the radium-based paint, died of aplastic anemia resulting from his exposure to the radioactive material, "a victim of his own invention." The victims were so contaminated that radiation could still be detected at their graves in 1987 using a Geiger counter. Superfund site The company processed about 1,000 pounds of ore daily while in operation, which was dumped on the site. The radon and radiation resulting from the 1,600 tons of material on the abandoned factory resulted in the site's designation as a Superfund site by the United States Environmental Protection Agency in 1983. From 1997 through 2005, the EPA remediated the site in a process that involved the excavation and off-site disposal of radium-contaminated material at the former plant site, and at 250 residential and commercial properties that had been contaminated in the intervening decades. In 2009, the EPA wrapped up their long-running Superfund cleanup effort. See also Undark Radium dials Radium Girls Radiation poisoning Radium Dial Company References External links Radium dial painters, 1920–1926 Radioluminescent Paint, Oak Ridge Associated Universities Radium Luminous Material Corporation stock certificate United States Radium Corporation stock certificate Nuclear safety and security Radium Radioactivity Orange, New Jersey Historic American Engineering Record in New Jersey History of New Jersey Superfund sites in New Jersey Defunct technology companies based in New Jersey Chemical companies established in 1914 Manufacturing companies disestablished in 1980 1914 establishments in New York City 1980 disestablishments in New Jersey American companies established in 1914 American companies disestablished in 1980 Defunct manufacturing companies based in New Jersey
United States Radium Corporation
[ "Physics", "Chemistry" ]
1,833
[ "Radioactivity", "Nuclear physics" ]
13,301,859
https://en.wikipedia.org/wiki/Diffusion-controlled%20reaction
Diffusion-controlled (or diffusion-limited) reactions are reactions in which the reaction rate is equal to the rate of transport of the reactants through the reaction medium (usually a solution). The process of chemical reaction can be considered as involving the diffusion of reactants until they encounter each other in the right stoichiometry and form an activated complex which can form the product species. The observed rate of chemical reactions is, generally speaking, the rate of the slowest or "rate determining" step. In diffusion controlled reactions the formation of products from the activated complex is much faster than the diffusion of reactants and thus the rate is governed by collision frequency. Diffusion control is rare in the gas phase, where rates of diffusion of molecules are generally very high. Diffusion control is more likely in solution where diffusion of reactants is slower due to the greater number of collisions with solvent molecules. Reactions where the activated complex forms easily and the products form rapidly are most likely to be limited by diffusion control. Examples are those involving catalysis and enzymatic reactions. Heterogeneous reactions where reactants are in different phases are also candidates for diffusion control. One classical test for diffusion control of a heterogeneous reaction is to observe whether the rate of reaction is affected by stirring or agitation; if so then the reaction is almost certainly diffusion controlled under those conditions. Derivation The following derivation is adapted from Foundations of Chemical Kinetics. This derivation assumes the reaction $\mathrm{A} + \mathrm{B} \rightarrow \mathrm{C}$. Consider a sphere of radius $R_{AB}$, centered at a spherical molecule A, with reactant B flowing in and out of it. A reaction is considered to occur if molecules A and B touch, that is, when the distance between the two molecules is $R_{AB}$ apart. If we assume a local steady state, then the rate at which B reaches $R_{AB}$ is the limiting factor and balances the reaction. Therefore, the steady state condition becomes 1. $k[\mathrm{B}] = 4\pi r^2 J_B$ (at steady state the same flow crosses every shell of radius $r \ge R_{AB}$), where $J_B$ is the flux of B, as given by Fick's law of diffusion, 2. $J_B = D_{AB}\left(\frac{\partial [\mathrm{B}](r)}{\partial r} + \frac{[\mathrm{B}](r)}{k_B T}\frac{\mathrm{d}U}{\mathrm{d}r}\right)$, where $D_{AB}$ is the diffusion coefficient and can be obtained by the Stokes–Einstein equation, and the second term is the gradient of the chemical potential with respect to position. Note that [B] refers to the average concentration of B in the solution, while [B](r) is the "local concentration" of B at position r. Inserting 2 into 1 results in 3. $k[\mathrm{B}] = 4\pi r^2 D_{AB}\left(\frac{\partial [\mathrm{B}](r)}{\partial r} + \frac{[\mathrm{B}](r)}{k_B T}\frac{\mathrm{d}U}{\mathrm{d}r}\right)$. It is convenient at this point to use the identity $e^{-U/k_B T}\frac{\partial}{\partial r}\left([\mathrm{B}](r)\,e^{U/k_B T}\right) = \frac{\partial [\mathrm{B}](r)}{\partial r} + \frac{[\mathrm{B}](r)}{k_B T}\frac{\mathrm{d}U}{\mathrm{d}r}$, allowing us to rewrite 3 as 4. $k[\mathrm{B}] = 4\pi r^2 D_{AB}\,e^{-U/k_B T}\frac{\partial}{\partial r}\left([\mathrm{B}](r)\,e^{U/k_B T}\right)$. Rearranging 4 allows us to write 5. $\frac{k[\mathrm{B}]}{4\pi D_{AB}}\frac{e^{U/k_B T}}{r^2}\,\mathrm{d}r = \mathrm{d}\left([\mathrm{B}](r)\,e^{U/k_B T}\right)$. Using the boundary conditions that $[\mathrm{B}](r) \rightarrow [\mathrm{B}]$, i.e. the local concentration of B approaches that of the solution at large distances, and consequently $U(r) \rightarrow 0$ as $r \rightarrow \infty$, we can solve 5 by separation of variables to get 6. $\frac{k[\mathrm{B}]}{4\pi D_{AB}}\int_{R_{AB}}^{\infty}\frac{e^{U/k_B T}}{r^2}\,\mathrm{d}r = [\mathrm{B}] - [\mathrm{B}](R_{AB})\,e^{U(R_{AB})/k_B T}$ or 7. $k = \frac{4\pi D_{AB}\beta}{[\mathrm{B}]}\left([\mathrm{B}] - [\mathrm{B}](R_{AB})\,e^{U(R_{AB})/k_B T}\right)$ (where $\beta = \left[\int_{R_{AB}}^{\infty}\frac{e^{U/k_B T}}{r^2}\,\mathrm{d}r\right]^{-1}$). For the reaction between A and B, there is an inherent reaction constant $k_r$, so $k[\mathrm{B}] = k_r[\mathrm{B}](R_{AB})$. Substituting this into 7 and rearranging yields 8. $k = \frac{4\pi D_{AB}\beta\,k_r}{k_r + 4\pi D_{AB}\beta\,e^{U(R_{AB})/k_B T}}$. Limiting conditions Very fast intrinsic reaction Suppose $k_r$ is very large compared to the diffusion process, so A and B react immediately. This is the classic diffusion limited reaction, and the corresponding diffusion limited rate constant, $k_D$, can be obtained from 8 as $k_D = 4\pi D_{AB}\beta$. 8 can then be re-written as the "diffusion influenced rate constant" as 9. $k = \frac{k_D\,k_r}{k_r + k_D\,e^{U(R_{AB})/k_B T}}$. Weak intermolecular forces If the forces that bind A and B together are weak, i.e. $U(r) \approx 0$ for all r except very small r, then $\beta \approx R_{AB}$ and $k_D \approx 4\pi D_{AB} R_{AB}$. The reaction rate 9 simplifies even further to 10. $k = \frac{k_D\,k_r}{k_D + k_r}$. This equation is true for a very large proportion of industrially relevant reactions in solution. 
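To make equation 10 concrete, the short sketch below evaluates the diffusion-influenced rate constant for assumed, textbook-typical magnitudes of the diffusion coefficient and encounter distance; these numbers are illustrative and are not taken from this article.

```python
# Sketch of equation 10: k = k_D*k_r/(k_D + k_r), with k_D = 4*pi*D_AB*R_AB.
# All parameter values below are assumed, textbook-typical magnitudes.
import math

N_A = 6.022e23      # Avogadro constant, 1/mol

D_AB = 2.0e-9       # relative diffusion coefficient D_A + D_B, m^2/s (assumed)
R_AB = 0.5e-9       # encounter distance, m (assumed)

# Diffusion-limited rate constant; the factor 1e3 converts m^3 to litres.
k_D = 4.0 * math.pi * D_AB * R_AB * N_A * 1e3   # L/(mol*s)
print(f"k_D = {k_D:.2e} L/(mol*s)")

for k_r in (1e8, 1e10, 1e12):    # assumed intrinsic rate constants, L/(mol*s)
    k = k_D * k_r / (k_D + k_r)  # equation 10
    print(f"k_r = {k_r:.0e} -> k = {k:.2e} L/(mol*s)")
```

As expected, the result approaches $k \approx k_r$ when the intrinsic step is slow (activation control) and $k \approx k_D$ when it is fast (diffusion control).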
Viscosity dependence The Stokes–Einstein equation describes the diffusion of a sphere of diameter $\sigma$ subject to the frictional force $f = 3\pi\eta\sigma$, where $\eta$ is the viscosity of the solution, giving $D = \frac{k_B T}{3\pi\eta\sigma}$. Taking $D_{AB} = D_A + D_B = \frac{2k_B T}{3\pi\eta\sigma}$ and $R_{AB} = \sigma$, inserting this into 9 gives an estimate for $k_D$ as $\frac{8RT}{3\eta}$ per mole, where R is the gas constant and $\eta$ is given in centipoise. (A table of estimated $k_D$ values for several common molecules followed here but has not survived; representative values are computed in the sketch below.) See also Diffusion limited enzyme References Chemical reactions Chemical reaction engineering Chemical kinetics
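As a rough numerical check of the $8RT/(3\eta)$ estimate, the sketch below evaluates $k_D$ for a few solvents; the viscosities are assumed, approximate room-temperature values, not figures from this article.

```python
# Sketch of k_D ~ 8RT/(3*eta); viscosities are approximate 25 C values (assumed).
R_GAS = 8.314   # gas constant, J/(mol*K)
T = 298.0       # temperature, K

solvent_viscosity = {   # Pa*s (1 centipoise = 1e-3 Pa*s); assumed values
    "water":   0.89e-3,
    "ethanol": 1.07e-3,
    "benzene": 0.60e-3,
}

for name, eta in solvent_viscosity.items():
    k_d = 8.0 * R_GAS * T / (3.0 * eta)           # m^3/(mol*s)
    print(f"{name:8s}: k_D ~ {k_d * 1e3:.1e} L/(mol*s)")
```

For water this gives roughly $7 \times 10^9$ L mol$^{-1}$ s$^{-1}$, the order of magnitude usually quoted for diffusion-limited reactions in aqueous solution.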
Diffusion-controlled reaction
[ "Chemistry", "Engineering" ]
784
[ "Chemical engineering", "Chemical kinetics", "Chemical reaction engineering", "nan" ]
13,301,952
https://en.wikipedia.org/wiki/Orphan%20Train
The Orphan Train Movement was a supervised welfare program that transported children from crowded Eastern cities of the United States to foster homes located largely in rural areas of the Midwest short on farming labor. The orphan trains operated between 1854 and 1929, relocating about 200,000 children. The co-founders of the orphan train movement claimed that these children were orphaned, abandoned, abused, or homeless, but this was not always true. They were mostly the children of new immigrants and the children of the poor and destitute families living in these cities. Criticisms of the program include ineffective screening of caretakers, insufficient follow-ups on placements, and that many children were used strictly as slave farm labor. Three charitable institutions, Children's Village (founded 1851 by 24 philanthropists), the Children's Aid Society (established 1853 by Charles Loring Brace) and later, New York Foundling Hospital, endeavored to help these children. The institutions were supported by wealthy donors and operated by professional staff. The three institutions developed a program that placed homeless, orphaned, and abandoned city children, who numbered an estimated 30,000 in New York City alone in the 1850s, in foster homes throughout the country. The children were transported to their new homes on trains that were labeled "orphan trains" or "baby trains". This relocation of children ended in 1930 due to decreased need for farm labor in the Midwest. Background The first orphanage in the United States was reportedly established in 1729 in Natchez, Mississippi, but institutional orphanages were uncommon before the early 19th century. Relatives or neighbors usually raised children who had lost their parents. Arrangements were informal and rarely involved courts. Around 1830, the number of homeless children in large Eastern cities such as New York City exploded. In 1850, there were an estimated 10,000 to 30,000 homeless children in New York City. At the time, New York City's population was only 500,000. Some children were orphaned when their parents died in epidemics of typhoid, yellow fever or the flu. Others were abandoned due to poverty, illness, or addiction. Many children sold matches, rags, or newspapers to survive. For protection against street violence, they banded together and formed gangs. In 1853, a young minister named Charles Loring Brace became concerned with the plight of street children (often known as "street Arabs"). He founded the Children's Aid Society. During its first year the Children's Aid Society primarily offered boys religious guidance and vocational and academic instruction. Eventually, the society established the nation's first runaway shelter, the Newsboys' Lodging House, where vagrant boys received inexpensive room and board and basic education. Brace and his colleagues attempted to find jobs and homes for individual children, but they soon became overwhelmed by the numbers needing placement. Brace hit on the idea of sending groups of children to rural areas for adoption. Brace believed that street children would have better lives if they left the poverty and debauchery of their lives in New York City and were instead raised by morally upright farm families. Recognizing the need for labor in the expanding farm country, Brace believed that farmers would welcome homeless children, take them into their homes and treat them as their own. His program would turn out to be a forerunner of modern foster care. 
After a year of dispatching children individually to farms in nearby Connecticut, Pennsylvania and rural New York, the Children's Aid Society mounted its first large-scale expedition to the Midwest in September 1854. The term "orphan train" The phrase "orphan train" was first used in 1854 to describe the transportation of children from their home area via the railroad. However, the term "orphan train" was not widely used until long after the orphan train program had ended. The Children's Aid Society referred to its relevant division first as the Emigration Department, then as the Home-Finding Department, and finally, as the Department of Foster Care. Later, the New York Foundling Hospital sent out what it called "baby" or "mercy" trains. Organizations and families generally used the terms "family placement" or "out-placement" ("out" to distinguish it from the placement of children "in" orphanages or asylums) to refer to orphan train passengers. Widespread use of the term "orphan train" may date to 1978, when CBS aired a fictional miniseries entitled The Orphan Trains. One reason the term was not used by placement agencies was that less than half of the children who rode the trains were in fact orphans, and as many as 25 percent had two living parents. Children with both parents living ended up on the trains—or in orphanages—because their families did not have the money or desire to raise them or because they had been abused or abandoned or had run away. And many teenage boys and girls went to orphan train sponsoring organizations simply in search of work or a free ticket out of the city. The term "orphan trains" is also misleading because a substantial number of the placed-out children didn't take the railroad to their new homes and some didn't even travel very far. The state that received the greatest number of children (nearly one-third of the total) was New York. Connecticut, New Jersey, and Pennsylvania also received substantial numbers of children. For most of the orphan train era, the Children's Aid Society bureaucracy made no distinction between local placements and even its most distant ones. They were all written up in the same record books and, on the whole, managed by the same people. Also, the same child might be placed one time in the West and the next time—if the first home did not work out—in New York City. The decision about where to place a child was made almost entirely on the basis of which alternative was most readily available at the moment the child needed help. The first orphan train The first group of 45 children arrived in Dowagiac, Michigan, on 1 October 1854. The children had traveled for days in uncomfortable conditions. They were accompanied by E. P. Smith of the Children's Aid Society. Smith himself had let two different passengers on the riverboat from Manhattan adopt boys without checking their references. Smith added a boy he met in the Albany railroad yard—a boy whose claim to orphanhood Smith never bothered to verify. At a meeting in Dowagiac, Smith played on his audience's sympathy while pointing out that the boys were handy and the girls could be used for all types of housework. In an account of the trip published by the Children's Aid Society, Smith said that in order to get a child, applicants had to have recommendations from their pastor and a justice of the peace, but it is unlikely that this requirement was strictly enforced. By the end of that first day, fifteen boys and girls had been placed with local families. 
Five days later, twenty-two more children had been adopted. Smith and the remaining eight children traveled to Chicago, where Smith put them on a train to Iowa City by themselves; there a Reverend C. C. Townsend, who ran a local orphanage, took them in and attempted to find them foster families. This first expedition was considered such a success that in January 1855 the society sent out two more parties of homeless children to Pennsylvania. Logistics of orphan trains Committees of prominent local citizens were organized in the towns where orphan trains stopped. These committees were responsible for arranging a site for the adoptions, publicizing the event, and arranging lodging for the orphan train group. These committees were also required to consult with the Children's Aid Society on the suitability of local families interested in adopting children. Brace's system put its faith in the kindness of strangers. Orphan train children were placed in homes for free and were expected to serve as an extra pair of hands to help with chores around the farm. Families were expected to raise them as they would their natural-born children, providing them with decent food and clothing, a "common" education, and $100 when they turned twenty-one. Older children placed by The Children's Aid Society were supposed to be paid for their labors. Legal adoption was not required. According to the Children's Aid Society's "Terms on Which Boys are Placed in Homes," boys under twelve were to be "treated by the applicants as one of their own children in matters of schooling, clothing, and training," and boys twelve to fifteen were to be "sent to a school a part of each year." Representatives from the society were supposed to visit each family once a year to check conditions, and children were expected to write letters back to the society twice a year. There were only a handful of agents to monitor thousands of placements. Before they boarded the train, children were dressed in new clothing, given a Bible, and placed in the care of Children's Aid Society agents who accompanied them west. Few children understood what was happening. Once they did, their reactions ranged from delight at finding a new family to anger and resentment at being "placed out" when they had relatives "back home". Most children on the trains were white. An attempt was made to place non-English speakers with people who spoke their language. Babies were easiest to place, but finding homes for children older than 14 was always difficult because of concern that they were too set in their ways or might have bad habits. Children who were physically or mentally disabled or sickly were difficult to find homes for. Although many siblings were sent out together on orphan trains, prospective parents could choose to take a single child, separating siblings. Many orphan train children went to live with families that placed orders specifying age, gender, and hair and eye color. Others were paraded from the depot into a local playhouse, where they were put up on stage; hence the origin of the term "up for adoption." According to an exhibit panel from the National Orphan Train Complex, the children "took turns giving their names, singing a little ditty, or 'saying a piece.'" According to Sara Jane Richter, professor of history at Oklahoma Panhandle State University, the children often had unpleasant experiences. "People came along and prodded them, and looked, and felt, and saw how many teeth they had." 
Press accounts convey the spectacle, and sometimes auction-like atmosphere, attending the arrival of a new group of children. "Some ordered boys, others girls, some preferred light babies, others dark, and the orders were filled out properly and every new parent was delighted," reported The Daily Independent of Grand Island, NE in May 1912. "They were very healthy tots and as pretty as anyone ever laid eyes on." Brace raised money for the program through his writings and speeches. Wealthy people occasionally sponsored trainloads of children. Charlotte Augusta Gibbs, wife of John Jacob Astor III, had sent 1,113 children west on the trains by 1884. Railroads gave discount fares to the children and the agents who cared for them. Scope of the orphan train movement The Children's Aid Society sent an average of 3,000 children via train each year from 1855 to 1875. Orphan trains were sent to 45 states, as well as Canada and Mexico. During the early years, Indiana received the largest number of children. At the beginning of the Children's Aid Society orphan train program, children were not sent to the southern states, as Brace was an ardent abolitionist. By the 1870s, the New York Foundling Hospital and the New England Home for Little Wanderers in Boston had orphan train programs of their own. New York Foundling Hospital "Mercy Trains" The New York Foundling Hospital was established in 1869 by Sister Mary Irene Fitzgibbon of the Sisters of Charity of New York as a shelter for abandoned infants. The Sisters worked in conjunction with priests throughout the Midwest and South in an effort to place these children in Catholic families. The Foundling Hospital sent infants and toddlers to prearranged Roman Catholic homes from 1875 to 1914. Parishioners in the destination regions were asked to accept children, and parish priests provided applications to approved families. This practice was first known as the "Baby Train," then later the "Mercy Train." By the 1910s, 1,000 children a year were placed with new families. Criticisms Linda McCaffery, a professor at Barton County Community College, explained the range of orphan train experiences: "Many were used as strictly slave farm labor, but there are stories, wonderful stories of children ending up in fine families that loved them, cherished them, [and] educated them." Orphan train children faced obstacles ranging from the prejudice of classmates because they were "train children" to feeling like outsiders in their families during their entire lives. Many rural people viewed the orphan train children with suspicion, as the incorrigible offspring of drunkards and prostitutes. Criticisms of the orphan train movement focused on concerns that initial placements were made hastily, without proper investigation, and that there was an insufficient follow-up on placements. Charities were also criticized for not keeping track of children placed while under their care. In 1883, Brace consented to an independent investigation. It found the local committees were ineffective at screening foster parents. The supervision was lax. Many older boys had run away. But its overall conclusion was positive. The majority of children under fourteen were leading satisfactory lives. Applicants for children were supposed to be screened by committees of local businessmen, ministers, or physicians, but the screening was rarely very thorough. 
Small-town ministers, judges, and other local leaders were often reluctant to reject a potential foster parent as unfit if he were also a friend or customer. Many children lost their identity through forced name changes and repeated moves. In 1996, Alice Ayler said, "I was one of the luckier ones because I know my heritage. They took away the identity of the younger riders by not allowing contact with the past." Many children who were placed out west had survived on the streets of New York, Boston or other large Eastern cities, and generally they were not the obedient children that many families expected them to be. In 1880, a Mr. Coffin of Indiana editorialized, "Children so thrown out from the cities are a source of much corruption in the country places where they are thrown. ... Very few such children are useful." Some residents of placement locations charged that orphan trains were dumping undesirable children from the East onto Western communities. In 1874, the National Prison Reform Congress charged that these practices resulted in increased correctional expenses in the West. Older boys wanted to be paid for their labor; sometimes they asked for additional pay, or they left a placement in order to find a higher-paying one. It is estimated that young men initiated 80% of the placement changes. One of the many children who rode the train was Lee Nailing. Lee's mother died of sickness; after her death, Lee's father could not afford to keep his children. Another orphan train child was named Alice Ayler. Alice rode the train because her single mother could not provide for her children; before the journey, they lived on "berries" and "green water." Catholic clergy maintained that some charities were deliberately placing Catholic children in Protestant homes in order to change their religious practices. The Society for the Protection of Destitute Roman Catholic Children in the City of New York (known as the Protectory) was founded in 1863. The Protectory ran orphanages and placing-out programs for Catholic youth in response to Brace's Protestant-centered program. Similar charges of conversion via adoption were made concerning the placement of Jewish children. Not all of the orphan train children were real orphans, but they were classified as orphans after they were forcibly removed from their biological families and transported to other states. Some claimed that this practice was a deliberate pattern which was intended to break up immigrant Catholic families. Some abolitionists opposed placements of children with Western families, viewing indentureship as a form of slavery. Orphan trains were the target of lawsuits, generally filed by parents who attempted to reclaim their children. Suits were occasionally filed by receiving parents or receiving family members who claimed that they either lost money or were harmed as the result of the placement. The Minnesota State Board of Corrections and Charities reviewed Minnesota orphan train placements between 1880 and 1883. The Board found that while children were hastily placed into their placements without proper investigations, only a few children were "depraved" or abused. The review criticized local committee members who were swayed by pressure from wealthy and important individuals in their community. The Board also pointed out that older children were frequently placed with farmers who expected to profit from their labor. 
The Board recommended that paid agents replace or supplement local committees in investigating and reviewing all applications and placements. A complicated lawsuit arose from a 1904 Arizona Territory orphan train placement in which the New York Foundling Hospital sent 40 white children between the ages of 18 months and 5 years to be indentured to Catholic families in an Arizona Territory parish. The families which were approved for placement by the local priest were identified as "Mexican Indian" families in the subsequent litigation. The nuns who escorted these children were unaware of the racial tension which existed between local Anglo and Mexican groups, and as a result they placed white children with Mexican Indian families. A group of white men, described as "just short of a lynch mob," forcibly took the children from the Mexican Indian homes and placed most of them with Anglo families. Some of the children were returned to the Foundling Hospital, but 19 of them remained with the Anglo Arizona Territory families. The Foundling Hospital filed a writ of habeas corpus in which it sought the return of these children. The Arizona Supreme Court ruled that the best interests of the children required them to remain in their new Arizona homes. On appeal, the U.S. Supreme Court ruled that the filing of a writ of habeas corpus which sought the return of a child constituted an improper use of the writ. Habeas corpus writs should be used "solely in cases of arrest and forcible imprisonment under color or claim of warrant of law," and they should not be used to obtain or transfer the custody of children. At the time, these events were well documented in published newspaper stories which were titled "Babies Sold Like Sheep," telling readers that the New York Foundling Hospital "has for years been shipping children in car-loads all over the country, and they are given away and sold like cattle." End of the orphan train movement As the West was settled, the demand for adoptable children declined. Additionally, Midwestern cities such as Chicago, Cleveland, and St. Louis began to experience the neglected children problems that New York, Boston, and Philadelphia had experienced in the mid-1800s. These cities began to seek ways to care for their own orphan populations. In 1895, Michigan passed a statute prohibiting the local placement of out-of-state children without payment of a bond guaranteeing that children placed in Michigan would not become a public charge in the state. Similar laws were passed by Indiana, Illinois, Kansas, Minnesota, Missouri, and Nebraska. Negotiated agreements between one or more New York charities and several western states allowed the continued placement of children in these states. Such agreements included large bonds as a security for placed children. In 1929, however, these agreements expired and were not renewed as charities changed their child care support strategies. Lastly, the need for the orphan train movement decreased as legislation was passed providing in-home family support. Charities began developing programs to support destitute and needy families, limiting the need for intervention to place out children. Legacy of the program Between 1854 and 1929, an estimated 200,000 American children traveled west by rail in search of new homes. The Children's Aid Society rated its transplanted wards successful if they grew into "creditable members of society," and frequent reports documented the success stories. 
A 1910 survey concluded that 87 percent of the children sent to country homes had "done well," while 8 percent had returned to New York and the other 5 percent had either died, disappeared, or gotten arrested. Brace's notion that children are better cared for by families than in institutions is the most basic tenet of present-day foster care. Organizations The Orphan Train Heritage Society of America, Inc., founded in 1986 in Springdale, Arkansas, preserves the history of the orphan train era. The National Orphan Train Complex in Concordia, KS, is a museum and research center dedicated to the Orphan Train Movement, the various institutions that participated, and the children and agents who rode the trains. The museum is located at the restored Union Pacific Railroad Depot in Concordia, which is listed on the National Register of Historic Places. The Complex maintains an archive of riders' stories and houses a research facility. Services offered by the museum include rider research, educational material, and a collection of photos and other memorabilia. The Louisiana Orphan Train Museum was founded in 2009 in a restored Union Pacific freight depot housed within Le Vieux Village Heritage Park in Opelousas, Louisiana. The museum has a collection of original documents, clothing, and photographs of orphan train riders as both children and adults. It focuses particularly on how the riders assimilated into the South Louisiana community, as the majority were legally adopted into their foster families. The museum is also the seat for the Louisiana Orphan Train Society. Founded in 1990 and chartered in 2003, this society staffs the volunteer-run museum, conducts historical outreach, researches the stories of riders, and hosts a large annual event akin to a family reunion. Forwarding institutions Some of the children who took the trains came from the following institutions: (partial list) Angel Guardian Home Association for Befriending Children & Young Girls Association for Benefit of Colored Orphans Baby Fold Baptist Children's Home of Long Island Bedford Maternity, Inc. Bellevue Hospital Bensonhurst Maternity Berachah Orphanage Berkshire Farm for boys Berwind Maternity Clinic Beth Israel Hospital Bethany Samaritan Society Bethlehem Lutheran Children's Home Booth Memorial Hospital Borough Park Maternity Hospital Brace Memorial Newsboys House Bronx Maternity Hospital Brooklyn Benevolent Society Brooklyn Hebrew Orphan Asylum Brooklyn Home for Children Brooklyn Hospital Brooklyn Industrial school Brooklyn Maternity Hospital Brooklyn Nursery & Infants Hospital Brookwood Child Care Catholic Child Care Society Catholic Committee for Refugees Catholic Guardian Society Catholic Home Bureau Child Welfare League of America Children's Aid Society Children's Haven Children's Village, Inc. Church Mission of Help Colored Orphan Asylum Convent of Mercy Dana House Door of Hope Duval College for Infant Children Edenwald School for Boys Erlanger Home Euphrasian Residence Family Reception Center Fellowship House for boys Ferguson House Five Points House of Industry Florence Crittendon League Goodhue Home Grace Hospital Graham Windham Services Greer-Woodycrest Children's Services Guardian Angel Home Guild of the Infant Savior Hale House for Infants, Inc. 
Half-Orphan Asylum Harman Home for Children Heartsease Home Hebrew Orphan Asylum Hebrew Sheltering Guardian Society Holy Angels' School Home for Destitute Children Home for Destitute Children of Seamen Home for Friendless Women and Children Hopewell Society of Brooklyn House of the Good Shepherd House of Mercy House of Refuge Howard Mission & Home for Little Wanderers Infant Asylum Infants' Home of Brooklyn Institution of Mercy Jewish Board of Guardians Jewish Protector & Aid Society Kallman Home for Children Little Flower Children's Services Maternity Center Association McCloskey School & Home McMahon Memorial Shelter Mercy Orphanage Messiah Home for Children Methodist Child Welfare Society Misericordia Hospital Mission of the Immaculate Virgin Morrisania City Hospital Mother Theodore's Memorial Girls' Home Mothers & Babies Hospital Mount Sinai Hospital New York Foundling Hospital New York Home for Friendless Boys New York House of Refuge New York Juvenile Asylum (Children's Village) New York Society for Prevention of Cruelty to Children Ninth St. Day Nursery & Orphans' Home Orphan Asylum Society of the City of Brooklyn Orphan House Ottilie Home for Children In popular media Big Brother by Annie Fellows Johnston, an 1893 children's fiction book. Extra! Extra! The Orphan Trains and Newsboys of New York by Renée Wendinger, an unabridged nonfiction resource book and pictorial history about the orphan trains. Good Boy (Little Orphan at the Train), a Norman Rockwell painting "Eddie Rode The Orphan Train", a song by Jim Roll and covered by Jason Ringenberg Last Train Home: An Orphan Train Story, a 2014 historical novella by Renée Wendinger Orphan Train, a 1979 television film directed by William A. Graham. "Rider on an Orphan Train", a song by David Massengill from his 1995 album The Return Orphan Train, a 2013 novel by Christina Baker Kline Placing Out, a 2007 documentary sponsored by the Kansas Humanities Council Toy Story 3, a 2010 Pixar animated film in which "Orphan Train" is referenced briefly at 00:02:04 - 00:02:07. Foster relationships are a recurring theme throughout the series. "Orphan Train", a song by U. Utah Phillips released on disc 3 of the 4-CD compilation Starlight on the Rails: A Songbook in 2005 Swamplandia!, a novel by Karen Russell, in which a character, Louis Thanksgiving, had been taken from New York to the Midwest on an Orphan Train by The New York Foundling Society after his unwed immigrant mother died in childbirth. In his album The Man From God Knows Where, Tom Russell includes the track "Rider on an Orphan Train", recounting the loss to the family of two relatives who ran away from home on an orphan train while young boys. Lost Children Archive, a novel by Valeria Luiselli, where the main character researches the forced movement of several demographics throughout the Americas' history, including the Orphan Trains. The Copper Children, a play by Karen Zacarías premiered in 2020 at the Oregon Shakespeare Festival. My Heart Remembers, a 2008 novel by Kim Vogel Sawyer, where the main character and her siblings were separated at a young age as orphans on the orphan train. "Orphan Train Series" by Jody Hedlund, a series about three orphaned sisters in the 1850s, the New York Children's Aid Society, and the resettling of orphans from New York to the Midwest 0.5 An Awakened Heart (2017) 1. With You Always (2017) 2. Together Forever (2018) 3. Searching for You (2018) Orphan Train, episode 16, season 2 of Dr. 
Quinn, Medicine Woman Buffalo Kids, a 2024 animated film in which the main protagonists ride a train until they are left behind and eventually have to rescue the other kids from the film's main antagonists Orphan Train children eden ahbez, songwriter of "Nature Boy" Joe Aillet John Green Brady Andrew H. Burke Henry L. Jost See also Home Children – similar program in the UK Treni della felicità – similar program in Italy Notes Further reading Creagh, Dianne. "The Baby Trains: Catholic Foster Care and Western Migration, 1873–1929", Journal of Social History (2012) 46(1): 197–218. Holt, Marilyn Irvin. The Orphan Trains: Placing Out in America. Lincoln: University of Nebraska Press, 1992. Johnson, Mary Ellen, ed. Orphan Train Riders: Their Own Stories. (2 vol. 1992). Magnuson, James and Dorothea G. Petrie. Orphan Train. New York: Dial Press, 1978. O'Connor, Stephen. Orphan Trains: The Story of Charles Loring Brace and the Children He Saved and Failed. Boston: Houghton Mifflin, 2001. Patrick, Michael, and Evelyn Trickel. Orphan Trains to Missouri. Columbia: University of Missouri Press, 1997. Patrick, Michael, Evelyn Sheets, and Evelyn Trickel. We Are Part of History: The Story of the Orphan Trains. Santa Fe, NM: The Lightning Tree, 1990. Riley, Tom. The Orphan Trains. New York: LGT Press, 2004. Aviles, Donna Nordmark. Orphan Train To Kansas – A True Story. Wasteland Press, 2018. Wendinger, Renee. Extra! Extra! The Orphan Trains and Newsboys of New York. Legendary Publications, 2009. Kidder, Clark. Emily's Story – The Brave Journey of an Orphan Train Rider. 2007. Downs, Susan Whitelaw, and Michael W. Sherraden. "The Orphan Asylum in the Nineteenth Century." Social Service Review, vol. 57, no. 2, 1983, pp. 272–290. JSTOR. Accessed 1 March 2023. Clement, Priscilla Ferguson. "Children and Charity: Orphanages in New Orleans, 1817–1914." Louisiana History: The Journal of the Louisiana Historical Association, vol. 27, no. 4, 1986, pp. 337–351. JSTOR. Accessed 1 March 2023. Facts about The Orphan Train Movement: America's Largest Child Migration. The Orphan Train External links West by Orphan Train – A documentary film by Colleen Bradford Krantz and Clark Kidder, 2014 DiPasquale, Connie. "Orphan Trains of Kansas" "He rode the 'Orphan Train' across the country" – CNN "Orphan train riders, offspring seek answers about heritage" – USA Today "The Orphan Train" – CBS "98-Year-Old Woman Recounts Experience As 'Orphan Train' Rider" – CBS The Cawker City Public Record, 8 April 1886 "Placing Out" Department form "The Orphan Trains", American Experience, PBS National Orphan Train Complex Adoption, fostering, orphan care and displacement Child welfare in the United States Adoption history Rail transportation in the United States History of New York City 1854 establishments in the United States 1929 disestablishments in the United States Trains
Orphan Train
[ "Technology" ]
6,021
[ "Trains", "Transport systems" ]
13,307,983
https://en.wikipedia.org/wiki/Resistive%20ballooning%20mode
The resistive ballooning mode (RBM) is an instability occurring in magnetized plasmas, particularly in magnetic confinement devices such as tokamaks, when the pressure gradient is opposite to the effective gravity created by a magnetic field. Linear growth rate The linear growth rate $\gamma$ of the RBM instability is given as $\gamma = \sqrt{\frac{g_{\mathrm{eff}}}{L_p}}$, with $g_{\mathrm{eff}} = \frac{2 c_s^2}{R_0}$ and $L_p = \frac{p}{|\nabla p|}$, where $\nabla p$ is the pressure gradient, $g_{\mathrm{eff}}$ is the effective gravity produced by a non-homogeneous magnetic field, $R_0$ is the major radius of the device, $L_p$ is a characteristic length of the pressure gradient, and $c_s$ is the plasma sound speed. Similarity with the Rayleigh–Taylor instability The RBM instability is similar to the Rayleigh–Taylor instability (RT), with Earth gravity $g$ replaced by the effective gravity $g_{\mathrm{eff}}$, except that for the RT instability, $g$ acts on the mass density of the fluid, whereas for the RBM instability, $g_{\mathrm{eff}}$ acts on the pressure of the plasma. Plasma instabilities Stability theory Tokamaks
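As an order-of-magnitude illustration of the growth-rate formula above, the sketch below evaluates $\gamma$ for assumed tokamak-edge parameters; the numerical values are illustrative choices, not figures from this article.

```python
# Sketch: RBM growth rate gamma = sqrt(g_eff / L_p) with g_eff = 2*c_s**2 / R0.
# All plasma parameters below are assumed, edge-typical magnitudes.
import math

E_CHARGE = 1.602e-19   # elementary charge, C
M_D      = 3.344e-27   # deuteron mass, kg

T_e = 25.0             # edge electron temperature, eV (assumed)
R0  = 1.65             # major radius, m (assumed, medium-size tokamak)
L_p = 0.02             # pressure-gradient length, m (assumed)

c_s   = math.sqrt(E_CHARGE * T_e / M_D)   # plasma sound speed, m/s
g_eff = 2.0 * c_s**2 / R0                 # effective gravity, m/s^2
gamma = math.sqrt(g_eff / L_p)            # linear growth rate, 1/s

print(f"c_s = {c_s:.3e} m/s, g_eff = {g_eff:.3e} m/s^2, gamma = {gamma:.3e} 1/s")
```

For these numbers the growth time $1/\gamma$ is a few microseconds, consistent with the fast pressure-driven dynamics expected at a tokamak edge.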
Resistive ballooning mode
[ "Physics", "Mathematics" ]
184
[ "Physical phenomena", "Plasma physics", "Plasma phenomena", "Plasma instabilities", "Stability theory", "Plasma physics stubs", "Dynamical systems" ]
13,310,687
https://en.wikipedia.org/wiki/Convective%20momentum%20transport
Convective momentum transport usually describes a vertical flux of the momentum of horizontal winds or currents. That momentum is carried like a non-conserved flow tracer by vertical air motions in convection. In the atmosphere, convective momentum transport by small but vigorous (cumulus type) cloudy updrafts can be understood as an interplay of three main mechanisms: Vertical advection of ambient momentum due to subsidence of environmental air that compensates the in-cloud upward mass flux, Detrainment of in-cloud momentum where updrafts stop ascending, Accelerations by the pressure gradient force around clouds whose inner momentum differs from their environment. The net effect of these interacting mechanisms depends on the detailed configuration or 'organization' of the convective cloud or storm system. See also momentum vertical motion References Tropical meteorology Continuum mechanics
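The net effect of the mechanisms listed above is often idealized with a "top-hat" mass-flux picture, in which the convective flux of horizontal momentum is approximated as $M_c(u_c - \bar{u})$ with convective mass flux $M_c = \rho\,\sigma\,w_c$. The sketch below compares that approximation with the exact top-hat flux; this is a generic textbook idealization with assumed numbers, not a scheme described in this article.

```python
# Top-hat sketch (all numbers assumed): exact convective flux of zonal momentum
# for updraft fraction sigma versus the mass-flux approximation
#   rho*<w'u'> ~ M_c*(u_c - u_bar),  M_c = rho*sigma*w_c.
rho   = 1.0     # air density, kg/m^3
sigma = 0.05    # fractional area covered by cloudy updrafts
w_c   = 5.0     # in-cloud vertical velocity, m/s
u_c   = 2.0     # in-cloud zonal wind, m/s
u_e   = 8.0     # environmental zonal wind, m/s

# Compensating subsidence chosen so the area-mean vertical velocity is zero:
w_e = -sigma * w_c / (1.0 - sigma)

u_bar = sigma * u_c + (1.0 - sigma) * u_e
flux_exact  = rho * (sigma * w_c * (u_c - u_bar)
                     + (1.0 - sigma) * w_e * (u_e - u_bar))
flux_approx = rho * sigma * w_c * (u_c - u_bar)

print(f"exact flux              = {flux_exact:.3f} kg m^-1 s^-2")
print(f"mass-flux approximation = {flux_approx:.3f} kg m^-1 s^-2")
```

With a small updraft fraction the approximation recovers most of the exact flux; the flux is negative here because the updrafts carry low zonal momentum upward.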
Convective momentum transport
[ "Physics" ]
171
[ "Classical mechanics stubs", "Classical mechanics", "Continuum mechanics" ]
8,660,513
https://en.wikipedia.org/wiki/American%20Association%20of%20Physics%20Teachers
The American Association of Physics Teachers (AAPT) was founded in 1930 for the purpose of "dissemination of knowledge of physics, particularly by way of teaching." There are more than 10,000 members in over 30 countries. AAPT publications include two peer-reviewed journals, the American Journal of Physics and The Physics Teacher. The association has two annual National Meetings (winter and summer) and has regional sections with their own meetings and organization. The association also offers grants and awards for physics educators, including the Richtmyer Memorial Award, and programs and contests for physics educators and students. It is headquartered at the American Center for Physics in College Park, Maryland. History The American Association of Physics Teachers was founded on December 31, 1930, when forty-five physicists held a meeting during the joint APS-AAAS meeting in Cleveland specifically for that purpose. The AAPT became a founding member of the American Institute of Physics after the other founding members were convinced of the stability of the AAPT itself, once a new constitution for the AAPT was agreed upon. Contests The AAPT sponsors a number of competitions. The Physics Bowl, Six Flags' roller coaster contest, and the US Physics Team are just a few. The US Physics Team is determined by two preliminary exams and a week-and-a-half-long "boot camp". Each year, five members are selected to compete against dozens of countries in the International Physics Olympiad (IPhO). Publications The Physics Teacher American Journal of Physics See also American Institute of Physics Oersted Medal Physics outreach References External links American Association of Physics Teachers web page AAPT sponsored events Archival collections Finding Aid for the American Association of Physics Teachers, South Atlantic Coast Section Records at the University of North Carolina at Greensboro Niels Bohr Library & Archives American Association of Physics Teachers Richard M. Sutton records, 1934-1949 American Association of Physics Teachers miscellaneous publications, 1934-2013 American Association of Physics Teachers records of David Locke Webster, 1930-1958 American Journal of Physics editor's reports, 1967-2001 AAPT Chesapeake Section records of the secretary, 1956-1984 American Association of Physics Teachers Office of the Executive Officer records of Bernard Khoury, 1985-2002 AAPT Office of the Secretary John Layman records, 1947-2000, 2011, undated American Association of Physics Teachers Office of the Secretary records of Alfred Romer, 1960-1971 American Association of Physics Teachers Office of the Secretary records of Roderick M. Grant, 1968-1991 (bulk 1977-1983) AAPT Physics Teaching Resource Agents program records, 1983-2007, undated Commission on College Physics records, 1960-1971 Eastern Association of Physics Teachers records, 1895-1979 Physics education Physics societies Professional associations based in the United States Academic organizations based in the United States Organizations established in 1930 Educational organizations based in the United States Teacher associations based in the United States American education-related professional associations 1930 establishments in the United States
American Association of Physics Teachers
[ "Physics" ]
582
[ "Applied and interdisciplinary physics", "Physics education" ]
8,661,211
https://en.wikipedia.org/wiki/Dielectric%20barrier%20discharge
Dielectric-barrier discharge (DBD) is the electrical discharge between two electrodes separated by an insulating dielectric barrier. Originally called silent (inaudible) discharge and also known as ozone production discharge or partial discharge, it was first reported by Ernst Werner von Siemens in 1857. Process The process normally uses high voltage alternating current, ranging from lower RF to microwave frequencies. However, other methods were developed to extend the frequency range all the way down to DC. One method was to use a high resistivity layer to cover one of the electrodes. This is known as the resistive barrier discharge. Another technique, using a semiconductor layer of gallium arsenide (GaAs) to replace the dielectric layer, enables these devices to be driven by a DC voltage between 580 V and 740 V. Construction DBD devices can be made in many configurations, typically planar, using parallel plates separated by a dielectric, or cylindrical, using coaxial plates with a dielectric tube between them. In a common coaxial configuration, the dielectric is shaped in the same form as common fluorescent tubing. It is filled at atmospheric pressure with either a rare gas or rare gas-halide mix, with the glass walls acting as the dielectric barrier. Due to the atmospheric pressure level, such processes require high energy levels to sustain the discharge. Common dielectric materials include glass, quartz, ceramics and polymers. The gap distance between electrodes varies considerably, from less than 0.1 mm in plasma displays, several millimetres in ozone generators and up to several centimetres in CO2 lasers. Depending on the geometry, DBD can be generated in a volume (VDBD) or on a surface (SDBD). For VDBD the plasma is generated between two electrodes, for example between two parallel plates with a dielectric in between. For SDBD the microdischarges are generated on the surface of a dielectric, which results in a more homogeneous plasma than can be achieved using the VDBD configuration. Because the SDBD microdischarges are limited to the surface, their density is higher than in the VDBD case. The plasma is generated on top of the surface of an SDBD plate. To easily ignite VDBD and obtain a uniformly distributed discharge in the gap, a pre-ionization DBD can be used. A particularly compact and economical DBD plasma generator can be built based on the principles of the piezoelectric direct discharge. In this technique, the high voltage is generated with a piezo-transformer, the secondary circuit of which acts also as the high voltage electrode. Since the transformer material is a dielectric, the produced electric discharge resembles the properties of the dielectric barrier discharge. Manipulating the encapsulated electrode and distributing it throughout the dielectric layer has been shown to alter the performance of the dielectric barrier discharge (DBD) plasma actuator. Actuators with a shallow initial electrode are able to more efficiently impart momentum and mechanical power into the flow. Operation In operation, a multitude of random arcs form between the two electrodes when the gap exceeds 1.5 mm during discharges in gases at atmospheric pressure. As the charges collect on the surface of the dielectric, they discharge in microseconds (millionths of a second), leading to their reformation elsewhere on the surface. 
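The charge build-up on the dielectric described above reflects the capacitive nature of a DBD cell. As a rough sketch (all geometry and material values below are assumed, not taken from this article), a planar cell can be modeled as the series combination of the barrier and gas-gap capacitances, which also fixes the field energy a driver must supply at ignition:

```python
# Minimal sketch: planar DBD cell as two capacitors in series, plus the
# field energy stored at an ignition voltage in the 1-10 kV range.
# All geometry and material values are assumed, illustrative numbers.
EPS0 = 8.854e-12          # vacuum permittivity, F/m

def plate_cap(area, gap, eps_r=1.0):
    """Parallel-plate capacitance C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area / gap

area  = 1e-2              # electrode area: 10 cm x 10 cm, m^2 (assumed)
d_bar = 1e-3              # dielectric barrier thickness, m    (assumed)
d_gap = 1e-3              # discharge gap, m                   (assumed)
eps_r = 4.0               # relative permittivity of a glass barrier (assumed)

c_bar  = plate_cap(area, d_bar, eps_r)
c_gap  = plate_cap(area, d_gap)
c_cell = c_bar * c_gap / (c_bar + c_gap)   # series combination

v_ign  = 5e3                               # applied voltage, V (assumed)
energy = 0.5 * c_cell * v_ign**2           # stored field energy, J

print(f"C_barrier = {c_bar*1e12:.0f} pF, C_gap = {c_gap*1e12:.0f} pF, "
      f"C_cell = {c_cell*1e12:.0f} pF")
print(f"stored energy at {v_ign/1e3:.0f} kV = {energy*1e6:.0f} uJ")
```

The hundreds of microjoules stored per charge cycle in this example are what the driving circuits discussed later must either recirculate or recover.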
Similar to other electrical discharge methods, the contained plasma is sustained if the continuous energy source provides the required degree of ionization, overcoming the recombination process that leads to the extinction of the discharge plasma. Such recombinations are directly proportional to the collisions between the molecules and in turn to the pressure of the gas, as explained by Paschen's Law. The discharge process causes the emission of an energetic photon, the frequency and energy of which correspond to the type of gas used to fill the discharge gap. Applications Usage of generated radiation DBDs can be used to generate optical radiation by the relaxation of excited species in the plasma. The main application here is the generation of UV radiation. Such excimer ultraviolet lamps can produce light with short wavelengths which can be used to produce ozone on industrial scales. Ozone is still used extensively in industrial air and water treatment. Early 20th-century attempts at commercial nitric acid and ammonia production used DBDs, as several nitrogen-oxygen compounds are generated as discharge products. Usage of the generated plasma Since the 19th century, DBDs have been known for their decomposition of different gaseous compounds, such as NH3, H2S and CO2. Other modern applications include semiconductor manufacturing, germicidal processes, polymer surface treatment, high-power CO2 lasers typically used for welding and metal cutting, pollution control, plasma display panels, and aerodynamic flow control. The relatively low temperature of DBDs makes them an attractive method of generating plasma at atmospheric pressure. Industry The plasma itself is used to modify or clean (plasma cleaning) surfaces of materials (e.g. polymers, semiconductor surfaces) that can also act as the dielectric barrier, or to modify gases applied further to "soft" plasma cleaning and to increasing the adhesion of surfaces prepared for coating or gluing (flat panel display technologies). A dielectric barrier discharge is one method of plasma treatment of textiles at atmospheric pressure and room temperature. The treatment can be used to modify the surface properties of the textile to improve wettability, improve the absorption of dyes and adhesion, and for sterilization. DBD plasma provides a dry treatment that doesn't generate waste water or require drying of the fabric after treatment. For textile treatment, a DBD system requires a few kilovolts of alternating current, at between 1 and 100 kilohertz. Voltage is applied to insulated electrodes with a millimetre-size gap through which the textile passes. An excimer lamp can be used as a powerful source of short-wavelength ultraviolet light, useful in chemical processes such as surface cleaning of semiconductor wafers. The lamp relies on a dielectric barrier discharge in an atmosphere of xenon and other gases to produce the excimers. Water treatment DBDs provide an additional process, alongside chlorine gas, for the removal of bacteria and organic contaminants from drinking water supplies. Treatment of public swimming baths, aquariums and fish ponds involves the use of ultraviolet radiation produced by a dielectric barrier discharge in xenon gas with glass as the barrier. Surface modification of materials An application where DBDs can be successfully used is to modify the characteristics of a material surface. The modification can target a change in its hydrophilicity, surface activation, the introduction of functional groups, and so on. 
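The photon energies behind the UV applications mentioned above follow directly from the emission wavelength via $E = hc/\lambda$. The sketch below uses the 172 nm $\mathrm{Xe}_2^*$ excimer line as an assumed example; that is a commonly cited xenon excimer wavelength, not a value stated in this article.

```python
# Minimal sketch: photon energy of a short-wavelength excimer emission,
# E = h*c/lambda, using the 172 nm Xe2* line as an assumed example.
H        = 6.626e-34    # Planck constant, J*s
C        = 2.998e8      # speed of light, m/s
EV_IN_J  = 1.602e-19    # J per eV

wavelength = 172e-9                # m (assumed Xe2* excimer line)
energy_j   = H * C / wavelength    # photon energy, J
energy_ev  = energy_j / EV_IN_J

print(f"lambda = 172 nm -> E = {energy_ev:.2f} eV per photon")
# Roughly 7.2 eV, enough to break many chemical bonds, which is why such
# lamps are useful for surface cleaning and ozone generation.
```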
Polymeric surfaces are easily processed using DBDs, which in some cases offer a large processing area. Medicine Dielectric barrier discharges were used to generate relatively large volume diffuse plasmas at atmospheric pressure and applied to inactivate bacteria in the mid-1990s. This eventually led to the development of a new field of applications, the biomedical applications of plasmas. In the field of biomedical application, three main approaches have emerged: direct therapy, surface modification, and plasma polymer deposition. Plasma polymers can control and steer biological–biomaterial interactions (i.e. adhesion, proliferation, and differentiation) or inhibition of bacteria adhesion. Aeronautics Interest in plasma actuators as active flow control devices is growing rapidly due to their lack of mechanical parts, light weight and high response frequency. Properties Due to their nature, these devices have the following properties: capacitive electric load: low power factor in the range of 0.1 to 0.3 high ignition voltage 1–10 kV huge amount of energy stored in the electric field – requirement of energy recovery if the DBD is not driven continuously voltages and currents during the discharge event have a major influence on discharge behaviour (filamented, homogeneous). Operation with continuous sine waves or square waves is mostly used in high power industrial installations. Pulsed operation of DBDs may lead to higher discharge efficiencies. Driving circuits Drivers for this type of electric load are power HF-generators that in many cases contain a transformer for high voltage generation. They resemble the control gear used to operate compact fluorescent lamps or cold cathode fluorescent lamps. The operation mode and the topologies of circuits to operate DBD lamps with continuous sine or square waves are similar to those of standard drivers. In these cases, the energy that is stored in the DBD's capacitance does not have to be recovered to the intermediate supply after each ignition. Instead, it stays within the circuit (oscillating between the DBD's capacitance and at least one inductive component of the circuit) and only the real power that is consumed by the lamp has to be provided by the power supply. By contrast, drivers for pulsed operation suffer from a rather low power factor and in many cases must fully recover the DBD's energy. Since pulsed operation of DBD lamps can lead to increased lamp efficiency, international research has led to suitable circuit concepts. Basic topologies are resonant flyback and resonant half bridge. A flexible circuit that combines the two topologies is given in two patent applications and may be used to adaptively drive DBDs with varying capacitance. An overview of different circuit concepts for the pulsed operation of DBD optical radiation sources is given in "Resonant Behaviour of Pulse Generators for the Efficient Drive of Optical Radiation Sources Based on Dielectric Barrier Discharges". References Electrical phenomena Electricity Electrostatics
Dielectric barrier discharge
[ "Physics" ]
1,973
[ "Physical phenomena", "Electrical phenomena" ]
8,663,363
https://en.wikipedia.org/wiki/Cabozoa
In the classification of eukaryotes (living organisms with a cell nucleus), Cabozoa was a taxon proposed by Cavalier-Smith. It was a putative clade comprising the Rhizaria and Excavata. More recent research places the Rhizaria with the Alveolata and Stramenopiles instead of the Excavata, however, so "Cabozoa" is polyphyletic. See also Corticata References Obsolete eukaryote taxa Bikont unranked clades
Cabozoa
[ "Biology" ]
113
[ "Eukaryotes", "Eukaryote taxa" ]
8,666,134
https://en.wikipedia.org/wiki/Physcomitrella%20patens
Physcomitrella patens is a synonym of Physcomitrium patens, the spreading earthmoss. It is a moss, a bryophyte used as a model organism for studies on plant evolution, development, and physiology. Distribution and ecology Physcomitrella patens is an early colonist of exposed mud and earth around the edges of pools of water. P. patens has a disjunct distribution in temperate parts of the world, with the exception of South America. The standard laboratory strain is the "Gransden" isolate, collected by H. Whitehouse from Gransden Wood, in Cambridgeshire in 1962. Model organism Mosses share fundamental genetic and physiological processes with vascular plants, although the two lineages diverged early in land-plant evolution. A comparative study between modern representatives of the two lines may give insight into the evolution of mechanisms that contribute to the complexity of modern plants. In this context, P. patens is used as a model organism. P. patens is one of a few known multicellular organisms with highly efficient homologous recombination, meaning that an exogenous DNA sequence can be targeted to a specific genomic position (a technique called gene targeting) to create knockout mosses. This approach is called reverse genetics and it is a powerful and sensitive tool to study the function of genes and, when combined with studies in higher plants such as Arabidopsis thaliana, can be used to study molecular plant evolution. The targeted deletion or alteration of moss genes relies on the integration of a short DNA strand at a defined position in the genome of the host cell. Both ends of this DNA strand are engineered to be identical to this specific gene locus. The DNA construct is then incubated with moss protoplasts in the presence of polyethylene glycol. As mosses are haploid organisms, the regenerating moss filaments (protonemata) can be directly assayed for gene targeting within 6 weeks using PCR methods. The first study using knockout moss appeared in 1998 and functionally identified ftsZ as a pivotal gene for the division of an organelle in a eukaryote. In addition, P. patens is increasingly used in biotechnology. Examples are the identification of moss genes with implications for crop improvement or human health and the safe production of complex biopharmaceuticals in moss bioreactors. By multiple gene knockout, Physcomitrella plants were engineered that lack plant-specific post-translational protein glycosylation. These knockout mosses are used to produce complex biopharmaceuticals in a process called molecular farming. The genome of P. patens, with about 500 megabase pairs organized into 27 chromosomes, was completely sequenced in 2008. Physcomitrella ecotypes, mutants, and transgenics are stored and made freely available to the scientific community by the International Moss Stock Center (IMSC). The accession numbers given by the IMSC can be used for publications to ensure safe deposit of newly described moss materials. Lifecycle Like all mosses, the lifecycle of P. patens is characterized by an alternation of two generations: a haploid gametophyte that produces gametes and a diploid sporophyte where haploid spores are produced. A spore develops into a filamentous structure called protonema, composed of two types of cells – chloronema with large and numerous chloroplasts and caulonema with very fast growth. Protonema filaments grow exclusively by tip growth of their apical cells and can originate side branches from subapical cells. 
Some side-branch initial cells can differentiate into buds rather than side branches. These buds give rise to gametophores (0.5–5.0 mm), more complex structures bearing leaf-like structures, rhizoids, and the sexual organs: female archegonia and male antheridia. P. patens is monoicous, meaning that male and female organs are produced in the same plant. If water is available, flagellate sperm cells can swim from the antheridia to an archegonium and fertilize the egg within. The resulting diploid zygote develops into a sporophyte composed of a foot, seta, and capsule, where thousands of haploid spores are produced by meiosis. DNA repair and homologous recombination P. patens is an excellent model in which to analyze repair of DNA damages in plants by the homologous recombination pathway. Failure to repair double-strand breaks and other DNA damages in somatic cells by homologous recombination can lead to cell dysfunction or death, and when failure occurs during meiosis, it can cause loss of gametes. The genome sequence of P. patens has revealed the presence of numerous genes that encode proteins necessary for repair of DNA damages by homologous recombination and by other pathways. PpRAD51, a protein at the core of the homologous recombination repair reaction, is required to preserve genome integrity in P. patens. Loss of PpRAD51 causes marked hypersensitivity to the double-strand break-inducing agent bleomycin, indicating that homologous recombination is used for repair of somatic cell DNA damages. PpRAD51 is also essential for resistance to ionizing radiation. The DNA mismatch repair protein PpMSH2 is a central component of the P. patens mismatch repair pathway that targets base pair mismatches arising during homologous recombination. The PpMsh2 gene is necessary in P. patens to preserve genome integrity. Genes Ppmre11 and Pprad50 of P. patens encode components of the MRN complex, the principal sensor of DNA double-strand breaks. These genes are necessary for accurate homologous recombinational repair of DNA damages in P. patens. Mutant plants defective in either Ppmre11 or Pprad50 exhibit severely restricted growth and development (possibly reflecting accelerated senescence), and enhanced sensitivity to UV-B and bleomycin-induced DNA damage compared to wild-type plants. Taxonomy P. patens was first described by Johann Hedwig in his 1801 work, under the name Phascum patens. Physcomitrella is sometimes treated as a synonym of the genus Aphanorrhegma, in which case P. patens is known as Aphanorrhegma patens. The generic name Physcomitrella implies a resemblance to Physcomitrium, which is named for its large calyptra, unlike that of Physcomitrella. In 2019 it was proposed that the correct name for this moss is Physcomitrium patens. References Further reading External links cosmoss.org - moss transcriptome and genome resource including genome browser The Japanese Physcomitrella transcriptome resource (Physcobase) The NCBI Physcomitrella patens genome project page JGI genome browser The moss Physcomitrella patens gives insights into RNA interference in plants A small moss turns professional Physcomitrella patens facts, developmental stages, organs at GeoChemBio Plant models Funariales Plants described in 1801 Taxa named by Philipp Bruch
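The gene-targeting logic described above, in which homology arms direct a construct to a specific locus, can be illustrated with a toy string-replacement model. All sequences and names below are hypothetical, and real experiments verify integration by PCR rather than by string search:

```python
# Toy model of homologous-recombination gene targeting: a cassette flanked by
# 5' and 3' homology arms replaces the region between the matching arms in a
# genome. Purely illustrative; sequences are hypothetical, not P. patens loci.
def target_gene(genome: str, arm5: str, arm3: str, cassette: str) -> str:
    """Replace the region between the homology arms with the cassette."""
    i = genome.find(arm5)
    j = genome.find(arm3, i + len(arm5)) if i != -1 else -1
    if i == -1 or j == -1:
        raise ValueError("homology arms do not match the genome")
    return genome[:i + len(arm5)] + cassette + genome[j:]

genome   = "ATGGCCTTAGGCGATTACAGGTTTCCAGA"   # hypothetical locus
arm5     = "ATGGCCTTA"                       # hypothetical 5' homology arm
arm3     = "TTTCCAGA"                        # hypothetical 3' homology arm
cassette = "CCCCGGGG"                        # stand-in for a marker cassette

knockout = target_gene(genome, arm5, arm3, cassette)
print(knockout)                              # ATGGCCTTACCCCGGGGTTTCCAGA
# Crude stand-in for a PCR junction check:
print("junction present:", (arm5 + cassette) in knockout)   # True
```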
Physcomitrella patens
[ "Biology" ]
1,527
[ "Model organisms", "Plant models" ]