Dataset columns (field: type, observed range):
id: int64, 39 to 79M
url: string, length 32 to 168
text: string, length 7 to 145k
source: string, length 2 to 105
categories: list, length 1 to 6
token_count: int64, 3 to 32.2k
subcategories: list, length 0 to 27
4,781,981
https://en.wikipedia.org/wiki/Steven%20Strogatz
Steven Henry Strogatz (born August 13, 1959) is an American mathematician and author, and the Susan and Barton Winokur Distinguished Professor for the Public Understanding of Science and Mathematics at Cornell University. He is known for his work on nonlinear systems, including contributions to the study of synchronization in dynamical systems, and for his research in a variety of areas of applied mathematics, including mathematical biology and complex network theory. Strogatz is the host of Quanta Magazine's The Joy of Why podcast. He previously hosted The Joy of x podcast, named after his book of the same name. His published books include Sync, The Joy of x, The Calculus of Friendship, and Infinite Powers. Education Strogatz attended high school at Loomis Chaffee from 1972 to 1976. He then attended Princeton University, graduating summa cum laude with a B.A. in mathematics. Strogatz completed his senior thesis, titled "The mathematics of supercoiled DNA: an essay in geometric biology", under the supervision of Frederick J. Almgren, Jr. Strogatz then studied as a Marshall Scholar at Trinity College, Cambridge, from 1980 to 1982, and then received a Ph.D. in applied mathematics from Harvard University in 1986 for his research on the dynamics of the human sleep-wake cycle. He completed his postdoc under Nancy Kopell at Boston University. Career After spending three years as a National Science Foundation Postdoctoral Fellow at Harvard and Boston University, Strogatz joined the faculty of the department of mathematics at MIT in 1989. His research on dynamical systems was recognized with a Presidential Young Investigator Award from the National Science Foundation in 1990. In 1994 he moved to Cornell, where he is a professor of mathematics. From 2007 to 2023 he was the Jacob Gould Schurman Professor of Applied Mathematics, and in 2023 he was named the inaugural holder of the Susan and Barton Winokur Distinguished Professorship for the Public Understanding of Science and Mathematics. From 2004 to 2010, he was also on the external faculty of the Santa Fe Institute. Research Early in his career, Strogatz worked on a variety of problems in mathematical biology, including the geometry of supercoiled DNA, the topology of three-dimensional chemical waves, and the collective behavior of biological oscillators, such as swarms of synchronously flashing fireflies. In the 1990s, his work focused on nonlinear dynamics and chaos applied to physics, engineering, and biology. Several of these projects dealt with coupled oscillators, such as lasers, superconducting Josephson junctions, and crickets that chirp in unison. His more recent work examines complex systems and their consequences in everyday life, such as the role of crowd synchronization in the wobbling of London's Millennium Bridge on its opening day, and the dynamics of structural balance in social systems. Perhaps his best-known research contribution is his 1998 Nature paper with Duncan Watts, entitled "Collective dynamics of 'small-world' networks". This paper is widely regarded as a foundational contribution to the interdisciplinary field of complex networks, whose applications reach from graph theory and statistical physics to sociology, business, epidemiology, and neuroscience. As one measure of its importance, it was the most highly cited article about networks between 1998 and 2008, and the sixth most highly cited paper in all of physics.
It has now been cited more than 50,000 times, according to Google Scholar; as of 17 October 2014, it was the 63rd most highly cited research article of all time. Writing and outreach Strogatz's writing for the general public includes four books and frequent newspaper articles. His book Sync was chosen as a Best Book of 2003 by Discover Magazine. His 2009 book The Calculus of Friendship was called "a genuine tearjerker" and "part biography, part autobiography and part off-the-beaten-path guide to calculus." His 2012 book, The Joy of x, won the 2014 Euler Book Prize. It grew out of his series of New York Times columns on the elements of mathematics. Harvard Business Review described these columns as "a model for how mathematics needs to be popularized" and as "must reads for entrepreneurs and executives who grasp that mathematics is now the lingua franca of serious business analysis." Strogatz's second New York Times series, "Me, Myself and Math", appeared in the fall of 2012. His most recent book, Infinite Powers, was shortlisted for the Royal Society Insight Investment Science Book Prize and was a New York Times Best Seller. Published in 2019, it "evocatively conveys how calculus illuminates the patterns of the Universe, large and small," according to a review in Nature. In 2020 Strogatz began hosting a podcast for Quanta Magazine called "The Joy of x", in which he chats "with a wide range of scientists about their lives and work." Awards Strogatz is a Member of the National Academy of Sciences and a Fellow of the Society for Industrial and Applied Mathematics, the American Academy of Arts and Sciences, the American Physical Society, and the American Mathematical Society. Strogatz has been lauded for his ability as a teacher and communicator. In 1991 he was honored with the E. M. Baker Memorial Award for Excellence in Undergraduate Teaching, MIT's only institute-wide teaching award selected and awarded solely by students. He has also won several teaching awards at Cornell, including Cornell's highest undergraduate teaching prize, the Stephen H. Weiss Presidential Fellowship (2016). At the national level, Strogatz received the JPBM Communications Award in 2007. Presented annually, this award recognizes outstanding achievement in communicating about mathematics to nonmathematicians. The JPBM represents the American Mathematical Society, the American Statistical Association, the Mathematical Association of America, and the Society for Industrial and Applied Mathematics. In 2013 he received the AAAS Public Engagement with Science Award for "his exceptional commitment to and passion for conveying the beauty and importance of mathematics to the general public." Strogatz was selected to be the 2009 Rouse Ball Lecturer at Cambridge and an MIT Mathematics 2011 Simons lecturer. In 2014 he was awarded the Euler Book Prize by the Mathematical Association of America for The Joy of x. The award citation describes the book as "a masterpiece of expository writing" and remarks that it is "directed to the millions of readers who claim they never really understood what the mathematics they studied was all about, for whom math was a series of techniques to be mastered for no apparent reason." Along with Ian Stewart, Strogatz was awarded the 2015 Lewis Thomas Prize for Writing about Science. References External links Profile Edge Living people 1959 births Complex systems scientists Chaos theorists Cornell University faculty Alumni of Trinity College, Cambridge Princeton University alumni Harvard John A.
Paulson School of Engineering and Applied Sciences alumni Marshall Scholars 20th-century American mathematicians 21st-century American mathematicians Mathematics popularizers Fellows of the Society for Industrial and Applied Mathematics Fellows of the American Academy of Arts and Sciences Fellows of the American Physical Society Fellows of the American Mathematical Society Loomis Chaffee School alumni Dynamical systems theorists American network scientists
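The small-world construction from the 1998 Watts–Strogatz paper described above can be reproduced in a few lines. A minimal sketch using the networkx library (the graph size, neighbor count, and rewiring probability are illustrative choices, not values from the paper):

```python
# Watts-Strogatz small-world model: start from a ring lattice where each
# node links to its k nearest neighbors, then rewire each edge with
# probability p. Small p yields high clustering with short path lengths.
import networkx as nx

n, k, p = 1000, 10, 0.1   # illustrative parameters
G = nx.connected_watts_strogatz_graph(n, k, p, seed=42)

# The "small-world" signature: clustering stays close to that of the
# ring lattice, while the average shortest path drops sharply.
print("clustering coefficient:", nx.average_clustering(G))
print("average shortest path:", nx.average_shortest_path_length(G))
```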
Steven Strogatz
[ "Mathematics" ]
1,477
[ "Dynamical systems theorists", "Dynamical systems" ]
1,871,162
https://en.wikipedia.org/wiki/Spin%E2%80%93orbit%20interaction
In quantum mechanics, the spin–orbit interaction (also called spin–orbit effect or spin–orbit coupling) is a relativistic interaction of a particle's spin with its motion inside a potential. A key example of this phenomenon is the spin–orbit interaction leading to shifts in an electron's atomic energy levels, due to electromagnetic interaction between the electron's magnetic dipole, its orbital motion, and the electrostatic field of the positively charged nucleus. This phenomenon is detectable as a splitting of spectral lines, which can be thought of as a Zeeman effect product of two effects: the apparent magnetic field seen from the electron perspective due to special relativity and the magnetic moment of the electron associated with its intrinsic spin due to quantum mechanics. For atoms, energy level splitting produced by the spin–orbit interaction is usually of the same order in size as the relativistic corrections to the kinetic energy and the zitterbewegung effect. The addition of these three corrections is known as the fine structure. The interaction between the magnetic field created by the electron and the magnetic moment of the nucleus is a slighter correction to the energy levels known as the hyperfine structure. A similar effect, due to the relationship between angular momentum and the strong nuclear force, occurs for protons and neutrons moving inside the nucleus, leading to a shift in their energy levels in the nucleus shell model. In the field of spintronics, spin–orbit effects for electrons in semiconductors and other materials are explored for technological applications. The spin–orbit interaction is at the origin of magnetocrystalline anisotropy and the spin Hall effect. In atomic energy levels This section presents a relatively simple and quantitative description of the spin–orbit interaction for an electron bound to a hydrogen-like atom, up to first order in perturbation theory, using some semiclassical electrodynamics and non-relativistic quantum mechanics. This gives results that agree reasonably well with observations. A rigorous calculation of the same result would use relativistic quantum mechanics, using the Dirac equation, and would include many-body interactions. Achieving an even more precise result would involve calculating small corrections from quantum electrodynamics. Energy of a magnetic moment The energy of a magnetic moment in a magnetic field is given by $\Delta H = -\boldsymbol{\mu} \cdot \mathbf{B}$, where $\boldsymbol{\mu}$ is the magnetic moment of the particle, and $\mathbf{B}$ is the magnetic field it experiences. Magnetic field We shall deal with the magnetic field first. Although in the rest frame of the nucleus, there is no magnetic field acting on the electron, there is one in the rest frame of the electron (see classical electromagnetism and special relativity). Ignoring for now that this frame is not inertial, we end up with the equation $\mathbf{B} = -\frac{\mathbf{v} \times \mathbf{E}}{c^2}$, where $\mathbf{v}$ is the velocity of the electron, and $\mathbf{E}$ is the electric field it travels through. Here, in the non-relativistic limit, we assume that the Lorentz factor $\gamma \approx 1$. Now we know that $\mathbf{E}$ is radial, so we can rewrite $\mathbf{E} = \frac{|E|}{r}\,\mathbf{r}$. Also we know that the momentum of the electron is $\mathbf{p} = m_e \mathbf{v}$. Substituting these and changing the order of the cross product (using the identity $\mathbf{v} \times \mathbf{E} = -\mathbf{E} \times \mathbf{v}$) gives $\mathbf{B} = \frac{\mathbf{r} \times \mathbf{p}}{m_e c^2}\,\frac{|E|}{r}$. Next, we express the electric field as the gradient of the electric potential, $\mathbf{E} = -\nabla V$. Here we make the central field approximation, that is, that the electrostatic potential is spherically symmetric, so $V$ is only a function of radius. This approximation is exact for hydrogen and hydrogen-like systems.
Now we can say that $|E| = \frac{1}{e}\frac{\partial U(r)}{\partial r}$, where $U$ is the potential energy of the electron in the central field, and $e$ is the elementary charge. Now we remember from classical mechanics that the angular momentum of a particle is $\mathbf{L} = \mathbf{r} \times \mathbf{p}$. Putting it all together, we get $\mathbf{B} = \frac{1}{m_e e c^2}\,\frac{1}{r}\,\frac{\partial U(r)}{\partial r}\,\mathbf{L}$. It is important to note at this point that $\mathbf{B}$ is a positive number multiplied by $\mathbf{L}$, meaning that the magnetic field is parallel to the orbital angular momentum of the particle, which is itself perpendicular to the particle's velocity. Spin magnetic moment of the electron The spin magnetic moment of the electron is $\boldsymbol{\mu}_S = -g_s \mu_B \frac{\mathbf{S}}{\hbar}$, where $\mathbf{S}$ is the spin (or intrinsic angular-momentum) vector, $\mu_B$ is the Bohr magneton, and $g_s \approx 2$ is the electron-spin g-factor. Here $\boldsymbol{\mu}_S$ is a negative constant multiplied by the spin, so the spin magnetic moment is antiparallel to the spin. The spin–orbit potential consists of two parts. The Larmor part is connected to the interaction of the spin magnetic moment of the electron with the magnetic field of the nucleus in the co-moving frame of the electron. The second contribution is related to Thomas precession. Larmor interaction energy The Larmor interaction energy is $\Delta H_L = -\boldsymbol{\mu} \cdot \mathbf{B}$. Substituting in this equation expressions for the spin magnetic moment and the magnetic field, one gets $\Delta H_L = \frac{g_s \mu_B}{\hbar}\,\frac{1}{m_e e c^2}\,\frac{1}{r}\,\frac{\partial U(r)}{\partial r}\,\mathbf{L} \cdot \mathbf{S}$. Now we have to take into account the Thomas precession correction for the electron's curved trajectory. Thomas interaction energy In 1926 Llewellyn Thomas relativistically recomputed the doublet separation in the fine structure of the atom. The Thomas precession rate $\boldsymbol{\Omega}_T$ is related to the angular frequency of the orbital motion $\boldsymbol{\omega}$ of a spinning particle as follows: $\boldsymbol{\Omega}_T = \boldsymbol{\omega}\,(1 - \gamma)$, where $\gamma$ is the Lorentz factor of the moving particle. The Hamiltonian producing the spin precession is given by $\Delta H_T = \boldsymbol{\Omega}_T \cdot \mathbf{S}$. To the first order in $(v/c)^2$, we obtain $\Delta H_T = -\frac{\mu_B}{\hbar}\,\frac{1}{m_e e c^2}\,\frac{1}{r}\,\frac{\partial U(r)}{\partial r}\,\mathbf{L} \cdot \mathbf{S}$. Total interaction energy The total spin–orbit potential in an external electrostatic potential takes the form $\Delta H = \Delta H_L + \Delta H_T = \frac{(g_s - 1)\,\mu_B}{\hbar}\,\frac{1}{m_e e c^2}\,\frac{1}{r}\,\frac{\partial U(r)}{\partial r}\,\mathbf{L} \cdot \mathbf{S}$. The net effect of Thomas precession is the reduction of the Larmor interaction energy by a factor of about 1/2, which came to be known as the Thomas half. Evaluating the energy shift Thanks to all the above approximations, we can now evaluate the detailed energy shift in this model. Note that $\mathbf{L}$ and $\mathbf{S}$ are no longer conserved quantities. In particular, we wish to find a new basis that diagonalizes both $H_0$ (the non-perturbed Hamiltonian) and $\Delta H$. To find out what basis this is, we first define the total angular momentum operator $\mathbf{J} = \mathbf{L} + \mathbf{S}$. Taking the dot product of this with itself, we get $J^2 = L^2 + S^2 + 2\,\mathbf{L} \cdot \mathbf{S}$ (since $\mathbf{L}$ and $\mathbf{S}$ commute), and therefore $\mathbf{L} \cdot \mathbf{S} = \frac{1}{2}\left(J^2 - L^2 - S^2\right)$. It can be shown that the five operators $H_0$, $J^2$, $L^2$, $S^2$, and $J_z$ all commute with each other and with $\Delta H$. Therefore, the basis we were looking for is the simultaneous eigenbasis of these five operators (i.e., the basis where all five are diagonal). Elements of this basis have the five quantum numbers: $n$ (the "principal quantum number"), $j$ (the "total angular momentum quantum number"), $l$ (the "orbital angular momentum quantum number"), $s$ (the "spin quantum number"), and $j_z$ (the "$z$ component of total angular momentum"). To evaluate the energies, we note that $\left\langle \frac{1}{r^3} \right\rangle = \frac{2}{a^3 n^3\, l(l+1)(2l+1)}$ for hydrogenic wavefunctions (here $a = \hbar/(Z m_e c \alpha)$ is the Bohr radius divided by the nuclear charge $Z$); and $\left\langle \mathbf{L} \cdot \mathbf{S} \right\rangle = \frac{\hbar^2}{2}\bigl[j(j+1) - l(l+1) - s(s+1)\bigr]$. Final energy shift We can now say that $\Delta E = \frac{\beta}{2}\bigl[j(j+1) - l(l+1) - s(s+1)\bigr]$, where the spin-orbit coupling constant is $\beta = \frac{Z^4 \mu_0}{4\pi}\,g_s \mu_B^2\,\frac{1}{n^3 a_0^3\; l\,(l+\frac{1}{2})\,(l+1)}$. For the hydrogen 2p level, for example, this gives a splitting of $\frac{3}{2}\beta \approx 4.5 \times 10^{-5}$ eV between the $j = \frac{3}{2}$ and $j = \frac{1}{2}$ states. For the exact relativistic result, see the solutions to the Dirac equation for a hydrogen-like atom. The derivation above calculates the interaction energy in the (momentaneous) rest frame of the electron, and in this reference frame there is a magnetic field that is absent in the rest frame of the nucleus. Another approach is to calculate it in the rest frame of the nucleus, see for example George P.
Fisher: Electric Dipole Moment of a Moving Magnetic Dipole (1971). However, the rest frame calculation is sometimes avoided, because one has to account for hidden momentum. In solids A crystalline solid (semiconductor, metal etc.) is characterized by its band structure. While on the overall scale (including the core levels) the spin–orbit interaction is still a small perturbation, it may play a relatively more important role if we zoom in to bands close to the Fermi level ($E_F$). The atomic (spin–orbit) interaction, for example, splits bands that would be otherwise degenerate, and the particular form of this spin–orbit splitting (typically of the order of a few to a few hundred millielectronvolts) depends on the particular system. The bands of interest can be then described by various effective models, usually based on some perturbative approach. An example of how the atomic spin–orbit interaction influences the band structure of a crystal is explained in the article about Rashba and Dresselhaus interactions. In crystalline solids containing paramagnetic ions, e.g. ions with unclosed d or f atomic subshells, localized electronic states exist. In this case, the atomic-like electronic level structure is shaped by intrinsic magnetic spin–orbit interactions and interactions with crystalline electric fields. Such a structure is named the fine electronic structure. For rare-earth ions the spin–orbit interactions are much stronger than the crystal electric field (CEF) interactions. The strong spin–orbit coupling makes $J$ a relatively good quantum number, because the first excited multiplet is at least ~130 meV (1500 K) above the primary multiplet. The result is that filling it at room temperature (300 K) is negligibly small. In this case, a $(2J+1)$-fold degenerate primary multiplet split by an external CEF can be treated as the basic contribution to the analysis of such systems' properties. In the case of approximate calculations in the $|J, J_z\rangle$ basis, to determine which is the primary multiplet, the Hund principles, known from atomic physics, are applied: The ground state of the terms' structure has the maximal value of $S$ allowed by the Pauli exclusion principle. The ground state has the maximal allowed value of $L$, with maximal $S$. The primary multiplet has a corresponding $J = |L - S|$ when the shell is less than half full, and $J = L + S$ where the fill is greater. The $S$, $L$ and $J$ of the ground multiplet are determined by Hund's rules. The ground multiplet is $(2J + 1)$-fold degenerate; its degeneracy is removed by CEF interactions and magnetic interactions. CEF interactions and magnetic interactions resemble, somehow, the Stark and the Zeeman effects known from atomic physics. The energies and eigenfunctions of the discrete fine electronic structure are obtained by diagonalization of the $(2J + 1)$-dimensional matrix. The fine electronic structure can be directly detected by many different spectroscopic methods, including inelastic neutron scattering (INS) experiments. The case of strong cubic CEF interactions (for 3d transition-metal ions) forms a group of levels (e.g. T2g, A2g), which are partially split by spin–orbit interactions and (if they occur) lower-symmetry CEF interactions. The energies and eigenfunctions of the discrete fine electronic structure (for the lowest term) are obtained by diagonalization of the $(2L + 1)(2S + 1)$-dimensional matrix. At zero temperature (T = 0 K) only the lowest state is occupied. The magnetic moment at T = 0 K is equal to the moment of the ground state. It allows the evaluation of the total, spin and orbital moments.
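The diagonalization just described can be made concrete with a toy calculation. A minimal numpy sketch for a Kramers ion with J = 5/2 and a single axial CEF term B20·O20 (the J value and the B20 coefficient are arbitrary illustrative choices, not parameters of any particular ion):

```python
# Toy fine-electronic-structure calculation: diagonalize a (2J+1)-dimensional
# Hamiltonian for J = 5/2 with one axial crystal-field term, H = B20 * O20,
# where O20 = 3*Jz^2 - J(J+1) is the simplest Stevens operator.
import numpy as np

J = 2.5
B20 = 0.1                              # illustrative CEF coefficient (meV)
mj = np.arange(J, -J - 1, -1)          # m_J = 5/2, 3/2, ..., -5/2
Jz = np.diag(mj)
O20 = 3 * Jz @ Jz - J * (J + 1) * np.eye(int(2 * J + 1))

H = B20 * O20
energies = np.linalg.eigvalsh(H)
print(np.round(energies, 3))           # three Kramers doublets, as expected
```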
The eigenstates and corresponding eigenfunctions can be found from direct diagonalization of the Hamiltonian matrix containing the crystal field and spin–orbit interactions. Taking into consideration the thermal population of states, the thermal evolution of the single-ion properties of the compound is established. This technique is based on equivalent operator theory, defined as the supplement of CEF theory by thermodynamic and analytical calculations. Examples of effective Hamiltonians Hole bands of a bulk (3D) zinc-blende semiconductor will be split by the spin–orbit interaction into heavy and light holes (which form a $\Gamma_8$ quadruplet at the $\Gamma$-point of the Brillouin zone) and a split-off band (a $\Gamma_7$ doublet). Including two conduction bands (a $\Gamma_6$ doublet at the $\Gamma$-point), the system is described by the effective eight-band model of Kohn and Luttinger. If only the top of the valence band is of interest (for example when the Fermi level, measured from the top of the valence band, is small compared with the split-off energy), the proper four-band effective model is the Kohn–Luttinger Hamiltonian $H_{KL} = -\frac{\hbar^2}{2m}\left[\left(\gamma_1 + \frac{5}{2}\gamma_2\right)k^2 - 2\gamma_2\,(\mathbf{k} \cdot \mathbf{J})^2\right]$ (written here in the spherical approximation), where $\gamma_1, \gamma_2$ are the Luttinger parameters (analogous to the single effective mass of a one-band model of electrons) and $J_x, J_y, J_z$ are angular momentum 3/2 matrices ($m$ is the free electron mass). In combination with magnetization, this type of spin–orbit interaction will distort the electronic bands depending on the magnetization direction, thereby causing magnetocrystalline anisotropy (a special type of magnetic anisotropy). If the semiconductor moreover lacks the inversion symmetry, the hole bands will exhibit cubic Dresselhaus splitting. Within the four bands (light and heavy holes), the dominant term is a cubic-in-$\mathbf{k}$ Dresselhaus term $H_{D3}$, whose material parameter for GaAs is given on p. 72 of Winkler's book (according to more recent data the Dresselhaus constant in GaAs is 9 eVÅ3); the total Hamiltonian will be $H_{KL} + H_{D3}$. A two-dimensional electron gas in an asymmetric quantum well (or heterostructure) will feel the Rashba interaction. The appropriate two-band effective Hamiltonian is $H = H_0 + H_R = \frac{\hbar^2 k^2}{2m^*}\,\sigma_0 + \alpha\,(\sigma_x k_y - \sigma_y k_x)$, where $\sigma_0$ is the 2 × 2 identity matrix, $\sigma_{x,y}$ the Pauli matrices and $m^*$ the electron effective mass. The spin–orbit part of the Hamiltonian, $H_R$, is parametrized by $\alpha$, sometimes called the Rashba parameter (its definition somewhat varies), which is related to the structure asymmetry. The above expressions for the spin–orbit interaction couple the spin matrices $\mathbf{J}$ and $\boldsymbol{\sigma}$ to the quasi-momentum $\mathbf{k}$, and to the vector potential $\mathbf{A}$ of an AC electric field through the Peierls substitution $\mathbf{k} \to \mathbf{k} + \frac{e}{\hbar}\mathbf{A}$ (for the electron charge $-e$). They are lower order terms of the Luttinger–Kohn k·p perturbation theory in powers of $k$. Next terms of this expansion also produce terms that couple spin operators to the electron coordinate $\mathbf{r}$. Indeed, a cross product $\boldsymbol{\sigma} \times \mathbf{k}$ is invariant with respect to time inversion. In cubic crystals, it has the symmetry of a vector and acquires the meaning of a spin–orbit contribution to the operator of coordinate. For electrons in semiconductors with a narrow gap between the conduction and heavy hole bands, Yafet derived the corresponding equation, in which $m_0$ is the free electron mass and $g$ is a g-factor properly renormalized for the spin–orbit interaction. This operator couples the electron spin directly to the electric field through the corresponding interaction energy. Oscillating electromagnetic field Electric dipole spin resonance (EDSR) is the coupling of the electron spin with an oscillating electric field.
Similar to electron spin resonance (ESR), in which electrons can be excited with an electromagnetic wave with the energy given by the Zeeman effect, in EDSR the resonance can be achieved if the frequency is related to the energy band splitting given by the spin–orbit coupling in solids. While in ESR the coupling is obtained via the magnetic part of the EM wave with the electron magnetic moment, EDSR is the coupling of the electric part with the spin and motion of the electrons. This mechanism has been proposed for controlling the spin of electrons in quantum dots and other mesoscopic systems. See also Angular momentum coupling Angular momentum diagrams (quantum mechanics) Electric dipole spin resonance Kugel–Khomskii coupling Lamb shift Relativistic angular momentum Spherical basis Stark effect Footnotes References Textbooks Further reading Atomic physics Magnetism Spintronics
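To make the Rashba model above concrete, here is a short numpy sketch that diagonalizes the two-band Hamiltonian at one k-point and checks it against the analytic dispersion E±(k) = ħ²k²/(2m*) ± α|k|. The effective mass and Rashba parameter are illustrative values of a typical order of magnitude, not data for a specific heterostructure:

```python
# Diagonalize the two-band Rashba Hamiltonian at a single k-point and
# compare with the analytic spin-split dispersion.
import numpy as np

eV = 1.602176634e-19          # J
hbar = 1.054571817e-34        # J s
m_e = 9.1093837015e-31        # kg
m_star = 0.05 * m_e           # illustrative effective mass
alpha = 0.1 * eV * 1e-10      # illustrative Rashba parameter, 0.1 eV Angstrom

kx, ky = 2e8, 0.0             # wavevector components in 1/m
k = np.hypot(kx, ky)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
H = (hbar**2 * k**2 / (2 * m_star)) * np.eye(2) + alpha * (sx * ky - sy * kx)

E_num = np.linalg.eigvalsh(H) / eV                    # numeric, in eV
E_ana = (hbar**2 * k**2 / (2 * m_star)
         + np.array([-1, 1]) * alpha * k) / eV        # analytic E-(k), E+(k)
print(E_num, E_ana)           # the two agree: a spin-split doublet
```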
Spin–orbit interaction
[ "Physics", "Chemistry", "Materials_science" ]
3,046
[ "Spintronics", "Quantum mechanics", "Atomic physics", " molecular", "Condensed matter physics", "Atomic", " and optical physics" ]
1,872,457
https://en.wikipedia.org/wiki/Phorone
Phorone, or diisopropylidene acetone, is a yellow crystalline substance with a geranium odor, with formula ((CH3)2C=CH)2CO or C9H14O. Preparation It was first obtained in 1837 in impure form by the French chemist Auguste Laurent, who called it "camphoryle". In 1849, the French chemist Charles Frédéric Gerhardt and his student Jean Pierre Liès-Bodart prepared it in a pure state and named it "phorone". On both occasions it was produced by ketonization through the dry distillation of the calcium salt of camphoric acid. It is now typically obtained by the acid-catalysed twofold aldol condensation of three molecules of acetone. Mesityl oxide is obtained as an intermediate and can be isolated. Crude phorone can be purified by repeated recrystallization from ethanol or ether, in which it is soluble. Reactions Phorone can condense with ammonia to form triacetone amine. See also Isophorone References Merck Index, 11th Edition, 7307. External links International Chemical Safety Cards Ketones Alkene derivatives
Phorone
[ "Chemistry" ]
231
[ "Ketones", "Functional groups" ]
1,873,277
https://en.wikipedia.org/wiki/Residual%20stress
In materials science and solid mechanics, residual stresses are stresses that remain in a solid material after the original cause of the stresses has been removed. Residual stress may be desirable or undesirable. For example, laser peening imparts deep beneficial compressive residual stresses into metal components such as turbine engine fan blades, and it is used in toughened glass to allow for large, thin, crack- and scratch-resistant glass displays on smartphones. However, unintended residual stress in a designed structure may cause it to fail prematurely. Residual stresses can result from a variety of mechanisms including inelastic (plastic) deformations, temperature gradients (during thermal cycle) or structural changes (phase transformation). Heat from welding may cause localized expansion, which is taken up during welding by either the molten metal or the placement of parts being welded. When the finished weldment cools, some areas cool and contract more than others, leaving residual stresses. Another example occurs during semiconductor fabrication and microsystem fabrication when thin film materials with different thermal and crystalline properties are deposited sequentially under different process conditions. The stress variation through a stack of thin film materials can be very complex and can vary between compressive and tensile stresses from layer to layer. Applications While uncontrolled residual stresses are undesirable, some designs rely on them. In particular, brittle materials can be toughened by including compressive residual stress, as in the case for toughened glass and pre-stressed concrete. The predominant mechanism for failure in brittle materials is brittle fracture, which begins with initial crack formation. When an external tensile stress is applied to the material, the crack tips concentrate stress, increasing the local tensile stresses experienced at the crack tips to a greater extent than the average stress on the bulk material. This causes the initial crack to enlarge quickly (propagate) as the surrounding material is overwhelmed by the stress concentration, leading to fracture. A material having compressive residual stress helps to prevent brittle fracture because the initial crack is formed under compressive (negative tensile) stress. To cause brittle fracture by crack propagation of the initial crack, the external tensile stress must overcome the compressive residual stress before the crack tips experience sufficient tensile stress to propagate. The manufacture of some swords utilises a gradient in martensite formation to produce particularly hard edges (notably the katana). The difference in residual stress between the harder cutting edge and the softer back of the sword gives such swords their characteristic curve. In toughened glass, compressive stresses are induced on the surface of the glass, balanced by tensile stresses in the body of the glass. Due to the residual compressive stress on the surface, toughened glass is more resistant to cracks, but shatters into small shards when the outer surface is broken. A demonstration of the effect is shown by Prince Rupert's drop, a material-science novelty in which a molten glass globule is quenched in water: because the outer surface cools and solidifies first, when the volume cools and solidifies, it "wants" to take up a smaller volume than the outer "skin" has already defined; this puts much of the volume in tension, pulling the "skin" in, putting the "skin" in compression.
As a result, the solid globule is extremely tough, able to be hit with a hammer, but if its long tail is broken, the balance of forces is upset, causing the entire piece to shatter violently. In certain types of gun barrels made with two tubes forced together, the inner tube is compressed while the outer tube stretches, preventing cracks from opening in the rifling when the gun is fired. Compressive residual stress Common methods to induce compressive residual stress are shot peening for surfaces and High frequency impact treatment for weld toes. Depth of compressive residual stress varies depending on the method. Both methods can increase lifetime of constructions significantly. Creation of residual stress There are some techniques which are used to create uniform residual stress in a beam. For example, the four point bend allows inserting residual stress by applying a load on a beam using two cylinders. Measurement techniques Overview There are many techniques used to measure residual stresses, which are broadly categorised into destructive, semi-destructive and non-destructive techniques. The selection of the technique depends on the information required and the nature of the measurement specimen. Factors include the depth/penetration of the measurement (surface or through-thickness), the length scale to be measured over (macroscopic, mesoscopic or microscopic), the resolution of the information required, and also the composition geometry and location of the specimen. Additionally, some of the techniques need to be performed in specialised laboratory facilities, meaning that "on-site" measurements are not possible for all of the techniques. Destructive techniques Destructive techniques result in large and irreparable structural change to the specimen, meaning that either the specimen cannot be returned to service or a mock-up or spare must be used. These techniques function using a "strain release" principle; cutting the measurement specimen to relax the residual stresses and then measuring the deformed shape. As these deformations are usually elastic, there is an exploitable linear relationship between the magnitude of the deformation and magnitude of the released residual stress. Destructive techniques include: Contour Method – measures the residual stress on a 2D plane section through a specimen, in a uniaxial direction normal to a surface cut through the specimen with wire EDM. Slitting (Crack Compliance) – measures residual stress through the thickness of a specimen, at a normal to a cut "slit". Block Removal/Splitting/Layering Sachs' Boring Semi-destructive techniques Similarly to the destructive techniques, these also function using the "strain release" principle. However, they remove only a small amount of material, leaving the overall integrity of the structure intact. These include: Deep Hole Drilling – measures the residual stresses through the thickness of a component by relaxing the stresses in a "core" surrounding a small diameter drilled hole. Centre Hole Drilling – measures the near-surface residual stresses by strain release corresponding to a small shallow drilled hole with a strain gauge rosette. Centre hole drilling is appropriate for up to 4 mm in depth. Alternatively, blind hole drilling can be used for thin parts. Center hole drilling can also be performed in the field for on-site testing. Ring Core – similar to Centre Hole Drilling, but with greater penetration, and with the cutting taking place around the strain gauge rosette rather than through its centre. 
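The "strain release" arithmetic behind the centre hole drilling method can be sketched as follows. This is a simplified uniform-stress reduction in the spirit of the ASTM E837 rosette equations; the relieved strains, elastic constants, and especially the calibration constants a_bar and b_bar below are placeholder values (real ones come from the standard's tables for a specific rosette and hole geometry):

```python
# Simplified uniform-stress hole-drilling reduction (centre hole method).
# Relieved strains from a 3-gauge rosette at 0/45/90 degrees are combined
# into p, q, t and converted to principal residual stresses.
import math

E_mod = 200e9               # Young's modulus, Pa (steel, illustrative)
nu = 0.30                   # Poisson's ratio
a_bar, b_bar = 0.12, 0.30   # placeholder calibration constants (from tables)

eps1, eps2, eps3 = -80e-6, -60e-6, -20e-6   # relieved strains (illustrative)

p = (eps3 + eps1) / 2.0     # isotropic combination strain
q = (eps3 - eps1) / 2.0     # shear combination strains
t = (eps3 + eps1 - 2.0 * eps2) / 2.0

P = -E_mod * p / ((1.0 + nu) * a_bar)       # mean (hydrostatic) stress part
Q = -E_mod * q / b_bar
T = -E_mod * t / b_bar

s_max = P + math.hypot(Q, T)
s_min = P - math.hypot(Q, T)
print(f"principal residual stresses: {s_max/1e6:.1f} / {s_min/1e6:.1f} MPa")
```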
Non-destructive techniques The non-destructive techniques measure the effects of relationships between the residual stresses and their action on crystallographic properties of the measured material. Some of these work by measuring the diffraction of high-frequency electromagnetic radiation through the atomic lattice spacing (which has been deformed due to the stress) relative to a stress-free sample. The ultrasonic and magnetic techniques exploit the acoustic and ferromagnetic properties of materials to perform relative measurements of residual stress. Non-destructive techniques include: Electromagnetic a.k.a. eStress - Can be used with a wide range of sample dimensions and materials, with accuracy on par with that of neutron diffraction. Portable systems are available such as the eStress system, which can be used for on-site measurements or permanently installed for continuous monitoring. Speed of measurement is 1–10 seconds per location. Neutron Diffraction - A proven technique that can measure through-thickness but which requires a neutron source (like a nuclear reactor). Synchrotron Diffraction - Requires a synchrotron but provides similarly useful data to the eStress and neutron diffraction methods. X-Ray Diffraction - A limited surface technique with penetration of a few hundred microns only. Ultrasonic - A relatively new and promising experimental technique, known since the 1970s–1980s. Magnetic - Can be used with very limited sample dimensions. Relief of residual stress When undesired residual stress is present from prior metalworking operations, the amount of residual stress may be reduced using several methods. These methods may be classified into thermal and mechanical (or nonthermal) methods. All the methods involve processing the part to be stress relieved as a whole. Thermal method The thermal method involves changing the temperature of the entire part uniformly, either through heating or cooling. When parts are heated for stress relief, the process may also be known as stress relief bake. Cooling parts for stress relief is known as cryogenic stress relief and is relatively uncommon. Stress relief bake Most metals, when heated, experience a reduction in yield strength. If the material's yield strength is sufficiently lowered by heating, locations within the material that experienced residual stresses greater than the yield strength (in the heated state) would yield or deform. This leaves the material with residual stresses that are at most as high as the yield strength of the material in its heated state. Stress relief bake should not be confused with annealing or tempering, which are heat treatments to increase ductility of a metal. Although those processes also involve heating the material to high temperatures and reduce residual stresses, they also involve a change in metallurgical properties, which may be undesired. For certain materials such as low alloy steel, care must be taken during stress relief bake so as not to exceed the temperature at which the material achieves maximum hardness (see Tempering in alloy steels). Cryogenic stress relief Cryogenic stress relief involves placing the material (usually steel) into a cryogenic environment such as liquid nitrogen. In this process, the material to be stress relieved will be cooled to a cryogenic temperature for a long period, then slowly brought back to room temperature.
Nonthermal methods Mechanical methods to relieve undesirable surface tensile stresses and replace them with beneficial compressive residual stresses include shot peening and laser peening. Each works the surface of the material with a medium: shot peening typically uses a metal or glass material; laser peening uses high-intensity beams of light to induce a shock wave that propagates deep into the material. See also Autofrettage Shot peening Laser peening Low plasticity burnishing High-frequency impact treatment Deep hole drilling (DHD) measurement technique Hole drilling method References Further reading Hosford, William F. 2005. "Residual Stresses." In Mechanical Behavior of Materials, 308–321. Cambridge University Press. Cary, Howard B. and Scott C. Helzer (2005). Modern Welding Technology. Upper Saddle River, New Jersey: Pearson Education. Schajer, Gary S. 2013. Practical Residual Stress Measurement Methods. Wiley. Kehl, J.-H., Drafz, R., Pape, F. and Poll, G. 2016. Simulative investigations of the influence of surface indentations on residual stresses on inner raceways for roller element bearings, International Conference on Residual Stresses 2016 (Sydney), DOI: 10.21741/9781945291173-69 External links Comprehensive resources on Residual stresses at Cambridge University Civil engineering Continuum mechanics Engineering failures Mechanical engineering Metalworking terminology
Residual stress
[ "Physics", "Technology", "Engineering" ]
2,268
[ "Systems engineering", "Applied and interdisciplinary physics", "Reliability engineering", "Continuum mechanics", "Technological failures", "Classical mechanics", "Engineering failures", "Construction", "Civil engineering", "Mechanical engineering" ]
1,874,543
https://en.wikipedia.org/wiki/Cut%20and%20fill
In earthmoving, cut and fill is the process of constructing a railway, road or canal whereby the amount of material from cuts roughly matches the amount of fill needed to make nearby embankments to minimize the amount of construction labor. Overview Cut sections of roadway or rail are areas where the roadway has a lower elevation than the surrounding terrain. Fill sections are elevated sections of a roadway or trackbed. Cut and fill takes material from cut excavations and uses this to make fill sections. It costs resources to excavate material, relocate it, and to compact and otherwise prepare the filled sections. The technique aims to minimise the effort of relocating excavated material while also taking into account other constraints such as maintaining a specified grade over the route. Other considerations In addition to minimising construction cost, other factors influence the placement of cut or filled sections. For example, air pollutants can concentrate in the valleys created by the cut section. Conversely, noise pollution is mitigated by cut sections since an effective blockage of line-of-sight sound propagation is created by the depressed roadway design. The environmental effects of fill sections are typically favorable with respect to air pollution dispersal, but in respect to sound propagation, exposure of nearby residents is generally increased, since sound walls and other forms of sound path blockage are less effective in this geometry. The reasons for creating fills include the reduction of grade along a route or the elevation of the route above water, swampy ground, or areas where snow drifts frequently collect. Fills can also be used to cover tree stumps, rocks, or unstable soil, in which case material with a higher bearing capacity is placed on top of the obstacle in order to carry the weight of the roadway or railway and reduce differential settlement. History The practice of cut-and-fill was widely utilized to construct tracks along rolling terrain across the British Isles. It was later applied in the construction of new dwellings for returning veterans in Ireland at the end of World War II. This application was developed by Irish railway engineer Lachlan J. Boland, who saw the benefits of introducing railway practices to residential construction. Software A number of software products are available for calculating cut and fill. A simple approach involves defining different earthworks features in a computer program and then adjusting elevations manually to calculate the optimal cut and fill. More sophisticated software is able to automatically balance cut and fill while also considering the materials. Software that can do this falls under the broad category of earthworks estimation software. See also Cut-and-cover Fill dirt Regrading Trench References Environmental engineering Rail infrastructure Road cuttings Building engineering
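The volume-balancing computation that earthworks software performs can be sketched directly. A minimal example using the average end-area method over survey stations (the station spacing and cross-section areas are made-up illustrative numbers):

```python
# Average end-area method: volume between two stations is the mean of the
# end cross-section areas times the distance between them. Positive areas
# are cut, negative are fill; a net near zero means the earthwork balances.
stations = [0, 100, 200, 300, 400]        # chainage in metres
areas = [12.0, 8.0, -3.0, -9.0, -6.0]     # m^2; illustrative survey data

net = 0.0
for i in range(len(stations) - 1):
    dist = stations[i + 1] - stations[i]
    net += 0.5 * (areas[i] + areas[i + 1]) * dist

print(f"net earthwork volume: {net:+.0f} m^3 (cut minus fill)")
```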
Cut and fill
[ "Chemistry", "Engineering" ]
523
[ "Building engineering", "Chemical engineering", "Civil engineering", "Environmental engineering", "Architecture" ]
1,253,782
https://en.wikipedia.org/wiki/Relativistic%20quantum%20chemistry
Relativistic quantum chemistry combines relativistic mechanics with quantum chemistry to calculate elemental properties and structure, especially for the heavier elements of the periodic table. A prominent example is an explanation for the color of gold: due to relativistic effects, it is not silvery like most other metals. The term relativistic effects was developed in light of the history of quantum mechanics. Initially, quantum mechanics was developed without considering the theory of relativity. Relativistic effects are those discrepancies between values calculated by models that consider relativity and those that do not. Relativistic effects are important for heavier elements with high atomic numbers, such as lanthanides and actinides. Relativistic effects in chemistry can be considered to be perturbations, or small corrections, to the non-relativistic theory of chemistry, which is developed from the solutions of the Schrödinger equation. These corrections affect the electrons differently depending on the electron speed compared with the speed of light. Relativistic effects are more prominent in heavy elements because only in these elements do electrons attain sufficient speeds for the elements to have properties that differ from what non-relativistic chemistry predicts. History Beginning in 1935, Bertha Swirles described a relativistic treatment of a many-electron system, despite Paul Dirac's 1929 assertion that the only imperfections remaining in quantum mechanics "give rise to difficulties only when high-speed particles are involved and are therefore of no importance in the consideration of the atomic and molecular structure and ordinary chemical reactions in which it is, indeed, usually sufficiently accurate if one neglects relativity variation of mass and velocity and assumes only Coulomb forces between the various electrons and atomic nuclei". Theoretical chemists by and large agreed with Dirac's sentiment until the 1970s, when relativistic effects were observed in heavy elements. The Schrödinger equation had been developed without considering relativity in Schrödinger's 1926 article. Relativistic corrections were made to the Schrödinger equation (see Klein–Gordon equation) to describe the fine structure of atomic spectra, but this development and others did not immediately trickle into the chemical community. Since atomic spectral lines were largely in the realm of physics and not in that of chemistry, most chemists were unfamiliar with relativistic quantum mechanics, and their attention was on lighter elements typical for the organic chemistry focus of the time. Dirac's opinion on the role relativistic quantum mechanics would play for chemical systems has been largely dismissed for two main reasons. First, electrons in s and p atomic orbitals travel at a significant fraction of the speed of light. Second, relativistic effects give rise to indirect consequences that are especially evident for d and f atomic orbitals. Qualitative treatment One of the most important and familiar results of relativity is that the relativistic mass of the electron increases as $m_{\text{rel}} = \frac{m_e}{\sqrt{1 - (v/c)^2}}$, where $m_e$, $v$, and $c$ are the electron rest mass, velocity of the electron, and speed of light respectively. The figure at the right illustrates this relativistic effect as a function of velocity. This has an immediate implication on the Bohr radius ($a_0$), which is given by $a_0 = \frac{\hbar}{m_e c \alpha}$, where $\hbar$ is the reduced Planck constant, and $\alpha$ is the fine-structure constant (a relativistic correction for the Bohr model).
Bohr calculated that a 1s orbital electron of a hydrogen atom orbiting at the Bohr radius of 0.0529 nm travels at nearly 1/137 the speed of light. One can extend this to a larger element with an atomic number Z by using the expression $v \approx Z\alpha c$ for a 1s electron, where v is its radial velocity, i.e., its instantaneous speed tangent to the radius of the atom. For gold with Z = 79, v ≈ 0.58c, so the 1s electron will be moving at 58% of the speed of light. Substituting this in for v/c in the equation for the relativistic mass, one finds that mrel = 1.22me, and in turn putting this in for the Bohr radius above one finds that the radius shrinks by 22%. If one substitutes the "relativistic mass" into the equation for the Bohr radius it can be written $a_{\text{rel}} = \frac{\hbar}{m_{\text{rel}} c \alpha}$. It follows that $\frac{a_{\text{rel}}}{a_0} = \sqrt{1 - \left(\frac{v}{c}\right)^2}$. At right, the above ratio of the relativistic and nonrelativistic Bohr radii has been plotted as a function of the electron velocity. Notice how the relativistic model shows the radius decreases with increasing velocity. When the Bohr treatment is extended to hydrogenic atoms, the Bohr radius becomes $a = \frac{n^2 \hbar}{Z m_e c \alpha}$, where $n$ is the principal quantum number, and Z is an integer for the atomic number. In the Bohr model, the angular momentum is given as $m_e v r = n\hbar$. Substituting into the equation above and solving for $v$ gives $v = \frac{Z \alpha c}{n}$. From this point, atomic units can be used to simplify the expression into $v = \frac{Z}{n}$. Substituting this into the expression for the Bohr ratio mentioned above gives $\frac{a_{\text{rel}}}{a_0} = \sqrt{1 - \frac{Z^2}{n^2 c^2}}$. At this point one can see that a low value of $n$ and a high value of $Z$ result in $\frac{a_{\text{rel}}}{a_0} < 1$. This fits with intuition: electrons with lower principal quantum numbers will have a higher probability density of being nearer to the nucleus. A nucleus with a large charge will cause an electron to have a high velocity. A higher electron velocity means an increased electron relativistic mass, and as a result the electrons will be near the nucleus more of the time and thereby contract the radius for small principal quantum numbers. Periodic table deviations Mercury Mercury (Hg) is a liquid down to approximately −39 °C, its melting point. Bonding forces are weaker for Hg–Hg bonds than for their immediate neighbors such as cadmium (m.p. 321 °C) and gold (m.p. 1064 °C). The lanthanide contraction only partially accounts for this anomaly. Because the 6s2 orbital is contracted by relativistic effects and may therefore only weakly contribute to any chemical bonding, Hg–Hg bonding must be mostly the result of van der Waals forces. Mercury gas is mostly monatomic, Hg(g). Hg2(g) rarely forms and has a low dissociation energy, as expected due to the lack of strong bonds. Au2(g) and Hg(g) are analogous to H2(g) and He(g) with regard to the nature of this difference. The relativistic contraction of the 6s2 orbital leads to gaseous mercury sometimes being referred to as a pseudo noble gas. Color of gold and caesium The reflectivity of aluminium (Al), silver (Ag), and gold (Au) is shown in the graph to the right. The human eye sees electromagnetic radiation with a wavelength near 600 nm as yellow. Gold absorbs blue light more than it absorbs other visible wavelengths of light; the reflected light reaching the eye is therefore lacking in blue compared with the incident light. Since yellow is complementary to blue, this makes a piece of gold under white light appear yellow to human eyes. The electronic transition from the 5d orbital to the 6s orbital is responsible for this absorption. An analogous transition occurs in silver, but the relativistic effects are smaller than in gold.
While silver's 4d orbital experiences some relativistic expansion and the 5s orbital contraction, the 4d–5s distance in silver is much greater than the 5d–6s distance in gold. The relativistic effects increase the 5d orbital's distance from the atom's nucleus and decrease the 6s orbital's distance. Due to the decreased 6s orbital distance, the electronic transition primarily absorbs in the violet/blue region of the visible spectrum, as opposed to the UV region. Caesium, the heaviest of the alkali metals that can be collected in quantities sufficient for viewing, has a golden hue, whereas the other alkali metals are silver-white. However, relativistic effects are not very significant at Z = 55 for caesium (not far from Z = 47 for silver). The golden color of caesium comes from the decreasing frequency of light required to excite electrons of the alkali metals as the group is descended. For lithium through rubidium, this frequency is in the ultraviolet, but for caesium it reaches the blue-violet end of the visible spectrum; in other words, the plasmonic frequency of the alkali metals becomes lower from lithium to caesium. Thus caesium transmits and partially absorbs violet light preferentially, while other colors (having lower frequency) are reflected; hence it appears yellowish. Lead–acid battery Without relativity, lead (Z = 82) would be expected to behave much like tin (Z = 50), so tin–acid batteries should work just as well as the lead–acid batteries commonly used in cars. However, calculations show that about 10 V of the 12 V produced by a 6-cell lead–acid battery arises purely from relativistic effects, explaining why tin–acid batteries do not work. Inert-pair effect In Tl(I) (thallium), Pb(II) (lead), and Bi(III) (bismuth) complexes a 6s2 electron pair exists. The inert pair effect is the tendency of this pair of electrons to resist oxidation due to a relativistic contraction of the 6s orbital. Other effects Additional phenomena commonly caused by relativistic effects are the following: The effect of relativistic effects on metallophilic interactions is uncertain. Although Runeberg et al. (1999) calculated an attractive effect, Wan et al. (2021) instead calculated a repulsive effect. The stability of gold and platinum anions in compounds such as caesium auride. The slightly reduced reactivity of francium compared with caesium. About 10% of the lanthanide contraction is attributed to the relativistic mass of high-velocity electrons and the smaller Bohr radius that results. See also Ionization energy Electronegativity Electron affinity Quantum mechanics Relativistic quantum mechanics References Further reading P. A. Christiansen; W. C. Ermler; K. S. Pitzer. Relativistic Effects in Chemical Systems. Annual Review of Physical Chemistry 1985, 36, 407–432. Quantum chemistry Quantum chemistry
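The 1s-electron numbers quoted in the qualitative treatment above (v ≈ 0.58c and m_rel ≈ 1.22 m_e for gold) can be reproduced with a few lines of Python; a minimal sketch using only standard constants:

```python
# Reproduce the gold (Z = 79) 1s-electron figures from the text:
# v/c = Z * alpha, the relativistic mass increase, and the Bohr-radius ratio.
import math

alpha = 1 / 137.035999    # fine-structure constant
Z = 79

v_over_c = Z * alpha                        # ~0.58, as quoted for gold
m_ratio = 1 / math.sqrt(1 - v_over_c**2)    # m_rel / m_e ~ 1.22
radius_ratio = math.sqrt(1 - v_over_c**2)   # a_rel / a_0, the 1/m scaling

print(f"v/c       = {v_over_c:.2f}")
print(f"m_rel/m_e = {m_ratio:.2f}")         # a_0/a_rel = 1.22: the "22%" contraction
print(f"a_rel/a_0 = {radius_ratio:.2f}")
```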
Relativistic quantum chemistry
[ "Physics", "Chemistry" ]
2,126
[ "Quantum chemistry", "Quantum mechanics", "Special relativity", "Theoretical chemistry", " molecular", "Theory of relativity", "Atomic", " and optical physics" ]
1,254,108
https://en.wikipedia.org/wiki/Cathleen%20Synge%20Morawetz
Cathleen Synge Morawetz (May 5, 1923 – August 8, 2017) was a Canadian mathematician who spent much of her career in the United States. Morawetz's research was mainly in the study of the partial differential equations governing fluid flow, particularly those of mixed type occurring in transonic flow. She was professor emerita at the Courant Institute of Mathematical Sciences at New York University, where she had also served as director from 1984 to 1988. She was president of the American Mathematical Society from 1995 to 1996. She was awarded the National Medal of Science in 1998. Childhood Morawetz's father, John Lighton Synge, nephew of John Millington Synge, was an Irish mathematician, specializing in the geometry of general relativity. Her mother also studied mathematics for a time. Her uncle was Edward Hutchinson Synge, who is credited as the inventor of the near-field scanning optical microscope and very large astronomical telescopes based on multiple mirrors. Her childhood was split between Ireland and Canada. Both her parents were supportive of her interest in mathematics and science, and it was a woman mathematician, Cecilia Krieger, who had been a family friend for many years and later encouraged Morawetz to pursue a PhD in mathematics. Morawetz said her father was influential in stimulating her interest in mathematics, but he wondered whether her studying mathematics would be wise (suggesting they might fight like the Bernoulli brothers). Education A 1945 graduate of the University of Toronto, she married Herbert Morawetz, a chemist, on October 28, 1945. She received her master's degree in 1946 at the Massachusetts Institute of Technology. Morawetz got a job at New York University, where she edited Supersonic Flow and Shock Waves by Richard Courant and Kurt Otto Friedrichs. She earned her Ph.D. in 1951 at New York University, with a thesis on the stability of a spherical implosion, under the supervision of Kurt Otto Friedrichs. Her thesis was entitled Contracting Spherical Shocks Treated by a Perturbation Method. Career After earning her doctorate, she spent a year as a research associate at MIT before returning to work as a research associate at the Courant Institute of Mathematical Sciences at NYU, for five more years. During this time she had no teaching requirements and could focus purely on research. She published work on a variety of topics in applied mathematics including viscosity, compressible fluids and transonic flows. Even if an aircraft remains subsonic, the air flowing around the wing can reach supersonic velocity. The mix of air at supersonic and subsonic velocity creates shock waves that can slow the airplane. Turning to the mathematics of transonic flow, she showed that specially designed shockless airfoils could not, in fact, prevent shocks. Shocks developed in response to even small perturbations, such as a gust of wind or an imperfection in a wing. This discovery opened up the problem of developing a theory for a flow with shocks. Subsequently, the shocks she predicted mathematically have been observed in experiments as air flows around the wing of a plane. In 1957 she became an assistant professor at Courant. At this point she began to work more closely with her colleagues, publishing important joint papers with Peter Lax and Ralph Phillips on the decay of solutions to the wave equation around a star-shaped obstacle.
She continued with important solo work on the wave equation and transonic flow around a profile until she was promoted to full professor by 1965. At this point her research expanded to a variety of problems, including papers on the Tricomi equation and the nonrelativistic wave equation, including questions of decay and scattering. Her first doctoral student, Lesley Sibner, graduated in 1964. In the 1970s she worked on questions of scattering theory and the nonlinear wave equation. She proved what is now known as the Morawetz inequality. She died on August 8, 2017, in New York City. Honors In 1980, Morawetz won a Lester R. Ford Award. In 1981, she became the first woman to deliver the Gibbs Lecture of the American Mathematical Society, and in 1982 presented an Invited Address at a meeting of the Society for Industrial and Applied Mathematics. She received honorary degrees from Eastern Michigan University in 1980, Brown University and Smith College in 1982, and Princeton in 1990. In 1983 and in 1988, she was selected as a Noether Lecturer. She was elected to the American Academy of Arts and Sciences in 1984. She was named Outstanding Woman Scientist for 1993 by the Association for Women in Science. In 1995, she became the second woman elected to the office of president of the American Mathematical Society. In 1996, she was awarded an honorary ScD degree by Trinity College Dublin, where her father J. L. Synge had been a student and later a faculty member. That same year, she was elected to the American Philosophical Society. In 1998 she was awarded the National Medal of Science; she was the first woman to receive the medal for work in mathematics. In 2004, she received the Leroy P. Steele Prize for Lifetime Achievement. In 2006, she won the George David Birkhoff Prize in Applied Mathematics. In 2012, she became a fellow of the American Mathematical Society. Morawetz was a member of the National Academy of Sciences and was the first woman to belong to the Applied Mathematics Section of that organization. Publications Personal life Morawetz lived in Greenwich Village with her husband Herbert Morawetz, a polymer chemist. They had four children, eight grandchildren, and three great-grandchildren. Their children are Pegeen Rubinstein, John, Lida Jeck, and Nancy Morawetz (a professor at New York University School of Law who manages its Immigrant Rights Clinic). Upon being honored by the National Organization for Women for successfully combining career and family, Morawetz quipped, "Maybe I became a mathematician because I was so crummy at housework." She said her current non-mathematical interests are "grandchildren and sailing."
References External links 1923 births 2017 deaths Canadian women academics Canadian people of Irish descent Massachusetts Institute of Technology alumni Mathematical analysts National Medal of Science laureates Courant Institute of Mathematical Sciences alumni Courant Institute of Mathematical Sciences faculty PDE theorists University of Toronto alumni Fellows of the Society for Industrial and Applied Mathematics Fellows of the American Mathematical Society Presidents of the American Mathematical Society Members of the United States National Academy of Sciences Scientists from New York City Scientists from Toronto New York University alumni 20th-century women mathematicians 20th-century Canadian mathematicians 21st-century Canadian mathematicians 21st-century Canadian women mathematicians Mathematicians from New York (state) Canadian expatriates in the United States Members of the American Philosophical Society
Cathleen Synge Morawetz
[ "Mathematics" ]
1,348
[ "Mathematical analysis", "Mathematical analysts" ]
1,254,543
https://en.wikipedia.org/wiki/Nebulae%20and%20Star%20Clusters
There are several astronomical catalogues referred to as Nebulae and Star Clusters. A nebula is a cloud of dust and gas inside a galaxy. Nebulae become visible if the gas glows, or if the cloud reflects starlight or obscures light from more distant objects. The catalogues the title may refer to include: Catalogue des nébuleuses et des amas d'étoiles (Messier "M" catalogue), first published 1771 Catalogue of Nebulae and Clusters of Stars (William Herschel "CN"/"H" catalogue), first published 1786 General Catalogue of Nebulae and Clusters of Stars (John Herschel "GC"/"h" catalogue), first published 1864 New General Catalogue of Nebulae and Clusters of Stars (Dreyer "NGC" catalogue), first published 1888 Index Catalogue of Nebulae and Clusters of Stars (J. L. E. Dreyer's "IC" catalogue) See also Nebula, a type of celestial body Galaxy, a type of celestial body formerly referred to as nebulae Star cluster or cluster of stars, a type of celestial body Lists of books Astronomical catalogues
Nebulae and Star Clusters
[ "Astronomy" ]
223
[ "Astronomical catalogues", "Works about astronomy", "Astronomical objects" ]
1,254,904
https://en.wikipedia.org/wiki/Hopcalite
Hopcalite is the trade name for a number of mixtures that mainly consist of oxides of copper and manganese, which are used as catalysts for the conversion of carbon monoxide to carbon dioxide when exposed to the oxygen in the air at room temperature. The name "hopcalite" is derived from Johns Hopkins University ("Hop") and the University of California ("Cal"), where basic research into carbon monoxide was carried out during the First World War; these catalysts were discovered in 1918. A variety of compositions are known, such as "hopcalite II", which is approximately 60% manganese dioxide and 40% copper oxide (the MnO2 : CuO molar ratio is about 1.375), and "hopcalite I", which is a mixture of 50% MnO2, 30% CuO, 15% Co2O3, and 5% Ag2O. Hopcalite has the properties of a porous mass and resembles activated carbon in its appearance. Preparation While typically hopcalite catalysts are prepared by calcining intimate mixtures of oxides and carbonates, various techniques have been employed for producing hopcalites in the laboratory and on an industrial scale, such as physical mixing of the (finely divided) metal oxides, co-precipitation of the metal oxides from metal salt solutions (see salts), thermal decomposition of mixtures of metal nitrates (see nitrate) and metal carbonates (see carbonate), and one-step synthesis via flame spray pyrolysis from organic and inorganic precursor systems. Nanophase hopcalite catalysts have also been described. Although hopcalite-based catalysts have been used in practice for decades, many questions regarding their mode of action are still open. This is due to their complex structures, which make it difficult to obtain information about the active centers and the mechanisms of catalysis and deactivation. Applications Hopcalite is widely used in personal respiratory protective equipment (RPE) and collective protective equipment, among other applications. Different uses of hopcalite catalysts are listed below: in some types of gas mask filters designed to protect from carbon monoxide (e.g., the Soviet-made DP-1, the combined filters VK-450, and SX (CO) filters) in air filtration systems and breathing apparatus to purify breathing air supplies, for example those utilised in scuba diving and firefighting as the main filtration ingredient in the self-rescue respirators issued to miners (SPP-4) in filtering self-rescuers designed for use in fire conditions (e.g., GDZK-EN, GDZK-U, and GDZK-A) in devices for monitoring the content of carbon monoxide (CO) in rooms as a precaution with submersible air compressors, if they are driven by internal combustion engines (as on ships) In respiratory protective equipment, hopcalite is used to facilitate the rapid oxidation of toxic carbon monoxide to harmless carbon dioxide with the oxygen from the air; the carbon dioxide is then chemically bound in a sodium hydroxide layer, thus eliminating CO from the air stream (which otherwise is not removed by activated charcoal air filters). Water vapor poisons the hopcalite catalyst, so in order to protect against water vapor, an additional filter based on silica gel is introduced. In addition, the hopcalite layer is protected by a mechanical filter and a layer of activated carbon, which purify the air of other contaminants. The operation of carbon monoxide (CO) detectors, on the other hand, is based on recording the heat released during the catalytic oxidation of carbon monoxide (CO) to carbon dioxide (CO2). 
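As a quick check on the composition figures quoted above, the MnO2 : CuO molar ratio of hopcalite II can be recomputed from the 60/40 mass split. A minimal sketch using standard molar masses (the variable names are ours):

    # Recompute the MnO2 : CuO molar ratio of "hopcalite II" from its
    # mass fractions (60% MnO2, 40% CuO).
    M_MNO2 = 54.94 + 2 * 16.00   # molar mass of MnO2, g/mol
    M_CUO = 63.55 + 16.00        # molar mass of CuO, g/mol
    moles_mno2 = 60.0 / M_MNO2   # moles per 100 g of catalyst
    moles_cuo = 40.0 / M_CUO
    print(f"MnO2 : CuO molar ratio = {moles_mno2 / moles_cuo:.3f}")
    # prints roughly 1.372, consistent with the 1.375 figure quoted above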
Although primarily used to catalyze the conversion of CO to CO2, hopcalite catalysts are also used to remove ethylene oxide and other VOCs as well as ozone from gas streams. In addition, hopcalites catalyze the oxidation of various organic compounds at elevated temperatures (200–500 °C). See also Gas mask breathing apparatus Self-contained self-rescue device Carbon monoxide detector Catalysis heterogeneous catalysis Carbon monoxide poisoning References Manganese compounds Copper compounds Catalysts Diving support equipment
Hopcalite
[ "Chemistry" ]
887
[ "Catalysis", "Catalysts", "Chemical kinetics" ]
1,255,277
https://en.wikipedia.org/wiki/Figure%E2%80%93ground%20%28perception%29
Figure–ground organization is a type of perceptual grouping that is vital for recognizing objects through vision. In Gestalt psychology it is known as identifying a figure from the background. For example, black words on a printed paper are seen as the "figure", and the white sheet as the "background". Gestalt psychology Gestalt theory was founded in the 20th century in Austria and Germany as a reaction against the atomistic orientation of the associationist and structural schools. In 1912, the Gestalt school was formed by Max Wertheimer, Wolfgang Köhler, and Kurt Koffka. The word "Gestalt" is a German word translated into English as "pattern" or "configuration." Gestalt concepts can also be referred to as "holism." Gestalt psychologists were attempting to humanize what was considered a sterile approach. Gestalt psychology establishes that the whole of anything is greater than its parts. The concepts explored by Wertheimer, Köhler, and Koffka in the 20th century established the foundation for the modern study of perception. The Gestalt concept is that "not only movement, or process as such, but also the direction and distribution of process is determined dynamically by interaction. Sensory organization is not dependent upon isolated stimuli and local stimulation, but upon the relative properties of stimulation and the dynamical context." Wertheimer described holism as the "fundamental formula" of Gestalt psychology: "There are wholes, the behavior of which is not determined by that of their individual elements, but where the part-processes are themselves determined by the intrinsic nature of the whole." Examples The Rubin vase faces–vase drawing that Danish psychologist Edgar Rubin described exemplifies one of the key aspects of figure–ground organization: edge-assignment and its effect on shape perception. In the faces–vase drawing, the perceived shape depends critically on the direction in which the border (edge) between the black and white regions is assigned. If the edges between the black and white regions are assigned inward, then the central white region is seen as a vase shape in front of a black background. No faces are perceived in this case. On the other hand, if the edges are assigned outward, then the two black profile faces are perceived on a white background, and no vase shape is perceived. The human visual system will settle on either of the interpretations of the Rubin vase and alternate between them, a phenomenon known as multistable perception. Functional brain imaging shows that, when people see the Rubin image as a face, there is activity in the temporal lobe, specifically in the face-selective region. An additional example is the "My Wife and My Mother-in-Law" illusion drawing. The image is famous for being reversible: "The viewer may either observe a young girl with her head turned to the right or an old woman with a large nose and protruding chin, depending on one's perspective." The Flag of Canada has also been cited as an example of figure–ground reversal, in which the background edges of the maple leaf can also be seen as two faces arguing. Development Figure–ground perception precedes all other visual perceptual skills and is one of the first to develop in a young baby. Perceptual organization develops as early as infancy in human beings. 
With regard to nature versus nurture, concepts such as "lightness" and "proximity" may develop as early as birth, but recognizing "form similarity" may not be functional until activated by particular experiences. Three- to four-month-olds respond to differences in lightness rather than differences in form similarity. It is suggested that scaffolding (the development of new skills over time based on the building of other skills) is responsible for the development of perceptual organization. Environment plays a major role in the development of figure–ground perception. The development of figure–ground perception begins the day the baby can focus on an object. The faces of caregivers, parents, and familiar objects are the first to be focused on and understood. As babies develop, they learn to distinguish the objects they desire from their surroundings. Sitting up, crawling, and walking present ample opportunity to develop the skill during development. Between the ages of 2 and 4, the skill can be further cultivated by teaching the child to group or sort items. Perceptual process The perceptual decision in which the brain decides which item is the figure and which are part of the ground in a visual scene can be based on many cues, all of which are of a probabilistic nature. For instance, size assists in distinguishing between the figure and the ground, as smaller regions are often (but not always) figures. Object shape can assist in distinguishing figure from ground because figures tend to be convex. Movement also helps; the figure may be moving against a static environment. Color is also a cue, because the background tends to continue as one color behind potentially multiple foreground figures, whose colors may vary. Edge assignment also helps; if the edge belongs to the figure, it defines the shape while the background exists behind the shape. However, it is sometimes difficult to distinguish between the two because the edge that would separate figure from ground is part of neither, equally defining both the figure and the background. The LOC (lateral occipital cortex) is highly important for figure–ground perception. This region of the visual cortex (located lateral to the fusiform gyrus and extending anteriorly and ventrally) has consistently shown stronger activation in response to objects versus non-objects. Evidently, the process of distinguishing figure from ground (sometimes called figure–ground segmentation) is inherently probabilistic, and the best that the brain can do is to take all relevant cues into account to generate a probabilistic best-guess. In this light, Bayesian figure–ground segmentation models have been proposed to simulate the probabilistic inference by which the brain may distinguish figure from ground. Subjective factors can also influence figure–ground perception. For instance, if a viewer has the intention to perceive one of the two regions as the figure, it will likely alter their ability to analyze the two regions objectively. In addition, if a viewer's gaze is fixated on a particular region, the viewer is more likely to view the fixated region as the figure. Although subjective factors can alter the probability of seeing the figure on one particular side of an edge, they tend not to overpower compositional cues. Artistic applications Figure–ground organization is used to help artists and designers with the composition of a 2D piece. 
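The cue-combination account sketched under Perceptual process above lends itself to a toy naive-Bayes illustration. In the sketch below, the cue likelihoods are invented for illustration only, not measured values:

    # Toy Bayesian figure-ground assignment: combine independent cues
    # (size, convexity, motion) to score a region as "figure" vs "ground".
    # All probabilities are made-up illustrative numbers.
    likelihoods = {              # P(cue | figure), P(cue | ground)
        "small_region": (0.7, 0.3),
        "convex_shape": (0.8, 0.4),
        "moving":       (0.6, 0.2),
    }

    def p_figure(observed_cues, prior=0.5):
        """Posterior P(figure | cues) under a naive independence assumption."""
        p_f, p_g = prior, 1.0 - prior
        for cue in observed_cues:
            lf, lg = likelihoods[cue]
            p_f *= lf
            p_g *= lg
        return p_f / (p_f + p_g)

    print(p_figure({"small_region", "convex_shape"}))  # ~0.82: likely the figure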
Figure–ground reversal may be used as an intentional visual design technique in which an existing image's foreground and background colors are purposely swapped to create new images. Non-visual Figure–ground perception can be expanded from visual perception to include non-visual concepts such as melody/harmony, subject/background and positive/negative space. The concept of figure and ground depends fully on the observer and not on the item itself. In the typical sonic scenarios people encounter, auditory figure and ground signals often overlap in time as well as in frequency content. In these situations, auditory objects are established by integrating sound components both over time and frequency. A 2011 study suggests that the auditory system possesses mechanisms that are sensitive to such cross-frequency and cross-time correlations. Results of this study demonstrated significant activations in the intraparietal sulcus (IPS) and the superior temporal sulcus related to bottom-up, stimulus-driven figure–ground decomposition. In crowded rooms or parties, a person is able to zero in on the conversation they are having with one person (figure) while drowning out the background noise (ground). This is also referred to as the "cocktail party effect." Figure–ground segregation in hearing is not automatic; rather, it requires attention and draws on resources that are shared across vision and audition. Types of figure–ground problems There are three types of figure–ground problems: The figure and the ground compete. The figure should be the ground and the ground should be the figure. The figure and ground create an optical illusion. See also Composition (visual arts) Ma (negative space) Negative space White space (visual arts) References External links Figure Ground, a puzzle game that plays on the figure–ground illusion. Design Optical illusions Dichotomies Perception Cognitive psychology
Figure–ground (perception)
[ "Physics", "Engineering", "Biology" ]
1,725
[ "Physical phenomena", "Behavior", "Optical illusions", "Optical phenomena", "Behavioural sciences", "Cognitive psychology", "Design" ]
1,255,458
https://en.wikipedia.org/wiki/Pseudogroup
In mathematics, a pseudogroup is a set of homeomorphisms between open sets of a space, satisfying group-like and sheaf-like properties. It is a generalisation of the concept of a group, originating however from the geometric approach of Sophus Lie to investigate symmetries of differential equations, rather than out of abstract algebra (such as quasigroup, for example). The modern theory of pseudogroups was developed by Élie Cartan in the early 1900s. Definition A pseudogroup imposes several conditions on sets of homeomorphisms (respectively, diffeomorphisms) defined on open sets of a given Euclidean space or, more generally, of a fixed topological space (respectively, smooth manifold). Since two homeomorphisms h : U → V and g : V → W compose to a homeomorphism g ∘ h : U → W, one asks that the pseudogroup is closed under composition and inversion. However, unlike those for a group, the axioms defining a pseudogroup are not purely algebraic; the further requirements are related to the possibility of restricting and of patching homeomorphisms (similar to the gluing axiom for sections of a sheaf). More precisely, a pseudogroup on a topological space S is a collection Γ of homeomorphisms between open subsets of S satisfying the following properties: The domains of the elements of Γ cover S ("cover"). The restriction of an element of Γ to any open set contained in its domain is also in Γ ("restriction"). The composition of two elements of Γ, when defined, is in Γ ("composition"). The inverse of an element of Γ is in Γ ("inverse"). The property of lying in Γ is local, i.e. if g : U → V is a homeomorphism between open sets of S and U is covered by open sets U_i with the restriction of g to each U_i lying in Γ, then g also lies in Γ ("local"). As a consequence, the identity homeomorphism of any open subset of S lies in Γ. Similarly, a pseudogroup on a smooth manifold X is defined as a collection of diffeomorphisms between open subsets of X satisfying analogous properties (where we replace homeomorphisms with diffeomorphisms). Two points in S are said to be in the same orbit if an element of Γ sends one to the other. Orbits of a pseudogroup clearly form a partition of S; a pseudogroup is called transitive if it has only one orbit. Examples A widespread class of examples is given by pseudogroups preserving a given geometric structure. For instance, if M is a Riemannian manifold, one has the pseudogroup of its local isometries; if M is a symplectic manifold, one has the pseudogroup of its local symplectomorphisms; etc. These pseudogroups should be thought of as the set of the local symmetries of these structures. Pseudogroups of symmetries and geometric structures Manifolds with additional structures can often be defined using the pseudogroups of symmetries of a fixed local model. More precisely, given a pseudogroup Γ, a Γ-atlas on a topological space S consists of a standard atlas on S such that the changes of coordinates (i.e. the transition maps) belong to Γ. An equivalence class of Γ-atlases is also called a Γ-structure on S. In particular, when Γ is the pseudogroup of all locally defined diffeomorphisms of ℝ^n, one recovers the standard notion of a smooth atlas and a smooth structure. 
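For reference, the defining axioms above can be restated compactly. A minimal LaTeX rendering, with Γ and S as introduced above:

    % Pseudogroup axioms for a collection \Gamma of homeomorphisms
    % between open subsets of a topological space S.
    \begin{enumerate}
      \item (Cover) $\bigcup_{g \in \Gamma} \operatorname{dom}(g) = S$.
      \item (Restriction) if $g \in \Gamma$ and $U \subseteq \operatorname{dom}(g)$
            is open, then $g|_U \in \Gamma$.
      \item (Composition) $g, h \in \Gamma \implies g \circ h \in \Gamma$
            wherever defined.
      \item (Inverse) $g \in \Gamma \implies g^{-1} \in \Gamma$.
      \item (Locality) if $g : U \to V$ is a homeomorphism, $U = \bigcup_i U_i$
            with each $U_i$ open, and $g|_{U_i} \in \Gamma$ for all $i$,
            then $g \in \Gamma$.
    \end{enumerate}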
More generally, one can define the following objects as Γ-structures on a topological space S: flat Riemannian structures, for the pseudogroup of isometries of ℝ^n with the canonical Euclidean metric; symplectic structures, for the pseudogroup of symplectomorphisms of ℝ^2n with the canonical symplectic form; analytic structures, for the pseudogroup of (real-)analytic diffeomorphisms of ℝ^n; Riemann surfaces, for the pseudogroup of invertible holomorphic functions of a complex variable. More generally, any integrable G-structure and any (G, X)-manifold are special cases of Γ-structures, for suitable pseudogroups Γ. Pseudogroups and Lie theory In general, pseudogroups were studied as a possible theory of infinite-dimensional Lie groups. The concept of a local Lie group, namely a pseudogroup of functions defined in neighbourhoods of the origin of a Euclidean space, is actually closer to Lie's original concept of Lie group, in the case where the transformations involved depend on a finite number of parameters, than the contemporary definition via manifolds. One of Cartan's achievements was to clarify the points involved, including the point that a local Lie group always gives rise to a global group, in the current sense (an analogue of Lie's third theorem, on Lie algebras determining a group). The formal group is yet another approach to the specification of Lie groups, infinitesimally. It is known, however, that local topological groups do not necessarily have global counterparts. Examples of infinite-dimensional pseudogroups abound, beginning with the pseudogroup of all diffeomorphisms of ℝ^n. The interest is mainly in sub-pseudogroups of the diffeomorphisms, and therefore with objects that have a Lie algebra analogue of vector fields. Methods proposed by Lie and by Cartan for studying these objects have become more practical given the progress of computer algebra. In the 1950s, Cartan's theory was reformulated by Shiing-Shen Chern, and a general deformation theory for pseudogroups was developed by Kunihiko Kodaira and D. C. Spencer. In the 1960s, homological algebra was applied to the basic questions of over-determination for the PDEs involved; this revealed, however, that the algebra of the theory is potentially very heavy. In the same decade, interest within theoretical physics in infinite-dimensional Lie theory appeared for the first time, in the shape of current algebra. Intuitively, a Lie pseudogroup should be a pseudogroup which "originates" from a system of PDEs. There are many similar but inequivalent notions in the literature; the "right" one depends on which application one has in mind. However, all these various approaches involve the (finite- or infinite-dimensional) jet bundles of Γ, which are asked to be a Lie groupoid. In particular, a Lie pseudogroup is called of finite order k if it can be "reconstructed" from the space of its k-jets. References External links Lie groups Algebraic structures Differential geometry Differential topology
Pseudogroup
[ "Mathematics" ]
1,320
[ "Lie groups", "Mathematical structures", "Mathematical objects", "Topology", "Algebraic structures", "Differential topology" ]
1,255,570
https://en.wikipedia.org/wiki/Bending
In applied mechanics, bending (also known as flexure) characterizes the behavior of a slender structural element subjected to an external load applied perpendicularly to a longitudinal axis of the element. The structural element is assumed to be such that at least one of its dimensions is a small fraction, typically 1/10 or less, of the other two. When the length is considerably longer than the width and the thickness, the element is called a beam. A closet rod sagging under the weight of clothes on clothes hangers, for example, is a beam experiencing bending. On the other hand, a shell is a structure of any geometric form where the length and the width are of the same order of magnitude but the thickness of the structure (known as the 'wall') is considerably smaller. A large-diameter, but thin-walled, short tube supported at its ends and loaded laterally is an example of a shell experiencing bending. In the absence of a qualifier, the term bending is ambiguous because bending can occur locally in all objects. Therefore, to make the usage of the term more precise, engineers refer to a specific object such as the bending of rods, the bending of beams, the bending of plates, the bending of shells, and so on. Quasi-static bending of beams A beam deforms and stresses develop inside it when a transverse load is applied on it. In the quasi-static case, the amount of bending deflection and the stresses that develop are assumed not to change over time. In a horizontal beam supported at the ends and loaded downwards in the middle, the material at the upper side of the beam is compressed while the material at the underside is stretched. There are two forms of internal stresses caused by lateral loads: shear stress parallel to the lateral loading, plus complementary shear stress on planes perpendicular to the load direction; and direct stress along the beam axis, compressive in the upper region of the beam (most relevant to concrete elements) and tensile in the lower region (most relevant to steel elements). These last two forces form a couple or moment, as they are equal in magnitude and opposite in direction. This bending moment resists the sagging deformation characteristic of a beam experiencing bending. The stress distribution in a beam can be predicted quite accurately when some simplifying assumptions are used. Euler–Bernoulli bending theory In the Euler–Bernoulli theory of slender beams, a major assumption is that 'plane sections remain plane'. In other words, any deformation due to shear across the section is not accounted for (no shear deformation). Also, this linear distribution is only applicable if the maximum stress is less than the yield stress of the material. For stresses that exceed yield, refer to the article plastic bending. At yield, the maximum stress experienced in the section (at the furthest points from the neutral axis of the beam) is defined as the flexural strength. 
Consider beams where the following are true: The beam is originally straight and slender, and any taper is slight The material is isotropic (or orthotropic), linear elastic, and homogeneous across any cross section (but not necessarily along its length) Only small deflections are considered In this case, the equation describing beam deflection w(x) can be approximated as: EI (d²w/dx²) = −M(x), where the second derivative of the deflected shape with respect to x is interpreted as the curvature, E is the Young's modulus, I is the area moment of inertia of the cross-section, and M is the internal bending moment in the beam (here w is measured positive in the direction of the load; other sign conventions are also in use). If, in addition, the beam is homogeneous along its length as well, and not tapered (i.e. constant cross section), and deflects under an applied transverse load q(x), it can be shown that: EI (d⁴w/dx⁴) = q(x). This is the Euler–Bernoulli equation for beam bending. After a solution for the displacement of the beam has been obtained, the bending moment (M) and shear force (Q) in the beam can be calculated using the relations M = −EI (d²w/dx²) and Q = dM/dx. Simple beam bending is often analyzed with the Euler–Bernoulli beam equation. The conditions for using simple bending theory are: The beam is subject to pure bending. This means that the shear force is zero, and that no torsional or axial loads are present. The material is isotropic (or orthotropic) and homogeneous. The material obeys Hooke's law (it is linearly elastic and will not deform plastically). The beam is initially straight with a cross section that is constant throughout the beam length. The beam has an axis of symmetry in the plane of bending. The proportions of the beam are such that it would fail by bending rather than by crushing, wrinkling or sideways buckling. Cross-sections of the beam remain plane during bending. Compressive and tensile forces develop in the direction of the beam axis under bending loads. These forces induce stresses on the beam. The maximum compressive stress is found at the uppermost edge of the beam while the maximum tensile stress is located at the lower edge of the beam. Since the stresses between these two opposing maxima vary linearly, there exists a point on the linear path between them where there is no bending stress. The locus of these points is the neutral axis. Because of this area with no stress and the adjacent areas with low stress, using uniform cross section beams in bending is not a particularly efficient means of supporting a load, as it does not use the full capacity of the beam until it is on the brink of collapse. Wide-flange beams (I-beams) and truss girders effectively address this inefficiency as they minimize the amount of material in this under-stressed region. The classic formula for determining the bending stress in a beam under simple bending is: σ = M y / I_z, where σ is the bending stress, M is the moment about the neutral axis, y is the perpendicular distance to the neutral axis, and I_z is the second moment of area about the neutral axis z. Equivalently, with the resistance moment (section modulus) about the neutral axis z defined as W_z = I_z / y_max, the maximum stress is σ_max = M / W_z. Extensions of Euler–Bernoulli beam bending theory Plastic bending The equation σ = M y / I_z is valid only when the stress at the extreme fiber (i.e., the portion of the beam farthest from the neutral axis) is below the yield stress of the material from which it is constructed. At higher loadings the stress distribution becomes non-linear, and ductile materials will eventually enter a plastic hinge state where the magnitude of the stress is equal to the yield stress everywhere in the beam, with a discontinuity at the neutral axis where the stress changes from tensile to compressive. 
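As a numerical illustration of the elastic formulas above (before continuing with plastic bending), consider a simply supported rectangular beam with a central point load, for which the textbook results σ_max = M c / I and δ_max = F L³ / (48 E I) hold. The input numbers below are arbitrary example values:

    # Simply supported rectangular steel beam with a central point load F.
    # Standard Euler-Bernoulli results; all inputs are arbitrary examples.
    F = 10e3           # central load, N
    L = 2.0            # span, m
    b, h = 0.05, 0.10  # cross-section width and height, m
    E = 200e9          # Young's modulus of steel, Pa

    I = b * h**3 / 12                    # second moment of area, m^4
    M_max = F * L / 4                    # peak bending moment at midspan, N*m
    sigma_max = M_max * (h / 2) / I      # peak bending stress (sigma = M*c/I), Pa
    delta_max = F * L**3 / (48 * E * I)  # midspan deflection, m

    print(f"sigma_max = {sigma_max / 1e6:.1f} MPa")  # 60.0 MPa
    print(f"delta_max = {delta_max * 1e3:.2f} mm")   # 2.00 mm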
The plastic hinge state described above is typically used as a limit state in the design of steel structures. Complex or asymmetrical bending The equation above is only valid if the cross-section is symmetrical. For homogeneous beams with asymmetrical sections, the bending stress in the beam is given by σ_x(y, z) = −(M_z I_y + M_y I_yz) y / (I_y I_z − I_yz²) + (M_y I_z + M_z I_yz) z / (I_y I_z − I_yz²), where y and z are the coordinates of a point on the cross section at which the stress is to be determined, M_y and M_z are the bending moments about the y and z centroid axes, I_y and I_z are the second moments of area (distinct from moments of inertia) about the y and z axes, and I_yz is the product of moments of area. Using this equation it is possible to calculate the bending stress at any point on the beam cross section regardless of moment orientation or cross-sectional shape. Note that M_y, M_z, I_y, I_z, and I_yz do not change from one point to another on the cross section. Large bending deformation For large deformations of the body, the stress in the cross-section is calculated using an extended version of this formula. First the following assumptions must be made: Assumption of flat sections – before and after deformation the considered section of the body remains flat (i.e., is not swirled). Shear and normal stresses in this section that are perpendicular to the normal vector of the cross section have no influence on normal stresses that are parallel to this section. Large bending considerations should be implemented when the bending radius ρ is smaller than ten section heights h: ρ < 10h. With those assumptions, the stress in large bending is calculated in terms of: N, the normal force; A, the section area; M, the bending moment; ρ, the local bending radius (the radius of bending at the current section); I_x', the area moment of inertia along the x-axis at the place y (see Steiner's theorem); and y, the position along the y-axis on the section area at which the stress is calculated. When the bending radius ρ approaches infinity and y is small compared with ρ, the original formula is recovered: σ = M y / I_x. Timoshenko bending theory In 1921, Timoshenko improved upon the Euler–Bernoulli theory of beams by adding the effect of shear into the beam equation. The kinematic assumptions of the Timoshenko theory are: normals to the axis of the beam remain straight after deformation there is no change in beam thickness after deformation However, normals to the axis are not required to remain perpendicular to the axis after deformation. The equation for the quasistatic bending of a linear elastic, isotropic, homogeneous beam of constant cross-section under these assumptions is EI (d⁴w/dx⁴) = q(x) − (EI/(kAG)) (d²q/dx²), where I is the area moment of inertia of the cross-section, A is the cross-sectional area, G is the shear modulus, k is a shear correction factor, and q(x) is an applied transverse load. For materials with Poisson's ratios (ν) close to 0.3, the shear correction factor for a rectangular cross-section is approximately k = 10(1 + ν)/(12 + 11ν) ≈ 0.85. The rotation (φ) of the normal satisfies a companion first-order equation, and the bending moment (M) and the shear force (Q) are given by M = −EI (dφ/dx) and Q = kAG (dw/dx − φ). Beams on elastic foundations According to Euler–Bernoulli, Timoshenko or other bending theories, beams on elastic foundations can be analyzed. In some applications, such as rail tracks, foundations of buildings and machines, ships on water, roots of plants, etc., the beam subjected to loads is supported on a continuous elastic foundation (i.e. the continuous reactions due to external loading are distributed along the length of the beam). Dynamic bending of beams The dynamic bending of beams, also known as flexural vibrations of beams, was first investigated by Daniel Bernoulli in the late 18th century. 
Bernoulli's equation of motion of a vibrating beam tended to overestimate the natural frequencies of beams and was improved marginally by Rayleigh in 1877 by the addition of a mid-plane rotation. In 1921 Stephen Timoshenko improved the theory further by incorporating the effect of shear on the dynamic response of bending beams. This allowed the theory to be used for problems involving high frequencies of vibration where the dynamic Euler–Bernoulli theory is inadequate. The Euler–Bernoulli and Timoshenko theories for the dynamic bending of beams continue to be used widely by engineers. Euler–Bernoulli theory The Euler–Bernoulli equation for the dynamic bending of slender, isotropic, homogeneous beams of constant cross-section under an applied transverse load q(x, t) is EI (∂⁴w/∂x⁴) + m (∂²w/∂t²) = q(x, t), where E is the Young's modulus, I is the area moment of inertia of the cross-section, w(x, t) is the deflection of the neutral axis of the beam, and m is the mass per unit length of the beam. Free vibrations For the situation where there is no transverse load on the beam, the bending equation takes the form EI (∂⁴w/∂x⁴) + m (∂²w/∂t²) = 0. Free, harmonic vibrations of the beam can then be expressed as w(x, t) = Re[ŵ(x) e^(−iωt)], and the bending equation can be written as EI (d⁴ŵ/dx⁴) − m ω² ŵ = 0. The general solution of the above equation is ŵ(x) = A₁ cosh(βx) + A₂ sinh(βx) + A₃ cos(βx) + A₄ sin(βx), where A₁, A₂, A₃, A₄ are constants and β⁴ = m ω²/(EI). Timoshenko–Rayleigh theory In 1877, Rayleigh proposed an improvement to the dynamic Euler–Bernoulli beam theory by including the effect of rotational inertia of the cross-section of the beam. Timoshenko improved upon that theory in 1922 by adding the effect of shear into the beam equation. Shear deformations of the normal to the mid-surface of the beam are allowed in the Timoshenko–Rayleigh theory. The equation for the bending of a linear elastic, isotropic, homogeneous beam of constant cross-section under these assumptions involves the following quantities: J, the polar moment of inertia of the cross-section; m = ρA, the mass per unit length of the beam; ρ, the density of the beam; A, the cross-sectional area; G, the shear modulus; and k, a shear correction factor. For materials with Poisson's ratios (ν) close to 0.3, the shear correction factor is approximately 0.85, as in the quasistatic case. Free vibrations For free, harmonic vibrations, the Timoshenko–Rayleigh equations reduce to an ordinary differential equation in ŵ(x). This equation can be solved by noting that all the derivatives of ŵ must have the same form to cancel out, and hence a solution of exponential form may be expected. This observation leads to a characteristic equation that is quartic; its four roots, together with the corresponding exponentials, yield the general solution of the Timoshenko–Rayleigh beam equation for free vibrations. Quasistatic bending of plates The defining feature of beams is that one of the dimensions is much larger than the other two. A structure is called a plate when it is flat and one of its dimensions is much smaller than the other two. There are several theories that attempt to describe the deformation and stress in a plate under applied loads, two of which have been used widely. These are the Kirchhoff–Love theory of plates (also called classical plate theory) and the Mindlin–Reissner plate theory (also called the first-order shear theory of plates). Kirchhoff–Love theory of plates The assumptions of Kirchhoff–Love theory are: straight lines normal to the mid-surface remain straight after deformation straight lines normal to the mid-surface remain normal to the mid-surface after deformation the thickness of the plate does not change during a deformation. These assumptions imply that the in-plane displacements vary linearly through the thickness, u_α(x₁, x₂, x₃) = u⁰_α(x₁, x₂) − x₃ (∂w⁰/∂x_α), while the transverse deflection is constant through the thickness, u₃ = w⁰(x₁, x₂), where u is the displacement of a point in the plate and u⁰, w⁰ are the displacements of the mid-surface. 
The strain–displacement relations follow by differentiating these displacements. The equilibrium equations relate the moment resultants to q(x₁, x₂), the applied load normal to the surface of the plate. In terms of displacements, the equilibrium equation for an isotropic, linear elastic plate in the absence of external load can be written as the biharmonic equation ∇²∇²w⁰ = 0 (in direct tensor notation, ∇⁴w⁰ = 0). Mindlin–Reissner theory of plates The special assumption of this theory is that normals to the mid-surface remain straight and inextensible but not necessarily normal to the mid-surface after deformation. The displacements of the plate are then expressed in terms of the mid-surface displacements and the rotations φ_α of the normal. The strain–displacement relations that result from these assumptions involve a shear correction factor κ. The equilibrium equations are analogous to those of the Kirchhoff–Love theory, with additional equations governing the transverse shear resultants. Dynamic bending of plates Dynamics of thin Kirchhoff plates The dynamic theory of plates determines the propagation of waves in the plates, and the study of standing waves and vibration modes. The equations that govern the dynamic bending of Kirchhoff plates take the form D ∇²∇²w⁰ + 2ρh (∂²w⁰/∂t²) = q(x, t) for a plate with density ρ, thickness 2h and bending stiffness D. See also Bending moment Bending Machine (flat metal bending) Brake (sheet metal bending) Brazier effect Bending of plates Bending (metalworking) Continuum mechanics Contraflexure Deflection (engineering) Flexure bearing List of area moments of inertia Pipe bending Shear and moment diagram Shear strength Sandwich theory Vibration Vibration of plates References External links Flexure formulae Beam stress & deflection, beam deflection tables Statics Elasticity (physics) Structural system Deformation (mechanics)
Bending
[ "Physics", "Materials_science", "Technology", "Engineering" ]
3,087
[ "Structural engineering", "Statics", "Physical phenomena", "Elasticity (physics)", "Deformation (mechanics)", "Building engineering", "Classical mechanics", "Materials science", "Structural system", "Physical properties" ]
1,255,905
https://en.wikipedia.org/wiki/Borabenzene
Borabenzene is a hypothetical organoboron compound with the formula C5H5B. Unlike the related but highly stable benzene molecule, borabenzene would be electron-deficient. Related derivatives are the boratabenzene anions, including the parent [C5H5BH]−. Adducts Adducts of borabenzene with Lewis bases are isolatable. Since borabenzene itself is unavailable, these adducts must be prepared by indirect methods. 4-Silyl-1-methoxyboracyclohexadiene is used as a precursor to the borabenzene adducts: reaction with a Lewis base (L) expels methoxytrimethylsilane (MeOSiMe3), delivering the adduct C5H5B·L. The pyridine adduct is structurally related to biphenyl. It is yellow, whereas biphenyl is colorless, indicating distinct electronic structures. The pyridine ligand is tightly bound: no exchange is observed with free pyridine, even at elevated temperatures. The borabenzene–pyridine adduct behaves like a diene, not an analogue of biphenyl, and will undergo Diels–Alder reactions. See also 6-membered aromatic rings with one carbon replaced by another group: silabenzene, germabenzene, stannabenzene, pyridine, phosphorine, arsabenzene, stibabenzene, bismabenzene, pyrylium, thiopyrylium, selenopyrylium, telluropyrylium Borazine References Boron heterocycles Six-membered rings Hypothetical chemical compounds
Borabenzene
[ "Chemistry" ]
349
[ "Theoretical chemistry", "Hypotheses in chemistry", "Hypothetical chemical compounds" ]
1,256,194
https://en.wikipedia.org/wiki/V-model
The V-model is a graphical representation of a systems development lifecycle. It is used to produce rigorous development lifecycle models and project management models. The V-model falls into three broad categories: the German V-Modell, a general testing model, and the US government standard. The V-model summarizes the main steps to be taken in conjunction with the corresponding deliverables within a computerized system validation framework, or project life cycle development. It describes the activities to be performed and the results that have to be produced during product development. The left side of the "V" represents the decomposition of requirements and the creation of system specifications. The right side of the "V" represents the integration of parts and their validation. However, requirements need to be validated first against the higher-level requirements or user needs. Furthermore, there is also such a thing as validation of system models, which can partially be done on the left side as well. To claim that validation only occurs on the right side would therefore not be correct. The simplest formulation is that verification is always against the requirements (technical terms) and validation is always against the real world or the user's needs. The aerospace standard RTCA DO-178B states that requirements are validated—confirmed to be true—and the end product is verified to ensure it satisfies those requirements. Validation can be expressed with the query "Are you building the right thing?" and verification with "Are you building it right?" Types There are three general types of V-model. V-Modell "V-Modell" is the official project management method of the German government. It is roughly equivalent to PRINCE2, but more directly relevant to software development. The key attribute of using a "V" representation was to require proof that the products from the left side of the V were accepted by the appropriate test and integration organization implementing the right side of the V. General testing Throughout the testing community worldwide, the V-model is widely seen as a vaguer, illustrative depiction of the software development process, as described in the International Software Testing Qualifications Board Foundation Syllabus for software testers. There is no single definition of this model, which is covered more directly in the alternative article on the V-Model (software development). US government standard The US also has a government-standard V-model. Its scope is a narrower systems development lifecycle model, but it is far more detailed and rigorous than what most UK practitioners and testers would understand by the V-model. Validation vs. verification It is sometimes said that validation can be expressed by the query "Are you building the right thing?" and verification by "Are you building it right?" In practice, the usage of these terms varies. The PMBOK guide, also adopted by the IEEE as a standard (jointly maintained by INCOSE, the Systems Engineering Research Council (SERC), and the IEEE Computer Society), defines them as follows in its 4th edition: "Validation. The assurance that a product, service, or system meets the needs of the customer and other identified stakeholders. It often involves acceptance and suitability with external customers. Contrast with verification." "Verification. The evaluation of whether or not a product, service, or system complies with a regulation, requirement, specification, or imposed condition. It is often an internal process. Contrast with validation." 
Objectives The V-model provides guidance for the planning and realization of projects. The following objectives are intended to be achieved by a project execution: Minimization of project risks: The V-model improves project transparency and project control by specifying standardized approaches and describing the corresponding results and responsible roles. It permits early recognition of planning deviations and risks and improves process management, thus reducing project risk. Improvement and guarantee of quality: As a standardized process model, the V-model ensures that the results to be provided are complete and have the desired quality. Defined interim results can be checked at an early stage. Uniform product contents improve readability, understandability and verifiability. Reduction of total cost over the entire project and system life cycle: The effort for the development, production, operation and maintenance of a system can be calculated, estimated and controlled in a transparent manner by applying a standardized process model. The results obtained are uniform and easily retraced. This reduces the acquirer's dependency on the supplier and the effort for subsequent activities and projects. Improvement of communication between all stakeholders: The standardized and uniform description of all relevant elements and terms is the basis for mutual understanding between all stakeholders. Thus, the frictional loss between user, acquirer, supplier and developer is reduced. V-model topics Systems engineering and verification The systems engineering process (SEP) provides a path for improving the cost-effectiveness of complex systems as experienced by the system owner over the entire life of the system, from conception to retirement. It involves early and comprehensive identification of goals, a concept of operations that describes user needs and the operating environment, thorough and testable system requirements, detailed design, implementation, rigorous acceptance testing of the implemented system to ensure it meets the stated requirements (system verification), measuring its effectiveness in addressing goals (system validation), ongoing operation and maintenance, system upgrades over time, and eventual retirement. The process emphasizes requirements-driven design and testing. All design elements and acceptance tests must be traceable to one or more system requirements, and every requirement must be addressed by at least one design element and acceptance test (a mechanical version of this check is sketched below). Such rigor ensures nothing is done unnecessarily and everything that is necessary is accomplished. The two streams Specification stream The specification stream mainly consists of: User requirement specifications Functional requirement specifications Design specifications Testing stream The testing stream generally consists of: Installation qualification (IQ) Operational qualification (OQ) Performance qualification (PQ) The development stream can consist (depending on the system type and the development scope) of customization, configuration or coding. Applications The V-model is used to regulate the software development process within the German federal administration. Nowadays it is still the standard for German federal administration and defense projects, as well as for software developers within the region. 
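The requirement-coverage rule from the systems engineering discussion above can be checked mechanically, as promised. A minimal sketch; the data model and all names in it are hypothetical:

    # Minimal V-model traceability check: every requirement must be covered
    # by at least one design element and one acceptance test, and every
    # element/test must trace back to some requirement.
    requirements = {"R1", "R2", "R3"}
    design_elements = {"D1": {"R1"}, "D2": {"R2", "R3"}}   # element -> reqs
    acceptance_tests = {"T1": {"R1", "R2"}, "T2": {"R3"}}  # test -> reqs

    def coverage_gaps(reqs, traces):
        covered = set().union(*traces.values())
        orphans = {k for k, v in traces.items() if not v & reqs}
        return reqs - covered, orphans  # (uncovered requirements, orphan items)

    for name, traces in [("design", design_elements), ("tests", acceptance_tests)]:
        uncovered, orphans = coverage_gaps(requirements, traces)
        print(name, "uncovered:", uncovered or "none", "orphans:", orphans or "none")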
The concept of the V-model was developed simultaneously, but independently, in Germany and in the United States in the late 1980s: The German V-model was originally developed by IABG in Ottobrunn, near Munich, in cooperation with the Federal Office for Defense Technology and Procurement in Koblenz, for the Federal Ministry of Defense. It was taken over by the Federal Ministry of the Interior for the civilian public authorities domain in summer 1992. The US V-model, as documented in the 1991 proceedings for the National Council on Systems Engineering (NCOSE; INCOSE as of 1995), was developed for satellite systems involving hardware, software, and human interaction. The V-model first appeared at Hughes Aircraft circa 1982 as part of the pre-proposal effort for the FAA Advanced Automation System (AAS) program. It eventually formed the test strategy for the Hughes AAS Design Competition Phase (DCP) proposal. It was created to show the test and integration approach, which was driven by the new challenge of surfacing latent defects in the software. The need for this new level of latent defect detection was driven by the goal of starting to automate the thinking and planning processes of the air traffic controller, as envisioned by the automated enroute air traffic control (AERA) program. The reason the V is so powerful comes from the Hughes culture of coupling all text and analysis to multi-dimensional images. It was the foundation of Sequential Thematic Organization of Publications (STOP), created by Hughes in 1963 and used until Hughes was divested by the Howard Hughes Medical Institute in 1985. The US Department of Defense puts the systems engineering process interactions into a V-model relationship. It has now found widespread application in commercial as well as defense programs. Its primary use is in project management and throughout the project lifecycle. One fundamental characteristic of the US V-model is that time and maturity move from left to right and one cannot move back in time. All iteration is along a vertical line to higher or lower levels in the system hierarchy. This has proven to be an important aspect of the model. The expansion of the model to a dual-Vee concept is treated in the references. As the V-model is publicly available, many companies also use it. In project management it is a method comparable to PRINCE2 and describes methods for project management as well as methods for system development. The V-model, while rigid in process, can be very flexible in application, especially as it pertains to scope outside the realm of the normal System Development Lifecycle parameters. Advantages These are the advantages the V-model offers over other systems development models: The users of the V-model participate in its development and maintenance. A change control board publicly maintains the V-model. The change control board meets anywhere from daily to weekly and processes all change requests received during system development and test. The V-model provides concrete assistance on how to implement an activity and its work steps, defining explicitly the events needed to complete a work step: each activity schema contains instructions, recommendations and detailed explanations of the activity. Limitations The following aspects are not covered by the V-model; they must be regulated in addition, or the V-model must be adapted accordingly: The placing of contracts for services is not regulated. 
The organization and execution of operation, maintenance, repair and disposal of the system are not covered by the V-model. However, planning and preparation of a concept for these tasks are regulated in the V-model. The V-model addresses software development within a project rather than a whole organization. See also Engineering information management (EIM) ARCADIA (as a supporting systems modeling method) IBM Rational Unified Process (as a supporting software process) Waterfall model of software development Systems architecture Systems design Systems engineering Model-based systems engineering Theory U References External links Software project management Systems engineering
V-model
[ "Engineering" ]
2,035
[ "Systems engineering" ]
1,256,751
https://en.wikipedia.org/wiki/Zeno%20machine
In mathematics and computer science, Zeno machines (abbreviated ZM, and also called accelerated Turing machines, ATMs) are a hypothetical computational model related to Turing machines that are capable of carrying out computations involving a countably infinite number of algorithmic steps. These machines are ruled out in most models of computation. The idea of Zeno machines was first discussed by Hermann Weyl in 1927; the name refers to Zeno's paradoxes, attributed to the ancient Greek philosopher Zeno of Elea. Zeno machines play a crucial role in some theories. The theory of the Omega Point devised by physicist Frank J. Tipler, for instance, can only be valid if Zeno machines are possible. Definition A Zeno machine is a Turing machine that can take an infinite number of steps, and then continue to take more steps. This can be thought of as a supertask where 2^(−n) units of time are taken to perform the n-th step; thus, the first step takes 0.5 units of time, the second takes 0.25, the third 0.125 and so on, so that after one unit of time, a countably infinite number of steps will have been performed. Infinite time Turing machines A more formal model of the Zeno machine is the infinite time Turing machine (ITTM). Defined first in unpublished work by Jeffrey Kidder and expanded upon by Joel Hamkins and Andy Lewis in Infinite Time Turing Machines, the infinite time Turing machine is an extension of the classical Turing machine model, to include transfinite time; that is, time beyond all finite time. A classical Turing machine has a status at step 0 (in the start state, with an empty tape, read head at cell 0) and a procedure for getting from one status to the successive status. In this way the status of a Turing machine is defined for all steps corresponding to a natural number. An ITTM maintains these properties, but also defines the status of the machine at limit ordinals, that is, ordinals that are neither 0 nor the successor of any ordinal. The status of a Turing machine consists of 3 parts: The state The location of the read-write head The contents of the tape Just as a classical Turing machine has a labeled start state, which is the state at the start of a program, an ITTM has a labeled limit state, which is the state for the machine at any limit ordinal. This is the case even if the machine has no other way to access this state, for example no node transitions to it. The location of the read-write head is set to zero at any limit step. Lastly, the state of the tape is determined by the limit supremum of previous tape states: for a machine M, a cell i, and a limit ordinal λ, T_λ(i) = lim sup_(α→λ) T_α(i). That is, the i-th cell at time λ is the limit supremum of the values of that same cell as the machine approaches λ. This can be thought of as the limit if it converges, and 1 otherwise. Computability Zeno machines have been proposed as a model of computation more powerful than classical Turing machines, based on their ability to solve the halting problem for classical Turing machines. Cristian Calude and Ludwig Staiger present the following pseudocode algorithm as a solution to the halting problem when run on a Zeno machine.

begin program
    write 0 on the first position of the output tape;
    begin loop
        simulate 1 successive step of the given Turing machine on the given input;
        if the Turing machine has halted then
            write 1 on the first position of the output tape and break out of loop;
    end loop
end program

By inspecting the first position of the output tape after one unit of time has elapsed, we can determine whether the given Turing machine halts. 
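The lim sup rule for limit steps described above can be caricatured on finite samples. The sketch below only inspects a finite prefix of a cell's history, which a real ITTM would not need to do; it is illustrative only, not a simulation of transfinite computation:

    # Caricature of the ITTM limit rule: a tape cell's value at a limit step
    # is the lim sup of its earlier values -- 1 if the cell held 1 cofinally
    # often, otherwise its eventual constant value.
    def limsup_bit(history):
        """lim sup of a 0/1 sequence, judged from a finite sample of it."""
        tail = history[len(history) // 2:]  # crude stand-in for "the tail"
        return 1 if 1 in tail else 0

    print(limsup_bit([0, 1] * 50))         # oscillates forever -> lim sup 1
    print(limsup_bit([1] * 5 + [0] * 95))  # eventually 0       -> lim sup 0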
In contrast, Oron Shagrir argues that the state of a Zeno machine is only defined on the interval [0, 1), and so it is impossible to inspect the tape at time 1. Furthermore, since classical Turing machines don't have any timing information, the addition of timing information, whether accelerating or not, does not itself add any computational power. Infinite time Turing machines, however, are capable of implementing the given algorithm, halting at the first limit step ω with the correct solution, since they do define their state for transfinite steps. All Π¹₁ sets are decidable by infinite time Turing machines, and every semidecidable set is Δ¹₂. Zeno machines cannot solve their own halting problem. See also Computation in the limit Specker sequence Ross–Littlewood paradox References Models of computation Turing machine Hypercomputation Supertasks Articles with example pseudocode
Zeno machine
[ "Mathematics" ]
908
[ "Supertasks", "Mathematical objects", "Infinity" ]
1,257,907
https://en.wikipedia.org/wiki/Valaciclovir
Valaciclovir, also spelled valacyclovir, is an antiviral medication used to treat outbreaks of herpes simplex or herpes zoster (shingles). It is also used to prevent cytomegalovirus following a kidney transplant in high-risk cases. It is taken by mouth. Common side effects include headache and vomiting. Severe side effects may include kidney problems. Use in pregnancy appears to be safe. It is a prodrug, which works after being converted to aciclovir in a person's body. Valaciclovir was patented in 1987 and came into medical use in 1995. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In 2022, it was the 113th most commonly prescribed medication in the United States, with more than 5 million prescriptions. Medical uses Valaciclovir is used for the treatment of HSV and VZV infections, including: Oral and genital herpes simplex (treatment and prevention) Reduction of HSV transmission from people with recurrent infection to uninfected individuals Herpes zoster (shingles): the typical dosage for treatment of herpes zoster is 1,000 mg orally three times a day for seven consecutive days Prevention of cytomegalovirus following organ transplantation Prevention of herpesviruses in immunocompromised people (such as those undergoing cancer chemotherapy) Chickenpox in children (ages 2–18) It has shown promise as a treatment for infectious mononucleosis and is preventively administered in suspected cases of herpes B virus exposure. Bell's palsy does not seem to benefit from valaciclovir as the only treatment. Adverse effects Common adverse drug reactions (≥1% of people) associated with valaciclovir are the same as for aciclovir, its active metabolite. They include: nausea, vomiting, diarrhea and headache. Infrequent adverse effects (0.1–1% of patients) include: agitation, vertigo, confusion, dizziness, edema, arthralgia, sore throat, constipation, abdominal pain, rash, weakness and/or renal impairment. Rare adverse effects (<0.1% of patients) include: coma, seizures, neutropenia, leukopenia, tremor, ataxia, encephalopathy, psychotic symptoms, crystalluria, anorexia, fatigue, hepatitis, Stevens–Johnson syndrome, toxic epidermal necrolysis and/or anaphylaxis. Pharmacology Valaciclovir is a prodrug, an esterified version of aciclovir that has greater oral bioavailability (about 55%) than aciclovir. It is converted by esterases to the active drug, aciclovir, and the amino acid valine via hepatic first-pass metabolism. Aciclovir is selectively converted into a monophosphate form by viral thymidine kinase, which is far more effective (3,000 times) than cellular thymidine kinase at phosphorylating aciclovir. Subsequently, the monophosphate form is further phosphorylated into a diphosphate by cellular guanylate kinase and then into the active triphosphate form, aciclo-GTP, by cellular kinases. Mechanism of action Aciclo-GTP, the active triphosphate metabolite of aciclovir, is a very potent inhibitor of viral DNA replication. Aciclo-GTP competitively inhibits and inactivates the viral DNA polymerase. Its monophosphate form also incorporates into the viral DNA, resulting in chain termination. It has also been shown that the viral enzymes cannot remove aciclo-GMP from the chain, which results in inhibition of further activity of DNA polymerase. Aciclo-GTP is fairly rapidly metabolized within the cell, possibly by cellular phosphatases. Aciclovir is active against most species in the herpesvirus family. 
In descending order of activity: Herpes simplex virus type I (HSV-1) Herpes simplex virus type II (HSV-2) Varicella zoster virus (VZV) Epstein–Barr virus (EBV) Cytomegalovirus (CMV) The drug is predominantly active against HSV and, to a lesser extent, VZV. It is only of limited efficacy against EBV and CMV. However, valaciclovir has been shown to lower or eliminate the presence of the Epstein–Barr virus in subjects afflicted with acute mononucleosis, leading to a significant decrease in the severity of symptoms. Valaciclovir and acyclovir act by inhibiting viral DNA replication, but as of 2016 there was little evidence that they are effective against the Epstein–Barr virus. Acyclovir therapy does not prevent viral latency and has not proven effective at eradicating latent viruses in nerve ganglia. As of 2005, resistance to valaciclovir had not been significant. Mechanisms of resistance in HSV include deficient viral thymidine kinase and mutations to viral thymidine kinase and/or DNA polymerase that alter substrate sensitivity. It is also used for herpes B virus postexposure prophylaxis. Chemistry Details of the synthesis of valaciclovir were first published by scientists from the Wellcome Foundation. Aciclovir was esterified with a carboxybenzyl-protected valine, using dicyclohexylcarbodiimide as the dehydrating agent. In the final step, the protecting group was removed by hydrogenation using a palladium-on-alumina catalyst. History Valaciclovir was patented in 1987 and came into medical use in 1995. It is available as a generic medication. In 2022, it was the 113th most commonly prescribed medication in the United States, with more than 5 million prescriptions. Society and culture Brand names It is marketed by GlaxoSmithKline under the brand names Valtrex and Zelitrex. Valaciclovir has been available as a generic drug in the US since November 2009. References Amino acid derivatives Anti-herpes virus drugs Drugs developed by GSK plc Carboxylate esters Ethers Prodrugs Purines Herpes World Health Organization essential medicines
Valaciclovir
[ "Chemistry" ]
1,367
[ "Functional groups", "Prodrugs", "Organic compounds", "Ethers", "Chemicals in medicine" ]
1,257,929
https://en.wikipedia.org/wiki/CHEMKIN
CHEMKIN is a proprietary software tool for solving complex chemical kinetics problems. It is used worldwide in the combustion, chemical processing, microelectronics and automotive industries, and also in atmospheric science. It was originally developed at Sandia National Laboratories and is now developed by a US company, Reaction Design. Reaction Design was acquired by ANSYS in 2014, so CHEMKIN and related products are now available through ANSYS. See also Autochem Chemical kinetics Cantera Chemical WorkBench Kinetic PreProcessor (KPP) External links CHEMKIN web page Sandia National Laboratories CHEMKIN homepage References Combustion Computational chemistry software
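CHEMKIN itself is proprietary, so its input formats and APIs are not reproduced here. The following is a generic sketch of the core numerical task such tools perform, namely integrating stiff mass-action kinetics ODEs; the reaction (a reversible isomerization A ⇌ B) and its rate constants are invented for illustration, and SciPy is assumed available.

```python
# Generic sketch of the core problem CHEMKIN-class tools solve: integrating
# stiff mass-action kinetics ODEs. This is NOT CHEMKIN's input format or API;
# the reversible isomerization A <=> B and its rate constants are made up.
from scipy.integrate import solve_ivp

kf, kr = 100.0, 1.0            # forward/reverse rate constants (1/s), assumed

def rhs(t, y):
    a, b = y
    r = kf * a - kr * b        # net rate of A -> B (mass-action kinetics)
    return [-r, r]

# BDF is an implicit multistep method suited to stiff kinetics systems.
sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0], method="BDF")
print(f"equilibrium A fraction ~ {sol.y[0, -1]:.4f}")  # ~ kr/(kf+kr) = 0.0099
```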
CHEMKIN
[ "Chemistry" ]
135
[ "Combustion", "Computational chemistry", "Computational chemistry software", "Chemistry software" ]
1,258,057
https://en.wikipedia.org/wiki/Peptide%20synthesis
In organic chemistry, peptide synthesis is the production of peptides, compounds where multiple amino acids are linked via amide bonds, also known as peptide bonds. Peptides are chemically synthesized by the condensation reaction of the carboxyl group of one amino acid to the amino group of another. Protecting group strategies are usually necessary to prevent undesirable side reactions with the various amino acid side chains. Chemical peptide synthesis most commonly starts at the carboxyl end of the peptide (C-terminus), and proceeds toward the amino-terminus (N-terminus). Protein biosynthesis (long peptides) in living organisms occurs in the opposite direction. The chemical synthesis of peptides can be carried out using classical solution-phase techniques, although these have been replaced in most research and development settings by solid-phase methods (see below). Moreover, solution-phase synthesis retains its usefulness in the large-scale production of peptides for industrial purposes. Chemical synthesis facilitates the production of peptides that are difficult to express in bacteria, the incorporation of unnatural amino acids, peptide/protein backbone modification, and the synthesis of D-proteins, which consist of D-amino acids. Solid-phase synthesis The established method for the production of synthetic peptides in the lab is known as solid-phase peptide synthesis (SPPS). Pioneered by Robert Bruce Merrifield, SPPS allows the rapid assembly of a peptide chain through successive reactions of amino acid derivatives on a macroscopically insoluble solvent-swollen beaded resin support. The solid support consists of small, polymeric resin beads functionalized with reactive groups (such as amine or hydroxyl groups) that link to the nascent peptide chain. Since the peptide remains covalently attached to the support throughout the synthesis, excess reagents and side products can be removed by washing and filtration. This approach circumvents the comparatively time-consuming isolation of the product peptide from solution after each reaction step, which would be required when using conventional solution-phase synthesis. Each amino acid to be coupled to the peptide chain N-terminus must be protected on its N-terminus and side chain using appropriate protecting groups such as Boc (acid-labile) or Fmoc (base-labile), depending on the side chain and the protection strategy used (see below). The general SPPS procedure is one of repeated cycles of alternate N-terminal deprotection and coupling reactions. The resin can be washed between each step. First an amino acid is coupled to the resin. Subsequently, the amine is deprotected, and then coupled with the activated carboxyl group of the next amino acid to be added. This cycle is repeated until the desired sequence has been synthesized. SPPS cycles may also include capping steps which block the ends of unreacted amino acids from reacting. At the end of the synthesis, the crude peptide is cleaved from the solid support while simultaneously removing all protecting groups using a reagent such as trifluoroacetic acid. The crude peptide can be precipitated from a non-polar solvent like diethyl ether in order to remove organic-soluble byproducts. The crude peptide can be purified using reversed-phase HPLC. The purification process, especially of longer peptides, can be challenging, because cumulative amounts of numerous minor byproducts, which have properties similar to the desired peptide product, have to be removed. 
For this reason, so-called continuous chromatography processes such as MCSGP are increasingly being used in commercial settings to maximize the yield without sacrificing purity. SPPS is limited by reaction yields due to the exponential accumulation of by-products, and typically peptides and proteins in the range of 70 amino acids are pushing the limits of synthetic accessibility. Synthetic difficulty is also sequence-dependent; typically aggregation-prone sequences such as amyloids are difficult to make. Longer lengths can be accessed by using ligation approaches such as native chemical ligation, where two shorter fully deprotected synthetic peptides can be joined in solution. Peptide coupling reagents An important feature that has enabled the broad application of SPPS is the generation of extremely high yields in the coupling step. Highly efficient amide bond-formation conditions are required. To illustrate the impact of suboptimal coupling yields for a given synthesis, consider the case where each coupling step were to have at least 99% yield: this would result in a 77% overall crude yield for a 26-amino acid peptide (assuming 100% yield in each deprotection); if each coupling were 95% efficient, the overall yield would be about 25% (see the short numerical sketch at the end of this entry). In practice, high coupling yields are maintained by using efficient coupling reagents and by adding an excess of each amino acid (between 2- and 10-fold). The minimization of amino acid racemization during coupling is also of vital importance to avoid epimerization in the final peptide product. Amide bond formation between an amine and carboxylic acid is slow, and as such usually requires 'coupling reagents' or 'activators'. A wide range of coupling reagents exist, due in part to their varying effectiveness for particular couplings; many of these reagents are commercially available. Carbodiimides Carbodiimides such as dicyclohexylcarbodiimide (DCC) and diisopropylcarbodiimide (DIC) are frequently used for amide bond formation. The reaction proceeds via the formation of a highly reactive O-acylisourea. This reactive intermediate is attacked by the peptide N-terminal amine, forming a peptide bond. Formation of the O-acylisourea proceeds fastest in non-polar solvents such as dichloromethane. DIC is particularly useful for SPPS since as a liquid it is easily dispensed, and the urea byproduct is easily washed away. Conversely, the related carbodiimide 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDC) is often used for solution-phase peptide couplings as its urea byproduct can be removed by washing during aqueous work-up. Carbodiimide activation opens the possibility for racemization of the activated amino acid. Racemization can be circumvented with 'racemization suppressing' additives such as the triazoles 1-hydroxy-benzotriazole (HOBt) and 1-hydroxy-7-aza-benzotriazole (HOAt). These reagents attack the O-acylisourea intermediate to form an active ester, which subsequently reacts with the peptide to form the desired peptide bond. Ethyl cyanohydroxyiminoacetate (Oxyma), an additive for carbodiimide coupling, acts as an alternative to HOAt. Amidinium and phosphonium salts To avoid epimerization through the O-acylisourea intermediate formed when using a carbodiimide reagent, an amidinium or phosphonium reagent can be employed. These reagents have two parts: an electrophilic moiety which deoxygenates the carboxylic acid and a masked nucleophilic moiety. 
Nucleophilic attack of the carboxylic acid on the electrophilic amidinium or phosphonium moiety leads to a short-lived intermediate which is rapidly trapped by the unmasked nucleophile to form the activated ester intermediate and either a urea or phosphoramide by-product. These cationic reagents have non-coordinating counteranions such as a hexafluorophosphate or a tetrafluoroborate. The identity of this anion is typically indicated by the first letter in the reagent’s acronym, although the nomenclature can be inconsistent. For example, HBTU is a hexafluorophosphate salt while TBTU is a tetrafluoroborate salt. In addition to HBTU and HATU, other common reagents include HCTU (6-ClHOBt), TCFH (chloride) and COMU (ethyl cyano(hydroxyimino)acetate). Amidinium reagents incorporating hydroxybenzotriazole moieties can exist in an N-form (guanidinium) or an O-form (uronium), but the N-form is generally more stable. Phosphonium reagents include BOP (HOBt), PyBOP (HOBt) and PyAOP (HOAt). Although these reagents can lead to the same activated ester intermediates as a carbodiimide reagent, the rate of activation is higher due to the high electrophilicity of these cationic reagents. Amidinium reagents are capable of reacting with the peptide N-terminus to form an inactive guanidino by-product, whereas phosphonium reagents are not. Propanephosphonic acid anhydride Since the late 2000s, propanephosphonic acid anhydride, sold commercially under various names such as "T3P", has become a useful reagent for amide bond formation in commercial applications. It converts the oxygen of the carboxylic acid into a leaving group, whose peptide-coupling byproducts are water-soluble and can be easily washed away. In a performance comparison between propanephosphonic acid anhydride and other peptide coupling reagents for the preparation of a nonapeptide drug, it was found that this reagent was superior to other reagents with regard to yield and low epimerization. Solid supports Solid supports for peptide synthesis are selected for physical stability, to permit the rapid filtration of liquids. Suitable supports are inert to reagents and solvents used during SPPS and allow for the attachment of the first amino acid. Swelling is of great importance because peptide synthesis takes place inside the swollen pores of the solid support. Three primary types of solid supports are: gel-type supports, surface-type supports, and composites. Improvements to solid supports used for peptide synthesis enhance their ability to withstand the repeated use of TFA during the deprotection step of SPPS. Two primary resins are used, based on whether a C-terminal carboxylic acid or amide is desired. The Wang resin has been the most commonly used resin for peptides with C-terminal carboxylic acids. Protecting groups schemes As described above, the use of N-terminal and side chain protecting groups is essential during peptide synthesis to avoid undesirable side reactions, such as self-coupling of the activated amino acid leading to polymerization. This would compete with the intended peptide coupling reaction, resulting in low yield or even complete failure to synthesize the desired peptide. Two principal protecting group schemes are typically used in solid-phase peptide synthesis: so-called Boc/benzyl and Fmoc/tert-butyl approaches. 
The Boc/Bzl strategy utilizes TFA-labile N-terminal Boc protection alongside side chain protection that is removed using anhydrous hydrogen fluoride during the final cleavage step (with simultaneous cleavage of the peptide from the solid support). Fmoc/tBu SPPS uses base-labile Fmoc N-terminal protection, with side chain protection and a resin linkage that are acid-labile (final acidic cleavage is carried out via TFA treatment). Both approaches, including the advantages and disadvantages of each, are outlined in more detail below. Boc/Bzl SPPS Before the advent of SPPS, solution methods for chemical peptide synthesis relied on tert-butyloxycarbonyl (abbreviated 'Boc') as a temporary N-terminal α-amino protecting group. The Boc group is removed with acid, such as trifluoroacetic acid (TFA). This forms a positively charged amino group in the presence of excess TFA, which is neutralized and coupled to the incoming activated amino acid. Neutralization can either occur prior to coupling or in situ during the basic coupling reaction. The Boc/Bzl approach retains its usefulness in reducing peptide aggregation during synthesis. In addition, Boc/benzyl SPPS may be preferred over the Fmoc/tert-butyl approach when synthesizing peptides containing base-sensitive moieties (such as depsipeptides or thioester moieties), as treatment with base is required during the Fmoc deprotection step (see below). Permanent side-chain protecting groups used during Boc/benzyl SPPS are typically benzyl or benzyl-based groups. Final removal of the peptide from the solid support occurs simultaneously with side chain deprotection using anhydrous hydrogen fluoride via hydrolytic cleavage. The final product is a fluoride salt which is relatively easy to solubilize. Scavengers such as cresol must be added to the HF in order to prevent reactive cations from generating undesired byproducts. Fmoc/tBu SPPS The use of N-terminal Fmoc protection allows for a milder deprotection scheme than used for Boc/Bzl SPPS, and this protection scheme is truly orthogonal under SPPS conditions. Fmoc deprotection utilizes a base, typically 20–50% piperidine in DMF. The exposed amine is therefore neutral, and consequently no neutralization of the peptide-resin is required, as in the case of the Boc/Bzl approach. The lack of electrostatic repulsion between the peptide chains can lead to an increased risk of aggregation with Fmoc/tBu SPPS, however. Because the liberated fluorenyl group is a chromophore, Fmoc deprotection can be monitored by UV absorbance of the reaction mixture, a strategy which is employed in automated peptide synthesizers. The ability of the Fmoc group to be cleaved under relatively mild basic conditions while being stable to acid allows the use of side chain protecting groups such as Boc and tBu that can be removed in milder acidic final cleavage conditions (TFA) than those used for final cleavage in Boc/Bzl SPPS (HF). Scavengers such as water and triisopropylsilane (TIPS) are most commonly added during the final cleavage in order to prevent side reactions with reactive cationic species released as a result of side chain deprotection. Nevertheless, many other scavenger compounds could be used as well. The resulting crude peptide is obtained as a TFA salt, which is potentially more difficult to solubilize than the fluoride salts generated in Boc SPPS. Fmoc/tBu SPPS is less atom-economical, as the fluorenyl group is much larger than the Boc group. 
Accordingly, prices for Fmoc amino acids were high until the large-scale piloting of one of the first synthesized peptide drugs, enfuvirtide, began in the 1990s, when market demand adjusted the relative prices of Fmoc- vs Boc- amino acids. Other protecting groups Benzyloxy-carbonyl The benzyloxycarbonyl (Z) group is another carbamate-type amine protecting group, discovered by Leonidas Zervas in the early 1930s and usually added via reaction with benzyl chloroformate. It is removed under harsh conditions using HBr in acetic acid, or milder conditions of catalytic hydrogenation. This methodology was first used in the synthesis of oligopeptides by Zervas and Max Bergmann in 1932. Hence, this became known as the Bergmann-Zervas synthesis, which was characterised as "epoch-making" and helped establish synthetic peptide chemistry as a distinct field. It constituted the first useful lab method for controlled peptide synthesis, enabling the synthesis of previously unattainable peptides with reactive side-chains, while Z-protected amino acids are also prevented from undergoing racemization. The Bergmann-Zervas method remained the standard practice in peptide chemistry for two full decades after its publication, until it was superseded by newer methods (such as the Boc protecting group) in the early 1950s. Nowadays, while it has been used periodically for α-amine protection, it is much more commonly used for side chain protection. Alloc and miscellaneous groups The allyloxycarbonyl (alloc) protecting group is sometimes used to protect an amino group (or carboxylic acid or alcohol group) when an orthogonal deprotection scheme is required. It is also sometimes used when conducting on-resin cyclic peptide formation, where the peptide is linked to the resin by a side-chain functional group. The Alloc group can be removed using tetrakis(triphenylphosphine)palladium(0). For special applications like synthetic steps involving protein microarrays, protecting groups sometimes termed "lithographic" are used, which are amenable to photochemistry at a particular wavelength of light, and so can be removed during lithographic types of operations. Regioselective disulfide bond formation The formation of multiple native disulfides remains a challenge in native peptide synthesis by solid-phase methods. Random chain combination typically results in several products with nonnative disulfide bonds. Stepwise formation of disulfide bonds is typically the preferred method, and is performed with thiol protecting groups. Different thiol protecting groups provide multiple dimensions of orthogonal protection. These orthogonally protected cysteines are incorporated during the solid-phase synthesis of the peptide. Successive removal of these groups, to allow for selective exposure of free thiol groups, leads to disulfide formation in a stepwise manner. The order of removal of the groups must be considered so that only one group is removed at a time. Thiol protecting groups used in peptide synthesis requiring later regioselective disulfide bond formation must possess multiple characteristics. First, they must be removable under conditions that do not affect the unprotected side chains. Second, the protecting group must be able to withstand the conditions of solid-phase synthesis. Third, the removal of the thiol protecting group must be such that it leaves intact other thiol protecting groups, if orthogonal protection is desired. That is, the removal of PG A should not affect PG B. 
Some of the thiol protecting groups commonly used include the acetamidomethyl (Acm), tert-butyl (But), 3-nitro-2-pyridine sulfenyl (NPYS), 2-pyridine-sulfenyl (Pyr), and trityl (Trt) groups. Importantly, the NPYS group can replace the Acm PG to yield an activated thiol. Using this method, Kiso and coworkers reported the first total synthesis of insulin in 1993. In this work, the A-chain of insulin was prepared with the following protecting groups in place on its cysteines: CysA6(But), CysA7(Acm), and CysA11(But), leaving CysA20 unprotected. Microwave-assisted peptide synthesis Microwave-assisted peptide synthesis has been used to complete long peptide sequences with high degrees of yield and low degrees of racemization. Continuous flow solid-phase peptide synthesis The first article relating to continuous flow peptide synthesis was published in 1986, but due to technical limitations, it was not until the early 2010s that more academic groups started using continuous flow for the rapid synthesis of peptides. The advantage of continuous flow over traditional batch methods is the ability to heat reagents under good temperature control, accelerating reaction kinetics while minimising side reactions. Cycle times vary from 30 seconds up to 6 minutes, depending on reaction conditions and the excess of reagent. Thanks to inline analytics, such as UV/Vis spectroscopy, and the use of variable bed flow reactors (VBFR), which monitor the resin volume, on-resin aggregation can be identified and coupling efficiency can be evaluated. Synthesizing long peptides Stepwise elongation, in which the amino acids are connected step-by-step in turn, is ideal for small peptides containing between 2 and 100 amino acid residues. Another method is fragment condensation, in which peptide fragments are coupled. Although the former can elongate the peptide chain without racemization, the yield drops if it alone is used in the creation of long or highly polar peptides. Fragment condensation is better than stepwise elongation for synthesizing sophisticated long peptides, but its use must be restricted in order to protect against racemization. Fragment condensation is also undesirable since the coupled fragment must be in gross excess, which may be a limitation depending on the length of the fragment. A new development for producing longer peptide chains is chemical ligation: unprotected peptide chains react chemoselectively in aqueous solution. A first kinetically controlled product rearranges to form the amide bond. The most common form of native chemical ligation uses a peptide thioester that reacts with a terminal cysteine residue. Other methods applicable for covalently linking polypeptides in aqueous solution include the use of split inteins, spontaneous isopeptide bond formation and sortase ligation. In order to optimize the synthesis of long peptides, a method was developed in Medicon Valley for converting peptide sequences: a simple pre-sequence (e.g. lysine (Lysn), glutamic acid (Glun), or (LysGlu)n) is incorporated at the C-terminus of the peptide to induce an alpha-helix-like structure. This can potentially increase biological half-life, improve peptide stability and inhibit enzymatic degradation without altering pharmacological activity or profile of action. Cyclic peptides On-resin cyclization Peptides can be cyclized on a solid support. A variety of cyclization reagents can be used, such as HBTU/HOBt/DIEA, PyBOP/DIEA and PyClock/DIEA. Head-to-tail peptides can be made on the solid support. 
The deprotection of the C-terminus at some suitable point allows on-resin cyclization by amide bond formation with the deprotected N-terminus. Once cyclization has taken place, the peptide is cleaved from the resin by acidolysis and purified. The strategy for the solid-phase synthesis of cyclic peptides is not limited to attachment through Asp, Glu or Lys side chains. Cysteine has a very reactive sulfhydryl group on its side chain. A disulfide bridge is created when a sulfur atom from one cysteine forms a single covalent bond with another sulfur atom from a second cysteine in a different part of the protein. These bridges help to stabilize proteins, especially those secreted from cells. Some researchers use modified cysteines using S-acetamidomethyl (Acm) to block the formation of the disulfide bond but preserve the cysteine and the protein's original primary structure. Off-resin cyclization Off-resin cyclization involves solid-phase synthesis of the key intermediates, followed by the key cyclization in solution phase; the final deprotection of any masked side chains is also carried out in solution phase. This has the disadvantages that the efficiencies of solid-phase synthesis are lost in the solution-phase steps, that purification from by-products, reagents and unconverted material is required, and that undesired oligomers can be formed if macrocycle formation is involved. The use of pentafluorophenyl esters (FDPP, PFPOH) and BOP-Cl is useful for cyclising peptides. History The first protected peptide was synthesised by Theodor Curtius in 1882 and the first free peptide was synthesised by Emil Fischer in 1901. See also Oligonucleotide synthesis Clicked peptide polymer Bailey peptide synthesis References Further reading External links Chemical synthesis Peptides Peptide coupling reagents Biochemistry methods Biochemistry Amide synthesis reactions
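As forward-referenced in the Peptide coupling reagents section above, the compounding effect of per-step coupling efficiency on overall crude yield can be illustrated in a few lines. This is a minimal sketch: deprotection is assumed quantitative, and the exact overall figure depends on how many coupling steps one counts for a 26-residue peptide, so the printed values match the quoted 77% and ~25% only approximately.

```python
# Short numerical sketch of the cumulative coupling-yield arithmetic from the
# "Peptide coupling reagents" section above. Deprotection is assumed to be
# quantitative, as in the example quoted there.

def overall_yield(per_coupling_yield: float, n_couplings: int) -> float:
    """Fraction of chains carrying the full-length, correct sequence."""
    return per_coupling_yield ** n_couplings

for eff in (0.99, 0.95):
    print(f"{eff:.0%} per coupling x 26 steps -> {overall_yield(eff, 26):.1%} overall")
# 99% per coupling -> ~77% overall; 95% -> ~26%, in line with the figures above
```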
Peptide synthesis
[ "Chemistry", "Biology" ]
5,115
[ "Biochemistry methods", "Biomolecules by chemical classification", "Biochemistry", "Peptide coupling reagents", "Reagents for organic chemistry", "Organic reactions", "Amide synthesis reactions", "nan", "Molecular biology", "Chemical synthesis", "Reagents for biochemistry", "Peptides" ]
1,258,079
https://en.wikipedia.org/wiki/Voltage-gated%20ion%20channel
Voltage-gated ion channels are a class of transmembrane proteins that form ion channels that are activated by changes in a cell's electrical membrane potential near the channel. The membrane potential alters the conformation of the channel proteins, regulating their opening and closing. Cell membranes are generally impermeable to ions, so ions must cross the membrane through transmembrane protein channels. Voltage-gated ion channels have a crucial role in excitable tissues such as neuronal and muscle tissue, allowing a rapid and co-ordinated depolarization in response to a triggering voltage change. Found along the axon and at the synapse, voltage-gated ion channels directionally propagate electrical signals. Voltage-gated ion channels are usually ion-specific, and channels specific to sodium (Na+), potassium (K+), calcium (Ca2+), and chloride (Cl−) ions have been identified. The opening and closing of the channels are triggered by changing ion concentration, and hence charge gradient, between the sides of the cell membrane. Structure Voltage-gated ion channels are generally composed of several subunits arranged in such a way that there is a central pore through which ions can travel down their electrochemical gradients. The channels tend to be ion-specific, although similarly sized and charged ions may sometimes travel through them. The functionality of voltage-gated ion channels is attributed to three main discrete units: the voltage sensor, the pore or conducting pathway, and the gate. Na+, K+, and Ca2+ channels are composed of four transmembrane domains arranged around a central pore; these four domains are part of a single α-subunit in the case of most Na+ and Ca2+ channels, whereas there are four α-subunits, each contributing one transmembrane domain, in most K+ channels. The membrane-spanning segments, designated S1-S6, all take the form of alpha helices with specialized functions. The fifth and sixth transmembrane segments (S5 and S6) and pore loop serve the principal role of ion conduction, comprising the gate and pore of the channel, while S1-S4 serve as the voltage-sensing region. The four subunits may be identical, or different from one another. In addition to the four central α-subunits, there are also regulatory β-subunits, with oxidoreductase activity, which are located on the inner surface of the cell membrane and do not cross the membrane, and which are coassembled with the α-subunits in the endoplasmic reticulum. Mechanism Crystallographic structural studies of a potassium channel have shown that, when a potential difference is introduced over the membrane, the associated electric field induces a conformational change in the potassium channel. The conformational change distorts the shape of the channel proteins sufficiently such that the cavity, or channel, opens to allow influx or efflux to occur across the membrane. This movement of ions down their concentration gradients subsequently generates an electric current sufficient to depolarize the cell membrane. Voltage-gated sodium channels and calcium channels are made up of a single polypeptide with four homologous domains. Each domain contains 6 membrane-spanning alpha helices. One of these helices, S4, is the voltage-sensing helix. The S4 segment contains many positive charges such that a high positive charge outside the cell repels the helix, keeping the channel in its closed state. 
In general, the voltage sensing portion of the ion channel is responsible for the detection of changes in transmembrane potential that trigger the opening or closing of the channel. The S1-S4 alpha helices are generally thought to serve this role. In potassium and sodium channels, voltage-sensing S4 helices contain positively charged lysine or arginine residues in repeated motifs. In its resting state, half of each S4 helix is in contact with the cell cytosol. Upon depolarization, the positively charged residues on the S4 domains move toward the exoplasmic surface of the membrane. It is thought that the first four arginines account for the gating current, moving toward the extracellular solvent upon channel activation in response to membrane depolarization. The movement of 10–12 of these protein-bound positive charges triggers a conformational change that opens the channel. The exact mechanism by which this movement occurs is not currently agreed upon; the canonical, transporter, paddle, and twisted models are examples of current theories (a minimal quantitative sketch of voltage-dependent gating is given at the end of this entry). Movement of the voltage sensor triggers a conformational change of the gate of the conducting pathway, controlling the flow of ions through the channel. The main functional part of the voltage-sensitive protein domain of these channels generally contains a region composed of the S3b and S4 helices, known as the "paddle" due to its shape, which appears to be a conserved sequence, interchangeable across a wide variety of cells and species. A similar voltage sensor paddle has also been found in a family of voltage-sensitive phosphatases in various species. Genetic engineering of the paddle region from a species of volcano-dwelling archaebacteria into rat brain potassium channels results in a fully functional ion channel, as long as the whole intact paddle is replaced. This "modularity" allows the use of simple and inexpensive model systems to study the function of this region, its role in disease, and pharmaceutical control of its behavior, rather than being limited to poorly characterized, expensive, and/or difficult-to-study preparations. Although voltage-gated ion channels are typically activated by membrane depolarization, some channels, such as inward-rectifier potassium ion channels, are activated instead by hyperpolarization. The gate is thought to be coupled to the voltage-sensing regions of the channels and appears to contain a mechanical obstruction to ion flow. While the S6 domain has been agreed upon as the segment acting as this obstruction, its exact mechanism is unknown. Possible explanations include: the S6 segment makes a scissor-like movement allowing ions to flow through, the S6 segment breaks into two segments allowing ions to pass through the channel, or the S6 segment itself serves as the gate. The mechanism by which the movement of the S4 segment affects that of S6 is still unknown; however, it is theorized that there is an S4-S5 linker whose movement allows the opening of S6. Inactivation of ion channels occurs within milliseconds after opening. Inactivation is thought to be mediated by an intracellular gate that controls the opening of the pore on the inside of the cell. This gate is modeled as a ball tethered to a flexible chain. During inactivation, the chain folds in on itself and the ball blocks the flow of ions through the channel. 
Fast inactivation is directly linked to the activation caused by intramembrane movements of the S4 segments, though the mechanism linking movement of S4 and the engagement of the inactivation gate is unknown. Different types Sodium (Na+) channels Sodium channels have similar functional properties across many different cell types. While ten human genes encoding sodium channels have been identified, their function is typically conserved between species and different cell types. Calcium (Ca2+) channels With sixteen different identified genes for human calcium channels, this type of channel differs in function between cell types. Ca2+ channels produce action potentials similarly to Na+ channels in some neurons. They also play a role in neurotransmitter release in pre-synaptic nerve endings. In most cells, Ca2+ channels regulate a wide variety of biochemical processes due to their role in controlling intracellular Ca2+ concentrations. Potassium (K+) channels Potassium channels are the largest and most diverse class of voltage-gated channels, with over 100 human genes encoding them. These types of channels differ significantly in their gating properties: some inactivate extremely slowly and others inactivate extremely quickly. This difference in inactivation time influences the duration and rate of action potential firing, which has a significant effect on electrical conduction along an axon as well as synaptic transmission. Potassium channels differ in structure from the other channels in that they contain four separate polypeptide subunits, while the other channels contain four homologous domains on a single polypeptide chain. Chloride (Cl−) channels Chloride channels are present in all types of neurons. With the chief responsibility of controlling excitability, chloride channels contribute to the maintenance of cell resting potential and help to regulate cell volume. Proton (H+) channels Voltage-gated proton channels carry currents mediated by hydrogen ions in the form of hydronium, and are activated by depolarization in a pH-dependent manner. They function to remove acid from cells. Phylogenetics Phylogenetic studies of proteins expressed in bacteria revealed the existence of a superfamily of voltage-gated sodium channels. Subsequent studies have shown that a variety of other ion channels and transporters are phylogenetically related to the voltage-gated ion channels, including: inwardly rectifying K+ channels, ryanodine-inositol 1,4,5-triphosphate receptor Ca2+ channels, transient receptor potential Ca2+ channels, polycystin cation channels, glutamate-gated ion channels, calcium-dependent chloride channels, monovalent cation:proton antiporters, type 1, and potassium transporters. See also Potassium channel Catecholaminergic polymorphic ventricular tachycardia References External links IUPHAR-DB Voltage-gated ion channel subunits The IUPHAR Compendium of Voltage-gated Ion Channels 2005 Ion channels Electrophysiology Integral membrane proteins
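As forward-referenced in the Mechanism section, voltage-dependent gating is classically described quantitatively by the Hodgkin–Huxley formalism, in which the open probability of the squid delayed-rectifier potassium channel is modelled as n⁴ with a voltage-dependent gating variable n. The sketch below uses the standard textbook rate constants for the squid axon (membrane potential in mV); it illustrates the general concept, not any particular modern channel subtype.

```python
# Minimal sketch of voltage-dependent gating in the Hodgkin-Huxley formalism,
# the classical quantitative model of a voltage-gated (delayed-rectifier K+)
# channel. Rate constants are the standard textbook fits for squid axon.
import math

def alpha_n(v):  # opening rate of the n gate (1/ms), v in mV
    return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))

def beta_n(v):   # closing rate of the n gate (1/ms)
    return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate_step(v_hold=-65.0, v_step=0.0, dt=0.01, t_end=10.0):
    """Euler-integrate dn/dt = alpha*(1-n) - beta*n for a depolarizing step."""
    n = alpha_n(v_hold) / (alpha_n(v_hold) + beta_n(v_hold))  # steady state at rest
    t, trace = 0.0, []
    while t < t_end:
        n += dt * (alpha_n(v_step) * (1.0 - n) - beta_n(v_step) * n)
        t += dt
        trace.append((t, n ** 4))  # open probability ~ n^4 (four subunits)
    return trace

print(f"K+ open probability after 10 ms at 0 mV: {simulate_step()[-1][1]:.2f}")
```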
Voltage-gated ion channel
[ "Chemistry" ]
2,023
[ "Neurochemistry", "Ion channels" ]
2,606,518
https://en.wikipedia.org/wiki/Macromolecular%20docking
Macromolecular docking is the computational modelling of the quaternary structure of complexes formed by two or more interacting biological macromolecules. Protein–protein complexes are the most commonly attempted targets of such modelling, followed by protein–nucleic acid complexes. The ultimate goal of docking is the prediction of the three-dimensional structure of the macromolecular complex of interest as it would occur in a living organism. Docking itself only produces plausible candidate structures. These candidates must be ranked using methods such as scoring functions to identify structures that are most likely to occur in nature. The term "docking" originated in the late 1970s, with a more restricted meaning; then, "docking" meant refining a model of a complex structure by optimizing the separation between the interactors but keeping their relative orientations fixed. Later, the relative orientations of the interacting partners in the modelling were allowed to vary, but the internal geometry of each of the partners was held fixed. This type of modelling is sometimes referred to as "rigid docking". With further increases in computational power, it became possible to model changes in internal geometry of the interacting partners that may occur when a complex is formed. This type of modelling is referred to as "flexible docking". Background The biological roles of most proteins, as characterized by which other macromolecules they interact with, are known at best incompletely. Even those proteins that participate in a well-studied biological process (e.g., the Krebs cycle) may have unexpected interaction partners or functions which are unrelated to that process. In cases of known protein–protein interactions, other questions arise. Genetic diseases (e.g., cystic fibrosis) are known to be caused by misfolded or mutated proteins, and there is a desire to understand what, if any, anomalous protein–protein interactions a given mutation can cause. In the distant future, proteins may be designed to perform biological functions, and a determination of the potential interactions of such proteins will be essential. For any given set of proteins, the following questions may be of interest, from the point of view of technology or natural history: Do these proteins bind in vivo? If they do bind, What is the spatial configuration which they adopt in their bound state? How strong or weak is their interaction? If they do not bind, Can they be made to bind by inducing a mutation? Protein–protein docking is ultimately envisaged to address all these issues. Furthermore, since docking methods can be based on purely physical principles, even proteins of unknown function (or which have been studied relatively little) may be docked. The only prerequisite is that their molecular structure has been either determined experimentally, or can be estimated by a protein structure prediction technique. Protein–nucleic acid interactions feature prominently in the living cell. Transcription factors, which regulate gene expression, and polymerases, which catalyse replication, are composed of proteins, and the genetic material they interact with is composed of nucleic acids. Modeling protein–nucleic acid complexes presents some unique challenges, as described below. 
History In the 1970s, complex modelling revolved around manually identifying features on the surfaces of the interactors, and interpreting the consequences for binding, function and activity; computer programmes were typically used at the end of the modelling process, to discriminate between the relatively few configurations which remained after all the heuristic constraints had been imposed. The first use of computers was in a study on hemoglobin interaction in sickle-cell fibres. This was followed in 1978 by work on the trypsin-BPTI complex. Computers discriminated between good and bad models using a scoring function which rewarded large interface area, and pairs of molecules in contact but not occupying the same space. The computer used a simplified representation of the interacting proteins, with one interaction centre for each residue. Favorable electrostatic interactions, including hydrogen bonds, were identified by hand. In the early 1990s, more structures of complexes were determined, and available computational power had increased substantially. With the emergence of bioinformatics, the focus moved towards developing generalized techniques which could be applied to an arbitrary set of complexes at acceptable computational cost. The new methods were envisaged to apply even in the absence of phylogenetic or experimental clues; any specific prior knowledge could still be introduced at the stage of choosing between the highest ranking output models, or be framed as input if the algorithm catered for it. 1992 saw the publication of the correlation method, an algorithm which used the fast Fourier transform to give a vastly improved scalability for evaluating coarse shape complementarity on rigid-body models. This was extended in 1997 to cover coarse electrostatics. In 1996 the results of the first blind trial were published, in which six research groups attempted to predict the complexed structure of TEM-1 Beta-lactamase with Beta-lactamase inhibitor protein (BLIP). The exercise brought into focus the necessity of accommodating conformational change and the difficulty of discriminating between conformers. It also served as the prototype for the CAPRI assessment series, which debuted in 2001. Rigid-body docking vs. flexible docking If the bond angles, bond lengths and torsion angles of the components are not modified at any stage of complex generation, it is known as rigid-body docking. A subject of speculation is whether or not rigid-body docking is sufficiently good for most docking. When substantial conformational change occurs within the components at the time of complex formation, rigid-body docking is inadequate. However, scoring all possible conformational changes is prohibitively expensive in computer time. Docking procedures which permit conformational change, or flexible docking procedures, must intelligently select a small subset of possible conformational changes for consideration. Methods Successful docking requires two criteria: Generating a set of configurations which reliably includes at least one nearly correct one. Reliably distinguishing nearly correct configurations from the others. For many interactions, the binding site is known on one or more of the proteins to be docked. This is the case for antibodies and for competitive inhibitors. In other cases, a binding site may be strongly suggested by mutagenic or phylogenetic evidence. Configurations where the proteins interpenetrate severely may also be ruled out a priori. 
After making exclusions based on prior knowledge or stereochemical clash, the remaining space of possible complexed structures must be sampled exhaustively, evenly and with a sufficient coverage to guarantee a near hit. Each configuration must be scored with a measure that is capable of ranking a nearly correct structure above at least 100,000 alternatives. This is a computationally intensive task, and a variety of strategies have been developed. Reciprocal space methods Each of the proteins may be represented as a simple cubic lattice. Then, for the class of scores which are discrete convolutions, configurations related to each other by translation of one protein by an exact lattice vector can all be scored almost simultaneously by applying the convolution theorem. It is possible to construct reasonable, if approximate, convolution-like scoring functions representing both stereochemical and electrostatic fitness. Reciprocal space methods have been used extensively for their ability to evaluate enormous numbers of configurations. They lose their speed advantage if torsional changes are introduced. Another drawback is that it is impossible to make efficient use of prior knowledge. The question also remains whether convolutions are too limited a class of scoring function to identify the best complex reliably. Monte Carlo methods In Monte Carlo, an initial configuration is refined by taking random steps which are accepted or rejected based on their induced improvement in score (see the Metropolis criterion), until a certain number of steps have been tried. The assumption is that convergence to the best structure should occur from a large class of initial configurations, only one of which needs to be considered. Initial configurations may be sampled coarsely, and much computation time can be saved. Because of the difficulty of finding a scoring function which is both highly discriminating for the correct configuration and also converges to the correct configuration from a distance, the use of two levels of refinement, with different scoring functions, has been proposed. Torsion can be introduced naturally to Monte Carlo as an additional property of each random move. Monte Carlo methods are not guaranteed to search exhaustively, so that the best configuration may be missed even using a scoring function which would in theory identify it. How severe a problem this is for docking has not been firmly established. Evaluation Scoring functions To find a score which forms a consistent basis for selecting the best configuration, studies are carried out on a standard benchmark (see below) of protein–protein interaction cases. Scoring functions are assessed on the rank they assign to the best structure (ideally the best structure should be ranked 1), and on their coverage (the proportion of the benchmark cases for which they achieve an acceptable result). Types of scores studied include: Heuristic scores based on residue contacts. Shape complementarity of molecular surfaces ("stereochemistry"). Free energies, estimated using parameters from molecular mechanics force fields such as CHARMM or AMBER. Phylogenetic desirability of the interacting regions. Clustering coefficients. Information based cues. It is usual to create hybrid scores by combining one or more categories above in a weighted sum whose weights are optimized on cases from the benchmark. To avoid bias, the benchmark cases used to optimize the weights must not overlap with the cases used to make the final test of the score. 
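To make the reciprocal-space (correlation) approach described above concrete, the toy sketch below scores every relative translation of two grids at once using the convolution theorem, in the spirit of the 1992 correlation method. The grid shapes, the surface reward, and the core-overlap penalty are illustrative choices, not values from any published docking program; NumPy and SciPy are assumed available.

```python
# Toy sketch of reciprocal-space (FFT correlation) scoring: each molecule is
# discretized onto a grid, and the convolution theorem evaluates the score of
# all relative translations simultaneously. Purely illustrative.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

def toy_grid(shape=(32, 32, 32)):
    """Random cubic blob: 1 inside the 'molecule', 0 outside."""
    g = np.zeros(shape)
    x, y, z = (rng.integers(8, 24) for _ in range(3))
    g[x-5:x+5, y-5:y+5, z-5:z+5] = 1.0
    return g

receptor = toy_grid()
ligand = toy_grid()

# Correlation-method-style weighting: surface contact rewarded, core overlap
# penalized. The "core" is approximated as voxels whose six face-neighbours
# are all inside the molecule.
core = np.minimum.reduce([np.roll(receptor, s, axis=a)
                          for a in range(3) for s in (-1, 1)]) * receptor
weights = receptor - 15.0 * core             # surface ~ +1, core ~ -14

# Cross-correlation = convolution with the flipped ligand; the FFT evaluates
# every translation of the ligand relative to the receptor in one pass.
scores = fftconvolve(weights, ligand[::-1, ::-1, ::-1], mode="full")
best = np.unravel_index(np.argmax(scores), scores.shape)
print("best translation (grid units):", best, "score:", scores[best].round(1))
```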
The ultimate goal in protein–protein docking is to select the ideal ranking solution according to a scoring scheme that would also give an insight into the affinity of the complex. Such a development would drive in silico protein engineering, computer-aided drug design and/or high-throughput annotation of which proteins bind or not (annotation of the interactome). Several scoring functions have been proposed for binding affinity / free energy prediction. However, the correlation between experimentally determined binding affinities and the predictions of nine commonly used scoring functions has been found to be nearly orthogonal (R2 ~ 0). It was also observed that some components of the scoring algorithms may display better correlation to the experimental binding energies than the full score, suggesting that a significantly better performance might be obtained by combining the appropriate contributions from different scoring algorithms. Experimental methods for the determination of binding affinities include: surface plasmon resonance (SPR), Förster resonance energy transfer, radioligand-based techniques, isothermal titration calorimetry (ITC), microscale thermophoresis (MST) or spectroscopic measurements and other fluorescence techniques. Textual information from scientific articles can provide useful cues for scoring. Benchmarks A benchmark of 84 protein–protein interactions with known complexed structures has been developed for testing docking methods. The set is chosen to cover a wide range of interaction types, and to avoid repeated features, such as the profile of interactors' structural families according to the SCOP database. Benchmark elements are classified into three levels of difficulty (the most difficult containing the largest change in backbone conformation). The protein–protein docking benchmark contains examples of enzyme-inhibitor, antigen-antibody and homomultimeric complexes. The latest version of the protein–protein docking benchmark consists of 230 complexes. A protein–DNA docking benchmark consists of 47 test cases. A protein–RNA docking benchmark was curated as a dataset of 45 non-redundant test cases with complexes solved by X-ray crystallography only, as well as an extended dataset of 71 test cases with structures derived from homology modelling. The protein–RNA benchmark has been updated to include more structures solved by X-ray crystallography and now consists of 126 test cases. The benchmarks have a combined dataset of 209 complexes. A binding affinity benchmark has been based on the protein–protein docking benchmark. 81 protein–protein complexes with known experimental affinities are included; these complexes span over 11 orders of magnitude in terms of affinity. Each entry of the benchmark includes several biochemical parameters associated with the experimental data, along with the method used to determine the affinity. This benchmark was used to assess the extent to which scoring functions could also predict affinities of macromolecular complexes. This benchmark was later peer reviewed and significantly expanded. The new set is diverse in terms of the biological functions it represents, with complexes that involve G-proteins and receptor extracellular domains, as well as antigen/antibody, enzyme/inhibitor, and enzyme/substrate complexes. It is also diverse in terms of the partners' affinity for each other, with Kd ranging between 10−5 and 10−14 M. 
Nine pairs of entries represent closely related complexes that have a similar structure, but a very different affinity, each pair comprising a cognate and a noncognate assembly. Because the unbound structures of the component proteins are available, conformational changes can be assessed. They are significant in most of the complexes, and large movements or disorder-to-order transitions are frequently observed. The set may be used to benchmark biophysical models aiming to relate affinity to structure in protein–protein interactions, taking into account the reactants and the conformational changes that accompany the association reaction, instead of just the final product. The CAPRI assessment The Critical Assessment of PRediction of Interactions is an ongoing series of events in which researchers throughout the community try to dock the same proteins, as provided by the assessors. Rounds take place approximately every 6 months. Each round contains between one and six target protein–protein complexes whose structures have been recently determined experimentally. The coordinates are held privately by the assessors, with the cooperation of the structural biologists who determined them. The assessment of submissions is double blind. CAPRI attracts a high level of participation (37 groups participated worldwide in round seven) and a high level of interest from the biological community in general. Although CAPRI results are of little statistical significance owing to the small number of targets in each round, the role of CAPRI in stimulating discourse is significant. (The CASP assessment is a similar exercise in the field of protein structure prediction). See also Biomolecular complex – any biological complex of protein, RNA, DNA (sometimes has lipids and carbohydrates) Docking (molecular) – small molecule docking to proteins References Protein structure Bioinformatics Molecular physics Molecular modelling
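As a companion to the grid-correlation sketch above, the Metropolis acceptance rule described in the Monte Carlo methods subsection can also be illustrated in a few lines. The one-dimensional "energy landscape", temperature, and step size below are stand-ins for a real docking scoring function and rigid-body move set.

```python
# Toy sketch of the Metropolis criterion from the "Monte Carlo methods"
# subsection: random moves are accepted if they improve the score, or with
# probability exp(-dE/T) otherwise. The 1-D energy function is a stand-in
# for a real docking scoring function.
import math
import random

random.seed(0)

def energy(x):                        # stand-in score (lower = better)
    return (x - 2.0) ** 2 + math.sin(5 * x)

def metropolis(x0=0.0, temperature=0.5, step=0.3, n_steps=5000):
    x, e = x0, energy(x0)
    for _ in range(n_steps):
        x_new = x + random.uniform(-step, step)   # random rigid-body-style move
        e_new = energy(x_new)
        if e_new < e or random.random() < math.exp(-(e_new - e) / temperature):
            x, e = x_new, e_new                   # accept the move
    return x, e

x, e = metropolis()
print(f"final configuration x={x:.2f}, energy={e:.2f}")
```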
Macromolecular docking
[ "Physics", "Chemistry", "Engineering", "Biology" ]
2,945
[ "Biological engineering", "Molecular physics", "Bioinformatics", "Theoretical chemistry", "Molecular modelling", " molecular", "Structural biology", "nan", "Atomic", "Protein structure", " and optical physics" ]
15,734,342
https://en.wikipedia.org/wiki/Single%20buoy%20mooring
A single buoy mooring (SBM) (also known as single-point mooring or SPM) is a loading buoy anchored offshore that serves as a mooring point and interconnect for tankers loading or offloading gas or liquid products. SPMs are the link between geostatic subsea manifold connections and weathervaning tankers. They are capable of handling any tonnage ship, even very large crude carriers (VLCC), where no alternative facility is available. In shallow water, SPMs are used to load and unload crude oil and refined products from inshore and offshore oilfields or refineries, usually through some form of storage system. These buoys are usually suitable for use by all types of oil tanker. In deep water oil fields, SPMs are usually used to load crude oil direct from the production platforms, where there are economic reasons not to run a pipeline to the shore. These moorings usually supply dedicated tankers which can moor without assistance. Several types of single point mooring are in use. Parts There are four groups of parts in the total mooring system: the body of the buoy, mooring and anchoring elements, the product transfer system and other components. Buoy body The buoy body may be supported on static legs attached to the seabed, with a rotating part above water level connected to the (off)loading tanker. The two sections are linked by a roller bearing, referred to as the "main bearing". Alternatively, the buoy body may be held in place by multiple radiating anchor chains. The moored tanker can freely weathervane around the buoy and find a stable position due to this arrangement. Mooring and anchoring parts Moorings fix the buoy to the sea bed. Buoy design must account for the behaviour of the buoy given applicable wind, wave and current conditions and tanker tonnages. This determines the optimum mooring arrangement and size of the various mooring leg components. Anchoring points are greatly dependent on local soil conditions. Mooring components Anchors or piles - To connect the mooring to the seabed Sinker or anchor chain joint to buoy (SPM) Anchor chain Chain stoppers - To connect the chains to the buoy Hawser arrangement A tanker is moored to a buoy by means of a hawser arrangement. Oil Companies International Marine Forum (OCIMF) standards are available for mooring systems. The hawser arrangement usually consists of nylon rope, which is shackled to an integrated mooring uni-joint on the buoy deck. At the tanker end of the hawser, a chafe chain is connected to prevent damage from the tanker fairlead. A load pin can be applied to the mooring uni-joint on the buoy deck to measure hawser loads. Hawser systems use either one or two ropes depending on the largest tonnage of vessel which would be moored to the buoy. The ropes would either be single-leg or grommet-leg type ropes. These are usually connected to an OCIMF chafe chain on the export tanker side (either type A or B depending on the maximum tonnage of the tanker and the mooring loads). This chafe chain would then be held in the chain stopper on board the export tanker. A basic hawser system would consist of the following (working from the buoy outwards): Buoy-side shackle and bridle assembly for connection to the padeye on the buoy; Mooring hawser shackle; Mooring hawser; Chafe chain assembly; Support buoy; Pick-up / messenger lines; Marker buoy for retrieval from the water. Under OCIMF recommendations, the hawser arrangement would normally be purchased as a full assembly from a manufacturer. 
Product transfer system The heart of each buoy is the product transfer system. From a geostatic location, e.g. a pipeline end manifold (PLEM) located on the seabed, this system transfers products to the offtake tanker. The basic product transfer system components are: Flexible subsea hoses, generally referred to as “risers” Floating hose string(s) Marine Breakaway Coupling Product swivel, valves and piping Risers The risers are flexible hoses that connect the subsea piping to the buoy. Configuration of these risers can vary depending on water depth, sea conditions, buoy motions, etc. Floating hose string Floating hose string(s) connect the buoy to the offloading tanker. The hose string can be equipped with a breakaway coupling to prevent rupture of hoses/hawser and subsequent oil spills. Product swivel The product swivel is the connection between the geostatic and the rotating parts of the buoy. The swivel enables an offloading tanker to rotate with respect to the mooring buoy. Product swivels range in size depending on the capacity of attached piping and risers. Product swivels can provide one or several independent paths for fluids, gases, electrical signals or power. Swivels are equipped with a multiple seal arrangement to minimise the possibility of leakage of product into the environment. Other components Other possible components of SPMs are: A boat landing, providing access to the buoy deck, Fendering to protect the buoy, Lifting and handling equipment to aid materials handling, Navigational aids for maritime visibility, and a fog horn to alert moving vessels. An electrical subsystem to enable valve operation and to power navigation aids or other equipment. Configurations Catenary anchor leg mooring A commonly used configuration is the catenary anchor leg mooring (CALM), which can be capable of handling very large crude carriers. This configuration uses six or eight heavy anchor chains placed radially around the buoy, of a tonnage to suit the designed load, each about long, and attached to an anchor or pile to provide the required holding power. The anchor chains are pre-tensioned to ensure that the buoy is held in position above the PLEM. As the load from the tanker is applied, the heavy chains on the far side straighten and lift off the seabed to apply the balancing load. Under full design load there is still some of chain lying on the bottom (a sketch of the underlying catenary relations appears at the end of this entry). The flexible hose riser may be in one of three basic configurations, all designed to accommodate tidal depth variation and lateral displacement due to mooring loads. In all cases the hose curvature changes to accommodate lateral and vertical movement of the buoy, and the hoses are supported at near neutral buoyancy by floats along the length. These are: Chinese lantern, in which two to four mirror-symmetrical hoses connect the PLEM with the buoy, with the convexity of the curve facing radially outwards. Lazy-S, in which the riser hose leaves the PLEM at a steep angle, then flattens out before gradually curving upwards to meet the buoy approximately vertically, in a flattened S-curve. Steep-S, in which the hose first rises roughly vertically to a submerged float, before making a sharp bend downwards followed by a slow curve through horizontal to a vertical attachment to the buoy. Other configurations Less commonly used configurations include: Single anchor leg mooring (SALM), which can be used in both shallow and deep water. See Thistle SALM as an example. Vertical anchor leg mooring, which is seldom used. 
Two types of single point mooring tower: Jacket type, which has a jacket piled to the seabed with a turntable on top carrying the mooring gear and pipework; Spring pile type, which has steel pipe risers in the structure. Exposed location single buoy mooring (ELSBM), a configuration which stores the mooring line and cargo hose on drums when not in use, and is suitable for use in rough conditions. Articulated loading platform (ALP), also suited to rough conditions. See also References External links International Association of Lighthouse Authorities (IALA) official site Petroleum technology Water transport infrastructure Port infrastructure
Single buoy mooring
[ "Chemistry", "Engineering" ]
1,667
[ "Petroleum engineering", "Petroleum technology" ]
15,734,676
https://en.wikipedia.org/wiki/Barium%20ferrate
Barium ferrate is the chemical compound of formula BaFeO4. This is a rare compound containing iron in the +6 oxidation state. The ferrate(VI) ion has two unpaired electrons, making it paramagnetic. It is isostructural with BaSO4, and contains the tetrahedral [FeO4]2− anion. Structure The ferrate(VI) anion is paramagnetic due to its two unpaired electrons and it has a tetrahedral molecular geometry. X-ray diffraction has been used to determine the orthorhombic unit cell structure (lattice vectors a ≠ b ≠ c, interaxial angles α=β=γ=90°) of nanocrystalline BaFeO4. It crystallizes in the Pnma space group (point group: D2h) with lattice parameters a = 0.8880 nm, b = 0.5512 nm and c = 0.7214 nm. The accuracy of the X-ray diffraction data has been verified by the lattice fringe intervals from high-resolution transmission electron microscopy (HRTEM) and cell parameters calculated from selected area electron diffraction (SAED). Characterization Infrared absorbance peaks of barium ferrate are observed at 870, 812, and 780 cm−1. BaFeO4 follows the Curie–Weiss law and has a magnetic moment of (2.92 ± 0.03) × 10−23 A m2 (3.45 ± 0.1 BM) with a Weiss constant of −89 K. Preparation and chemistry Barium ferrate(VI) can be prepared by both wet and dry synthetic methods. Dry synthesis is usually performed using a thermal technique, such as by heating barium hydroxide and iron(II) hydroxide in the presence of oxygen to about 800 to 900 °C: Ba(OH)2 + Fe(OH)2 + O2 → BaFeO4 + 2 H2O Wet methods employ both chemical and electrochemical techniques. For example, the ferrate anion forms when a suitable iron salt is placed in alkaline conditions and a strong oxidising agent, such as sodium hypochlorite, is added: 2 Fe(OH)3 + 3 ClO− + 4 OH− → 2 [FeO4]2− + 5 H2O + 3 Cl− Barium ferrate is then precipitated from solution by adding a solution of a barium(II) salt. Addition of a soluble barium salt to an alkali metal ferrate solution produces a maroon precipitate of barium ferrate, a crystal which has the same structure as barium chromate and approximately the same solubility. Barium ferrate has also been prepared by adding barium oxide to a mixture of sodium hypochlorite and ferric nitrate at room temperature (or 0 °C). The purity of the product can be improved by carrying out the reaction at low temperature in the absence of carbon dioxide and by rapidly filtering and drying the precipitate, reducing the coprecipitation of barium hydroxide and barium carbonate as impurities. Uses Barium ferrate is an oxidizing agent and is used as an oxidizing reagent in organic syntheses. Its other applications include color removal, cyanide removal, killing bacteria, and the treatment of contaminated water and wastewater. Salts of ferrate(VI) are energetic cathode materials in "super-iron" batteries. Cathodes containing ferrate(VI) compounds are referred to as "super-iron" cathodes due to their highly oxidized iron basis, multiple electron transfer, and high intrinsic energy. Among all ferrate(VI) salts, barium ferrate sustains unusually facile charge transfer, which is important for the high power domain of alkaline batteries. Reactions Barium ferrate is the most stable of the ferrate(VI) compounds. It can be prepared in its purest state and has the most definite composition. Barium ferrate can be easily decomposed by all soluble acids, including carbonic acid. If carbon dioxide is passed through water in which hydrated barium ferrate is suspended, the barium ferrate will decompose completely to form barium carbonate, ferric hydroxide and oxygen gas.
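The carbon dioxide decomposition just described can be balanced from the stated products alone; the following stoichiometry is inferred here from redox bookkeeping (shown in the comments), not quoted from a cited source:

```latex
% Inferred balance for the decomposition described above (not from a cited source).
% Fe(VI) -> Fe(III) gains 3 electrons per iron; oxide -> O2 loses 4 per O2,
% so 4 formula units of ferrate pair with 3 O2.
\begin{equation*}
4\,\mathrm{BaFeO_4} + 4\,\mathrm{CO_2} + 6\,\mathrm{H_2O}
  \longrightarrow 4\,\mathrm{BaCO_3} + 4\,\mathrm{Fe(OH)_3} + 3\,\mathrm{O_2}
\end{equation*}
```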
Alkaline sulfates decompose barium ferrate that has not been dried, forming barium sulfate, ferric hydroxide and oxygen gas. See also Potassium ferrate Ferrate(VI) Barium References Barium compounds Ferrates Iron compounds
Barium ferrate
[ "Chemistry" ]
939
[ "Ferrates", "Salts" ]
15,735,043
https://en.wikipedia.org/wiki/Time-utility%20function
A Time/Utility Function (TUF), née Time/Value Function, specifies the application-specific utility that an action (e.g., computational task, mechanical movement) yields depending on its completion time. TUFs and their utility interpretations (semantics), scales, and values are derived from application domain-specific subject matter knowledge. An example (but not the only) interpretation of utility is an action's relative importance, which otherwise is independent of its timeliness. The traditional deadline represented as a TUF is a special case—a downward step of utility from 1 to 0 at the deadline time—i.e., timeliness without importance. A TUF is more general—it has a critical time, with application-specific shapes and utility values on each side, after which it does not increase. The various researcher and practitioner definitions of firm and soft real-time can also be represented as special cases of the TUF model. The optimality criterion for scheduling multiple TUF-constrained actions has historically in the literature been only maximal utility accrual (UA)—e.g., a (perhaps expected) weighted sum of the individual actions' completion utilities. This criterion thus takes into account timeliness with respect to critical times. Additional criteria (e.g., energy, predictability), constraints (e.g., dependencies), system models, scheduling algorithms, and assurances have been added as the TUF/UA paradigm and its use cases have evolved. More expressively, TUF/UA allows accrued utility, timeliness, predictability, and other scheduling criteria and constraints to be traded off against one another for the schedule to yield situational application QoS—as opposed to only timeliness per se. Instances of the TUF/UA paradigm have been employed in a wide variety of application domains, most frequently in military systems. Time/Utility Functions The TUF/UA paradigm was originally created to address certain action timeliness, predictability of timeliness, and application QoS-based scheduling needs of various military applications for which traditional real-time concepts and practices are not sufficiently expressive (e.g., for dynamically timeliness-critical systems not having deadlines) or load-resilient (e.g., for systems subject to routine action overloads). An important common example class of such applications is missile defense (notionally). Subsequently, numerous variations on the original TUF model, the TUF/UA paradigm's system model, and thus scheduling techniques and algorithms, have been studied in the academic literature and applied in civilian contexts. Some examples of the latter include: cyber-physical systems, AI, multi-robot systems, drone scheduling, autonomous robots, intelligent vehicle-to-cloud data transfers, industrial process control, transaction systems, high performance computing, cloud systems, heterogeneous clusters, service-oriented computing, networking, and memory management for real and virtual machines. A steel mill example is briefly described in the Introduction of Clark's Ph.D. thesis. TUFs and their utility interpretations (semantics), scales, and values are derived from domain-specific subject matter knowledge. A historically frequent interpretation of utility is actions' relative importance.
A framework for a priori assigning static utility values, subject to strong constraints on system models, has been devised, but subsequent (like prior) TUF/UA research and development have preferred to depend on exploiting application-specificity rather than attempting to create more general frameworks. However, such frameworks and tools remain an important research topic. By traditional convention, a TUF is a concave function, including linear ones. See the depiction of some example TUFs. TUF/UA papers in the research literature, with few exceptions, consider only linear or piecewise linear (including conventional deadline-based) TUFs because they are easier to specify and schedule. In many cases, the TUFs are only monotonically decreasing. A constant function represents an action's utility that is not related to the action's completion time—for example, the action's constant relative importance. This allows both time-dependent and time-independent actions to be scheduled coherently. A TUF has a global critical time, after which its utility does not increase. If a TUF never decreases, its global critical time is the first time when its maximum utility is reached. A constant TUF has an arbitrary critical time for the purpose of scheduling—such as the action's release time, or the TUF's termination time. The global critical time may be followed by local critical times—for example, consider a TUF having a sequence of downward steps, perhaps to approximate a smooth downward curve. TUF utility values are usually either integers or rational numbers. TUF utility may include negative values. (A TUF that has negative values in its range is not necessarily dropped from scheduling consideration or aborted during its operation—that decision depends on the scheduling algorithm.) A conventional deadline time (d) represented as a TUF is a special case—a downward step TUF having a unit penalty (i.e., having utility values 1 before and 0 after its critical time). More generally, a TUF allows downward (and upward) step functions to have any pre- and post-critical time utilities. Tardiness represented as a TUF is a special case whose non-zero utility is the linear function C − d, where C is the action's completion time—either current, expected, or believed. More generally, a TUF allows non-zero earliness and tardiness to be non-linear—e.g., increasing tardiness may result in non-linearly decreasing utility, such as when detecting a threat. Thus, TUFs provide a rich generalization of traditional action completion time constraints in real-time computing. Alternatively, the TUF/UA paradigm can be employed to use timeliness with respect to the global critical time as a means to a utility accrual end—i.e., application-level Quality of Service (QoS)—instead of timeliness per se being an end in itself. A TUF (its shape and values) may be dynamically adapted by an application or its operational environment, independently for any actions currently either waiting or operating. These adaptations ordinarily occur at discrete events—e.g., at an application mode change such as for ballistic missile flight phases. Alternatively, these adaptations may occur continuously, such as for actions whose operational durations and TUFs are application-specific functions of when those actions are either released or begin operation. The operation durations may increase or decrease or both, and may be non-monotonic. This continuous case is called time-dependent scheduling.
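To make the special cases above concrete, here is a minimal toy sketch in Python of a deadline step TUF, a linearly decaying TUF, and a constant TUF, together with a deliberately naive greedy utility-accrual ordering. All task names and numbers are invented, and the greedy heuristic is for illustration only—practical UA schedulers construct schedules non-greedily, as the next section notes.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    name: str
    duration: float                # assumed-known operation duration
    tuf: Callable[[float], float]  # utility as a function of completion time

def step_tuf(utility: float, critical_time: float) -> Callable[[float], float]:
    """Classic deadline as a TUF: full utility up to the critical time, zero after."""
    return lambda t: utility if t <= critical_time else 0.0

def linear_decay_tuf(u0: float, critical_time: float, slope: float) -> Callable[[float], float]:
    """Utility u0 up to the critical time, then decaying linearly (floored at 0)."""
    return lambda t: u0 if t <= critical_time else max(0.0, u0 - slope * (t - critical_time))

def greedy_ua_order(tasks: list[Task], now: float = 0.0):
    """Naive heuristic: repeatedly run the pending task with the highest
    utility density (utility at its projected completion time / duration)."""
    pending, order, t, accrued = list(tasks), [], now, 0.0
    while pending:
        best = max(pending, key=lambda a: a.tuf(t + a.duration) / a.duration)
        pending.remove(best)
        t += best.duration
        accrued += best.tuf(t)   # utility actually earned at completion
        order.append((best.name, t))
    return order, accrued

tasks = [
    Task("track_update", 2.0, step_tuf(10.0, critical_time=3.0)),
    Task("log_flush",    1.0, linear_decay_tuf(4.0, critical_time=1.0, slope=1.0)),
    Task("housekeeping", 3.0, lambda t: 1.0),  # constant TUF: time-independent importance
]
order, accrued = greedy_ua_order(tasks)
print(order, "accrued utility:", accrued)
```

Running the sketch dispatches the step-TUF task first (it loses all utility past its critical time), then the decaying task, then the time-independent one, accruing the summed completion utilities.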
Time-dependent scheduling was introduced for (but is not limited to) certain real-time military applications, such as radar tracking systems. Utility Accrual Scheduling Multiple actions in a system may contend for access to sequentially exclusively shared resources—physical ones such as processors, networks, and exogenous application devices (sensors, actuators, etc.)—and logical ones such as synchronizers and data. The TUF/UA paradigm resolves each instance of this contention using an application-specific algorithmic technique that creates (or updates) a schedule at scheduling events—e.g., times (such as action arrival or completion) or states. The instance's contending actions are dispatched for resource access sequentially in order from the front of the schedule. Thus, action UA sequencing is not greedy. The algorithmic technique creates a schedule based on one or more application-specific objectives (i.e., optimality criteria). The primary objective for scheduling actions having TUFs is maximal utility accrual (UA). The accrued utility is an application-specific polynomial sum of the schedule's completed actions' utilities. When actions have one or more stochastic parameters (e.g., operation duration), the accrued utility is also stochastic (i.e., an expected polynomial sum). Utility and accrued utility are generic; their interpretations (semantics) and scales are application-specific. An action's operation duration may be fixed and known at system configuration time. More generally, it may be either fixed or stochastic but not known (either with certainty or in expectation) until it either arrives or is released. An operation duration may be an application-specific function of the action's operation starting time—it may increase or decrease or both, and may be non-monotonic. This case is called time-dependent scheduling. Notes References External links Real-Time for the Real World. 2006-2009, Systems Software Research Group, Binoy Ravindran, ECE, Virginia Tech. Michael L. Pinedo, Scheduling: Theory, Algorithms, and Systems, 5th ed., 2015. Stanislaw Gawiejnowicz, Models and Algorithms of Time-Dependent Scheduling, 2nd ed., eBook, Springer, 2020. Chris N. Potts and Vitaly A. Strusevich, Fifty Years of Scheduling: A Survey of Milestones (2009). Journal of Scheduling. Multidisciplinary International Conference on Scheduling. International Workshop on Dynamic Scheduling Problems. Real-time computing Optimal scheduling Quality of service Software performance management
Time-utility function
[ "Technology", "Engineering" ]
1,925
[ "Optimal scheduling", "Real-time computing", "Industrial engineering" ]
15,735,983
https://en.wikipedia.org/wiki/Fog%20desert
A fog desert is a type of desert where fog drip supplies the majority of moisture needed by animal and plant life. Examples of fog deserts include the Atacama Desert of coastal Chile and Peru; the Baja California desert of Mexico; the Namib Desert in Namibia; the Arabian Peninsula coastal fog desert; and a manmade instance within Biosphere 2, an artificial closed ecosphere in Arizona. Formation Humidity in foggy air is above 95%. One way for fog to form in deserts is through the interaction of hot humid air (such as is formed above warm bodies of water) with a cooler object, such as a mountain. When warm air hits cooler objects, fog is generated by the condensation of vaporized water. Another way fog forms in deserts occurs when a desert is close to an ocean which has a cold current. When air that has been heated over the desert blows out over the cool ocean water, it is chilled and its moisture condenses, forming fog. The cool fog is then blown inland by the ocean breeze. Fog mainly forms in the early morning or after sunset. Drastic changes in elevation, such as mountain ranges, allow maritime winds to settle in specific geographic areas, which is a common characteristic of fog deserts. The Andes mountain range, which runs along the Pacific coast of South America, divides Chile and Peru into inland and coastal regions, and its proximity to the sea coupled with the steep change in elevation (and thus surface temperature) allows fog to form along the Pacific coast and supply moisture to the otherwise arid desert. The Atacama Desert, the driest desert in the world, features lomas, areas in which fog condenses against mountain slopes near the sea and creates "fog oases" with an abundant biodiversity of plant and animal species. Plant coverage varies greatly, from coverage as high as 50% in the foggiest areas to a near-total absence of plant life above the fog line. Within arid fog deserts with low precipitation levels, fog drip provides the moisture needed for agricultural development. The variation in humidity throughout the day and seasons within a fog desert region encourages the development of a comparable diversity of plant types and habits, such as succulents, deciduous species, and woody shrubs. Ecology The species prevalent in fog deserts depend almost entirely on water contained in the fog for their survival. This has led to the development of various structural and behavioral adaptations by organisms to collect water. The Stenocara beetle, which lives in the Namib Desert, climbs sand dunes when the humid wind is blowing from the ocean to access the ambient water. An example of plants adapting to the fog desert's climate is the genus Welwitschia, which also grows in the Namib Desert and produces only two leaves throughout its life. The leaves have large pores to help the plant absorb water from the fog forming on them. People living in the Namib depend on techniques to collect water from fog. Several technologies are being developed to help extract water from desert air. Fog harvesters are undergoing improvements based on observations of the adaptations of fog desert organisms such as the Stenocara beetle. Devices being developed to extract water from desert air use metal-organic framework crystals to capture and hold water molecules from the airflow at night. In the morning, the airflow is cut off and the water collected the previous night is vaporized by exposure to sunlight, then condensed into liquid water when it reaches the cooler condenser. References Deserts Fog Habitats Ecosystems
Fog desert
[ "Physics", "Biology" ]
702
[ "Visibility", "Symbiosis", "Fog", "Physical quantities", "Ecosystems", "Deserts" ]
22,336,018
https://en.wikipedia.org/wiki/PHydrion
pHydrion is the trademarked name for a popular line of chemical test products, marketed by Micro Essential Laboratory, Inc., the original manufacturer of Hydrion and pHydrion products. The trademarked pHydrion product line comprises chemical test papers, chemical indicators, chemical test kits, chemical indicator kits, pH indicator pencils, chemical buffers, buffer salts, buffer preservatives, dispensers, color charts, and testing products, for use in testing, detecting, identifying, measuring, and indicating levels of pH, of sanitizers, and of other substances. References "Phydrion", United States Patent and Trademark Office. Retrieved 2009-04-08. External links Micro Essential Laboratory, Inc. website PH indicators
PHydrion
[ "Chemistry", "Materials_science" ]
150
[ "Titration", "PH indicators", "Chromism", "Chemical tests", "Equilibrium chemistry" ]
22,343,496
https://en.wikipedia.org/wiki/Silverquant
Silverquant is a labeling and detection method for DNA microarrays or protein microarrays. A synonym is colorimetric detection. In contrast to classical fluorescence-based signal detection on microarrays, colorimetric detection is more sensitive and ozone-stable. Chemical reaction The probe to be detected is labeled with biotin molecules. After incubation with a gold-coupled anti-biotin conjugate, silver nitrate and a reducing agent are added. The reaction starts at the gold particle, which serves as a nucleation point for the silver precipitation. The reaction needs to be stopped after a specific time; a constant reaction time is essential to obtain comparable results. Detection The silver-stained spots on the microarray are clearly visible. Using a transmission microarray scanner, the signals are converted into digital values, which are finally available as an image file. References Alexandre I et al. Anal Biochem. 2001 Aug 1;295(1):1-8. Gene expression Bioinformatics DNA Microarrays
Silverquant
[ "Chemistry", "Materials_science", "Engineering", "Biology" ]
218
[ "Biochemistry methods", "Genetics techniques", "Biological engineering", "Microtechnology", "Microarrays", "Gene expression", "Bioinformatics", "Molecular biology techniques", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
3,549,136
https://en.wikipedia.org/wiki/Village-level%20operation%20and%20maintenance%20%28pumps%29
Village Level Operation and Maintenance (VLOM) is an unofficial classification given to handpumps used in developing countries that require minimal maintenance or whose maintenance can be done "at the village level." Not all maintenance and repair needs to be done by the villagers for a pump to be classed as a VLOM pump. VLOMM, or Village Level Operation and Management of Maintenance, is often used synonymously. This addition emphasizes the role of users as the managers of maintenance, able to choose to use someone from outside the village to assist with more complicated repairs. History During the first UN decade on water, boreholes, hand-dug wells and tubewells were constructed and water pumps were provided to developing countries by various NGOs. Unfortunately this top-down approach led to the installation of pumps, notably the India Mark II, that were difficult to maintain. VLOM pumps were designed to allow remote villages to maintain pumps themselves as part of a larger strategy to reduce the dependency of villages on government and donor agencies and provide more sustainable access to drinking water. Common VLOM Pumps Implementation The concept of Village Level Operation and Maintenance Management in relation to communal handpumps has gained wide acceptance in the rural water sector. Project and pump designs based on VLOM principles are now commonplace. However, implementation of handpump programs in accordance with VLOM criteria has been only partially successful, and the VLOM approach to maintenance has been very difficult to realize in the field, especially in Africa. It was assumed that the private sector would take care of the distribution of spare parts, but most parts had to be imported and were difficult to get. Low profit margins on spares did not encourage the private sector to take up the role of importing and distributing spare parts. As a result, VLOM technology is increasingly seen as one amongst many components needed for the sustainable provision of village water supplies. Difficulties with the introduction of VLOM have called into question a number of inherent assumptions in the concept relating to the user community, the supporting environment and technology choice. Of particular importance is the assumption that introducing and supporting VLOM is an easier task for government than running a centralized maintenance service. VLOM has undoubtedly brought the answer to sustainability a little closer; however, the goal of easy maintenance remains elusive. Perhaps the greatest lesson is that there are currently no ‘off-the-shelf’ solutions which can bypass the need for effective institutional government support of community water points. Wherever this problem is unresolved, and where there are no NGOs or other agencies to fill the gap, sustainability will always be in doubt. Recently there have been attempts to involve the private sector, not only in selling spares and handpump repairs, but also in local pump sales and installation. This is called the "BlueZone" approach, in which a handpump dealer is responsible for its own region. Due to economies of scale, this makes for a more attractive business case and keeps the handpump dealer interested in maintaining the service, while the communities have a reliable source of water with local back-up. Unfortunately there are no simple one-size-fits-all solutions on the horizon for sub-Saharan Africa, which experiences these problems most acutely. References External links VLOM pumps Information on VLOM/ rope pumps Pumps Water supply Appropriate technology
Village-level operation and maintenance (pumps)
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
655
[ "Pumps", "Hydrology", "Turbomachinery", "Physical systems", "Hydraulics", "Environmental engineering", "Water supply" ]
3,551,251
https://en.wikipedia.org/wiki/Oxygen%E2%80%93hemoglobin%20dissociation%20curve
The oxygen–hemoglobin dissociation curve, also called the oxyhemoglobin dissociation curve or oxygen dissociation curve (ODC), is a curve that plots the proportion of hemoglobin in its saturated (oxygen-laden) form on the vertical axis against the prevailing oxygen tension on the horizontal axis. This curve is an important tool for understanding how our blood carries and releases oxygen. Specifically, the oxyhemoglobin dissociation curve relates oxygen saturation (SO2) and partial pressure of oxygen in the blood (PO2), and is determined by what is called "hemoglobin affinity for oxygen"; that is, how readily hemoglobin acquires and releases oxygen molecules into the fluid that surrounds it. Background Hemoglobin (Hb) is the primary vehicle for transporting oxygen in the blood. Each hemoglobin molecule has the capacity to carry four oxygen molecules. These molecules of oxygen bind to the iron of the heme prosthetic group. When hemoglobin has no bound oxygen, nor bound carbon dioxide, it has the unbound conformation (shape). The binding of the first oxygen molecule induces a change in the shape of the hemoglobin that increases its ability to bind the other three oxygen molecules. In the presence of dissolved carbon dioxide, the pH of the blood changes; this causes another change in the shape of hemoglobin, which increases its ability to bind carbon dioxide and decreases its ability to bind oxygen. With the loss of the first oxygen molecule, and the binding of the first carbon dioxide molecule, yet another change in shape occurs, which further decreases the ability to bind oxygen, and increases the ability to bind carbon dioxide. The oxygen bound to the hemoglobin is released into the blood's plasma and absorbed into the tissues, and the carbon dioxide in the tissues is bound to the hemoglobin. In the lungs the reverse of this process takes place. With the loss of the first carbon dioxide molecule the shape again changes and makes it easier to release the other three carbon dioxides. Oxygen is also carried dissolved in the blood's plasma, but to a much lesser degree. Hemoglobin is contained in red blood cells. Hemoglobin releases the bound oxygen when carbonic acid is present, as it is in the tissues. In the capillaries, where carbon dioxide is produced, oxygen bound to the hemoglobin is released into the blood's plasma and absorbed into the tissues. How much of that capacity is filled by oxygen at any time is called the oxygen saturation. Expressed as a percentage, the oxygen saturation is the ratio of the amount of oxygen bound to the hemoglobin to the oxygen-carrying capacity of the hemoglobin. The oxygen-carrying capacity of hemoglobin is determined by the type of hemoglobin present in the blood. The amount of oxygen bound to the hemoglobin at any time is related, in large part, to the partial pressure of oxygen to which the hemoglobin is exposed. In the lungs, at the alveolar–capillary interface, the partial pressure of oxygen is typically high, and therefore the oxygen binds readily to hemoglobin that is present. As the blood circulates to other body tissue in which the partial pressure of oxygen is less, the hemoglobin releases the oxygen into the tissue because the hemoglobin cannot maintain its full bound capacity of oxygen in the presence of lower oxygen partial pressures. Sigmoid shape The curve is usually best described by a sigmoid plot, using a formula of the kind of the Hill equation, SO2 = (PO2)n / ((PO2)n + (P50)n), where P50 is the oxygen tension at 50% saturation and n ≈ 2.8 is the Hill coefficient. A hemoglobin molecule can bind up to four oxygen molecules reversibly.
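As an illustration of the saturation formula above, the following short Python sketch uses the P50 quoted in this article (26.6 mmHg) and a standard textbook Hill coefficient (n = 2.8); the right-shifted P50 of 30 mmHg is a hypothetical value chosen only to show the direction of the effect discussed in the next paragraphs:

```python
def hill_saturation(p_o2_mmhg: float, p50: float = 26.6, n: float = 2.8) -> float:
    """Hemoglobin O2 saturation from the Hill equation.

    p50 = 26.6 mmHg and n = 2.8 are standard textbook figures; a
    right-shifted curve is modelled here simply as an increased p50.
    """
    return p_o2_mmhg**n / (p_o2_mmhg**n + p50**n)

for p in (20, 26.6, 40, 60, 100):  # oxygen tensions in mmHg
    normal = hill_saturation(p)
    shifted = hill_saturation(p, p50=30.0)  # hypothetical right shift
    print(f"PO2 {p:>5} mmHg: SO2 {normal:6.1%} (normal) vs {shifted:6.1%} (right-shifted)")
```

At 26.6 mmHg the normal curve returns exactly 50% saturation by construction, and the right-shifted curve returns less, mirroring the reduced affinity described below.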
The shape of the curve results from the interaction of bound oxygen molecules with incoming molecules. The binding of the first molecule is difficult. However, this facilitates the binding of the second, third and fourth; this is due to the conformational change in the structure of the hemoglobin molecule induced by the binding of an oxygen molecule. In its simplest form, the oxyhemoglobin dissociation curve describes the relation between the partial pressure of oxygen (x axis) and the oxygen saturation (y axis). Hemoglobin's affinity for oxygen increases as successive molecules of oxygen bind. More molecules bind as the oxygen partial pressure increases until the maximum amount that can be bound is reached. As this limit is approached, very little additional binding occurs and the curve levels out as the hemoglobin becomes saturated with oxygen. Hence the curve has a sigmoidal or S-shape. At pressures above about 60 mmHg, the standard dissociation curve is relatively flat, which means that the oxygen content of the blood does not change significantly even with large increases in the oxygen partial pressure. To get more oxygen to the tissue would require blood transfusions to increase the hemoglobin count (and hence the oxygen-carrying capacity), or supplemental oxygen that would increase the oxygen dissolved in plasma. Although binding of oxygen to hemoglobin continues to some extent at pressures below about 50 mmHg, as oxygen partial pressures decrease in this steep area of the curve, the oxygen is unloaded to peripheral tissue readily as the hemoglobin's affinity diminishes. The partial pressure of oxygen in the blood at which the hemoglobin is 50% saturated, typically about 26.6 mmHg (3.5 kPa) for a healthy person, is known as the P50. The P50 is a conventional measure of hemoglobin affinity for oxygen. In the presence of disease or other conditions that change the hemoglobin oxygen affinity and, consequently, shift the curve to the right or left, the P50 changes accordingly. An increased P50 indicates a rightward shift of the standard curve, which means that a larger partial pressure is necessary to maintain a 50% oxygen saturation. This indicates a decreased affinity. Conversely, a lower P50 indicates a leftward shift and a higher affinity. The 'plateau' portion of the oxyhemoglobin dissociation curve is the range that exists at the pulmonary capillaries (minimal reduction of oxygen transported until the p(O2) falls below 50 mmHg). The 'steep' portion of the oxyhemoglobin dissociation curve is the range that exists at the systemic capillaries (a small drop in systemic capillary p(O2) can result in the release of large amounts of oxygen for the metabolically active cells). To see from the curve the relative affinity of each successive oxygen molecule, compare the relative change in p(O2) needed for the corresponding change in s(O2) as oxygen is added to or removed from the hemoglobin. Factors that affect the standard dissociation curve The strength with which oxygen binds to hemoglobin is affected by several factors. These factors shift or reshape the oxyhemoglobin dissociation curve. A shift to the right indicates that the hemoglobin under study has a decreased affinity for oxygen. This makes it more difficult for hemoglobin to bind to oxygen (requiring a higher partial pressure of oxygen to achieve the same oxygen saturation), but it makes it easier for the hemoglobin to release oxygen bound to it.
The effect of this shift of the curve increases the partial pressure of oxygen in the tissues when it is most needed, such as during exercise or hemorrhagic shock. In contrast, the curve is shifted to the left by the opposite of these conditions. This shift indicates that the hemoglobin under study has an increased affinity for oxygen, so that hemoglobin binds oxygen more easily, but unloads it more reluctantly. Left shift of the curve is a sign of hemoglobin's increased affinity for oxygen (e.g. at the lungs). Similarly, right shift shows decreased affinity, as would appear with an increase in either body temperature, hydrogen ion concentration, 2,3-bisphosphoglycerate (2,3-BPG) concentration or carbon dioxide concentration. Note: a left shift means higher O2 affinity, while a right shift means lower O2 affinity; fetal hemoglobin has a higher O2 affinity than adult hemoglobin, primarily due to its much-reduced affinity for 2,3-bisphosphoglycerate. The causes of a shift to the right can be remembered using the mnemonic "CADET, face Right!" for CO2, Acid, 2,3-DPG, Exercise and Temperature. Factors that move the oxygen dissociation curve to the right are those physiological states where tissues need more oxygen. For example, during exercise, muscles have a higher metabolic rate, and consequently need more oxygen, produce more carbon dioxide and lactic acid, and their temperature rises. pH A decrease in pH (increase in hydrogen ion concentration) shifts the standard curve to the right, while an increase shifts it to the left. This occurs because at greater hydrogen ion concentration, various amino acid residues, such as histidine 146, exist predominantly in their protonated form, allowing them to form ion pairs that stabilize deoxyhemoglobin in the T state. The T state has a lower affinity for oxygen than the R state, so with increased acidity, the hemoglobin binds less O2 for a given PO2 (and more H+). This is known as the Bohr effect. A reduction in the total binding capacity of hemoglobin to oxygen (i.e. shifting the curve down, not just to the right) due to reduced pH is called the Root effect. This is seen in bony fish. The binding affinity of hemoglobin to O2 is greatest under a relatively high pH. Carbon dioxide Carbon dioxide affects the curve in two ways. First, CO2 accumulation causes carbamino compounds to be generated through chemical interactions, which bind to hemoglobin forming carbaminohemoglobin. CO2 is considered an allosteric regulator, as the inhibition does not occur at hemoglobin's oxygen-binding site. Second, it influences intracellular pH due to the formation of bicarbonate ion. Formation of carbaminohemoglobin stabilizes T state hemoglobin by formation of ion pairs. Only about 5–10% of the total CO2 content of blood is transported as carbamino compounds, whereas most (80–90%) is transported as bicarbonate ions and a small amount is dissolved in the plasma. The formation of a bicarbonate ion releases a proton into the plasma, decreasing pH (increased acidity), which also shifts the curve to the right as discussed above; low CO2 levels in the blood stream result in a high pH, and thus provide more optimal binding conditions for hemoglobin and O2. This is a physiologically favored mechanism, since hemoglobin will drop off more oxygen as the concentration of carbon dioxide increases dramatically where tissue respiration is happening rapidly and oxygen is in need.
2,3-BPG 2,3-Bisphosphoglycerate or 2,3-BPG (formerly named 2,3-diphosphoglycerate or 2,3-DPG) is an organophosphate formed in red blood cells during glycolysis and is the conjugate base of 2,3-bisphosphoglyceric acid. The production of 2,3-BPG is likely an important adaptive mechanism, because production increases in several conditions characterized by diminished peripheral tissue O2 availability, such as hypoxemia, chronic lung disease, anemia, and congestive heart failure, among others, which necessitate easier oxygen unloading in the peripheral tissues. High levels of 2,3-BPG shift the curve to the right (as in childhood), while low levels of 2,3-BPG cause a leftward shift, seen in states such as septic shock and hypophosphataemia. In the absence of 2,3-BPG, hemoglobin's affinity for oxygen increases. 2,3-BPG acts as a heteroallosteric effector of hemoglobin, lowering hemoglobin's affinity for oxygen by binding preferentially to deoxyhemoglobin. An increased concentration of BPG in red blood cells favours formation of the T (taut or tense), low-affinity state of hemoglobin, and so the oxygen-binding curve will shift to the right. Temperature An increase in temperature shifts the oxygen dissociation curve to the right. When temperature is increased while the oxygen concentration is kept constant, oxygen saturation decreases because the bond between oxygen and iron is weakened. Additionally, with increased temperature, the partial pressure of oxygen increases as well. So, one will have a lesser amount of hemoglobin saturated for the same oxygen concentration but at a higher partial pressure of oxygen. Thus, any point on the curve will shift rightwards (due to increased partial pressure of oxygen) and downwards (due to the weakened Hb–O2 bond); hence the rightward shift of the curve. Carbon monoxide Hemoglobin binds with carbon monoxide 210 times more readily than with oxygen. Because of this higher affinity of hemoglobin for carbon monoxide than for oxygen, carbon monoxide is a highly successful competitor that will displace oxygen even at minuscule partial pressures. The reaction HbO2 + CO → HbCO + O2 almost irreversibly displaces the oxygen molecules, forming carboxyhemoglobin; the binding of the carbon monoxide to the iron centre of hemoglobin is much stronger than that of oxygen, and the binding site remains blocked for the remainder of the life cycle of that affected red blood cell. With an increased level of carbon monoxide, a person can suffer from severe tissue hypoxia while maintaining a normal pO2, because carboxyhemoglobin does not carry oxygen to the tissues. Effects of methemoglobinaemia Methemoglobinaemia is a form of abnormal hemoglobin where the iron centre has been oxidised from the ferrous +2 oxidation state (the normal form, which on binding with oxygen changes to the ferric state) to the ferric +3 state. This causes a leftward shift in the oxygen hemoglobin dissociation curve, as any residual heme with oxygenated ferrous iron (+2 state) is unable to unload its bound oxygen into tissues (because 3+ iron impairs hemoglobin's cooperativity), thereby increasing its affinity for oxygen. However, methemoglobin has increased affinity for cyanide, and is therefore useful in the treatment of cyanide poisoning. In cases of accidental cyanide ingestion, administration of a nitrite (such as amyl nitrite) can be used to deliberately oxidise hemoglobin and raise methemoglobin levels, restoring the functioning of cytochrome oxidase.
The nitrite also acts as a vasodilator, promoting the cellular supply of oxygen, and the addition of an iron salt provides for competitive binding of the free cyanide as the biochemically inert hexacyanoferrate(III) ion, [Fe(CN)6]3−. An alternative approach involves administering thiosulfate, thereby converting cyanide to thiocyanate, SCN−, which is excreted via the kidneys. Methemoglobin is also formed in small quantities when the dissociation of oxyhemoglobin results in the formation of methemoglobin and superoxide, O2−, instead of the usual products. Superoxide is a free radical and causes biochemical damage, but is neutralised by the action of the enzyme superoxide dismutase. Effects of ITPP Myo-inositol trispyrophosphate (ITPP), also known as OXY111A, is an inositol phosphate that causes a rightward shift in the oxygen hemoglobin dissociation curve through allosteric modulation of hemoglobin within red blood cells. It is an experimental drug intended to reduce tissue hypoxia. The effects appear to last roughly as long as the affected red blood cells remain in circulation. Fetal hemoglobin Fetal hemoglobin (HbF) is structurally different from normal adult hemoglobin (HbA), giving HbF a higher affinity for oxygen than HbA. HbF is composed of two alpha and two gamma chains whereas HbA is composed of two alpha and two beta chains. The fetal dissociation curve is shifted to the left relative to the curve for the normal adult because of these structural differences: In adult hemoglobin, the binding of 2,3-bisphosphoglycerate (2,3-BPG) occurs primarily with the beta chains, hindering the binding of oxygen to hemoglobin. This binding is crucial for stabilizing the deoxygenated state of hemoglobin, promoting the efficient release of oxygen to body tissues. In fetal hemoglobin, which possesses gamma chains instead of beta chains, the interaction with 2,3-BPG differs: 2,3-BPG binds poorly, if at all, to the gamma chains. This distinction contributes to fetal hemoglobin's higher affinity for oxygen. Typically, fetal arterial oxygen pressures are lower than adult arterial oxygen pressures. Hence a higher affinity for oxygen is required at the lower levels of partial pressure in the fetus to allow diffusion of oxygen across the placenta. At the placenta, there is a higher concentration of 2,3-BPG formed, and 2,3-BPG binds readily to beta chains rather than to alpha chains. As a result, 2,3-BPG binds more strongly to adult hemoglobin, causing HbA to release more oxygen for uptake by the fetus, whose HbF is unaffected by the 2,3-BPG. HbF then delivers that bound oxygen to tissues that have even lower partial pressures, where it can be released. See also Automated analyzer Bohr effect Notes References External links The Interactive Oxyhemoglobin Dissociation Curve Simulation of the parameters CO2, pH and temperature on the oxygen–hemoglobin dissociation curve (left or right shift) Respiratory physiology Chemical pathology Hematology Oxygen
Oxygen–hemoglobin dissociation curve
[ "Chemistry", "Biology" ]
3,882
[ "Biochemistry", "Chemical pathology" ]
3,555,416
https://en.wikipedia.org/wiki/Weak%20solution
In mathematics, a weak solution (also called a generalized solution) to an ordinary or partial differential equation is a function for which the derivatives may not all exist but which is nonetheless deemed to satisfy the equation in some precisely defined sense. There are many different definitions of weak solution, appropriate for different classes of equations. One of the most important is based on the notion of distributions. Avoiding the language of distributions, one starts with a differential equation and rewrites it in such a way that no derivatives of the solution of the equation show up (the new form is called the weak formulation, and the solutions to it are called weak solutions). Somewhat surprisingly, a differential equation may have solutions that are not differentiable, and the weak formulation allows one to find such solutions. Weak solutions are important because many differential equations encountered in modelling real-world phenomena do not admit sufficiently smooth solutions, and the only way of solving such equations is using the weak formulation. Even in situations where an equation does have differentiable solutions, it is often convenient to first prove the existence of weak solutions and only later show that those solutions are in fact smooth enough. A concrete example As an illustration of the concept, consider the first-order wave equation ∂u/∂t + ∂u/∂x = 0 (equation (1)), where u = u(t, x) is a function of two real variables. To indirectly probe the properties of a possible solution u, one integrates it against an arbitrary smooth function φ of compact support, known as a test function, taking ∫∫ u(t, x) φ(t, x) dx dt. For example, if φ is a smooth probability distribution concentrated near a point (t0, x0), the integral is approximately u(t0, x0). Notice that while the integrals go from −∞ to ∞, they are essentially over a finite box where φ is non-zero. Thus, assume a solution u is continuously differentiable on the Euclidean space R2, multiply the equation (1) by a test function φ (smooth of compact support), and integrate: ∫∫ (∂u/∂t) φ dx dt + ∫∫ (∂u/∂x) φ dx dt = 0. Using Fubini's theorem, which allows one to interchange the order of integration, as well as integration by parts (in t for the first term and in x for the second term), this equation becomes ∫∫ u (∂φ/∂t) dx dt + ∫∫ u (∂φ/∂x) dx dt = 0 (equation (2)). (Boundary terms vanish since φ is zero outside a finite box.) We have shown that equation (1) implies equation (2) as long as u is continuously differentiable. The key to the concept of weak solution is that there exist functions u that satisfy equation (2) for any φ, but such u may not be differentiable and so cannot satisfy equation (1). An example is u(t, x) = |t − x|, as one may check by splitting the integrals over the regions x ≥ t and x ≤ t, where u is smooth, and reversing the above computation using integration by parts. A weak solution of equation (1) means any solution u of equation (2) over all test functions φ. General case The general idea that follows from this example is that, when solving a differential equation in u, one can rewrite it using a test function φ, such that whatever derivatives in u show up in the equation, they are "transferred" via integration by parts to φ, resulting in an equation without derivatives of u. This new equation generalizes the original equation to include solutions that are not necessarily differentiable. The approach illustrated above works in great generality. Indeed, consider a linear differential operator in an open set W in Rn: P(x, ∂)u(x) = Σα aα(x) ∂α u(x), where the multi-index α varies over some finite set in Nn and the coefficients aα are smooth enough functions of x in Rn.
The differential equation P(x, ∂)u(x) = 0 can, after being multiplied by a smooth test function φ with compact support in W and integrated by parts, be written as ∫W u(x) Q(x, ∂)φ(x) dx = 0, where the differential operator Q(x, ∂) is given by the formula Q(x, ∂)φ(x) = Σα (−1)|α| ∂α (aα(x) φ(x)). The number (−1)|α| shows up because one needs α1 + α2 + ⋯ + αn integrations by parts to transfer all the partial derivatives from u to φ in each term of the differential equation, and each integration by parts entails a multiplication by −1. The differential operator Q(x, ∂) is the formal adjoint of P(x, ∂) (cf. adjoint of an operator). In summary, if the original (strong) problem was to find a |α|-times differentiable function u defined on the open set W such that P(x, ∂)u(x) = 0 for all x in W (a so-called strong solution), then an integrable function u would be said to be a weak solution if ∫W u(x) Q(x, ∂)φ(x) dx = 0 for every smooth function φ with compact support in W. Other kinds of weak solution The notion of weak solution based on distributions is sometimes inadequate. In the case of hyperbolic systems, the notion of weak solution based on distributions does not guarantee uniqueness, and it is necessary to supplement it with entropy conditions or some other selection criterion. In fully nonlinear PDE such as the Hamilton–Jacobi equation, there is a very different definition of weak solution called viscosity solution. References Differential equations Generalized functions Schwartz distributions
Weak solution
[ "Mathematics" ]
941
[ "Mathematical objects", "Differential equations", "Equations" ]
18,616,290
https://en.wikipedia.org/wiki/Gamma%20ray
A gamma ray, also known as gamma radiation (symbol ), is a penetrating form of electromagnetic radiation arising from the radioactive decay of atomic nuclei. It consists of the shortest wavelength electromagnetic waves, typically shorter than those of X-rays. With frequencies above 30 exahertz () and wavelengths less than 10 picometers (), gamma ray photons have the highest photon energy of any form of electromagnetic radiation. Paul Villard, a French chemist and physicist, discovered gamma radiation in 1900 while studying radiation emitted by radium. In 1903, Ernest Rutherford named this radiation gamma rays based on their relatively strong penetration of matter; in 1900, he had already named two less penetrating types of decay radiation (discovered by Henri Becquerel) alpha rays and beta rays in ascending order of penetrating power. Gamma rays from radioactive decay are in the energy range from a few kiloelectronvolts (keV) to approximately 8 megaelectronvolts (MeV), corresponding to the typical energy levels in nuclei with reasonably long lifetimes. The energy spectrum of gamma rays can be used to identify the decaying radionuclides using gamma spectroscopy. Very-high-energy gamma rays in the 100–1000 teraelectronvolt (TeV) range have been observed from astronomical sources such as the Cygnus X-3 microquasar. Natural sources of gamma rays originating on Earth are mostly a result of radioactive decay and secondary radiation from atmospheric interactions with cosmic ray particles. However, there are other rare natural sources, such as terrestrial gamma-ray flashes, which produce gamma rays from electron action upon the nucleus. Notable artificial sources of gamma rays include fission, such as that which occurs in nuclear reactors, and high energy physics experiments, such as neutral pion decay and nuclear fusion. The energy ranges of gamma rays and X-rays overlap in the electromagnetic spectrum, so the terminology for these electromagnetic waves varies between scientific disciplines. In some fields of physics, they are distinguished by their origin: gamma rays are created by nuclear decay while X-rays originate outside the nucleus. In astrophysics, gamma rays are conventionally defined as having photon energies above 100 keV and are the subject of gamma-ray astronomy, while radiation below 100 keV is classified as X-rays and is the subject of X-ray astronomy. Gamma rays are ionizing radiation and are thus hazardous to life. They can cause DNA mutations, cancer and tumors, and at high doses burns and radiation sickness. Due to their high penetration power, they can damage bone marrow and internal organs. Unlike alpha and beta rays, they easily pass through the body and thus pose a formidable radiation protection challenge, requiring shielding made from dense materials such as lead or concrete. On Earth, the magnetosphere protects life from most types of lethal cosmic radiation other than gamma rays. History of discovery The first gamma ray source to be discovered was the radioactive decay process called gamma decay. In this type of decay, an excited nucleus emits a gamma ray almost immediately upon formation. Paul Villard, a French chemist and physicist, discovered gamma radiation in 1900, while studying radiation emitted from radium. 
Villard knew that his described radiation was more powerful than previously described types of rays from radium, which included beta rays, first noted as "radioactivity" by Henri Becquerel in 1896, and alpha rays, discovered as a less penetrating form of radiation by Rutherford, in 1899. However, Villard did not consider naming them as a different fundamental type. Later, in 1903, Villard's radiation was recognized as being of a type fundamentally different from previously named rays by Ernest Rutherford, who named Villard's rays "gamma rays" by analogy with the beta and alpha rays that Rutherford had differentiated in 1899. The "rays" emitted by radioactive elements were named in order of their power to penetrate various materials, using the first three letters of the Greek alphabet: alpha rays as the least penetrating, followed by beta rays, followed by gamma rays as the most penetrating. Rutherford also noted that gamma rays were not deflected (or at least, not easily deflected) by a magnetic field, another property making them unlike alpha and beta rays. Gamma rays were first thought to be particles with mass, like alpha and beta rays. Rutherford initially believed that they might be extremely fast beta particles, but their failure to be deflected by a magnetic field indicated that they had no charge. In 1914, gamma rays were observed to be reflected from crystal surfaces, proving that they were electromagnetic radiation. Rutherford and his co-worker Edward Andrade measured the wavelengths of gamma rays from radium, and found they were similar to X-rays, but with shorter wavelengths and thus, higher frequency. This was eventually recognized as giving them more energy per photon, as soon as the latter term became generally accepted. A gamma decay was then understood to usually emit a gamma photon. Sources Natural sources of gamma rays on Earth include gamma decay from naturally occurring radioisotopes such as potassium-40, and also secondary radiation from various atmospheric interactions with cosmic ray particles. Natural terrestrial sources that produce gamma rays include lightning strikes and terrestrial gamma-ray flashes, which produce high energy emissions from natural high-energy voltages. Gamma rays are produced by a number of astronomical processes in which very high-energy electrons are produced. Such electrons produce secondary gamma rays by the mechanisms of bremsstrahlung, inverse Compton scattering and synchrotron radiation. A large fraction of such astronomical gamma rays are screened by Earth's atmosphere. Notable artificial sources of gamma rays include fission, such as occurs in nuclear reactors, as well as high energy physics experiments, such as neutral pion decay and nuclear fusion. A sample of gamma ray-emitting material that is used for irradiating or imaging is known as a gamma source. It is also called a radioactive source, isotope source, or radiation source, though these more general terms also apply to alpha and beta-emitting devices. Gamma sources are usually sealed to prevent radioactive contamination, and transported in heavy shielding. Radioactive decay (gamma decay) Gamma rays are produced during gamma decay, which normally occurs after other forms of decay occur, such as alpha or beta decay. A radioactive nucleus can decay by the emission of an α or β particle. The daughter nucleus that results is usually left in an excited state. It can then decay to a lower energy state by emitting a gamma ray photon, in a process called gamma decay.
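As a quick numerical check on the energy and wavelength ranges quoted earlier, the relation λ = hc/E links photon energy and wavelength. The following sketch uses only standard physical constants; the example energies are representative values mentioned in this article:

```python
H_PLANCK_EV_S = 4.135_667_696e-15   # Planck constant, eV*s
C_M_PER_S = 2.997_924_58e8          # speed of light, m/s

def wavelength_pm(energy_ev: float) -> float:
    """Photon wavelength in picometres from its energy via lambda = h*c/E."""
    return H_PLANCK_EV_S * C_M_PER_S / energy_ev * 1e12

for label, e_ev in [("100 keV (astronomy gamma/X-ray boundary)", 1e5),
                    ("511 keV (electron-positron annihilation)", 5.11e5),
                    ("8 MeV (upper end of typical nuclear decay)", 8e6)]:
    print(f"{label}: {wavelength_pm(e_ev):8.3f} pm")
```

The output (roughly 12.4 pm, 2.43 pm, and 0.155 pm respectively) is consistent with the statement that gamma rays have wavelengths shorter than about 10 picometers.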
The emission of a gamma ray from an excited nucleus typically requires only 10−12 seconds. Gamma decay may also follow nuclear reactions such as neutron capture, nuclear fission, or nuclear fusion. Gamma decay is also a mode of relaxation of many excited states of atomic nuclei following other types of radioactive decay, such as beta decay, so long as these states possess the necessary component of nuclear spin. When high-energy gamma rays, electrons, or protons bombard materials, the excited atoms emit characteristic "secondary" gamma rays, which are products of the creation of excited nuclear states in the bombarded atoms. Such transitions, a form of nuclear gamma fluorescence, form a topic in nuclear physics called gamma spectroscopy. Formation of fluorescent gamma rays is a rapid subtype of radioactive gamma decay. In certain cases, the excited nuclear state that follows the emission of a beta particle or other type of excitation may be more stable than average, and is termed a metastable excited state if its decay takes (at least) 100 to 1000 times longer than the average 10−12 seconds. Such relatively long-lived excited nuclei are termed nuclear isomers, and their decays are termed isomeric transitions. Such nuclei have half-lives that are more easily measurable, and rare nuclear isomers are able to stay in their excited state for minutes, hours, days, or occasionally far longer, before emitting a gamma ray. The process of isomeric transition is therefore similar to any gamma emission, but differs in that it involves the intermediate metastable excited state(s) of the nuclei. Metastable states are often characterized by high nuclear spin, requiring a change in spin of several units or more with gamma decay, instead of a single unit transition that occurs in only 10−12 seconds. The rate of gamma decay is also slowed when the energy of excitation of the nucleus is small. An emitted gamma ray from any type of excited state may transfer its energy directly to any electrons, but most probably to one of the K shell electrons of the atom, causing it to be ejected from that atom, in a process generally termed the photoelectric effect (external gamma rays and ultraviolet rays may also cause this effect). The photoelectric effect should not be confused with the internal conversion process, in which a gamma ray photon is not produced as an intermediate particle (rather, a "virtual gamma ray" may be thought to mediate the process). Decay schemes One example of gamma ray production due to radionuclide decay is the decay scheme for cobalt-60, as illustrated in the accompanying diagram. First, 60Co decays to excited 60Ni by beta decay, with the emission of an electron of 0.31 MeV. Then the excited 60Ni decays to the ground state (see nuclear shell model) by emitting gamma rays in succession of 1.17 MeV followed by 1.33 MeV. This path is followed 99.88% of the time: 60Co → 60Ni* + e− + ν̄e, followed by 60Ni* → 60Ni + γ (1.17 MeV, then 1.33 MeV). Another example is the alpha decay of 241Am to form 237Np, which is followed by gamma emission. In some cases, the gamma emission spectrum of the daughter nucleus is quite simple (e.g. 60Co/60Ni), while in other cases, such as with 241Am/237Np and 192Ir/192Pt, the gamma emission spectrum is complex, revealing that a series of nuclear energy levels exists. Particle physics Gamma rays are produced in many processes of particle physics.
Typically, gamma rays are the products of neutral systems which decay through electromagnetic interactions (rather than a weak or strong interaction). For example, in an electron–positron annihilation, the usual products are two gamma ray photons. If the annihilating electron and positron are at rest, each of the resulting gamma rays has an energy of ~511 keV and a frequency of ~1.24×10²⁰ Hz. Similarly, a neutral pion most often decays into two photons. Many other hadrons and massive bosons also decay electromagnetically. High energy physics experiments, such as the Large Hadron Collider, accordingly employ substantial radiation shielding. Because subatomic particles mostly have far shorter wavelengths than atomic nuclei, particle physics gamma rays are generally several orders of magnitude more energetic than nuclear decay gamma rays. Since gamma rays are at the top of the electromagnetic spectrum in terms of energy, all extremely high-energy photons are gamma rays; for example, a photon having the Planck energy would be a gamma ray. Other sources A few gamma rays in astronomy are known to arise from gamma decay (see discussion of SN1987A), but most do not. Photons from astrophysical sources that carry energy in the gamma radiation range are often explicitly called gamma-radiation. In addition to nuclear emissions, they are often produced by sub-atomic particle and particle-photon interactions. Those include electron-positron annihilation, neutral pion decay, bremsstrahlung, inverse Compton scattering, and synchrotron radiation. Laboratory sources In October 2017, scientists from various European universities proposed a means for sources of GeV photons using lasers as exciters through a controlled interplay between the cascade and anomalous radiative trapping. Terrestrial thunderstorms Thunderstorms can produce a brief pulse of gamma radiation called a terrestrial gamma-ray flash. These gamma rays are thought to be produced by high intensity static electric fields accelerating electrons, which then produce gamma rays by bremsstrahlung as they collide with and are slowed by atoms in the atmosphere. Gamma rays up to 100 MeV can be emitted by terrestrial thunderstorms, and were discovered by space-borne observatories. This raises the possibility of health risks to passengers and crew on aircraft flying in or near thunderclouds. Solar flares The most effusive solar flares emit across the entire EM spectrum, including γ-rays. The first confident observation occurred in 1972. Cosmic rays Extraterrestrial, high energy gamma rays include the gamma ray background produced when cosmic rays (either high speed electrons or protons) collide with ordinary matter, producing pair-production gamma rays at 511 keV. Alternatively, bremsstrahlung is produced at energies of tens of MeV or more when cosmic ray electrons interact with nuclei of sufficiently high atomic number (see gamma ray image of the Moon near the end of this article, for illustration). Pulsars and magnetars The gamma ray sky (see illustration at right) is dominated by the more common and longer-term production of gamma rays that emanate from pulsars within the Milky Way. Sources from the rest of the sky are mostly quasars. Pulsars are thought to be neutron stars with magnetic fields that produce focused beams of radiation, and are far less energetic, more common, and much nearer sources (typically seen only in our own galaxy) than are quasars or the rarer gamma-ray burst sources of gamma rays.
Pulsars have relatively long-lived magnetic fields that produce focused beams of relativistic-speed charged particles, which emit gamma rays (bremsstrahlung) when those strike gas or dust in their nearby medium and are decelerated. This is a similar mechanism to the production of high-energy photons in megavoltage radiation therapy machines (see bremsstrahlung). In inverse Compton scattering, charged particles (usually electrons) impart energy to low-energy photons, boosting them to higher-energy photons. Such impacts of photons on relativistic charged-particle beams are another possible mechanism of gamma ray production. Neutron stars with a very high magnetic field (magnetars), thought to produce astronomical soft gamma repeaters, are another relatively long-lived star-powered source of gamma radiation. Quasars and active galaxies More powerful gamma rays from very distant quasars and closer active galaxies are thought to have a gamma ray production source similar to a particle accelerator. High energy electrons produced by the quasar, and subjected to inverse Compton scattering, synchrotron radiation, or bremsstrahlung, are the likely source of the gamma rays from those objects. It is thought that a supermassive black hole at the center of such galaxies provides the power source that intermittently destroys stars and focuses the resulting charged particles into beams that emerge from their rotational poles. When those beams interact with gas, dust, and lower energy photons they produce X-rays and gamma rays. These sources are known to fluctuate with durations of a few weeks, suggesting their relatively small size (less than a few light-weeks across). Such sources of gamma and X-rays are the most commonly visible high intensity sources outside the Milky Way galaxy. They shine not in bursts (see illustration), but relatively continuously when viewed with gamma ray telescopes. The power of a typical quasar is about 10⁴⁰ watts, a small fraction of which is gamma radiation. Much of the rest is emitted as electromagnetic waves of all frequencies, including radio waves. Gamma-ray bursts The most intense sources of gamma rays are also the most intense sources of any type of electromagnetic radiation presently known. They are the "long duration burst" sources of gamma rays in astronomy ("long" in this context, meaning a few tens of seconds), and they are rare compared with the sources discussed above. By contrast, "short" gamma-ray bursts of two seconds or less, which are not associated with supernovae, are thought to produce gamma rays during the collision of pairs of neutron stars, or a neutron star and a black hole. The so-called long-duration gamma-ray bursts produce a total energy output of about 10⁴⁴ joules (as much energy as the Sun will produce in its entire lifetime) but in a period of only 20 to 40 seconds. Gamma rays are approximately 50% of the total energy output. The leading hypotheses for the mechanism of production of these highest-known intensity beams of radiation are inverse Compton scattering and synchrotron radiation from high-energy charged particles. These processes occur as relativistic charged particles leave the region of the event horizon of a newly formed black hole created during a supernova explosion. The beam of particles moving at relativistic speeds is focused for a few tens of seconds by the magnetic field of the exploding hypernova. The fusion explosion of the hypernova drives the energetics of the process. If the narrowly directed beam happens to be pointed toward the Earth, it shines at gamma ray frequencies with such intensity that it can be detected even at distances of up to 10 billion light years, which is close to the edge of the visible universe.
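The parenthetical comparison between a long burst's energy output and the Sun's lifetime output can be checked with rough numbers. A minimal sketch, assuming the Sun's present luminosity (about 3.8×10²⁶ W, standard reference data) held constant over an assumed 10-billion-year lifetime:

```python
# Order-of-magnitude check of the claim that a long gamma-ray burst's ~1e44 J
# rivals the Sun's lifetime energy output. Solar luminosity is standard
# reference data; the 10 Gyr lifetime is a round assumed figure.
L_SUN_W = 3.8e26                    # solar luminosity, watts
LIFETIME_S = 10e9 * 3.156e7         # assumed 10 Gyr lifetime, in seconds

E_SUN_J = L_SUN_W * LIFETIME_S      # total radiated energy, joules
E_GRB_J = 1e44                      # burst energy quoted in the text

print(f"Sun over 10 Gyr: {E_SUN_J:.2g} J")              # ~1.2e44 J
print(f"long GRB:        {E_GRB_J:.2g} J")
print(f"GRB power: {E_GRB_J / 30.0:.2g} W over ~30 s")  # vs 3.8e26 W for the Sun
```

The two totals agree to within a factor of order unity, as the text asserts; the difference lies in the timescale, roughly 10¹⁷ seconds for the Sun versus a few tens of seconds for the burst.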
Properties Penetration of matter Due to their penetrating nature, gamma rays require large amounts of shielding mass to reduce them to levels which are not harmful to living cells, in contrast to alpha particles, which can be stopped by paper or skin, and beta particles, which can be shielded by thin aluminium. Gamma rays are best absorbed by materials with high atomic numbers (Z) and high density, which contribute to the total stopping power. Because of this, a lead (high Z) shield is 20–30% better as a gamma shield than an equal mass of another low-Z shielding material, such as aluminium, concrete, water, or soil; lead's major advantage is not in lower weight, but rather its compactness due to its higher density. Protective clothing, goggles and respirators can protect from internal contact with or ingestion of alpha or beta emitting particles, but provide no protection from gamma radiation from external sources. The higher the energy of the gamma rays, the thicker the shielding required from the same shielding material. Materials for shielding gamma rays are typically measured by the thickness required to reduce the intensity of the gamma rays by one half (the half-value layer or HVL). For example, gamma rays that require 1 cm (0.4 inch) of lead to reduce their intensity by 50% will also have their intensity reduced in half by 4.1 cm (1.6 inches) of granite rock, 6 cm (2.5 inches) of concrete, or 9 cm (3.5 inches) of packed soil. However, the mass of this much concrete or soil is only 20–30% greater than that of lead with the same absorption capability. Depleted uranium is sometimes used for shielding in portable gamma ray sources, due to the smaller half-value layer when compared to lead (around 0.6 times the thickness for common gamma ray sources, such as iridium-192 and cobalt-60) and cheaper cost compared to tungsten. In a nuclear power plant, shielding can be provided by steel and concrete in the pressure and particle containment vessel, while water provides radiation shielding for fuel rods during storage or transport into the reactor core. The loss of water or removal of a "hot" fuel assembly into the air would result in much higher radiation levels than when kept under water. Matter interaction When a gamma ray passes through matter, the probability for absorption is proportional to the thickness of the layer, the density of the material, and the absorption cross section of the material. The total absorption shows an exponential decrease of intensity with distance from the incident surface: I(x) = I₀·e^(−μx), where x is the thickness of the material from the incident surface, μ = nσ is the absorption coefficient, measured in cm⁻¹, n the number of atoms per cm³ of the material (atomic density) and σ the absorption cross section in cm² (a numerical sketch of this attenuation law follows the list of interaction processes below). As it passes through matter, gamma radiation ionizes via three processes: The photoelectric effect: This describes the case in which a gamma photon interacts with and transfers its energy to an atomic electron, causing the ejection of that electron from the atom. The kinetic energy of the resulting photoelectron is equal to the energy of the incident gamma photon minus the energy that originally bound the electron to the atom (binding energy).
The photoelectric effect is the dominant energy transfer mechanism for X-ray and gamma ray photons with energies below 50 keV (thousand electronvolts), but it is much less important at higher energies. Compton scattering: This is an interaction in which an incident gamma photon loses enough energy to an atomic electron to cause its ejection, with the remainder of the original photon's energy emitted as a new, lower energy gamma photon whose emission direction is different from that of the incident gamma photon, hence the term "scattering". The probability of Compton scattering decreases with increasing photon energy. It is thought to be the principal absorption mechanism for gamma rays in the intermediate energy range 100 keV to 10 MeV. It is relatively independent of the atomic number of the absorbing material, which is why very dense materials like lead are only modestly better shields, on a per weight basis, than are less dense materials. Pair production: This becomes possible with gamma energies exceeding 1.02 MeV, and becomes important as an absorption mechanism at energies over 5 MeV (see illustration at right, for lead). By interaction with the electric field of a nucleus, the energy of the incident photon is converted into the mass of an electron-positron pair. Any gamma energy in excess of the equivalent rest mass of the two particles (totaling at least 1.02 MeV) appears as the kinetic energy of the pair and in the recoil of the emitting nucleus. At the end of the positron's range, it combines with a free electron, and the two annihilate; the entire mass of these two is then converted into two gamma photons of at least 0.51 MeV energy each (or higher according to the kinetic energy of the annihilated particles). The secondary electrons (and/or positrons) produced in any of these three processes frequently have enough energy to produce much ionization themselves. Additionally, gamma rays, particularly high energy ones, can interact with atomic nuclei resulting in ejection of particles in photodisintegration, or in some cases, even nuclear fission (photofission).
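The exponential attenuation law given above, together with the half-value-layer figures quoted earlier (1 cm of lead and 6 cm of concrete for the same hypothetical gamma energy), can be turned into a short calculation. This is a minimal sketch; the HVL values are taken from the text's example and are illustrative, not reference data for any particular source:

```python
import math

# Sketch of the attenuation law I(x) = I0 * exp(-mu * x). The half-value
# layers are the illustrative ones from the text (1 cm lead, 6 cm concrete).

def attenuation_coefficient(hvl_cm: float) -> float:
    """Linear attenuation coefficient mu (cm^-1) from a half-value layer."""
    # After one HVL the intensity is halved: exp(-mu * HVL) = 1/2, so
    # mu = ln 2 / HVL.
    return math.log(2) / hvl_cm

def transmitted_fraction(mu: float, thickness_cm: float) -> float:
    """Fraction I/I0 of the incident intensity surviving a given thickness."""
    return math.exp(-mu * thickness_cm)

mu_lead = attenuation_coefficient(1.0)      # ~0.69 cm^-1
mu_concrete = attenuation_coefficient(6.0)  # ~0.12 cm^-1

for n_layers in (1, 2, 5, 10):
    frac = transmitted_fraction(mu_lead, n_layers * 1.0)
    print(f"{n_layers:2d} HVL(s) of lead -> {frac:.4f} of intensity transmitted")

# Thickness of concrete matching 5 cm of lead (equal attenuation):
target = transmitted_fraction(mu_lead, 5.0)
t_concrete = -math.log(target) / mu_concrete
print(f"5 cm lead ~ {t_concrete:.0f} cm concrete for the same attenuation")
```

Because each half-value layer halves the intensity independently, ten layers transmit about 0.1% of the incident radiation, and matching lead's attenuation simply requires thickness in proportion to the ratio of half-value layers.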
Light interaction High-energy (from 80 GeV to ~10 TeV) gamma rays arriving from far-distant quasars are used to estimate the extragalactic background light in the universe: the highest-energy rays interact more readily with the background light photons, and thus the density of the background light may be estimated by analyzing the incoming gamma ray spectra. Gamma spectroscopy Gamma spectroscopy is the study of the energetic transitions in atomic nuclei, which are generally associated with the absorption or emission of gamma rays. As in optical spectroscopy (see Franck–Condon effect) the absorption of gamma rays by a nucleus is especially likely (i.e., peaks in a "resonance") when the energy of the gamma ray is the same as that of an energy transition in the nucleus. In the case of gamma rays, such a resonance is seen in the technique of Mössbauer spectroscopy. In the Mössbauer effect the narrow resonance absorption for nuclear gamma absorption can be successfully attained by physically immobilizing atomic nuclei in a crystal. The immobilization of nuclei at both ends of a gamma resonance interaction is required so that no gamma energy is lost to the kinetic energy of recoiling nuclei at either the emitting or absorbing end of a gamma transition. Such loss of energy causes gamma ray resonance absorption to fail. However, when emitted gamma rays carry essentially all of the energy of the atomic nuclear de-excitation that produces them, this energy is also sufficient to excite the same energy state in a second immobilized nucleus of the same type. Applications Gamma rays provide information about some of the most energetic phenomena in the universe; however, they are largely absorbed by the Earth's atmosphere. Instruments aboard high-altitude balloons and satellite missions, such as the Fermi Gamma-ray Space Telescope, provide our only view of the universe in gamma rays. Gamma-induced molecular changes can also be used to alter the properties of semi-precious stones, and are often used to change white topaz into blue topaz. Non-contact industrial sensors commonly use sources of gamma radiation in the refining, mining, chemicals, food, soaps and detergents, and pulp and paper industries, for the measurement of levels, density, and thicknesses. Gamma-ray sensors are also used for measuring fluid levels in the water and oil industries. Typically, these use Co-60 or Cs-137 isotopes as the radiation source. In the US, gamma ray detectors are beginning to be used as part of the Container Security Initiative (CSI). These machines are advertised to be able to scan 30 containers per hour. Gamma radiation is often used to kill living organisms, in a process called irradiation. Applications of this include the sterilization of medical equipment (as an alternative to autoclaves or chemical means), the removal of decay-causing bacteria from many foods and the prevention of the sprouting of fruit and vegetables to maintain freshness and flavor. Despite their cancer-causing properties, gamma rays are also used to treat some types of cancer, since the rays also kill cancer cells. In the procedure called gamma-knife surgery, multiple concentrated beams of gamma rays are directed to the growth in order to kill the cancerous cells. The beams are aimed from different angles to concentrate the radiation on the growth while minimizing damage to surrounding tissues. Gamma rays are also used for diagnostic purposes in nuclear medicine in imaging techniques. A number of different gamma-emitting radioisotopes are used. For example, in a PET scan a radiolabeled sugar called fluorodeoxyglucose emits positrons that are annihilated by electrons, producing pairs of gamma rays that highlight cancer as the cancer often has a higher metabolic rate than the surrounding tissues. The most common gamma emitter used in medical applications is the nuclear isomer technetium-99m which emits gamma rays in the same energy range as diagnostic X-rays. When this radionuclide tracer is administered to a patient, a gamma camera can be used to form an image of the radioisotope's distribution by detecting the gamma radiation emitted (see also SPECT). Depending on which molecule has been labeled with the tracer, such techniques can be employed to diagnose a wide range of conditions (for example, the spread of cancer to the bones via bone scan). Health effects Gamma rays cause damage at a cellular level and are penetrating, causing diffuse damage throughout the body. However, they are less ionising than alpha or beta particles, which are less penetrating. Low levels of gamma rays cause a stochastic health risk, which for radiation dose assessment is defined as the probability of cancer induction and genetic damage.
The International Commission on Radiological Protection says "In the low dose range, below about 100 mSv, it is scientifically plausible to assume that the incidence of cancer or heritable effects will rise in direct proportion to an increase in the equivalent dose in the relevant organs and tissues". High doses produce deterministic effects, in which the severity of acute tissue damage scales with dose and is certain to happen. These effects are compared to the physical quantity absorbed dose, measured by the unit gray (Gy). Effects and body response When gamma radiation breaks DNA molecules, a cell may be able to repair the damaged genetic material, within limits. However, a study by Rothkamm and Lobrich has shown that this repair process works well after high-dose exposure but is much slower in the case of a low-dose exposure. Studies have shown low-dose gamma radiation may be enough to cause cancer. In one study, mice given human-relevant, continuous low-dose gamma radiation showed genotoxic effects after 45 days, with significant increases in chromosomal damage, DNA lesions and phenotypic mutations in blood cells of the irradiated animals, covering the three types of genotoxic activity. Another study examined the effects of acute ionizing gamma radiation in rats at doses up to 10 Gy; the animals showed acute oxidative protein damage, DNA damage, cardiac troponin T carbonylation, and long-term cardiomyopathy. Risk assessment The natural outdoor exposure in the United Kingdom ranges from 0.1 to 0.5 μSv/h with significant increase around known nuclear and contaminated sites. Natural exposure to gamma rays is about 1 to 2 mSv per year, and the average total amount of radiation received in one year per inhabitant in the USA is 3.6 mSv. There is a small increase in the dose, due to naturally occurring gamma radiation, around small particles of high atomic number materials in the human body caused by the photoelectric effect. By comparison, the radiation dose from chest radiography (about 0.06 mSv) is a fraction of the annual naturally occurring background radiation dose. A chest CT delivers 5 to 8 mSv. A whole-body PET/CT scan can deliver 14 to 32 mSv depending on the protocol. The dose from fluoroscopy of the stomach is much higher, approximately 50 mSv (14 times the annual background). An acute full-body equivalent single exposure dose of 1 Sv (1000 mSv), or 1 Gy, will cause mild symptoms of acute radiation sickness, such as nausea and vomiting; and a dose of 2.0–3.5 Sv (2.0–3.5 Gy) causes more severe symptoms (i.e. nausea, diarrhea, hair loss, hemorrhaging, and inability to fight infections), and will cause death in a sizable number of cases—about 10% to 35% without medical treatment. A dose of 3–5 Sv (3–5 Gy) is considered approximately the LD50 (or the lethal dose for 50% of exposed population) for an acute exposure to radiation even with standard medical treatment. A dose higher than 5 Sv (5 Gy) brings an increasing chance of death above 50%. Above 7.5–10 Sv (7.5–10 Gy) to the entire body, even extraordinary treatment, such as bone-marrow transplants, will not prevent the death of the individual exposed (see radiation poisoning). (Doses much larger than this may, however, be delivered to selected parts of the body in the course of radiation therapy.) For low-dose exposure, for example among nuclear workers, who receive an average yearly radiation dose of 19 mSv, the risk of dying from cancer (excluding leukemia) increases by 2 percent.
For a dose of 100 mSv, the risk increase is 10 percent. By comparison, the risk of dying from cancer was increased by 32 percent for the survivors of the atomic bombing of Hiroshima and Nagasaki. Units of measurement and exposure Radiation quantities are expressed in both SI and legacy non-SI units. The measure of the ionizing effect of gamma and X-rays in dry air is called the exposure, for which a legacy unit, the röntgen, was used from 1928. This has been replaced by kerma, now mainly used for instrument calibration purposes but not for received dose effect. The effect of gamma and other ionizing radiation on living tissue is more closely related to the amount of energy deposited in tissue than to the ionisation of air, and replacement radiometric units and quantities for radiation protection have been defined and developed from 1953 onwards. These are: The gray (Gy), the SI unit of absorbed dose, which is the amount of radiation energy deposited in the irradiated material. For gamma radiation this is numerically equivalent to the equivalent dose measured by the sievert, which indicates the stochastic biological effect of low levels of radiation on human tissue. The radiation weighting conversion factor from absorbed dose to equivalent dose is 1 for gamma, whereas alpha particles have a factor of 20, reflecting their greater ionising effect on tissue. The rad is the deprecated CGS unit for absorbed dose and the rem is the deprecated CGS unit of equivalent dose, used mainly in the USA. Distinction from X-rays The conventional distinction between X-rays and gamma rays has changed over time. Originally, the electromagnetic radiation emitted by X-ray tubes almost invariably had a longer wavelength than the radiation (gamma rays) emitted by radioactive nuclei. Older literature distinguished between X- and gamma radiation on the basis of wavelength, with radiation shorter than some arbitrary wavelength, such as 10⁻¹¹ m, defined as gamma rays. Since the energy of photons is proportional to their frequency and inversely proportional to wavelength, this past distinction between X-rays and gamma rays can also be thought of in terms of energy, with gamma rays considered to be higher energy electromagnetic radiation than are X-rays. However, since current artificial sources are now able to duplicate any electromagnetic radiation that originates in the nucleus, as well as far higher energies, the wavelengths characteristic of radioactive gamma ray sources vs. other types now completely overlap. Thus, gamma rays are now usually distinguished by their origin: X-rays are emitted by definition by electrons outside the nucleus, while gamma rays are emitted by the nucleus. Exceptions to this convention occur in astronomy, where gamma decay is seen in the afterglow of certain supernovas, but radiation from high energy processes known to involve other radiation sources than radioactive decay is still classed as gamma radiation. For example, modern high-energy X-rays produced by linear accelerators for megavoltage treatment in cancer often have higher energy (4 to 25 MeV) than do most classical gamma rays produced by nuclear gamma decay. One of the most common gamma ray emitting isotopes used in diagnostic nuclear medicine, technetium-99m, produces gamma radiation of the same energy (140 keV) as that produced by diagnostic X-ray machines, but of significantly lower energy than therapeutic photons from linear particle accelerators.
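The energy, frequency, and wavelength figures quoted in this article are linked by the Planck relation, E = hν and λ = hc/E. A minimal sketch checking them with standard constants; the photon energies are the ones quoted in the text:

```python
# Sketch of the photon energy relations E = h*nu and lambda = h*c/E used in
# the discussion above. Constants are standard CODATA values; the photon
# energies are the ones quoted in the text (511 keV annihilation quanta,
# 140 keV technetium-99m gammas, 1.17/1.33 MeV cobalt-60 gammas).
H_EV_S = 4.135667696e-15   # Planck constant, eV*s
C_M_S = 299_792_458.0      # speed of light, m/s

def frequency_hz(energy_ev: float) -> float:
    return energy_ev / H_EV_S          # nu = E / h

def wavelength_m(energy_ev: float) -> float:
    return H_EV_S * C_M_S / energy_ev  # lambda = h*c / E

for label, e_ev in [("annihilation", 511e3), ("Tc-99m", 140e3),
                    ("Co-60", 1.17e6), ("Co-60", 1.33e6)]:
    print(f"{label:12s} {e_ev/1e3:8.0f} keV -> "
          f"nu = {frequency_hz(e_ev):.3g} Hz, lambda = {wavelength_m(e_ev):.3g} m")
```

The 511 keV annihilation quanta come out at about 1.24×10²⁰ Hz, matching the figure given earlier, while the 140 keV technetium-99m line has a wavelength of roughly 9×10⁻¹² m, close to the conventional 10⁻¹¹ m wavelength boundary mentioned above.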
In the medical community today, the convention that radiation produced by nuclear decay is the only type referred to as "gamma" radiation is still respected. Due to this broad overlap in energy ranges, in physics the two types of electromagnetic radiation are now often defined by their origin: X-rays are emitted by electrons (either in orbitals outside of the nucleus, or while being accelerated to produce bremsstrahlung-type radiation), while gamma rays are emitted by the nucleus or by means of other particle decays or annihilation events. There is no lower limit to the energy of photons produced by nuclear reactions, and thus ultraviolet or lower energy photons produced by these processes would also be defined as "gamma rays" (indeed, this happens for the isomeric transition of the extremely low-energy isomer 229mTh). The only naming convention that is still universally respected is the rule that electromagnetic radiation that is known to be of atomic nuclear origin is always referred to as "gamma rays", and never as X-rays. However, in physics and astronomy, the converse convention (that all gamma rays are considered to be of nuclear origin) is frequently violated. In astronomy, higher energy gamma and X-rays are defined by energy, since the processes that produce them may be uncertain and photon energy, not origin, determines the astronomical detectors required. High-energy photons occur in nature that are known to be produced by processes other than nuclear decay but are still referred to as gamma radiation. An example is "gamma rays" from lightning discharges at 10 to 20 MeV, which are known to be produced by the bremsstrahlung mechanism. Another example is gamma-ray bursts, now known to be produced from processes too powerful to involve simple collections of atoms undergoing radioactive decay. This is part and parcel of the general realization that many gamma rays produced in astronomical processes result not from radioactive decay or particle annihilation, but rather from non-radioactive processes similar to X-rays. Although the gamma rays of astronomy often come from non-radioactive events, a few gamma rays in astronomy are specifically known to originate from gamma decay of nuclei (as demonstrated by their spectra and emission half life). A classic example is that of supernova SN 1987A, which emits an "afterglow" of gamma-ray photons from the decay of newly made radioactive nickel-56 and cobalt-56. Most gamma rays in astronomy, however, arise by other mechanisms. See also Annihilation Galactic Center GeV excess Gaseous ionization detectors Very-high-energy gamma ray Ultra-high-energy gamma ray Explanatory notes References External links Basic reference on several types of radiation Radiation Q & A GCSE information Radiation information Gamma-ray bursts The Lund/LBNL Nuclear Data Search – Contains information on gamma-ray energies from isotopes. Mapping soils with airborne detectors The LIVEChart of Nuclides – IAEA with filter on gamma-ray energy Health Physics Society Public Education Website Articles containing video clips Electromagnetic spectrum IARC Group 1 carcinogens Nuclear physics Radiation Radioactivity
Gamma ray
[ "Physics", "Chemistry" ]
7,709
[ "Transport phenomena", "Physical phenomena", "Spectrum (physical sciences)", "Electromagnetic spectrum", "Waves", "Radiation", "Gamma rays", "Nuclear physics", "Radioactivity" ]
23,679,414
https://en.wikipedia.org/wiki/Real-time%20Neutron%20Monitor%20Database
The Real-time Neutron Monitor Database (or NMDB) is a worldwide network of standardized neutron monitors, used to record variations of the primary cosmic rays. The measurements complement space-based cosmic ray measurements. Unlike data from satellite experiments, neutron monitor data has never been available in high resolution from many stations in real-time. The data is often only available from the individual stations' websites, in varying formats, and not in real-time. To overcome this deficit, the European Commission is supporting the Real-time Neutron Monitor Database (NMDB) as an e-Infrastructures project in the Seventh Framework Programme in the Capacities section. Stations that do not have 1-minute resolution will be supported by the development of an affordable standard registration system that will submit the measurements to the database via the internet in real-time. This resolves the problem of different data formats and for the first time allows the use of real-time cosmic ray measurements for space weather predictions (Steigies, Klein et al.). Besides creating a database and developing applications working with this data, a part of the project is dedicated to creating a public outreach website with information about cosmic rays and their possible effects on humans, technological systems, and the environment (Mavromichalaki et al.). See also Altitude SEE Test European Platform (ASTEP) References External links NMDB Homepage Cosmic-ray experiments Real-time technology Astronomical databases
Real-time Neutron Monitor Database
[ "Astronomy", "Technology" ]
283
[ "Astronomical databases", "Works about astronomy", "nan" ]
23,681,458
https://en.wikipedia.org/wiki/Recrystallization%20%28chemistry%29
Recrystallization is a method used to purify chemicals by dissolving a mixture of a compound and its impurities in an appropriate solvent and heating the solution. Following the dissolution of the crude product, the mixture passively cools, yielding the crystallized compound and its impurities as separate entities. The newly formed crystals can then be subjected to X-ray analysis for purity assessment. Methods Single-solvent recrystallization The solvent utilized in single-solvent recrystallization must dissolve the crude reaction mixture only when it is heated to reflux. The heated solution is then passively cooled, yielding a crystallized product free of impurities (a numerical sketch of the recovery achievable this way follows below). The solid crystals are then collected utilizing a filtration apparatus and the filtrate is discarded. Product purity can then be assessed via NMR spectroscopy. Multi-solvent recrystallization Multi-solvent recrystallization relies on the crude product being soluble in one solvent when it is heated to reflux, while being insoluble in a secondary solvent, regardless of the solvent's temperature. The volume ratio between the first and second solvent is critical. A higher ratio of first to second solvent will lead to permanent dissolution of the desired product, while a low ratio will lead to minimal pure crystal recovery. The terms "first" and "second" refer to the crude-product-soluble and crude-product-insoluble solvents, respectively. Typically, the second solvent, following the dissolution of the impure solid in the first solvent, is added slowly until the desired product begins to crystallize from solution. The solution is then cooled to further induce recrystallization. Hot filtration recrystallization Hot filtration recrystallization can be used to separate a pure compound from both impurities and some insoluble matter, which may be anything from a third party impurity to fragments of broken glass. The technique makes use of the single-solvent system, outlined above, by dissolving a crude reaction mixture in a minimum amount of hot solvent, before gravity filtering the saturated solution to remove insoluble matter. The saturated solution will then passively cool, yielding pure crystals.
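The balance described above (enough hot solvent to dissolve the crude product, but a cold solubility low enough that most of it crystallizes back out) can be made concrete with a short calculation. A minimal sketch, using invented illustrative solubilities rather than data for any real compound-solvent pair:

```python
# Sketch of the recovery obtainable from a single-solvent recrystallization.
# Solubilities (g of compound per 100 mL of solvent) and quantities are
# invented illustrative numbers; impurities are assumed minor enough to stay
# below their own saturation limit and therefore remain dissolved on cooling.
SOLUBILITY_HOT = 25.0    # g/100 mL at reflux (assumed)
SOLUBILITY_COLD = 3.0    # g/100 mL after cooling (assumed)

crude_g = 20.0           # crude product to be dissolved
solvent_ml = crude_g / SOLUBILITY_HOT * 100.0   # minimum hot solvent needed

# On cooling, the solution retains only SOLUBILITY_COLD per 100 mL of the
# compound; the excess crystallizes out and is collected by filtration.
stays_dissolved = SOLUBILITY_COLD * solvent_ml / 100.0
recovered = crude_g - stays_dissolved
print(f"solvent needed: {solvent_ml:.0f} mL")
print(f"recovered crystals: {recovered:.1f} g ({recovered / crude_g:.0%} yield)")
```

With these numbers, 80 mL of hot solvent dissolves the 20 g charge and about 17.6 g (88%) crystallizes back out; the product left in the filtrate is the unavoidable cost of the method, while the dilute impurities stay in solution, which is why the recovered crystals are purer than the crude material.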
X-ray analysis Recrystallized products are often subject to X-ray crystallography for purity assessment. The technique requires crystallized products to be singular and free of clumps. Several approaches for growing such single crystals are listed below. Slow evaporation of a single solvent - typically the compound is dissolved in a suitable solvent and the solvent is allowed to slowly evaporate. Once the solution is saturated, crystals can form. Slow evaporation of a multi-solvent system - the same as above; however, the solvent composition changes due to evaporation of the more volatile solvent. The compound is more soluble in the volatile solvent, and so the compound becomes increasingly insoluble in solution and crystallizes. Slow diffusion - similar to the above. However, a second solvent is allowed to evaporate from one container into a container holding the compound solution (gas diffusion). As the solvent composition changes due to an increase in the solvent that has gas diffused into the solution, the compound becomes increasingly insoluble in the solution and crystallizes. Interface/slow mixing (often performed in an NMR tube) - similar to the above, but instead of one solvent gas-diffusing into another, the two solvents mix (diffuse) by liquid-liquid diffusion. Typically a second solvent is "layered" carefully on top of the solution containing the compound. Over time the two solutions mix. As the solvent composition changes due to diffusion, the compound becomes increasingly insoluble in solution and crystallizes, usually at the interface. Additionally, it is better to use a denser solvent as the lower layer, and/or a hotter solvent as the upper layer, because this results in slower mixing of the solvents. Specialized equipment in the shape of an "H" can be used, where one of the vertical lines of the "H" is a tube containing a solution of the compound, the other vertical line is a tube containing a solvent in which the compound is not soluble, and the horizontal line of the "H" is a tube joining the two vertical tubes, which also has a fine glass sinter that restricts the mixing of the two solvents. Once single perfect crystals have been obtained, it is recommended that the crystals are kept in a sealed vessel with some of the liquid of crystallization to prevent the crystal from 'drying out'. Single perfect crystals may contain solvent of crystallization in the crystal lattice. Loss of this internal solvent from the crystals can result in the crystal lattice breaking down, and the crystals turning to powder. See also Crystal structure Fractional crystallization (chemistry) Structure Analysis and Structured Design References Chemical processes Laboratory techniques Phase transitions Separation processes Semiconductor growth
Recrystallization (chemistry)
[ "Physics", "Chemistry" ]
1,006
[ "Physical phenomena", "Phase transitions", "Separation processes", "Critical phenomena", "Phases of matter", "Chemical processes", "nan", "Chemical process engineering", "Statistical mechanics", "Matter" ]
23,682,260
https://en.wikipedia.org/wiki/National%20Law%20Enforcement%20System
The National Law Enforcement System, better known as the Wanganui Computer, was a database set up in 1976 by the State Services Commission in Wanganui, New Zealand. It held information which could be accessed by the New Zealand Police, the Land Transport Safety Authority and the justice department. The Wanganui computer was a Sperry mainframe computer built to hold records such as criminal convictions and car and gun licences. At the time it was deemed ground-breaking, with Minister of Police Allan McCready describing it as "probably the most significant crime-fighting weapon ever brought to bear against lawlessness in this country". Seen by many as a Big Brother initiative, the database was controversial, attracting numerous protests from libertarians with concerns over privacy. The most notable event was in 1982, when self-described anarchist punk Neil Roberts, aged 22, detonated a home-made gelignite bomb upon his person at the gates of the centre, making him New Zealand's highest-profile suicide bomber. The blast was large enough to be heard around Wanganui, and Roberts was killed instantly, being later identified by his unique chest tattoo bearing the words "This punk won't see 23. No future." The centre survived this and other protests until the 1990s, when the operation was transferred to Auckland, although this new system has retained its Wanganui moniker. The original database, having lasted 30 years and grown increasingly outdated, was finally shut down in June 2005, with responsibility successfully handed over to the Auckland-based National Intelligence Application (also known as NIA). The building, known as 'Wairere House', was later occupied by the National Library of New Zealand and contained newspaper archives. See also INCIS References Law enforcement in New Zealand 2005 disestablishments in New Zealand Computer systems
National Law Enforcement System
[ "Technology", "Engineering" ]
369
[ "Computer science", "Computers", "Computer engineering", "Computer systems" ]
23,682,291
https://en.wikipedia.org/wiki/Cam%20follower
A cam follower, also known as a track follower, is a specialized type of roller or needle bearing designed to follow cam lobe profiles. Cam followers come in a vast array of different configurations; however, the most defining characteristic is how the cam follower mounts to its mating part: stud style cam followers use a stud, while the yoke style has a hole through the middle. Construction The modern stud type follower was invented and patented in 1937 by Thomas L. Robinson of the McGill Manufacturing Company. It replaced using a standard bearing and bolt. The new cam followers were easier to use because the stud was already included and they could also handle higher loads. While roller cam followers are similar to roller bearings, there are quite a few differences. Standard ball and roller bearings are designed to be pressed into a rigid housing, which provides circumferential support. This keeps the outer race from deforming, so the race cross-section is relatively thin. In the case of cam followers the outer race is loaded at a single point, so the outer race needs a thicker cross-section to reduce deformation. However, in order to facilitate this the roller diameter must be decreased, which also decreases the dynamic bearing capacity. End plates are used to contain the needles or bearing axially. On stud style followers one of the end plates is integrated into the inner race/stud; the other is pressed onto the stud up to a shoulder on the inner race. The inner race is induction hardened so that the stud remains soft if modifications need to be made. On yoke style followers the end plates are peened or pressed onto the inner race, or liquid metal is injected onto the inner race. The inner race is either induction hardened or through hardened. Another difference is that a lubrication hole is provided to relubricate the follower periodically. A hole is provided at both ends of the stud for lubrication. They also usually have a black oxide finish to help reduce corrosion. Types There are many different types of cam followers available. Anti-friction element The most common anti-friction element employed is a full complement of needle rollers. This design can withstand high radial loads but no thrust loads. A similar design is the caged needle roller design, which also uses needle rollers, but uses a cage to keep them separated. This design allows for higher speeds but decreases the load capacity. The cage also increases internal space so it can hold more lubrication, which increases the time between relubrications. Depending on the exact design, sometimes two rollers are put in each pocket of the cage, using a cage design originated by RBC Bearings in 1971. For heavy-duty applications a roller design can be used. This employs two rows of rollers of larger diameter than used in needle roller cam followers to increase the dynamic load capacity and provide some thrust capabilities. This design can support higher speeds than the full complement design. For light-duty applications a bushing type follower can be used. Instead of a rolling element, a plastic bushing is used to reduce friction, which provides a maintenance free follower. The disadvantage is that it can only support light loads, slow speeds, no thrust loads, and the temperature limit is . A bushing type stud follower can only support approximately 25% of the load of a roller type stud follower, while the heavy and yoke followers can handle 50% (the sketch below works through these deratings). All-metallic heavy-duty bushing type followers also exist.
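A short numerical sketch of the load deratings just quoted. The base roller-follower capacity is an invented illustrative figure, not a catalog rating; only the percentage factors come from the text:

```python
# Sketch of the bushing-follower load deratings quoted above: a plain bushing
# stud follower carries ~25% of the load of the equivalent roller stud
# follower; heavy-stud and yoke bushing followers carry ~50%. The base
# capacity is an assumed illustrative number, not manufacturer data.
BASE_ROLLER_CAPACITY_N = 10_000.0   # assumed roller-type rating, newtons

DERATING = {
    "bushing, standard stud": 0.25,
    "bushing, heavy stud": 0.50,
    "bushing, yoke": 0.50,
}

for style, factor in DERATING.items():
    allowable = BASE_ROLLER_CAPACITY_N * factor
    print(f"{style:24s} -> {allowable:8.0f} N allowable")
```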
Shape The outer diameter (OD) of the cam follower (stud or yoke) can be the standard cylindrical shape or be crowned. Crowned cam followers are used to keep the load evenly distributed if it deflects or if there is any misalignment between the follower and the followed surface. They are also used in turntable type applications to reduce skidding. Crowned followers can compensate for up to 0.5° of misalignment, while a cylindrical OD can only tolerate 0.06°. The only disadvantage is that they cannot bear as much load because of higher stresses. Stud Stud style cam followers usually have a standard sized stud, but a heavy stud is available for increased static load capacity. Drives The standard driving system for a stud type cam follower is a slot, for use with a flat head screwdriver. However, hex sockets are available for higher torquing ability, which is especially useful for eccentric cam followers and those used in blind holes. Hex socket cam followers from most manufacturers eliminate the relubrication capability on that end of the cam follower. RBC Bearings' Hexlube cam followers feature a relubrication fitting at the bottom of the hex socket. Eccentricity Stud type cam followers are available with an eccentric stud. The stud has a bushing pushed onto it that has an eccentric outer diameter. This allows for adjustability during installation to eliminate any backlash. The adjustable range for an eccentric bearing is twice that of the eccentricity. Yoke Yoke type cam followers are usually used in applications where minimal deflection is required, as they can be supported on both sides. They can support the same static load as a heavy stud follower. Track followers All cam followers can be track followers, but not all track followers are cam followers. Some track followers have specially shaped outer diameters (OD) to follow tracks. For example, track followers are available with a V-groove for following a V-track, or the OD can have a flange to follow the lip of the track. Specialized track followers are also designed to withstand thrust loads so the anti-friction elements are usually bearing balls or of a tapered roller bearing construction. See also Tappet Reciprocating motion References Bearings (mechanical) Mechanical engineering
Cam follower
[ "Physics", "Engineering" ]
1,157
[ "Applied and interdisciplinary physics", "Mechanical engineering" ]
23,686,222
https://en.wikipedia.org/wiki/Industrial%20shredder
An industrial shredder is a machine used to break down materials for various applications such as recycling, volume reduction, and product destruction. Industrial shredders come in many different sizes and design variations based on what particle size is needed for the final shredded product. The main categories of designs used today are as follows: low speed, high torque shear type shredders of single, dual, triple and quad shaft design; single shaft grinders of single or dual shaft design; granulators; knife hogs; raspers; maulers; flails; crackermills; and refining mills. Industrial shredder components include a rotor, counter blades, housing, motor, transmission system, power system and electrical control system. Some examples of materials that are commonly shredded are: tires, metals, construction and demolition debris, wood, plastics, leathers, papers and garbage, such as commercial and mixed waste. The industrial shredder is commonly used to process materials into different sizes for separation or to reduce the cost of transport. Waste materials such as municipal solid waste, radioactive waste, medical waste, and hazardous waste are shredded in treatment and disposal systems. Because the hardness of materials differs, the blades on shredders also differ slightly. An industrial shredder is any shredder that can be used in an industrial application (rather than a consumer application). They can be equipped with different types of cutting systems: horizontal shaft design, vertical shaft design, single-shaft, two-shaft, three-shaft and four-shaft cutting systems. These shredders are slow speed or high speed, and their classification as industrial shredders is not restricted by speed or horsepower. Small, low-cost portable shredders have been developed; these are often suitable for personal use as well as for small scale industry. The largest scrap metal shredder in the world was designed with 10,000 hp by the Schnitzer steel group of Portland, Oregon in 1980. The Lynxs at the Sims Metal Management plant at the mouth of the River Usk in Newport, Wales has access by road, rail and sea. It can process 450 cars per hour. See also Ramial chipped wood References External links Industrial machinery
Industrial shredder
[ "Engineering" ]
456
[ "Industrial machinery" ]
23,687,229
https://en.wikipedia.org/wiki/Journal%20of%20Applied%20Electrochemistry
The Journal of Applied Electrochemistry is a peer-reviewed scientific journal published by Springer Science+Business Media, which focuses on the technological applications of electrochemistry. Subjects covered are energy conversion, conservation, and storage, industrial synthesis, environmental remediation, electrochemical engineering, supercapacitors, fuel and solar cells, cell design, corrosion, hydrometallurgy, surface finishing, electroplating, electrodeposition, and other applications of electrochemical research. The journal is affiliated with the International Society of Electrochemistry. History The Journal of Applied Electrochemistry was established in 1971 under founding editor Douglas Inman to supplement existing journals that focused on research into fundamental electrochemical science. The journal's current editor is Gerardine G. Botte. Abstracting and indexing The journal is abstracted and indexed in the Science Citation Index Expanded, Scopus, Inspec, and Current Contents/Physical, Chemical and Earth Sciences. According to the Journal Citation Reports, the journal has a 2021 impact factor of 2.925. References External links Springer Science+Business Media academic journals Academic journals established in 1971 Monthly journals Electrochemistry journals
Journal of Applied Electrochemistry
[ "Chemistry" ]
234
[ "Electrochemistry journals", "Electrochemistry", "Electrochemistry stubs", "Physical chemistry journals", "Physical chemistry stubs" ]
10,629,767
https://en.wikipedia.org/wiki/Tir%20%28receptor%29
Tir (translocated intimin receptor) is an essential component in the adherence of the enteropathogenic Escherichia coli (EPEC) and enterohemorrhagic Escherichia coli (EHEC) to the cells lining the small intestine. To aid attachment, both EPEC and EHEC possess the ability to reorganise the host cell actin cytoskeleton via the secretion of virulence factors. These factors are secreted directly into the cells using a type three secretion system. One of the virulence factors secreted is the Translocated Intimin Receptor (Tir). Tir is a receptor protein encoded by the espE gene, which is located on the locus of enterocyte effacement (LEE) pathogenicity island in EPEC strains. It is secreted into the host cell membranes and acts as a receptor for intimin, which is found on the bacterial surface. Once Tir binds intimin, the bacterium is attached to the enterocyte surface. Tir is also a receptor tyrosine kinase (RTK) that initiates its intimate adherence by inserting into the intestinal cell membrane in a hairpin orientation, enabling tight binding to intimin on the bacterial cell outer membrane. Upon phosphorylation, Tir activates condensation and polymerization of actin filaments under the bacterial cell to form a pedestal-like structure. References Enterobacteria Tyrosine kinase receptors
Tir (receptor)
[ "Chemistry" ]
315
[ "Tyrosine kinase receptors", "Signal transduction" ]
10,630,682
https://en.wikipedia.org/wiki/Coffin%20corner%20%28aerodynamics%29
Coffin corner (also known as the aerodynamic ceiling or Q corner) is the region of flight where a fast but subsonic fixed-wing aircraft's stall speed is near the critical Mach number, at a given gross weight and G-force loading. In this region of flight, it is very difficult to keep an airplane in stable flight. Because the stall speed is the minimum speed required to maintain level flight, any reduction in speed will cause the airplane to stall and lose altitude. Because the critical Mach number is the maximum speed at which air can travel over the wings without losing lift due to flow separation and shock waves, any increase in speed will cause the airplane to lose lift, or to pitch heavily nose-down, and lose altitude. The "corner" refers to the triangular shape at the top of a flight envelope chart where the stall speed and critical Mach number are within a few knots of each other. The "coffin" refers to the possible death in these kinds of stalls. The speed where they meet is the ceiling of the aircraft. This is distinct from the same term used for helicopters when outside the auto-rotation envelope as seen in the height-velocity diagram. Aerodynamic basis Consideration of statics shows that when a fixed-wing aircraft is in straight, level flight at constant airspeed, the lift on the main wing plus the force (in the negative sense if downward) on the horizontal stabilizer is equal to the aircraft's weight, and its thrust is equal to its drag. In most circumstances this equilibrium can occur at a range of airspeeds. The minimum such speed is the stall speed, or VSO. The indicated airspeed at which a fixed-wing aircraft stalls varies with the weight of the aircraft but does not vary significantly with altitude. At speeds close to the stall speed the aircraft's wings are at a high angle of attack. At higher altitudes, the air density is lower than at sea level. Because of the progressive reduction in air density as the aircraft's altitude increases, its true airspeed is progressively greater than its indicated airspeed. For example, the indicated airspeed at which an aircraft stalls can be considered constant, but the true airspeed at which it stalls increases with altitude. Air conducts sound at a certain speed, the "speed of sound". This becomes slower as the air becomes cooler. Because the temperature of the atmosphere generally decreases with altitude (until the tropopause), the speed of sound also decreases with altitude. (See the International Standard Atmosphere for more on temperature as a function of altitude.) A given airspeed, divided by the speed of sound in that air, gives a ratio known as the Mach number. A Mach number of 1.0 indicates an airspeed equal to the speed of sound in that air. Because the speed of sound increases with air temperature, and air temperature generally decreases with altitude, the true airspeed for a given Mach number generally decreases with altitude. As an airplane moves through the air faster, the airflow over parts of the wing will reach speeds that approach Mach 1.0. At such speeds, shock waves form in the air passing over the wings, drastically increasing the drag due to drag divergence, causing Mach buffet, or drastically changing the center of pressure, resulting in a nose-down moment called "Mach tuck". The aircraft Mach number at which these effects appear is known as its critical Mach number, or MCRIT. The true airspeed corresponding to the critical Mach number generally decreases with altitude.
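The convergence of these two limits can be illustrated numerically. The following is a rough sketch, not data for any real aircraft: it assumes the International Standard Atmosphere, treats the stall as occurring at a constant equivalent airspeed (a stand-in for the constant indicated stall speed described above), and invents a 120-knot stall speed and a critical Mach number of 0.70:

```python
import math

# Rough sketch of the coffin corner: true airspeed of the stall limit (rises
# as the air thins) versus the critical-Mach limit (falls as the air cools).
# ISA constants are standard; V_STALL_EAS and M_CRIT are invented values.
GAMMA, R, G = 1.4, 287.05, 9.80665
T0, RHO0, LAPSE = 288.15, 1.225, 0.0065
KT = 0.514444                          # m/s per knot

def isa_temp_rho(h_m: float):
    """ISA temperature (K) and density (kg/m^3), valid up to 20 km."""
    if h_m <= 11_000.0:                # troposphere: linear temperature lapse
        t = T0 - LAPSE * h_m
        rho = RHO0 * (t / T0) ** (G / (LAPSE * R) - 1.0)
    else:                              # lower stratosphere: isothermal layer
        t = 216.65
        rho11 = RHO0 * (t / T0) ** (G / (LAPSE * R) - 1.0)
        rho = rho11 * math.exp(-G * (h_m - 11_000.0) / (R * t))
    return t, rho

V_STALL_EAS = 120.0 * KT               # assumed stall speed (equivalent airspeed)
M_CRIT = 0.70                          # assumed critical Mach number

for h_ft in range(0, 65_001, 5_000):
    t, rho = isa_temp_rho(h_ft * 0.3048)
    v_stall_tas = V_STALL_EAS / math.sqrt(rho / RHO0)  # TAS rises as air thins
    v_mach_tas = M_CRIT * math.sqrt(GAMMA * R * t)     # TAS limit falls as air cools
    margin_kt = (v_mach_tas - v_stall_tas) / KT
    print(f"{h_ft:6d} ft  stall {v_stall_tas/KT:5.0f} kt  "
          f"Mcrit {v_mach_tas/KT:5.0f} kt  margin {margin_kt:6.0f} kt")
```

With these made-up numbers the margin shrinks from several hundred knots at sea level to zero a little above 60,000 ft, which is the coffin corner for this hypothetical aircraft; a heavier weight, a higher load factor, or a warmer atmosphere would move it.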
The flight envelope is a plot of various curves representing the limits of the aircraft's true airspeed and altitude. Generally, the top-left boundary of the envelope is the curve representing stall speed, which increases as altitude increases. The top-right boundary of the envelope is the curve representing critical Mach number in true airspeed terms, which decreases as altitude increases. These curves typically intersect at some altitude higher than the maximum permitted altitude for the aircraft. This intersection is the coffin corner, or more formally the Q corner. The above explanation is based on level, constant-speed flight with a given gross weight and a load factor of 1.0 G. The specific altitudes and speeds of the coffin corner will differ depending on weight, and the load factor increases caused by banking and pitching maneuvers. Similarly, the specific altitudes at which the stall speed meets the critical Mach number will differ depending on the actual atmospheric temperature. Consequences When an aircraft slows to below its stall speed, it is unable to generate enough lift in order to cancel out the forces that act on the aircraft (such as weight and centripetal force). This will cause the aircraft to drop in altitude. The drop in altitude may cause the pilot to increase the angle of attack by pulling back on the stick, because normally increasing the angle of attack puts the aircraft in a climb. However, when the wing exceeds its critical angle of attack, an increase in angle of attack will lead to a loss of lift and a further loss of airspeed – the wing stalls. The reason why the wing stalls when it exceeds its critical angle of attack is that the airflow over the top of the wing separates. When the airplane exceeds its critical Mach number (such as during stall prevention or recovery), then drag increases or Mach tuck occurs, which can cause the aircraft to upset, lose control, and lose altitude. In either case, as the airplane falls, it could gain speed and then structural failure could occur, typically due to excessive g forces during the pullout phase of the recovery. As an airplane approaches its coffin corner, the margin between stall speed and critical Mach number becomes smaller and smaller. Small changes could put one wing or the other above or below the limits. For instance, a turn causes the inner wing to have a lower airspeed, and the outer wing, a higher airspeed. The aircraft could exceed both limits at once. Alternatively, turbulence could cause the airspeed to change suddenly, to beyond the limits. Some aircraft, such as the Lockheed U-2, routinely operate in the "coffin corner". In the case of the U-2, the aircraft was equipped with an autopilot, though it was unreliable. The U-2's speed margin, at high altitude, between 1-g stall warning buffet and Mach buffet can be as small as 5 knots. Aircraft capable of flying close to their critical Mach number usually carry a machmeter, an instrument which indicates speed in Mach number terms. As part of certifying aircraft in the United States of America, the Federal Aviation Administration (FAA) certifies a maximum operational velocity in terms of Mach number, or MMO. Following a series of crashes of high performance aircraft operating at high altitudes to which no definite cause could be attributed, as the aircraft involved suffered near total destruction, the FAA published an Advisory Circular establishing guidelines for improved aircrew training in high altitude operations in high performance aircraft.
The circular includes a comprehensive explanation of the aerodynamic effects of, and operations near, the coffin corner. Due to the effects of greater Mach number in high-altitude flight, the expected flight characteristics of a given configuration can change significantly. This was pointed out by a report describing the effect of ice crystals on pitot-tube airspeed indications at high altitude. See also Height-velocity diagram for helicopters References External links Aviation risks Aerodynamics
Coffin corner (aerodynamics)
[ "Chemistry", "Engineering" ]
1,484
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
10,636,131
https://en.wikipedia.org/wiki/Mathematical%20markup%20language
A mathematical markup language is a computer notation for representing mathematical formulae, based on mathematical notation. Specialized markup languages are necessary because computers normally deal with linear text and more limited character sets (although increasing support for Unicode is obsoleting very simple uses). A formally standardized syntax also allows a computer to interpret otherwise ambiguous content, for rendering or even evaluating. For computer-interpretable syntaxes, the most popular are TeX/LaTeX, MathML (Mathematical Markup Language), OpenMath and OMDoc. Notations for human input Popular languages for input by humans and interpretation by computers include TeX/LaTeX and eqn. Computer algebra systems such as Macsyma, Mathematica (Wolfram Language), Maple, and MATLAB each have their own syntax. When the purpose is informal communication with other humans, syntax is often ad hoc, sometimes called "ASCII math notation". Academics sometimes use syntax based on TeX due to familiarity with it from writing papers. Those used to programming languages may also use shorthands like "!" for ¬ (negation). Web pages may also use a limited amount of HTML to mark up a small subset, for example superscripting. Ad hoc syntax requires context to interpret ambiguous syntax, for example "<=" could be "is implied by" or "less than or equal to", and "dy/dx" is likely to denote a derivative, but strictly speaking could also mean a finite quantity dy divided by dx. Unicode improves the support for mathematics, compared to ASCII only. Examples

TeX: $a^2$ | eqn: a sup 2 | ad hoc ASCII: a^2 | ad hoc Unicode: a²
TeX: $\sum_{k=1}^N k^2$ | eqn: sum from { k = 1 } to N { k sup 2 } | ad hoc ASCII: sum_{k=1}^N k^2 | ad hoc Unicode: Σ_{k=1}^N k²
TeX: $\neg (a > 2) \Rightarrow a \le 2$ | eqn: neg (a > 2) drarrow a <= 2 | ad hoc ASCII: !(a > 2) => a <= 2 | ad hoc Unicode: ¬(a > 2) ⇒ a ≤ 2

Markup languages for computer interchange Markup languages optimized for computer-to-computer communication include MathML, OpenMath, and OMDoc. These are designed for clarity, parseability and to minimize ambiguity, at the price of verbosity. However, the verbosity makes them clumsier for humans to type directly. Conversion Many input, rendering, and conversion tools exist. Microsoft Word included Equation Editor, a limited version of MathType, until 2007. These allow entering formulae using a graphical user interface, and converting to standard markup languages such as MathML. With Microsoft's release of Microsoft Office 2007 and the Office Open XML file formats, they introduced a new equation editor which uses a new format, "Office Math Markup Language" (OMML). The lack of compatibility led some prestigious scientific journals to refuse to accept manuscripts which had been produced using Microsoft Office 2007. SciWriter is another GUI that can generate MathML and LaTeX. ASCIIMathML, a JavaScript program, can convert ad hoc ASCII notation to MathML. See also Proof assistant Formal proof List of document markup languages Comparison of document markup languages References External links MathML official website Markup languages
Mathematical markup language
[ "Mathematics" ]
745
[ "Mathematical markup languages" ]
10,637,440
https://en.wikipedia.org/wiki/Deconica%20inquilina
Deconica inquilina is a species of mushroom in the family Strophariaceae. Formerly a member of the genus Psilocybe (well known for its psilocybin containing members), this species belonged to the non-blueing (non-hallucinogenic) clade and was consequently moved to Deconica in 2009. Habitat and distribution Deconica inquilina is found growing on decaying grass. It is very widely distributed, reported from North America, South America and Europe. References Strophariaceae Fungus species Taxa named by Elias Magnus Fries
Deconica inquilina
[ "Biology" ]
116
[ "Fungi", "Fungus species" ]
10,639,465
https://en.wikipedia.org/wiki/321%20kinematic%20structure
321 kinematic structure is a design method for robotic arms (serial manipulators), invented by Donald L. Pieper and used in most commercially produced robotic arms. The inverse kinematics of serial manipulators with six revolute joints, and with three consecutive joints intersecting, can be solved in closed form, i.e. a set of equations can be written that give the joint positions required to place the end of the arm in a particular position and orientation. An arm design that does not follow these design rules typically requires an iterative algorithm to solve the inverse kinematics problem. The 321 design is an example of a 6R wrist-partitioned manipulator: the three wrist joints intersect, the two shoulder and elbow joints are parallel, the first joint intersects the first shoulder joint orthogonally (at a right angle). Many other industrial robots, such as the PUMA, have a kinematic structure that deviates a little bit from the 321 structure. The offsets move the singular positions of the robot away from places in the workspace where they are likely to cause problems. References External links 321 Kinematic Structure Robot kinematics
321 kinematic structure
[ "Engineering" ]
242
[ "Robotics engineering", "Robot kinematics" ]
20,852,148
https://en.wikipedia.org/wiki/Tully%20Lake
Tully Lake, in Royalston, Massachusetts, is a reservoir and flood control project constructed by the United States Army Corps of Engineers (USACE) in 1949 at a cost of $1.6 million. The project prevents flooding of the greater Connecticut River and Millers River valleys and provides a variety of recreational opportunities, including a campground operated by The Trustees of Reservations. Tully Lake is an important link in the Tully Trail. Flood control As of 2007, the USACE reported that Tully Lake had prevented an estimated $26 million in flood damages. The dam's capacity is of water; it can contain 7.72 inches of rainfall runoff and has a downstream channel capacity of . The closest the lake has come to capacity was in 1987, when it rose to 62%. Recreation Tully Lake is open to fishing, small boats, hiking, cross-country skiing, mountain biking, picnicking, hunting (in season), and swimming. Motor vehicles are not allowed on the property. The USACE maintains a disc golf course, a mountain bike trail, and a picnic area. The Trustees of Reservations, a non-profit conservation organization, operates a 35-site tent-only camping facility on Tully Lake, open seasonally. The Tully Trail, a recreational trail cooperatively managed by the USACE, TTOR, the Commonwealth of Massachusetts, the National Park Service, and the Mount Grace Land Conservation Trust, runs along the northern shore of Tully Lake. The trail passes by Doane's Falls, Jacobs Hill, and Royalston Falls, and traverses the summit of Tully Mountain. References External links Tully Lake U.S. Army Corps of Engineers Tully Lake Park Map U.S. Army Corps of Engineers Tully Lake Campground The Trustees of Reservations Campground map The Trustees of Reservations Tully Trail The Trustees of Reservations Tully Trail map The Trustees of Reservations Lakes of Worcester County, Massachusetts United States Army Corps of Engineers The Trustees of Reservations Reservoirs in Massachusetts Buildings and structures in Worcester County, Massachusetts Sports in Worcester County, Massachusetts Parks in Worcester County, Massachusetts Royalston, Massachusetts 1949 establishments in Massachusetts
Tully Lake
[ "Engineering" ]
412
[ "Engineering units and formations", "United States Army Corps of Engineers" ]
20,852,803
https://en.wikipedia.org/wiki/Salpinx%20in%20anatomy
In anatomical contexts, salpinx is used to refer to a type of tube. Per Terminologia Anatomica, the Latin term "tuba" is usually used to describe most tubes (after the Roman tuba, not the modern tuba), but the term "salpinges" and its adjectival derivatives are still sometimes used to describe the following two "tubes": Fallopian tube (as in mesosalpinx, salpingitis and hydrosalpinx) Eustachian tube (as in salpingopalatine fold, salpingopharyngeal fold, and salpingopharyngeus muscle) References Anatomy
Salpinx in anatomy
[ "Biology" ]
139
[ "Anatomy" ]
20,853,872
https://en.wikipedia.org/wiki/Health%20action%20process%20approach
The health action process approach (HAPA) is a psychological theory of health behavior change, developed by Ralf Schwarzer, Professor of Psychology at the Freie Universität Berlin, Germany and SWPS University of Social Sciences and Humanities, Wroclaw, Poland, first published in 1992. Health behavior change refers to the replacement of health-compromising behaviors (such as sedentary behavior) by health-enhancing behaviors (such as physical exercise). To describe, predict, and explain such processes, theories or models are being developed. Health behavior change theories are designed to examine a set of psychological constructs that jointly aim at explaining what motivates people to change and how they take preventive action. HAPA is an open framework of various motivational and volitional constructs that are assumed to explain and predict individual changes in health behaviors such as quitting smoking or drinking, and improving physical activity levels, dental hygiene, seat belt use, breast self-examination, dietary behaviors, and avoiding drunk driving. HAPA suggests that the adoption, initiation, and maintenance of health behaviors should be conceived of as a structured process including a motivation phase and a volition phase. The former describes intention formation while the latter refers to planning and action (initiative, maintenance, recovery). The model emphasizes the particular role of perceived self-efficacy at different stages of health behavior change. Background Models that describe health behavior change can be distinguished in terms of the assumption whether they are continuum-based or stage-based. A continuum (mediator) model claims that change is a continuous process that leads from lack of motivation via action readiness either to successful change or final disengagement. Research on such mediator models is reflected in path diagrams that include distal and proximal predictors of the target behavior. On the other hand, the stage approach assumes that change is non-linear and consists of several qualitative steps that reflect different mindsets of people. A two-layer framework that can be applied either as a continuum or as a stage model is HAPA. It includes self-efficacy, outcome expectancies, and risk perception as distal predictors, intention as a middle-level mediator, and volitional factors (such as action planning) as the most proximal predictors of behavior. (See Self-efficacy.) Good intentions are more likely to be translated into action when people plan when, where, and how to perform the desired behavior. Intentions foster planning, which in turn facilitates behavior change. Planning was found to mediate the intention-behavior relation. A distinction has been made between action planning and coping planning. Coping planning takes place when people imagine scenarios that hinder them from performing their intended behavior, and they develop one or more plans to cope with such a challenging situation. HAPA is designed as a sequence of two continuous self-regulatory processes, a goal-setting phase (motivation) and a goal-pursuit phase (volition). The second phase is subdivided into a pre-action phase and an action phase. Thus, one can superimpose these three phases (stages) on the continuum (mediator) model as a second layer, and regard the stages as moderators. This two-layer architecture allows switching between the continuum model and the stage model, depending on the given research question. 
Five principles HAPA has five major principles that make it distinct from other models. Principle 1: Motivation and volition. The first principle suggests that one should divide the health behavior change process into two phases. There is a switch of mindsets when people move from deliberation to action. First comes the motivation phase in which people develop their intentions. Afterwards, they enter the volition phase. Principle 2: Two volitional phases. In the volition phase there are two groups of individuals: those who have not yet translated their intentions into action, and those who have. There are inactive as well as active persons in this phase. In other words, in the volitional phase one finds intenders as well as actors who are characterized by different psychological states. Thus, in addition to health behavior change as a continuous process, one can also create three categories of people with different mindsets depending on their current point of residence within the course of health behavior change: preintenders, intenders, and actors. The assessment of stages is done by behavior-specific stage algorithms. Principle 3: Postintentional planning. Intenders who are in the volitional preactional stage are motivated to change, but do not act because they might lack the right skills to translate their intention into action. Planning is a key strategy at this point. Planning serves as an operative mediator between intentions and behavior. Principle 4: Two kinds of mental simulation. Planning can be divided into action planning and coping planning. Action planning pertains to the when, where, and how of intended action. Coping planning includes the anticipation of barriers and the design of alternative actions that help to attain one's goals in spite of the impediments. The separation of the planning construct into two constructs, action planning and coping planning, has been found useful, as studies have confirmed the discriminant validity of such a distinction. Action planning seems to be more important for the initiation of health behaviors, whereas coping planning is required for both the initiation and the maintenance of actions. Principle 5: Phase-specific self-efficacy. Perceived self-efficacy is required throughout the entire process. However, the nature of self-efficacy differs from phase to phase. This difference relates to the fact that there are different challenges as people progress from one phase to the next one. Goal setting, planning, initiation, action, and maintenance pose challenges that are not of the same nature. Therefore, one should distinguish between preactional self-efficacy, coping self-efficacy, and recovery self-efficacy. Sometimes the term task self-efficacy is preferred to preactional self-efficacy, and maintenance self-efficacy to coping and recovery self-efficacy. Psychological interventions When it comes to the design of interventions, one can consider identifying individuals who reside either at the motivational stage or the volitional stage. Then, each group becomes the target of a specific treatment that is tailored to this group. Moreover, it is theoretically meaningful and has been found useful to further subdivide the volitional group into those who perform and those who only intend to perform. In the postintentional preactional stage, individuals are labeled "intenders", whereas in the actional stage they are labeled "actors". 
Thus, a suitable subdivision within the health behavior change process yields three groups: nonintenders, intenders, and actors. The term "stage" in this context was chosen to allude to stage theories, but not in the strict definition that includes irreversibility and invariance. The terms "phase" or "mindset" may be equally suitable for this distinction. The basic idea is that individuals pass through different mindsets on their way to behavior change. Thus, interventions may be most efficient when tailored to these particular mindsets. For example, nonintenders are supposed to benefit from confrontation with outcome expectancies and some level of risk communication. They need to learn that the new behavior (e.g., becoming physically active) has positive outcomes (e.g., well-being, weight loss, fun) as opposed to the negative outcomes that accompany the current (sedentary) behavior (such as developing an illness or being unattractive). In contrast, intenders should not benefit from such a treatment because, after setting a goal, they have already moved beyond this mindset. Rather, they should benefit from planning to translate their intentions into action. Finally, actors do not need any treatment at all unless one wants to improve their relapse prevention skills. Then, they should be prepared for particular high-risk situations in which lapses are imminent. Preparation can take the form of teaching them to anticipate such situations and helping them acquire the necessary levels of perceived recovery self-efficacy. There are quite a few randomized controlled trials that have examined the notion of stage-matched interventions based on HAPA, for example in the context of dietary behaviors, physical activity, and dental hygiene. See also Behavioural change theories References External links Official Homepage of Ralf Schwarzer (explaining HAPA with PDF and video) Human behavior Behaviorism Motivation
Health action process approach
[ "Biology" ]
1,741
[ "Behavior", "Motivation", "Behaviorism", "Ethology", "Human behavior" ]
20,854,281
https://en.wikipedia.org/wiki/Metallo-beta-lactamase%20protein%20fold
The metallo-β-lactamase (MBL) superfamily constitutes a group of proteins found in all domains of life that share a characteristic αββα fold with the ability to bind transition metal ions. Such metal binding sites may have divalent transition metal ions like Zn(II), Fe(II)/Fe(III) and Mn(II), and are located at the bottom of a wide cleft able to accommodate diverse substrates. The name was adopted after the first members of the superfamily to be studied experimentally: a group of zinc-dependent hydrolytic enzymes conferring bacterial resistance to β-lactam antibiotics. These zinc-β-lactamases (ZBLs) inactivate β-lactam antibiotics through hydrolysis of the β-lactam ring. Early studies on MBLs were conducted on the enzyme βLII isolated from strain 569/H/9 of Bacillus cereus. It was named βLII because it was the second β-lactamase shown to be produced by the bacterium; the first one, βLI, was a non-metallic β-lactamase, i.e., insensitive to inhibition with EDTA (βLII was renamed BcII over time). Low-resolution X-ray crystallographic analyses published in 1995 disclosed the new αββα fold that would become the hallmark of the MBL superfamily, along with a single Zn(II) ion bound to a three-histidine motif, resembling the active site typical of carbonic anhydrases. Thus, BcII and ZBLs in general were thought to use a single Zn(II) ion to activate a water molecule for hydrolysis, analogous to the mechanism by which carbonic anhydrases hydrate carbon dioxide into bicarbonate. This belief was soon debunked when the structure of the Bacteroides fragilis ZBL, CcrA, was published, showing an additional Zn(II) ion next to the previous one. The second zinc was coordinated to nearby Asp, Cys and His residues. Moreover, the second metal ion was later found in the Bacillus cereus ZBL too, starting a decade-long controversy regarding the role of each zinc ion. Later on, it was found that monometallic ZBLs are rather exceptional and that the antibiotic inactivation reaction requires two Zn(II) ions. Similarly, it has been demonstrated that zinc chelators can inhibit the hydrolytic activity of metallo-β-lactamases against β-lactam antibiotics, restoring the activity of the latter. Metallo-beta-lactamases are important enzymes because they are involved in the breakdown of antibiotics by antibiotic-resistant bacteria. It is unclear whether metallo-beta-lactamase activity evolved once or twice within the superfamily; if twice, this would suggest structural exaptation. Proteins belonging to the MBL superfamily usually combine at least one MBL domain with additional domains that provide different functions, such as substrate recognition or binding to other polypeptides, in a modular fashion. Thus, MBL superfamily members harness the metal-assisted water-activation ability of the MBL domain in order to perform a wide variety of hydrolytic reactions. Such diversity is often expanded by mutations around the metal-binding site in order to bind different metal ions. Indeed, those MBLs that bind Fe(II)/Fe(III) are often redox active due to the ability to perform one-electron redox reactions. Early attempts to systematically classify all members of the MBL superfamily were conducted in 1999 by Aravind, who showed that many other proteins display the αββα fold typical of MBLs. These observations were updated in 2001 by Daiyasu et al., who defined at least 16 families within the MBL superfamily. 
These proteins include thiolesterases of the glyoxalase II family, which catalyse the hydrolysis of S-D-lactoyl-glutathione to form glutathione and D-lactic acid, and a competence protein that is essential for natural transformation in Neisseria gonorrhoeae and could be a transporter involved in DNA uptake. Except for the competence protein, these proteins bind two zinc ions per molecule as a cofactor. Currently, at least one hundred proteins have been shown to contain an αββα domain using X-ray crystallography, whereas the whole MBL superfamily includes about half a million members. See also New Delhi metallo-beta-lactamase References Protein domains Protein superfamilies Protein folds
Metallo-beta-lactamase protein fold
[ "Biology" ]
961
[ "Protein superfamilies", "Protein domains", "Protein classification" ]
20,854,585
https://en.wikipedia.org/wiki/Direct%20repeat
Direct repeats are a type of genetic sequence that consists of two or more repeats of a specific sequence. In other words, direct repeats are nucleotide sequences present in multiple copies in the genome. Generally, a direct repeat occurs when a sequence is repeated with the same pattern downstream. There is no reverse complement associated with a direct repeat. It may or may not have intervening nucleotides. Linguistically, a typical direct repeat is comparable to saying "bye-bye". Types There are several types of repeated sequences: Interspersed (or dispersed) DNA repeats (interspersed repetitive sequences) are copies of transposable elements interspersed throughout the genome. Flanking (or terminal) repeats (terminal repeat sequences) are sequences that are repeated on both ends of a sequence, for example, the long terminal repeats (LTRs) on retroviruses. Direct terminal repeats are in the same direction and inverted terminal repeats are opposite to each other in direction. Tandem repeats (tandem repeat sequences) are repeated copies which lie adjacent to each other. These can also be direct or inverted repeats. The ribosomal RNA and transfer RNA genes belong to the class of middle repetitive DNA. Microsatellite DNA A tract of repetitive DNA in which a motif of a few base pairs is tandemly repeated numerous times (e.g. 5 to 50 times) is referred to as microsatellite DNA. Thus direct repeat tandem sequences are a form of microsatellite DNA. The process of DNA mismatch repair plays a prominent role in the formation of direct trinucleotide repeat expansions. Such repeat expansions underlie several neurological and developmental disorders in humans. Homologous recombination In directly repeated sequences of the tobacco plant genome, DNA double-strand breaks can be efficiently repaired by homologous recombination between the repeated sequences. See also Inverted repeat References Repetitive DNA sequences Genetics
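As an illustration of the definition above, the short Python sketch below scans a DNA string for direct repeats — a motif followed by another copy of itself further downstream, with or without intervening nucleotides. The function name and parameters are invented for illustration, not taken from any standard bioinformatics library.

```python
def find_direct_repeats(seq, motif_len=4, max_gap=10):
    """Find direct repeats: a motif that occurs again downstream.

    Returns (motif, first_position, second_position) tuples where the
    second copy starts at most max_gap bases after the first copy ends.
    """
    hits = []
    for i in range(len(seq) - 2 * motif_len + 1):
        motif = seq[i:i + motif_len]
        # Look for a second, downstream copy of the same motif.
        for gap in range(max_gap + 1):
            j = i + motif_len + gap
            if seq[j:j + motif_len] == motif:
                hits.append((motif, i, j))
                break
    return hits

# "GATC ... GATC" is a direct repeat with three intervening bases.
print(find_direct_repeats("TTGATCAAAGATCTT"))
```

With gap = 0 the two copies are adjacent, which is the tandem-repeat case described in the Types section.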
Direct repeat
[ "Biology" ]
396
[ "Molecular genetics", "Repetitive DNA sequences", "Genetics" ]
6,251,151
https://en.wikipedia.org/wiki/The%20Rock%20%28Northwestern%20University%29
The Rock is a boulder on the campus of Northwestern University in Evanston, Illinois, United States, located in between University Hall and Harris Hall. It serves as a billboard for campus groups and events, and has been painted with different colors and messages over the years. History The Rock, a purple-and-white quartzite boulder, was transplanted from Devil's Lake, Wisconsin, as a gift of the class of 1902. That graduating class liked the idea of running water on campus "in some form or another" and rigged the Rock to make a fountain on the south end of campus. The original plumbing was later refitted into a water fountain. Over time, vandalism of the Rock gradually increased, particularly during the Vietnam War. With the first painting of the rock in the 1940s, it became a canvas for student art, opinions, advertising, messages, proposals, and jokes. By tradition, students who wish to paint something on the Rock often guard it from sunrise until the early morning hours before painting. The Rock is no longer one solid piece of quartzite. In 1989 the Rock was moved about 20 feet to accommodate new landscaping, and the work crew moving the Rock dropped it, splitting it up one side and crumbling part of the base. Scientists at McCormick School of Engineering and Applied Science provided an epoxy to patch the Rock together again. In popular culture Author Bob Wood discusses The Rock in the Northwestern chapter of his 1989 book Big Ten Country. See also List of individual rocks External links Webcam of The Rock Blog about The Rock updated daily with pictures The Rock, Northwestern University Archives, Evanston, Illinois https://findingaids.library.northwestern.edu/repositories/6/resources/1159 Northwestern University Landmarks in Chicago Stones
The Rock (Northwestern University)
[ "Physics" ]
361
[ "Stones", "Physical objects", "Matter" ]
6,252,231
https://en.wikipedia.org/wiki/Landau%20levels
In quantum mechanics, the energies of cyclotron orbits of charged particles in a uniform magnetic field are quantized to discrete values, thus known as Landau levels. These levels are degenerate, with the number of electrons per level directly proportional to the strength of the applied magnetic field. It is named after the Soviet physicist Lev Landau. Landau quantization contributes towards the magnetic susceptibility of metals, known as Landau diamagnetism. Under strong magnetic fields, Landau quantization leads to oscillations in electronic properties of materials as a function of the applied magnetic field known as the De Haas–Van Alphen and Shubnikov–de Haas effects. Landau quantization is a key ingredient in the explanation of the integer quantum Hall effect. Derivation Consider a system of non-interacting particles with charge $q$ and spin $S$ confined to an area $A = L_x L_y$ in the $x$-$y$ plane. Apply a uniform magnetic field $\mathbf{B} = B\hat{z}$ along the $z$-axis. In SI units, the Hamiltonian of this system (here, the effects of spin are neglected) is $\hat{H} = \frac{1}{2m}\left(\hat{\mathbf{p}} - q\hat{\mathbf{A}}\right)^2$. Here, $\hat{\mathbf{p}}$ is the canonical momentum operator and $\hat{\mathbf{A}}$ is the operator for the electromagnetic vector potential (in position space $\hat{\mathbf{A}} = \mathbf{A}$). The vector potential is related to the magnetic field by $\mathbf{B} = \nabla \times \mathbf{A}$. There is some gauge freedom in the choice of vector potential for a given magnetic field. The Hamiltonian is gauge invariant, which means that adding the gradient of a scalar field to $\hat{\mathbf{A}}$ changes the overall phase of the wave function by an amount corresponding to the scalar field. But physical properties are not influenced by the specific choice of gauge. In the Landau gauge From the possible solutions for $\mathbf{A}$, a gauge fixing introduced by Lev Landau is often used for charged particles in a constant magnetic field. When $\mathbf{B} = (0, 0, B)$, then $\hat{\mathbf{A}} = (0, B\hat{x}, 0)$ is a possible solution in the Landau gauge (not to be mixed up with the Landau gauge of quantum field theory). In this gauge, the Hamiltonian is $\hat{H} = \frac{\hat{p}_x^2}{2m} + \frac{1}{2m}\left(\hat{p}_y - qB\hat{x}\right)^2 + \frac{\hat{p}_z^2}{2m}$. The operator $\hat{p}_y$ commutes with this Hamiltonian, since the operator $\hat{y}$ is absent for this choice of gauge. Thus the operator $\hat{p}_y$ can be replaced by its eigenvalue $\hbar k_y$. Since $\hat{z}$ does not appear in the Hamiltonian and only the z-momentum appears in the kinetic energy, this motion along the z-direction is a free motion. The Hamiltonian can also be written more simply by noting that the cyclotron frequency is $\omega_c = qB/m$, giving $\hat{H} = \frac{\hat{p}_x^2}{2m} + \frac{1}{2}m\omega_c^2\left(\hat{x} - \frac{\hbar k_y}{m\omega_c}\right)^2 + \frac{\hat{p}_z^2}{2m}$. This is exactly the Hamiltonian for the quantum harmonic oscillator, except with the minimum of the potential shifted in coordinate space by $x_0 = \hbar k_y/m\omega_c$. To find the energies, note that translating the harmonic oscillator potential does not affect the energies. The energies of this system are thus identical to those of the standard quantum harmonic oscillator, $E_n = \hbar\omega_c\left(n + \tfrac{1}{2}\right) + \frac{\hbar^2 k_z^2}{2m}$, with $n \geq 0$. The energy does not depend on the quantum number $k_y$, so there will be a finite number of degeneracies (if the particle is placed in an unconfined space, this degeneracy will correspond to a continuous sequence of $k_y$). The value of $k_z$ is continuous if the particle is unconfined in the z-direction and discrete if the particle is bounded in the z-direction also. Each set of wave functions with the same value of $n$ is called a Landau level. For the wave functions, recall that $\hat{p}_y$ commutes with the Hamiltonian. Then the wave function factors into a product of momentum eigenstates in the $y$ direction and harmonic oscillator eigenstates $\phi_n$ shifted by an amount $x_0$ in the $x$ direction: $\Psi(x,y) = e^{ik_y y}\,\phi_n(x - x_0)$, where $x_0 = \hbar k_y/m\omega_c$. In sum, the state of the electron is characterized by the quantum numbers $n$, $k_y$ and $k_z$. In the symmetric gauge The derivation treated $x$ and $y$ as asymmetric. However, by the symmetry of the system, there is no physical quantity which distinguishes these coordinates. 
The same result could have been obtained with an appropriate interchange of $x$ and $y$. A more adequate choice of gauge is the symmetric gauge, which refers to the choice $\hat{\mathbf{A}} = \tfrac{1}{2}\mathbf{B}\times\hat{\mathbf{r}} = \tfrac{1}{2}\left(-B\hat{y},\, B\hat{x},\, 0\right)$. In terms of dimensionless lengths (measured in units of the magnetic length $l_B = \sqrt{\hbar/qB}$) and energies (in units of $\hbar\omega_c$), the Hamiltonian can be expressed as $\hat{H} = \tfrac{1}{2}\left[\left(-i\frac{\partial}{\partial x} + \frac{y}{2}\right)^2 + \left(-i\frac{\partial}{\partial y} - \frac{x}{2}\right)^2\right]$. The correct units can be restored by introducing factors of $q$, $\hbar$, $B$ and $m$. Consider operators, written in terms of the dimensionless complex coordinate $w = x + iy$: $\hat{a} = \frac{1}{\sqrt{2}}\left(\frac{w}{2} + 2\frac{\partial}{\partial\bar{w}}\right)$, $\hat{a}^\dagger = \frac{1}{\sqrt{2}}\left(\frac{\bar{w}}{2} - 2\frac{\partial}{\partial w}\right)$, $\hat{b} = \frac{1}{\sqrt{2}}\left(\frac{\bar{w}}{2} + 2\frac{\partial}{\partial w}\right)$, $\hat{b}^\dagger = \frac{1}{\sqrt{2}}\left(\frac{w}{2} - 2\frac{\partial}{\partial\bar{w}}\right)$. These operators follow certain commutation relations: $[\hat{a},\hat{a}^\dagger] = [\hat{b},\hat{b}^\dagger] = 1$, with all mixed commutators vanishing. In terms of the above operators the Hamiltonian can be written as $\hat{H} = \hbar\omega_c\left(\hat{a}^\dagger\hat{a} + \tfrac{1}{2}\right)$, where the units have been reintroduced. The Landau level index $n$ is the eigenvalue of the operator $\hat{N} = \hat{a}^\dagger\hat{a}$. The application of $\hat{b}^\dagger$ increases the angular momentum quantum number $m$ by one unit while preserving $n$, whereas $\hat{a}^\dagger$ application simultaneously increases $n$ and decreases $m$ by one unit. The analogy to the quantum harmonic oscillator provides solutions $|n,m\rangle = \frac{(\hat{a}^\dagger)^n(\hat{b}^\dagger)^{n+m}}{\sqrt{n!\,(n+m)!}}|0,0\rangle$, where $E_n = \hbar\omega_c\left(n + \tfrac{1}{2}\right)$ and $m \geq -n$. One may verify that the above states correspond to choosing wavefunctions proportional to $\left(\frac{\partial}{\partial w} - \frac{\bar{w}}{4}\right)^n\left[w^{n+m}\,e^{-|w|^2/4}\right]$, where $w = x + iy$. In particular, the lowest Landau level $n = 0$ consists of arbitrary analytic functions multiplying a Gaussian, $\psi(x,y) = f(w)\,e^{-|w|^2/4}$. Degeneracy of the Landau levels In the Landau gauge The effects of Landau levels may only be observed when the mean thermal energy is smaller than the energy level separation, $k_B T \ll \hbar\omega_c$, meaning low temperatures and strong magnetic fields. Each Landau level is degenerate because of the second quantum number $k_y$, which can take the values $k_y = \frac{2\pi N}{L_y}$, where $N$ is an integer. The allowed values of $N$ are further restricted by the condition that the center of force of the oscillator, $x_0 = \hbar k_y/m\omega_c$, must physically lie within the system, $0 \leq x_0 < L_x$. This gives the following range for $N$: $0 \leq N < \frac{m\omega_c L_x L_y}{2\pi\hbar}$. For particles with charge $q = Ze$, the upper bound on $N$ can be simply written as a ratio of fluxes, $N_{\max} = Z\,\frac{\Phi}{\Phi_0}$, where $\Phi_0 = h/e$ is the fundamental magnetic flux quantum and $\Phi = BA$ is the flux through the system (with area $A = L_x L_y$). Thus, for particles with spin $S$, the maximum number of particles per Landau level is $(2S+1)\,Z\,\frac{\Phi}{\Phi_0}$, which for electrons (where $Z = 1$ and $S = \tfrac{1}{2}$) gives $2\,\frac{\Phi}{\Phi_0}$, two available states for each flux quantum that penetrates the system. The above gives only a rough idea of the effects of finite-size geometry. Strictly speaking, using the standard solution of the harmonic oscillator is only valid for systems unbounded in the $x$-direction (infinite strips). If the size $L_x$ is finite, boundary conditions in that direction give rise to non-standard quantization conditions on the magnetic field, involving (in principle) both solutions to the Hermite equation. The filling of these levels with many electrons is still an active area of research. In general, Landau levels are observed in electronic systems. As the magnetic field is increased, more and more electrons can fit into a given Landau level. The occupation of the highest Landau level ranges from completely full to entirely empty, leading to oscillations in various electronic properties (see De Haas–Van Alphen effect and Shubnikov–de Haas effect). If Zeeman splitting is included, each Landau level splits into a pair, one for spin up electrons and the other for spin down electrons. Then the occupation of each spin Landau level is just the ratio of fluxes $\Phi/\Phi_0$. Zeeman splitting has a significant effect on the Landau levels because their energy scales are the same, $2\mu_B B = \hbar\omega_c$. However, the Fermi energy and ground state energy stay roughly the same in a system with many filled levels, since pairs of split energy levels cancel each other out when summed. Moreover, the above derivation in the Landau gauge assumed an electron confined in the $z$-direction, which is a relevant experimental situation — found in two-dimensional electron gases, for instance. Still, this assumption is not essential for the results. 
If electrons are free to move along the $z$-direction, the wave function acquires an additional multiplicative term $\exp(ik_z z)$; the energy corresponding to this free motion, $\frac{\hbar^2 k_z^2}{2m}$, is the term added to the Landau-level energy $\hbar\omega_c\left(n + \tfrac{1}{2}\right)$ discussed above. This term then fills in the separation in energy of the different Landau levels, blurring the effect of the quantization. Nevertheless, the motion in the $x$-$y$-plane, perpendicular to the magnetic field, is still quantized. In the symmetric gauge Each Landau level has degenerate orbitals labeled by the quantum number $m$ in the symmetric gauge. The degeneracy per unit area is the same in each Landau level. The z component of angular momentum is $\hat{L}_z = -i\hbar\frac{\partial}{\partial\phi} = \hbar\left(\hat{b}^\dagger\hat{b} - \hat{a}^\dagger\hat{a}\right)$. Exploiting the property $[\hat{H}, \hat{L}_z] = 0$, we chose eigenfunctions which diagonalize $\hat{H}$ and $\hat{L}_z$. The eigenvalue of $\hat{L}_z$ is denoted by $\hbar m$, where it is clear that $m \geq -n$ in the $n$th Landau level. However, it may be arbitrarily large, which is necessary to obtain the infinite degeneracy (or finite degeneracy per unit area) exhibited by the system. Relativistic case An electron following the Dirac equation under a constant magnetic field can be analytically solved. The energies are given by $E_{\mathrm{rel}} = \pm\sqrt{(mc^2)^2 + (c\hbar k_z)^2 + 2\nu\,\hbar c^2 qB}$, where $c$ is the speed of light, the sign depends on the particle-antiparticle component and $\nu$ is a non-negative integer. Due to spin, all levels are degenerate except for the ground state at $\nu = 0$. The massless 2D case can be simulated in single-layer materials like graphene near the Dirac cones, where the eigenenergies are given by $E_\nu = \pm v_F\sqrt{2\hbar qB\nu}$, where the speed of light has to be replaced with the Fermi speed $v_F$ of the material and the minus sign corresponds to electron holes. Magnetic susceptibility of a Fermi gas The Fermi gas (an ensemble of non-interacting fermions) is part of the basis for understanding of the thermodynamic properties of metals. In 1930 Landau derived an estimate for the magnetic susceptibility of a Fermi gas, known as Landau susceptibility, which is constant for small magnetic fields. Landau also noticed that the susceptibility oscillates with high frequency for large magnetic fields; this physical phenomenon is known as the De Haas–Van Alphen effect. Two-dimensional lattice The tight binding energy spectrum of charged particles in a two dimensional infinite lattice is known to be self-similar and fractal, as demonstrated in Hofstadter's butterfly. For an integer ratio of the magnetic flux quantum and the magnetic flux through a lattice cell, one recovers the Landau levels for large integers. Integer quantum Hall effect The energy spectrum of the semiconductor in a strong magnetic field forms Landau levels that can be labeled by integer indices. In addition, the Hall resistivity also exhibits discrete levels labeled by an integer $\nu$. The fact that these two quantities are related can be shown in different ways, but most easily can be seen from the Drude model: the Hall conductivity depends on the electron density as $\sigma_{xy} = \frac{ne}{B}$. Since the resistivity plateau is given by $\rho_{xy} = \frac{2\pi\hbar}{e^2}\,\frac{1}{\nu}$, the required density is $n = \nu\,\frac{eB}{2\pi\hbar}$, which is exactly the density required to fill the Landau level. The gap between different Landau levels along with the large degeneracy of each level renders the resistivity quantized. See also Laughlin wavefunction References External links Further reading Landau, L. D.; and Lifschitz, E. M. (1977). Quantum Mechanics: Non-relativistic Theory. Course of Theoretical Physics. Vol. 3 (3rd ed. London: Pergamon Press). Quantum mechanics Electric and magnetic fields in matter Lev Landau
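As a quick numerical illustration of the formulas above, the sketch below evaluates the cyclotron energy $\hbar\omega_c$, the first few Landau level energies, and the degeneracy per unit area $eB/h$ for a free electron in a 10 T field. It is a sketch using rounded SI constants, not code from any cited source.

```python
import math

# SI constants (CODATA values, rounded)
hbar = 1.054571817e-34   # J s
h = 6.62607015e-34       # J s
e = 1.602176634e-19      # C
m_e = 9.1093837015e-31   # kg

B = 10.0                 # magnetic field in tesla

# Cyclotron frequency and Landau level energies E_n = hbar*omega_c*(n + 1/2)
omega_c = e * B / m_e
for n in range(3):
    E_n = hbar * omega_c * (n + 0.5)
    print(f"E_{n} = {E_n / e * 1000:.3f} meV")

# Degeneracy per unit area of each (spin-resolved) level: one state per
# flux quantum Phi_0 = h/e, i.e. n_B = e*B/h
n_B = e * B / h
print(f"degeneracy per unit area = {n_B:.3e} states/m^2")
```

At 10 T the level spacing is roughly a millielectronvolt, which is why the condition $k_B T \ll \hbar\omega_c$ demands cryogenic temperatures.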
Landau levels
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,129
[ "Theoretical physics", "Quantum mechanics", "Electric and magnetic fields in matter", "Materials science", "Condensed matter physics" ]
6,257,840
https://en.wikipedia.org/wiki/Gate%20fee
A gate fee (or tipping fee) is the charge levied upon a given quantity of waste received at a waste processing facility. In the case of a landfill it is generally levied to offset the cost of opening, maintaining and eventually closing the site. It may also include any landfill tax which is applicable in the region. The gate fee differs from the waste removal fee, which is the charge levied on people in areas, such as Ireland, where waste collection is not covered as part of local taxes. With waste treatment facilities such as incinerators, mechanical biological treatment facilities or composting plants, the fee offsets the operation, maintenance and labour costs of the facility, its capital costs, any profit margin, and the final disposal costs of any unusable residues. The fee can be charged per load, per tonne, or per item depending on the source and type of the waste. See also Waste legislation Waste management References Landfill Waste collection Waste treatment technology
Gate fee
[ "Chemistry", "Engineering" ]
192
[ "Water treatment", "Waste treatment technology", "Environmental engineering" ]
12,884,582
https://en.wikipedia.org/wiki/Technetium%20%2899mTc%29%20albumin%20aggregated
Technetium 99mTc albumin aggregated (99mTc-MAA) is an injectable radiopharmaceutical used in nuclear medicine. It consists of a sterile aqueous suspension of Technetium-99m (99mTc) labeled to human albumin aggregate particles. It is commonly used for lung perfusion scanning. It is also less commonly used to visualise a peritoneovenous shunt and for isotope venography. Preparation DraxImage MAA kits for preparing 99mTc-MAA are available in the United States from only a single manufacturer, Jubilant DraxImage Inc. The kits are delivered to nuclear pharmacies as lyophilized powders of non-radioactive ingredients sealed under nitrogen. A nuclear pharmacist adds anywhere from 50 to 100 mCi of Na[99mTcO4] to the reaction vial to make the final product, in the pH range of 3.8 to 8.0. After being allowed to react at room temperature for 15 minutes to ensure maximum labeling of the human albumin with 99mTc, the kit can then be diluted with sterile normal saline as needed. Once prepared, the product will have a turbid white appearance. Quality control No less than 90% of the MAA particles must be between 10 and 90 micrometres in size, and no particles may exceed 150 micrometres, due to the risk of pulmonary artery blockade. No less than 90% of the radioactivity present in the product must be tagged to albumin particles. Thus, no more than 10% soluble impurities may be present. Dosage and imaging The typical adult dose for a lung imaging study is 40–150 megabecquerels (1–4 mCi), containing between 100,000 and 200,000 albumin particles. The particle burden should be lowered for most pediatric patients and lowered to 50,000 for infants. The use of more than 250,000 particles in a dose is controversial, as little extra data is acquired from such scans while there is an increased risk of toxicity. Patients with pulmonary hypertension should be administered a minimum number of particles to achieve a lung scan (i.e. 60,000). In any patient, administering a greater quantity of particles than necessary for the diagnostic procedure increases the risk of toxicity. Because of gravity effects, people administered 99mTc MAA should be in the supine position to ensure as even a distribution of particles throughout the lungs as possible. The total percentage of particles trapped in the lungs can be determined through a whole body scan after the administration of 99mTc MAA, by comparing the activity measured over the lungs with the total activity measured in the body. History The technetium Tc 99m aggregated albumin kit was approved for use in the United States in December 1987. References Further reading Radiopharmaceuticals Technetium-99m
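The exact equation for that lung-trapping calculation did not survive extraction, but the comparison it describes is a simple counts ratio. The sketch below is an illustrative reconstruction of such a ratio with invented function and variable names; it is not the article's original formula and is not clinical software.

```python
def lung_trapping_percent(lung_counts, whole_body_counts):
    """Percentage of administered 99mTc-MAA activity trapped in the lungs.

    Both inputs are background-corrected counts from a whole-body scan;
    the trapped fraction is the lung activity as a share of the total.
    """
    if whole_body_counts <= 0:
        raise ValueError("whole-body counts must be positive")
    return 100.0 * lung_counts / whole_body_counts

# Example: 940 kcounts over the lungs out of 1,000 kcounts in the whole body.
print(f"{lung_trapping_percent(940, 1000):.1f}% trapped in the lungs")
```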
Technetium (99mTc) albumin aggregated
[ "Chemistry" ]
585
[ "Chemicals in medicine", "Radiopharmaceuticals", "Medicinal radiochemistry" ]
12,885,890
https://en.wikipedia.org/wiki/Toric%20manifold
In mathematics, a toric manifold is a topological analogue of toric variety in algebraic geometry. It is an even-dimensional manifold with an effective smooth action of an -dimensional compact torus which is locally standard with the orbit space a simple convex polytope. The aim is to do combinatorics on the quotient polytope and obtain information on the manifold above. For example, the Euler characteristic and the cohomology ring of the manifold can be described in terms of the polytope. The Atiyah and Guillemin-Sternberg theorem This theorem states that the image of the moment map of a Hamiltonian toric action is the convex hull of the set of moments of the points fixed by the action. In particular, this image is a convex polytope. References Structures on manifolds Manifolds Topology
Toric manifold
[ "Physics", "Mathematics" ]
172
[ "Space (mathematics)", "Topological spaces", "Topology", "Space", "Manifolds", "Geometry", "Spacetime" ]
12,885,977
https://en.wikipedia.org/wiki/Horizon%20%28general%20relativity%29
A horizon is a boundary in spacetime satisfying prescribed conditions. There are several types of horizons that play a role in Albert Einstein's theory of general relativity: Absolute horizon, a boundary in spacetime in general relativity inside of which events cannot affect an external observer Event horizon, a boundary in spacetime beyond which events cannot affect the observer, thus referring to a black hole's boundary and the boundary of an expanding universe Apparent horizon, a surface defined in general relativity Cauchy horizon, a surface found in the study of Cauchy problems Cosmological horizon, a limit of observability Killing horizon, a null surface on which there is a Killing vector field Particle horizon, the maximum distance from which particles can have travelled to an observer in the age of the universe See also Horizon (disambiguation) General relativity
Horizon (general relativity)
[ "Physics" ]
170
[ "General relativity", "Theory of relativity" ]
12,885,993
https://en.wikipedia.org/wiki/AIGO
The Australian International Gravitational Observatory (AIGO) is a research facility located near Gingin, north of Perth in Western Australia. It is part of a worldwide effort to directly detect gravitational waves. Note that these are a major prediction of the general theory of relativity and are not to be confused with gravity waves, a phenomenon studied in fluid mechanics. It is operated by the Australian International Gravitational Research Centre (AIGRC) through the University of Western Australia under the auspices of the Australian Consortium for Interferometric Gravitational Astronomy (ACIGA). The current aim of the facility is to develop advanced techniques for improving the sensitivity of interferometric gravitational wave detectors such as LIGO and VIRGO. A study of operational interferometric gravitational wave detectors shows that AIGO is situated in almost the ideal location to complement existing detectors in the Northern hemisphere. Current facilities Current facilities (AIGO Stage I) consist of an L-shaped ultra-high-vacuum system, measuring 80 m on each side, forming an interferometer for detecting gravitational waves. LIGO-Australia LIGO-Australia was a proposed plan (AIGO Stage II) to install an Advanced LIGO interferometer at AIGO, forming a triangle of three Advanced LIGO detectors. It was to consist of an L-shaped interferometer, measuring 5 km on each side, with vacuum pipes about 700 mm in diameter. A 2010 developmental roadmap issued by the Gravitational Wave International Committee (GWIC) for the field of gravitational-wave astronomy recommended that an expansion of the global array of interferometric detectors be pursued as a highest priority. In its roadmap, GWIC identified the Southern Hemisphere as one of the key locations in which a gravitational-wave interferometer could most effectively complement existing detectors. The AIGO facility in Western Australia was well-located to work with the existing and planned components of the global network, and already possessed an active gravitational-wave community. The LIGO-Australia plan was approved by LIGO's US funding agency, the National Science Foundation, contingent on the understanding that it involved no increase in LIGO's total budget. The cost of building, operating and staffing the interferometer would have rested entirely with the Australian government. After a year-long effort, the LIGO Laboratory reluctantly acknowledged that the proposed relocation of an Advanced LIGO detector to Australia was not to occur. The Australian government had committed itself to a balanced budget and this precluded any new starts in science. The deadline for a response from Australia passed on 1 October 2011. The proposal was then moved to India, where the Indian Initiative in Gravitational-wave Observations obtained some government support to pursue a similar plan, named LIGO-India, as AIGO had attempted. India is not quite as good a location as Australia, but provides most of the benefit. Co-located facilities AIGO is on the same grounds as the Gravity Discovery Centre and the GDC Observatory, both of which are educational and instructional facilities open to the general public. It is also the site of the Geoscience Australia Gingin Magnetic Observatory, one of a network of nine for monitoring the Earth's magnetic field. 
See also Interferometric gravitational-wave detector References External links LIGO-Australia Home Page ACIGA Home Page Gravity Discovery Center Home Page Interferometers Gravitational-wave telescopes Astronomical observatories in Western Australia Shire of Gingin
AIGO
[ "Technology", "Engineering" ]
686
[ "Interferometers", "Measuring instruments" ]
12,886,758
https://en.wikipedia.org/wiki/Generic%20property
In mathematics, properties that hold for "typical" examples are called generic properties. For instance, a generic property of a class of functions is one that is true of "almost all" of those functions, as in the statements, "A generic polynomial does not have a root at zero," or "A generic square matrix is invertible." As another example, a generic property of a space is a property that holds at "almost all" points of the space, as in the statement, "If f : M → N is a smooth function between smooth manifolds, then a generic point of N is not a critical value of f." (This is by Sard's theorem.) There are many different notions of "generic" (what is meant by "almost all") in mathematics, with corresponding dual notions of "almost none" (negligible set); the two main classes are: In measure theory, a generic property is one that holds almost everywhere, with the dual concept being null set, meaning "with probability 0". In topology and algebraic geometry, a generic property is one that holds on a dense open set, or more generally on a residual set, with the dual concept being a nowhere dense set, or more generally a meagre set. There are several natural examples where those notions are not equal. For instance, the set of Liouville numbers is generic in the topological sense, but has Lebesgue measure zero. In measure theory In measure theory, a generic property is one that holds almost everywhere. The dual concept is a null set, that is, a set of measure zero. In probability In probability, a generic property is an event that occurs almost surely, meaning that it occurs with probability 1. For example, the law of large numbers states that the sample mean converges almost surely to the population mean. This is the definition in the measure theory case specialized to a probability space. In discrete mathematics In discrete mathematics, one uses the term almost all to mean cofinite (all but finitely many), cocountable (all but countably many), for sufficiently large numbers, or, sometimes, asymptotically almost surely. The concept is particularly important in the study of random graphs. In topology In topology and algebraic geometry, a generic property is one that holds on a dense open set, or more generally on a residual set (a countable intersection of dense open sets), with the dual concept being a closed nowhere dense set, or more generally a meagre set (a countable union of nowhere dense closed sets). However, density alone is not sufficient to characterize a generic property. This can be seen even in the real numbers, where both the rational numbers and their complement, the irrational numbers, are dense. Since it does not make sense to say that both a set and its complement exhibit typical behavior, both the rationals and irrationals cannot be examples of sets large enough to be typical. Consequently, we rely on the stronger definition above, which implies that the irrationals are typical and the rationals are not. For applications, if a property holds on a residual set, it may not hold for every point, but perturbing it slightly will generally land one inside the residual set (by nowhere density of the components of the meagre set), and these are thus the most important case to address in theorems and algorithms. In function spaces A property is generic in Cr if the set holding this property contains a residual subset in the Cr topology. Here Cr is the function space whose members are continuous functions with r continuous derivatives from a manifold M to a manifold N. 
The space Cr(M, N), of Cr mappings between M and N, is a Baire space, hence any residual set is dense. This property of the function space is what makes generic properties typical. In algebraic geometry Algebraic varieties A property of an irreducible algebraic variety X is said to be true generically if it holds except on a proper Zariski-closed subset of X, in other words, if it holds on a non-empty Zariski-open subset. This definition agrees with the topological one above, because for irreducible algebraic varieties any non-empty open set is dense. For example, by the Jacobian criterion for regularity, a generic point of a variety over a field of characteristic zero is smooth. (This statement is known as generic smoothness.) This is true because the Jacobian criterion can be used to find equations for the points which are not smooth: They are exactly the points where the Jacobian matrix of a point of X does not have full rank. In characteristic zero, these equations are non-trivial, so they cannot be true for every point in the variety. Consequently, the set of all non-regular points of X is a proper Zariski-closed subset of X. Here is another example. Let f : X → Y be a regular map between two algebraic varieties. For every point y of Y, consider the dimension of the fiber of f over y, that is, dim f⁻¹(y). Generically, this number is constant. It is not necessarily constant everywhere. If, say, X is the blowup of Y at a point and f is the natural projection, then the relative dimension of f is zero except at the point which is blown up, where it is dim Y − 1. Some properties are said to hold very generically. Frequently this means that the ground field is uncountable and that the property is true except on a countable union of proper Zariski-closed subsets (i.e., the property holds on a dense Gδ set). For instance, this notion of very generic occurs when considering rational connectedness. However, other definitions of very generic can and do occur in other contexts. Generic point In algebraic geometry, a generic point of an algebraic variety is a point whose coordinates do not satisfy any other algebraic relation than those satisfied by every point of the variety. For example, a generic point of an affine space over a field k is a point whose coordinates are algebraically independent over k. In scheme theory, where the points are the subvarieties, a generic point of a variety is a point whose closure for the Zariski topology is the whole variety. A generic property is a property of the generic point. For any reasonable property, it turns out that the property is true generically on the subvariety (in the sense of being true on an open dense subset) if and only if the property is true at the generic point. Such results are frequently proved using the methods of limits of affine schemes developed in EGA IV 8. General position A related concept in algebraic geometry is general position, whose precise meaning depends on the context. For example, in the Euclidean plane, three points in general position are not collinear. This is because the property of not being collinear is a generic property of the configuration space of three points in R2. In computability In computability and algorithmic randomness, an infinite string of natural numbers G is called 1-generic if, for every c.e. set W of finite strings, either G has an initial segment in W, or G has an initial segment σ such that no extension of σ is in W. 
1-generics are important in computability, as many constructions can be simplified by considering an appropriate 1-generic. Some key properties are: A 1-generic contains every natural number as an element; No 1-generic is computable (or even bounded by a computable function); All 1-generics G are generalised low: G′ ≡_T G ⊕ 0′. 1-genericity is connected to the topological notion of "generic", as follows. Baire space has a topology with basic open sets [σ] = {f : σ is an initial segment of f} for every finite string of natural numbers σ. Then, an element f is 1-generic if and only if it is not on the boundary of any open set. In particular, 1-generics are required to meet every dense open set (though this is a strictly weaker property, called weakly 1-generic). Genericity results Sard's theorem: If f : M → N is a smooth function between smooth manifolds, then a generic point of N is not a critical value of f – critical values of f are a null set in N. Jacobian criterion / generic smoothness: A generic point of a variety over a field of characteristic zero is smooth. Controllability and observability of linear time-invariant systems are generic both in the topological and measure theory sense. References Singularity theory Algebraic geometry
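The claim that "a generic square matrix is invertible" can be made tangible numerically: the singular matrices form a measure-zero set (the zero locus of the determinant), so matrices drawn from a continuous distribution are singular with probability 0. The NumPy sketch below is only an illustration of the measure-theoretic notion of genericity, not a proof.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample random 5x5 matrices; singular ones lie on the zero set of the
# determinant, a measure-zero subset, so none should appear in practice.
singular = 0
for _ in range(10_000):
    m = rng.standard_normal((5, 5))
    if abs(np.linalg.det(m)) < 1e-12:
        singular += 1

print(f"singular matrices found: {singular} out of 10000")
```

The same experiment says nothing about topological genericity, which is the point of the Liouville-numbers example above: the two notions of "almost all" can disagree.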
Generic property
[ "Mathematics" ]
1,745
[ "Fields of abstract algebra", "Algebraic geometry" ]
23,853,677
https://en.wikipedia.org/wiki/Minimum%20viable%20product
A minimum viable product (MVP) is a version of a product with just enough features to be usable by early customers who can then provide feedback for future product development. A focus on releasing an MVP means that developers potentially avoid lengthy and (possibly) unnecessary work. Instead, they iterate on working versions and respond to feedback, challenging and validating assumptions about a product's requirements. The term was coined and defined in 2001 by Frank Robinson and then popularized by Steve Blank and Eric Ries. It may also involve carrying out market analysis beforehand. The MVP is analogous to experimentation in the scientific method applied in the context of validating business hypotheses. It is utilized so that prospective entrepreneurs would know whether a given business idea would actually be viable and profitable by testing the assumptions behind a product or business idea. The concept can be used to validate a market need for a product and for incremental developments of an existing product. As it tests a potential business model to customers to see how the market would react, it is especially useful for new/startup companies who are more concerned with finding out where potential business opportunities exist rather than executing a prefabricated, isolated business model. Description A minimum viable product has just enough core features to effectively deploy the product, and no more. Developers typically deploy the product to a subset of possible customers, such as early adopters who are thought to be more forgiving, more likely to give feedback, and able to grasp a product vision from an early prototype or marketing information. This strategy aims to avoid building products that customers do not want and seeks to maximize information about the customer with the least money spent. The technique falls under the Lean Startup methodology as MVPs aim to test business hypotheses and validated learning is one of the five principles of the Lean Startup method. It contrasts strongly with the traditional "stealth mode" method of product development where businesses make detailed business plans spanning a considerable time horizon. Steve Blank posited that the main principle of the Lean Startup approach rests in the validation of the hypotheses underlying the product by asking customers if they want the product or if the product meets their needs, and pivoting to another approach if the hypothesis turns out to be false. This approach to validating business ideas cheaply before substantial investment saves costs and limits risk as businesses that upon experimentation turn out to be commercially unfeasible can easily be terminated. It is especially important as the main cause of startup failure is the lack of market need; that is, many startups fail because their product isn't needed by many people, and so they cannot generate enough revenue to recoup the initial investment. Thus it can be said that utilizing an MVP would illuminate a prospective entrepreneur on the market demand for their products. For example, in 2015, specialists from the University of Sydney devised the Rippa robot to automate farm and weed management. Before it was released, the technical hypothesis – that the robot can distinguish weeds from farm plants – had already been proven. But the business hypothesis – that it would be a viable tool on a working farm – still needed to be proved. 
The application of the MVP method here is that the business hypothesis is tested, and only if it proves successful will further development be invested. "The minimum viable product is that version of a new product a team uses to collect the maximum amount of validated learning about customers with the least effort." The definition's use of the words maximum and minimum means it is not formulaic. It requires judgment to figure out, for any given context, what MVP makes sense. Due to this vagueness, the term MVP is commonly used, either deliberately or unwittingly, to refer to a much broader notion ranging from a rather prototype-like product to a fully-fledged and marketable product. An MVP can be part of a strategy and process directed toward making and selling a product to customers. It is a core artifact in an iterative process of idea generation, prototyping, presentation, data collection, analysis and learning. One seeks to minimize the total time spent on an iteration. The process is iterated until a desirable product/market fit is obtained, or until the product is deemed non-viable. Steve Blank typically refers to minimum viable product as minimum feature set. Purposes Test a product hypothesis with minimal resources Accelerate learning Reduce wasted engineering hours Get the product to early customers as soon as possible Find a base for other products Establish a builder's abilities in crafting the product required Build a brand very quickly Testing Testing is the essence of minimum viable products. As described above, an MVP seeks to test out whether an idea works in market environments while using the least possible expenditure. This is beneficial as it reduces the risk of innovating (so that enormous amounts of capital need not be sacrificed before it becomes clear that the concept does not actually work), and allows for gradual, market-tested expansion models such as the real options model. A simple method of testing the financial viability of an idea is discovery-driven planning, which examines the assumptions behind the idea with a reverse income statement: begin with the income you want to obtain, then estimate the costs the new venture would incur, and see whether the revenue required for the project to work can realistically be gained. Results from a minimum viable product test aim to indicate whether the product should be built in the first place. Testing evaluates if the initial problem or goal is solved in a manner that makes it reasonable to move forward. Notable quotes Steve Blank: "You're selling the vision and delivering the minimum feature set to visionaries, not everyone." Marketing Releasing and assessing the impact of a minimum viable product is a market testing strategy that is used to screen product ideas soon after their generation. In software development, the release is facilitated by rapid application development tools and languages common to web application development. The MVP differs from the conventional market testing strategy of investing time and money early to implement a product before testing it in the market. It is intended to ensure that the market wants the product before large time and monetary investments are made. The MVP differs from the open-source software methodology of release early, release often that listens to users, letting them define the features and future of the product. 
Instead, the MVP starts with a product vision, which is maintained throughout the product life cycle, although it is adapted based on the explicit and implicit (indirect measures) feedback from potential future customers of the product. The MVP is a strategy that may be used as a part of Blank's customer development methodology that focuses on continual product iteration and refinement based on customer feedback. Additionally, the presentation of non-existing products and features may be refined using web-based statistical hypothesis testing, such as A/B testing. Business Model Canvas The Business Model Canvas is used to map out the major components and activities of a company that is starting out. The minimum viable product can be designed by using selected components of the Business Model Canvas: Customers Customers on the Business Model Canvas denote the segments for whom a value proposition is intended. Utilizing the minimum viable product concept here would be useful to determine whether the selected customer segment actually wants the product, based either on questionnaires or on experimental launches. Whichever method is chosen, the key in using the MVP is to spend as little as possible while learning as much as possible, thus in this case validating the market at the least possible cost. Value proposition The value proposition details what a business offers to its customers – what desires it satisfies or what problems it solves. In this case, usage of the MVP would focus more on the technical feasibility of the product (whether such value is possible to deliver using the product), as in the Rippa case described earlier. Channels In the business model canvas lingo, channels refer to the ways by which a business delivers value to its customers. MVPs would thus be used here to test whether a newly proposed method of value delivery (for example new channels of distribution, innovations in supply chains) works. Relationship As its name implies, relationships refer to how a business attracts and maintains its customers by providing them with the treatment and care they expect. MVPs here would be used to learn if customers would better appreciate a new method of relationship building, and, true to the MVP concept, the test would seek to learn as much as possible while sacrificing as little brand equity, reputation, and cost as possible. Emerging applications Concepts from minimum viable products are applied in other aspects of startups and organizations. Minimum viable brand (MVB) Using a minimum viable brand (MVB) concept can ensure brand hypotheses are grounded in strategic intent and market insights. Minimum viable co-founder Finding other people to create a minimum viable product is a common challenge for new companies and startups. The concept of minimum viable co-founder is based on looking for a co-founder with the following attributes: Trust Exceptional at building or selling Company commitment Personally likable Productivity Reasonable Rational Realistic Criticism Some research has shown that early release of an MVP may hurt a company more than help when companies risk imitation by a competitor and have not established other barriers to imitation. It has also indicated that negative feedback on an MVP can negatively affect a company's reputation. Many developers of mobile and digital products are now criticizing the MVP because customers can easily switch between competing products through platforms (e.g. app stores). 
Also, products that do not offer the expected minimum standard of quality are inferior to competitors that enter the market with a higher standard. A notable limitation of the MVP is rooted in its approach of testing ideas in the market. Since a business's new product ideas can be inferred from such testing, the method may be unsuited to environments where intellectual property protection is limited (and where products are easily imitated). The criticism of the MVP approach has led to several new approaches, e.g. the minimum viable experiment (MVE), the minimum awesome product (MAP), and the simple, lovable, complete (SLC) product. See also Lean startup Minimum marketable feature Mockup Pilot experiment Proof of concept Startup company The Cathedral and the Bazaar References Product development Systems engineering External links
Minimum viable product
[ "Engineering" ]
2,086
[ "Systems engineering" ]
23,854,465
https://en.wikipedia.org/wiki/Real-time%20Control%20System
Real-time Control System (RCS) is a reference model architecture, suitable for many software-intensive, real-time computing control problem domains. It defines the types of functions needed in a real-time intelligent control system, and how these functions relate to each other. RCS is not a system design, nor is it a specification of how to implement specific systems. RCS prescribes a hierarchical control model based on a set of well-founded engineering principles to organize system complexity. All the control nodes at all levels share a generic node model. RCS also provides a comprehensive methodology for designing, engineering, integrating, and testing control systems. Architects iteratively partition system tasks and information into finer, finite subsets that are controllable and efficient. RCS focuses on intelligent control that adapts to uncertain and unstructured operating environments. The key concerns are sensing, perception, knowledge, costs, learning, planning, and execution. Overview A reference model architecture is a canonical form, not a system design specification. The RCS reference model architecture combines real-time motion planning and control with high-level task planning, problem solving, world modeling, recursive state estimation, tactile and visual image processing, and acoustic signature analysis. In fact, the evolution of the RCS concept has been driven by an effort to include the best properties and capabilities of most, if not all, the intelligent control systems currently known in the literature, from subsumption to SOAR, from blackboards to object-oriented programming. RCS has developed into an intelligent agent architecture designed to enable any level of intelligent behavior, up to and including human levels of performance. RCS was inspired by a theoretical model of the cerebellum, the portion of the brain responsible for fine motor coordination and control of conscious motions. It was originally designed for sensory-interactive goal-directed control of laboratory manipulators. Over three decades, it has evolved into a real-time control architecture for intelligent machine tools, factory automation systems, and intelligent autonomous vehicles. RCS applies to many problem domains, including manufacturing and vehicle systems. Systems based on the RCS architecture have been designed and implemented to varying degrees for a wide variety of applications that include loading and unloading of parts and tools in machine tools, controlling machining workstations, performing robotic deburring and chamfering, and controlling space station telerobots, multiple autonomous undersea vehicles, unmanned land vehicles, coal mining automation systems, postal service mail handling systems, and submarine operational automation systems. History RCS has evolved through a variety of versions over a number of years as understanding of the complexity and sophistication of intelligent behavior has increased. The first implementation was designed for sensory-interactive robotics by Barbera in the mid-1970s. RCS-1 In RCS-1, the emphasis was on combining commands with sensory feedback so as to compute the proper response to every combination of goals and states. The application was to control a robot arm with a structured light vision system in visual pursuit tasks. RCS-1 was heavily influenced by biological models such as the Marr–Albus model and the Cerebellar Model Arithmetic Computer (CMAC) model of the cerebellum.
CMAC becomes a state machine when some of its outputs are fed directly back to the input, so RCS-1 was implemented as a set of state machines arranged in a hierarchy of control levels. At each level, the input command effectively selects a behavior that is driven by feedback in stimulus-response fashion. CMAC thus became the reference model building block of RCS-1, as shown in the figure. A hierarchy of these building blocks was used to implement a hierarchy of behaviors such as those observed by Tinbergen and others. RCS-1 is similar in many respects to Brooks' subsumption architecture, except that RCS selects behaviors before the fact through goals expressed in commands, rather than after the fact through subsumption. RCS-2 The next generation, RCS-2, was developed by Barbera, Fitzgerald, Kent, and others for manufacturing control in the NIST Automated Manufacturing Research Facility (AMRF) during the early 1980s. The basic building block of RCS-2 is shown in the figure. The H function remained a finite-state machine state-table executor. The new feature of RCS-2 was the inclusion of the G function, consisting of a number of sensory processing algorithms including structured light and blob analysis algorithms. RCS-2 was used to define an eight-level hierarchy consisting of Servo, Coordinate Transform, E-Move, Task, Workstation, Cell, Shop, and Facility levels of control. Only the first six levels were actually built. Two of the AMRF workstations fully implemented five levels of RCS-2. The control system for the Army Field Material Handling Robot (FMR) was also implemented in RCS-2, as was the Army TMAP semi-autonomous land vehicle project. RCS-3 RCS-3 was designed for the NBS/DARPA Multiple Autonomous Undersea Vehicle (MAUV) project and was adapted for the NASA/NBS Standard Reference Model Telerobot Control System Architecture (NASREM) developed for the space station Flight Telerobotic Servicer. The basic building block of RCS-3 is shown in the figure. The principal new features introduced in RCS-3 are the World Model and the operator interface. The inclusion of the World Model provides the basis for task planning and for model-based sensory processing. This led to refinement of the task decomposition (TD) modules so that each has a job assigner, and a planner and an executor for each of the subsystems assigned a job. This corresponds roughly to Saridis' three-level control hierarchy. RCS-4 RCS-4 has been developed since the 1990s by the NIST Robot Systems Division. The basic building block is shown in the figure. The principal new feature in RCS-4 is the explicit representation of the Value Judgment (VJ) system. VJ modules provide to the RCS-4 control system the type of functions provided to the biological brain by the limbic system. The VJ modules contain processes that compute cost, benefit, and risk of planned actions, and that place value on objects, materials, territory, situations, events, and outcomes. Value state-variables define what goals are important and what objects or regions should be attended to, attacked, defended, assisted, or otherwise acted upon. Value judgments, or evaluation functions, are an essential part of any form of planning or learning. The application of value judgments to intelligent control systems has been addressed by George Pugh. The structure and function of VJ modules are developed more completely in Albus (1991). RCS-4 also uses the term behavior generation (BG) in place of the RCS-3 term task decomposition (TD).
The purpose of this change is to emphasize the degree of autonomous decision making. RCS-4 is designed to address highly autonomous applications in unstructured environments where high bandwidth communications are impossible, such as unmanned vehicles operating on the battlefield, deep undersea, or on distant planets. These applications require autonomous value judgments and sophisticated real-time perceptual capabilities. RCS-3 will continue to be used for less demanding applications, such as manufacturing, construction, or telerobotics for near-space or shallow undersea operations, where environments are more structured and communication bandwidth to a human interface is less restricted. In these applications, value judgments are often represented implicitly in task planning processes, or in human operator input. Methodology In the figure, an example of the RCS methodology for designing a control system for autonomous on-road driving under everyday traffic conditions is summarized in six steps. Step 1 consists of an intensive analysis of domain knowledge from training manuals and subject matter experts. Scenarios are developed and analyzed for each task and subtask. The result of this step is a structuring of procedural knowledge into a task decomposition tree with simpler and simpler tasks at each echelon. At each echelon, a vocabulary of commands (action verbs with goal states, parameters, and constraints) is defined to evoke task behavior at each echelon. Step 2 defines a hierarchical structure of organizational units that will execute the commands defined in step 1. For each unit, its duties and responsibilities in response to each command are specified. This is analogous to establishing a work breakdown structure for a development project, or defining an organizational chart for a business or military operation. Step 3 specifies the processing that is triggered within each unit upon receipt of an input command. For each input command, a state-graph (or state-table or extended finite state automaton) is defined that provides a plan (or procedure for making a plan) for accomplishing the commanded task. The input command selects (or causes to be generated) an appropriate state-table, the execution of which generates a series of output commands to units at the next lower echelon. The library of state-tables contains a set of state-sensitive procedural rules that identify all the task branching conditions and specify the corresponding state transition and output command parameters. The result of step 3 is that each organizational unit has, for each input command, a state-table of ordered production rules, each suitable for execution by an extended finite state automaton (FSA); a minimal code sketch of such an executor follows step 5 below. The sequence of output subcommands required to accomplish the input command is generated by situations (i.e., branching conditions) that cause the FSA to transition from one output subcommand to the next. In step 4, each of the situations defined in step 3 is analyzed to reveal its dependencies on world and task states. This step identifies the detailed relationships between entities, events, and states of the world that cause a particular situation to be true. In step 5, we identify and name all of the objects and entities together with their particular features and attributes that are relevant to detecting the above world states and situations.
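To make the state-table mechanism of step 3 concrete before turning to step 6, here is a minimal sketch of an ordered-rule executor in Python. The command names, situations, and rules are hypothetical illustrations invented for this sketch; they are not taken from any NIST RCS release or from the driving-task analysis above.

```python
# Minimal sketch of a state-table executor: an extended finite state
# automaton stepping through ordered, state-sensitive production rules.

class StateTableExecutor:
    """Fires the first rule whose state and situation both match."""

    def __init__(self, rules):
        # rules: list of (state, situation_predicate, next_state, output_command)
        self.rules = rules
        self.state = "START"

    def step(self, world):
        # One FSA cycle: the current state plus the first true branching
        # condition select a state transition and an output subcommand
        # for the unit at the next lower echelon.
        for state, situation, next_state, output in self.rules:
            if state == self.state and situation(world):
                self.state = next_state
                return output
        return None  # no situation matched; hold the current state

# A toy plan for a hypothetical "PassVehicle" input command.
rules = [
    ("START",   lambda w: w["lane_clear"],      "PASSING", "ChangeLaneLeft"),
    ("START",   lambda w: not w["lane_clear"],  "START",   "FollowVehicle"),
    ("PASSING", lambda w: w["vehicle_passed"],  "DONE",    "ChangeLaneRight"),
]

executor = StateTableExecutor(rules)
print(executor.step({"lane_clear": False, "vehicle_passed": False}))  # FollowVehicle
print(executor.step({"lane_clear": True,  "vehicle_passed": False}))  # ChangeLaneLeft
print(executor.step({"lane_clear": True,  "vehicle_passed": True}))   # ChangeLaneRight
```

Each call to step() plays the role of one control cycle: the ordered rule table is exactly the "state-table of ordered production rules" of step 3, and the predicates stand in for the situations analyzed in steps 4 and 5.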
In step 6, we use the context of the particular task activities to establish the distances and, therefore, the resolutions at which the relevant objects and entities must be measured and recognized by the sensory processing component. This establishes a set of requirements and/or specifications for the sensor system to support each subtask activity. Software Based on the RCS reference model architecture, NIST has developed a Real-time Control System Software Library. This is an archive of free C++, Java and Ada code, scripts, tools, makefiles, and documentation developed to aid programmers of software to be used in real-time control systems, especially those using the Reference Model Architecture for Intelligent Systems Design. Applications The ISAM Framework is an RCS application to the manufacturing domain. The 4D-RCS Reference Model Architecture is the RCS application to the vehicle domain, and the NASA/NBS Standard Reference Model for Telerobot Control Systems Architecture (NASREM) is an application to the space domain. References External links RCS The Real-time Control Systems Architecture NIST Homepage Control theory Enterprise modelling Industrial computing
Real-time Control System
[ "Mathematics", "Technology", "Engineering" ]
2,296
[ "Systems engineering", "Applied mathematics", "Control theory", "Enterprise modelling", "Automation", "Industrial engineering", "Industrial computing", "Dynamical systems" ]
23,855,903
https://en.wikipedia.org/wiki/Quantum%20ergodicity
In quantum chaos, a branch of mathematical physics, quantum ergodicity is a property of the quantization of classical mechanical systems that are chaotic in the sense of exponential sensitivity to initial conditions. Quantum ergodicity states, roughly, that in the high-energy limit, the probability distributions associated to energy eigenstates of a quantized ergodic Hamiltonian tend to a uniform distribution in the classical phase space. This is consistent with the intuition that the flows of ergodic systems are equidistributed in phase space. By contrast, classical completely integrable systems generally have periodic orbits in phase space, and this is exhibited in a variety of ways in the high-energy limit of the eigenstates: typically, some form of concentration occurs in the semiclassical limit $\hbar \to 0$. The model case of a Hamiltonian is the geodesic Hamiltonian on the cotangent bundle of a compact Riemannian manifold. The quantization of the geodesic flow is given by the fundamental solution of the Schrödinger equation $\psi_t = i \sqrt{\Delta}\, \psi$, where $\sqrt{\Delta}$ is the square root of the Laplace–Beltrami operator. The quantum ergodicity theorem of Shnirelman (1974), Zelditch, and Yves Colin de Verdière states that a compact Riemannian manifold whose unit tangent bundle is ergodic under the geodesic flow is also ergodic in the sense that the probability density associated to the nth eigenfunction of the Laplacian tends weakly to the uniform distribution on the unit cotangent bundle as n → ∞ in a subset of the natural numbers of natural density equal to one. Quantum ergodicity can be formulated as a non-commutative analogue of the classical ergodicity (T. Sunada). Since a classically chaotic system is also ergodic, almost all of its trajectories eventually explore uniformly the entire accessible phase space. Thus, when translating the concept of ergodicity to the quantum realm, it is natural to assume that the eigenstates of the quantum chaotic system would fill the quantum phase space evenly (up to random fluctuations) in the semiclassical limit $\hbar \to 0$. The quantum ergodicity theorems of Shnirelman, Zelditch, and Yves Colin de Verdière prove that the expectation value of an operator converges in the semiclassical limit to the corresponding microcanonical classical average. However, the quantum ergodicity theorem leaves open the possibility that eigenfunctions become sparse, with serious holes, as $\hbar \to 0$, leaving large but not macroscopic gaps on the energy manifolds in the phase space. In particular, the theorem allows the existence of a subset of macroscopically nonergodic states which, on the other hand, must approach zero measure, i.e., the contribution of this set goes towards zero percent of all eigenstates as $\hbar \to 0$. For example, the theorem does not exclude quantum scarring, as the phase space volume of the scars also gradually vanishes in this limit. A quantum eigenstate is scarred by a periodic orbit if its probability density on the classical invariant manifolds near and all along that periodic orbit is systematically enhanced above the classical, statistically expected density along that orbit. In a simplified manner, a quantum scar refers to an eigenstate of the Hamiltonian whose probability density is enhanced in the neighborhood of a classical periodic orbit when the corresponding classical system is chaotic. In conventional scarring, the responsive periodic orbit is unstable.
The instability is a decisive point that separates quantum scars from the more trivial finding that the probability density is enhanced near stable periodic orbits due to Bohr's correspondence principle. The latter can be viewed as a purely classical phenomenon, whereas in the former quantum interference is important. On the other hand, in perturbation-induced quantum scarring, some of the high-energy eigenstates of a locally perturbed quantum dot contain scars of short periodic orbits of the corresponding unperturbed system. Even though similar in appearance to ordinary quantum scars, these scars have a fundamentally different origin. In this type of scarring, there are no periodic orbits in the perturbed classical counterpart, or they are too unstable to cause a scar in the conventional sense. Conventional and perturbation-induced scars are both a striking visual example of classical-quantum correspondence and of a quantum suppression of chaos (see the figure). In particular, scars are a significant correction to the assumption that the corresponding eigenstates of a classically chaotic Hamiltonian are only featureless and random. In some sense, scars can be considered an eigenstate counterpart, with respect to the quantum ergodicity theorem, of how short periodic orbits provide corrections to the universal random matrix theory eigenvalue statistics. See also Eigenstate thermalization hypothesis Ergodic hypothesis Quantum chaos Scar (physics) External links Shnirelman theorem, Scholarpedia article References Modular forms Chaos theory Ergodic theory Quantum mechanics Quantum chaos theory
Quantum ergodicity
[ "Physics", "Mathematics" ]
1,015
[ "Theoretical physics", "Quantum mechanics", "Ergodic theory", "Modular forms", "Number theory", "Dynamical systems" ]
23,859,945
https://en.wikipedia.org/wiki/N-body%20problem
In physics, the n-body problem is the problem of predicting the individual motions of a group of celestial objects interacting with each other gravitationally. Solving this problem has been motivated by the desire to understand the motions of the Sun, Moon, planets, and visible stars. In the 20th century, understanding the dynamics of globular cluster star systems became an important n-body problem. The n-body problem in general relativity is considerably more difficult to solve due to additional factors like time and space distortions. The classical physical problem can be informally stated as the following: The two-body problem has been completely solved and is discussed below, as well as the famous restricted three-body problem. History Knowing three orbital positions of a planet's orbit – positions obtained by Sir Isaac Newton from astronomer John Flamsteed – Newton was able to produce an equation by straightforward analytical geometry to predict a planet's motion; i.e., to give its orbital properties: position, orbital diameter, period and orbital velocity. Having done so, he and others soon discovered, over the course of a few years, that those equations of motion did not predict some orbits correctly or even very well. Newton realized that this was because gravitational interactive forces amongst all the planets were affecting all their orbits. The aforementioned revelation strikes directly at the core of what the n-body issue physically is: as Newton understood, it is not enough to just provide the beginning location and velocity, or even three orbital positions, in order to establish a planet's actual orbit; one must also be aware of the gravitational interaction forces. Thus came the awareness and rise of the n-body "problem" in the late 17th century. These gravitational attractive forces do conform to Newton's laws of motion and to his law of universal gravitation, but the many multiple (n-body) interactions have historically made any exact solution intractable. Ironically, this conformity led to the wrong approach. After Newton's time the n-body problem historically was not stated correctly because it did not include a reference to those gravitational interactive forces. Newton does not say it directly but implies in his Principia that the n-body problem is unsolvable because of those gravitational interactive forces. Newton said in his Principia, paragraph 21: Newton concluded via his third law of motion that "according to this Law all bodies must attract each other." This last statement, which implies the existence of gravitational interactive forces, is key. As shown below, the problem also conforms to Jean Le Rond D'Alembert's non-Newtonian first and second Principles and to the nonlinear n-body problem algorithm, the latter allowing for a closed form solution for calculating those interactive forces. The problem of finding the general solution of the n-body problem was considered very important and challenging. Indeed, in the late 19th century King Oscar II of Sweden, advised by Gösta Mittag-Leffler, established a prize for anyone who could find the solution to the problem. The announcement was quite specific: In case the problem could not be solved, any other important contribution to classical mechanics would then be considered to be prizeworthy. The prize was awarded to Poincaré, even though he did not solve the original problem. (The first version of his contribution even contained a serious error.)
The version finally printed contained many important ideas which led to the development of chaos theory. The problem as stated originally was finally solved by Karl Fritiof Sundman for n = 3 and generalized to n > 3 by L. K. Babadzanjanz and Qiudong Wang. General formulation The n-body problem considers n point masses $m_i$, $i = 1, 2, \dots, n$, in an inertial reference frame in three-dimensional space $\mathbb{R}^3$ moving under the influence of mutual gravitational attraction. Each mass $m_i$ has a position vector $\mathbf{q}_i$. Newton's second law says that mass times acceleration $m_i\, d^2\mathbf{q}_i/dt^2$ is equal to the sum of the forces on the mass. Newton's law of gravity says that the gravitational force felt on mass $m_i$ by a single mass $m_j$ is given by
$$\mathbf{F}_{ij} = \frac{G m_i m_j\, (\mathbf{q}_j - \mathbf{q}_i)}{\left\| \mathbf{q}_j - \mathbf{q}_i \right\|^3},$$
where $G$ is the gravitational constant and $\left\| \mathbf{q}_j - \mathbf{q}_i \right\|$ is the magnitude of the distance between $\mathbf{q}_i$ and $\mathbf{q}_j$ (metric induced by the $\ell^2$ norm). Summing over all masses yields the n-body equations of motion:
$$m_i \frac{d^2 \mathbf{q}_i}{dt^2} = \sum_{j=1,\, j \neq i}^{n} \frac{G m_i m_j\, (\mathbf{q}_j - \mathbf{q}_i)}{\left\| \mathbf{q}_j - \mathbf{q}_i \right\|^3} = -\frac{\partial U}{\partial \mathbf{q}_i},$$
where $U$ is the self-potential energy
$$U = -\sum_{1 \le i < j \le n} \frac{G m_i m_j}{\left\| \mathbf{q}_j - \mathbf{q}_i \right\|}.$$
Defining the momentum to be $\mathbf{p}_i = m_i\, d\mathbf{q}_i/dt$, Hamilton's equations of motion for the n-body problem become
$$\frac{d\mathbf{q}_i}{dt} = \frac{\partial H}{\partial \mathbf{p}_i}, \qquad \frac{d\mathbf{p}_i}{dt} = -\frac{\partial H}{\partial \mathbf{q}_i},$$
where the Hamiltonian function is $H = T + U$ and $T$ is the kinetic energy
$$T = \sum_{i=1}^{n} \frac{\left\| \mathbf{p}_i \right\|^2}{2 m_i}.$$
Hamilton's equations show that the n-body problem is a system of $6n$ first-order differential equations, with $6n$ initial conditions as $3n$ initial position coordinates and $3n$ initial momentum values. Symmetries in the n-body problem yield global integrals of motion that simplify the problem. Translational symmetry of the problem results in the center of mass $\mathbf{C} = \sum_i m_i \mathbf{q}_i / \sum_i m_i$ moving with constant velocity, so that $\mathbf{C} = \mathbf{L}_0 t + \mathbf{C}_0$, where $\mathbf{L}_0$ is the linear velocity and $\mathbf{C}_0$ is the initial position. The constants of motion $\mathbf{L}_0$ and $\mathbf{C}_0$ represent six integrals of the motion. Rotational symmetry results in the total angular momentum
$$\mathbf{A} = \sum_{i=1}^{n} m_i\, \mathbf{q}_i \times \frac{d\mathbf{q}_i}{dt}$$
being constant, where × is the cross product. The three components of the total angular momentum yield three more constants of the motion. The last general constant of the motion is given by the conservation of energy $H$. Hence, every n-body problem has ten integrals of motion. Because $T$ and $U$ are homogeneous functions of degree 2 and −1, respectively, the equations of motion have a scaling invariance: if $\mathbf{q}_i(t)$ is a solution, then so is $\lambda^{2/3}\, \mathbf{q}_i(\lambda^{-1} t)$ for any $\lambda > 0$. The moment of inertia of an n-body system is given by $I = \sum_i m_i \left\| \mathbf{q}_i \right\|^2$ and the virial is given by $Q = \tfrac{1}{2}\, dI/dt$. Then the Lagrange–Jacobi formula states that
$$\frac{d^2 I}{dt^2} = 2(2T + U).$$
For systems in dynamic equilibrium, the long-term time average of $d^2I/dt^2$ is zero. Then on average the total kinetic energy is half the total potential energy, $\langle T \rangle = -\tfrac{1}{2} \langle U \rangle$, which is an example of the virial theorem for gravitational systems. If $M$ is the total mass and $R$ a characteristic size of the system (for example, the radius containing half the mass of the system), then the critical time for a system to settle down to a dynamic equilibrium is
$$t_{\mathrm{cr}} = \sqrt{\frac{R^3}{G M}}.$$
Special cases Two-body problem Any discussion of planetary interactive forces has always started historically with the two-body problem. The purpose of this section is to relate the real complexity in calculating any planetary forces. Note that in this section and in the following one (three-body problem), several subjects, such as gravity, barycenter, Kepler's laws, etc., are discussed on other Wikipedia pages; here, though, these subjects are discussed from the perspective of the n-body problem. The two-body problem (n = 2) was completely solved by Johann Bernoulli (1667–1748) by classical theory (and not by Newton) by assuming the main point-mass was fixed; this is outlined here.
Consider the motion of two bodies, say the Sun and the Earth, with the Sun fixed; then
$$m_1 \frac{d^2\mathbf{r}_1}{dt^2} = \frac{G m_1 m_2\, (\mathbf{r}_2 - \mathbf{r}_1)}{r^3}, \qquad m_2 \frac{d^2\mathbf{r}_2}{dt^2} = \frac{G m_1 m_2\, (\mathbf{r}_1 - \mathbf{r}_2)}{r^3}.$$
The equation describing the motion of mass $m_2$ relative to mass $m_1$ is readily obtained from the differences between these two equations and, after canceling common terms, gives
$$\boldsymbol{\alpha} + \frac{\eta}{r^3}\, \mathbf{r} = \mathbf{0},$$
where $\mathbf{r} = \mathbf{r}_2 - \mathbf{r}_1$ is the vector position of $m_2$ relative to $m_1$; $\boldsymbol{\alpha}$ is the Eulerian acceleration $d^2\mathbf{r}/dt^2$; and $\eta = G(m_1 + m_2)$. This is the fundamental differential equation for the two-body problem Bernoulli solved in 1734. Notice that for this approach, forces have to be determined first, then the equation of motion resolved. This differential equation has elliptic, parabolic, or hyperbolic solutions. It is incorrect to think of $m_1$ (the Sun) as fixed in space when applying Newton's law of universal gravitation, and to do so leads to erroneous results. The fixed point for two isolated gravitationally interacting bodies is their mutual barycenter, and this two-body problem can be solved exactly, such as using Jacobi coordinates relative to the barycenter. Dr. Clarence Cleminshaw calculated the approximate position of the Solar System's barycenter, a result achieved mainly by combining only the masses of Jupiter and the Sun. Science Program stated in reference to his work: The Sun wobbles as it rotates around the Galactic Center, dragging the Solar System and Earth along with it. What mathematician Kepler did in arriving at his three famous equations was to curve-fit the apparent motions of the planets using Tycho Brahe's data, not to curve-fit their true circular motions about the Sun (see Figure). Both Robert Hooke and Newton were well aware that Newton's Law of Universal Gravitation did not hold for the forces associated with elliptical orbits. In fact, Newton's Universal Law does not account for the orbit of Mercury, the asteroid belt's gravitational behavior, or Saturn's rings. Newton stated (in section 11 of the Principia) that the main reason, however, for failing to predict the forces for elliptical orbits was that his math model was for a body confined to a situation that hardly existed in the real world, namely, the motions of bodies attracted toward an unmoving center. Some present physics and astronomy textbooks do not emphasize the negative significance of Newton's assumption and end up teaching that his mathematical model is in effect reality. It is to be understood that the classical two-body problem solution above is a mathematical idealization. See also Kepler's first law of planetary motion. Three-body problem This section relates a historically important n-body problem solution after simplifying assumptions were made. In the past not much was known about the n-body problem for n ≥ 3. The case n = 3 has been the most studied. Many earlier attempts to understand the three-body problem were quantitative, aiming at finding explicit solutions for special situations. In 1687, Isaac Newton published in the Principia the first steps in the study of the problem of the movements of three bodies subject to their mutual gravitational attractions, but his efforts resulted in verbal descriptions and geometrical sketches; see especially Book 1, Proposition 66 and its corollaries (Newton, 1687 and 1999 (transl.), see also Tisserand, 1894). In 1767, Euler found collinear motions, in which three bodies of any masses move proportionately along a fixed straight line.
Euler's three-body problem is the special case in which two of the bodies are fixed in space (this should not be confused with the circular restricted three-body problem, in which the two massive bodies describe a circular orbit and are only fixed in a synodic reference frame). In 1772, Lagrange discovered two classes of periodic solutions, each for three bodies of any masses. In one class, the bodies lie on a rotating straight line. In the other class, the bodies lie at the vertices of a rotating equilateral triangle. In either case, the paths of the bodies will be conic sections. Those solutions led to the study of central configurations, for which $\ddot{\mathbf{q}} = \lambda \mathbf{q}$ for some constant $\lambda$. A major study of the Earth–Moon–Sun system was undertaken by Charles-Eugène Delaunay, who published two volumes on the topic, each of 900 pages in length, in 1860 and 1867. Among many other accomplishments, the work already hints at chaos, and clearly demonstrates the problem of so-called "small denominators" in perturbation theory. In 1917, Forest Ray Moulton published his now classic, An Introduction to Celestial Mechanics (see references) with its plot of the restricted three-body problem solution (see figure below). As an aside, see Meirovitch's book, pages 413–414, for his restricted three-body problem solution. Moulton's solution may be easier to visualize (and definitely easier to solve) if one considers the more massive body (such as the Sun) to be stationary in space, and the less massive body (such as Jupiter) to orbit around it, with the equilibrium points (Lagrangian points) maintaining the 60° spacing ahead of, and behind, the less massive body almost in its orbit (although in reality neither of the bodies is truly stationary, as they both orbit the center of mass of the whole system—about the barycenter). For sufficiently small mass ratio of the primaries, these triangular equilibrium points are stable, such that (nearly) massless particles will orbit about these points as they orbit around the larger primary (Sun). The five equilibrium points of the circular problem are known as the Lagrangian points. See figure below: In the restricted three-body problem math model figure above (after Moulton), the Lagrangian points L4 and L5 are where the Trojan planetoids resided (see Lagrangian point); $m_1$ is the Sun and $m_2$ is Jupiter. L2 is a point within the asteroid belt. It has to be realized that in this model the whole Sun–Jupiter diagram is rotating about its barycenter. The restricted three-body problem solution predicted the Trojan planetoids before they were first seen. The -circles and closed loops echo the electromagnetic fluxes issued from the Sun and Jupiter. It is conjectured, contrary to Richard H. Batin's conjecture (see References), that the two are gravity sinks, in and where gravitational forces are zero, and that this is the reason the Trojan planetoids are trapped there. The total amount of mass of the planetoids is unknown. The restricted three-body problem assumes the mass of one of the bodies is negligible. For a discussion of the case where the negligible body is a satellite of the body of lesser mass, see Hill sphere; for binary systems, see Roche lobe. Specific solutions to the three-body problem result in chaotic motion with no obvious sign of a repetitious path. The restricted problem (both circular and elliptical) was worked on extensively by many famous mathematicians and physicists, most notably by Poincaré at the end of the 19th century.
Poincaré's work on the restricted three-body problem was the foundation of deterministic chaos theory. In the restricted problem, there exist five equilibrium points. Three are collinear with the masses (in the rotating frame) and are unstable. The remaining two are located on the third vertex of both equilateral triangles of which the two bodies are the first and second vertices. Four-body problem Inspired by the circular restricted three-body problem, the four-body problem can be greatly simplified by considering a smaller body to have a small mass compared to the other three massive bodies, which in turn are approximated to describe circular orbits. This is known as the bicircular restricted four-body problem (also known as the bicircular model) and it can be traced back to 1960 in a NASA report written by Su-Shu Huang. This formulation has been highly relevant in astrodynamics, mainly to model spacecraft trajectories in the Earth-Moon system with the addition of the gravitational attraction of the Sun. The former formulation of the bicircular restricted four-body problem can be problematic when modelling systems other than Earth-Moon-Sun, so the formulation was generalized by Negri and Prado to expand the application range and improve the accuracy without loss of simplicity. Planetary problem The planetary problem is the n-body problem in the case that one of the masses is much larger than all the others. A prototypical example of a planetary problem is the Sun–Jupiter–Saturn system, where the mass of the Sun is about 1000 times larger than the masses of Jupiter or Saturn. An approximate solution to the problem is to decompose it into pairs of star–planet Kepler problems, treating interactions among the planets as perturbations. Perturbative approximation works well as long as there are no orbital resonances in the system, that is, none of the ratios of unperturbed Kepler frequencies is a rational number. Resonances appear as small denominators in the expansion. The existence of resonances and small denominators led to the important question of stability in the planetary problem: do planets, in nearly circular orbits around a star, remain in stable or bounded orbits over time? In 1963, Vladimir Arnold proved using KAM theory a kind of stability of the planetary problem: there exists a set of positive measure of quasiperiodic orbits in the case of the planetary problem restricted to the plane. In the KAM theory, chaotic planetary orbits would be bounded by quasiperiodic KAM tori. Arnold's result was extended to a more general theorem by Féjoz and Herman in 2004. Central configurations A central configuration is an initial configuration such that if the particles were all released with zero velocity, they would all collapse toward the center of mass $\mathbf{C}$. Such a motion is called homothetic. Central configurations may also give rise to homographic motions in which all masses move along Keplerian trajectories (elliptical, circular, parabolic, or hyperbolic), with all trajectories having the same eccentricity $e$. For elliptical trajectories, $e = 1$ corresponds to homothetic motion and $e = 0$ gives a relative equilibrium motion in which the configuration remains an isometry of the initial configuration, as if the configuration were a rigid body. Central configurations have played an important role in understanding the topology of invariant manifolds created by fixing the first integrals of a system.
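To make the homothetic collapse concrete, the following is a standard formulation in the notation of the general formulation section above; the reduction shown is a routine check rather than a quotation from a specific source.

```latex
% A configuration with center of mass at the origin is central when each
% body's gravitational acceleration is proportional to its position vector,
% with one common constant \lambda (negative for an attractive force):
-\frac{\partial U}{\partial \mathbf{q}_i} = \lambda\, m_i\, \mathbf{q}_i ,
\qquad i = 1, \dots, n .

% Substituting the homothetic ansatz q_i(t) = r(t) q_i(0) into the equations
% of motion, and using the fact that the gravitational force is homogeneous
% of degree -2 in the positions, collapses the n-body system to one scalar
% Kepler-type equation for the scale factor r(t):
\ddot{r} = \frac{\lambda}{r^{2}} , \qquad r(0) = 1 , \quad \dot{r}(0) = 0 .
```

Since λ < 0 for gravity, r(t) decreases from 1, so particles released from rest fall along fixed rays toward the center of mass, which is exactly the homothetic motion described above.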
n-body choreography Solutions in which all masses move on the same curve without collisions are called choreographies. A choreography for n = 3 was discovered by Lagrange in 1772 in which three bodies are situated at the vertices of an equilateral triangle in the rotating frame. A figure eight choreography for n = 3 was found numerically by C. Moore in 1993 and generalized and proven by A. Chenciner and R. Montgomery in 2000. Since then, many other choreographies have been found for n ≥ 3. Analytic approaches For every solution of the problem, not only applying an isometry or a time shift but also a reversal of time (unlike in the case of friction) gives a solution as well. In the physical literature about the n-body problem (n ≥ 3), sometimes reference is made to "the impossibility of solving the n-body problem" (via employing the above approach). However, care must be taken when discussing the 'impossibility' of a solution, as this refers only to the method of first integrals (compare the theorems by Abel and Galois about the impossibility of solving algebraic equations of degree five or higher by means of formulas only involving roots). Power series solution One way of solving the classical n-body problem is "the n-body problem by Taylor series". We start by defining the system of differential equations
$$\frac{d^2 \mathbf{x}_i(t)}{dt^2} = \sum_{j=1,\, j \neq i}^{n} \frac{G m_j\, \left( \mathbf{x}_j(t) - \mathbf{x}_i(t) \right)}{\left| \mathbf{x}_j(t) - \mathbf{x}_i(t) \right|^3}.$$
As $\mathbf{x}_i(t_0)$ and $d\mathbf{x}_i(t_0)/dt$ are given as initial conditions, every second derivative $d^2\mathbf{x}_i(t_0)/dt^2$ is known. Differentiating the equations of motion yields the higher-order derivatives, which at $t_0$ are also known, and the Taylor series is constructed iteratively. A generalized Sundman global solution In order to generalize Sundman's result for the case n > 3 (or n = 3 and zero angular momentum) one has to face two obstacles: As has been shown by Siegel, collisions which involve more than two bodies cannot be regularized analytically, hence Sundman's regularization cannot be generalized. The structure of singularities is more complicated in this case: other types of singularities may occur (see below). Lastly, Sundman's result was generalized to the case of n > 3 bodies by Qiudong Wang in the 1990s. Since the structure of singularities is more complicated, Wang had to leave out completely the questions of singularities. The central point of his approach is to transform, in an appropriate manner, the equations to a new system, such that the interval of existence for the solutions of this new system is $[0, \infty)$. Singularities of the n-body problem There can be two types of singularities of the n-body problem: collisions of two or more bodies, but for which $\mathbf{q}(t)$ (the bodies' positions) remains finite. (In this mathematical sense, a "collision" means that two pointlike bodies have identical positions in space.) singularities in which a collision does not occur, but $\mathbf{q}(t)$ does not remain finite. In this scenario, bodies diverge to infinity in a finite time, while at the same time tending towards zero separation (an imaginary collision occurs "at infinity"). The latter are the subject of Painlevé's conjecture (no-collision singularities). Their existence has been conjectured for n > 3 by Painlevé (see Painlevé conjecture). Examples of this behavior for n = 5 have been constructed by Xia, and a heuristic model for n = 4 by Gerver. Donald G. Saari has shown that for 4 or fewer bodies, the set of initial data giving rise to singularities has measure zero. Simulation While there are analytic solutions available for the classical (i.e. nonrelativistic) two-body problem and for selected configurations with n > 2, in general n-body problems must be solved or simulated using numerical methods.
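As context for the numerical methods discussed next, here is a minimal direct-summation sketch in Python with leapfrog time stepping and a softened potential; the step size, softening length, and G = 1 units are illustrative choices, not canonical values.

```python
import numpy as np

def accelerations(pos, masses, G=1.0, eps=1e-3):
    """Direct O(n^2) pairwise gravitational accelerations with softening."""
    n = len(masses)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i != j:
                d = pos[j] - pos[i]
                r2 = d @ d + eps**2           # softened squared separation
                acc[i] += G * masses[j] * d / r2**1.5
    return acc

def leapfrog(pos, vel, masses, dt, steps):
    """Kick-drift-kick leapfrog: symplectic, with good long-term energy behavior."""
    acc = accelerations(pos, masses)
    for _ in range(steps):
        vel += 0.5 * dt * acc                 # half kick
        pos += dt * vel                       # drift
        acc = accelerations(pos, masses)
        vel += 0.5 * dt * acc                 # half kick
    return pos, vel

# Two equal unit masses on a circular orbit about their barycenter:
# separation d = 1, so each body needs speed sqrt(G*m/(2*d)) ~= 0.7071.
pos = np.array([[-0.5, 0.0, 0.0], [0.5, 0.0, 0.0]])
vel = np.array([[0.0, -0.7071, 0.0], [0.0, 0.7071, 0.0]])
masses = np.array([1.0, 1.0])
pos, vel = leapfrog(pos, vel, masses, dt=1e-3, steps=10_000)
print(pos)  # the bodies should remain near unit separation
```

The doubly nested n² loop in accelerations() is exactly the cost that the tree, multipole, and mesh methods described below are designed to avoid.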
Few bodies For a small number of bodies, an n-body problem can be solved using direct methods, also called particle–particle methods. These methods numerically integrate the differential equations of motion. Numerical integration for this problem can be a challenge for several reasons. First, the gravitational potential is singular; it goes to infinity as the distance between two particles goes to zero. The gravitational potential may be "softened" to remove the singularity at small distances, for example by a Plummer-style modification
$$U_\varepsilon = -\sum_{1 \le i < j \le n} \frac{G m_i m_j}{\sqrt{\left\| \mathbf{q}_j - \mathbf{q}_i \right\|^2 + \varepsilon^2}},$$
where $\varepsilon$ is a small softening parameter. Second, in general for n > 2, the n-body problem is chaotic, which means that even small errors in integration may grow exponentially in time. Third, a simulation may be over large stretches of model time (e.g. millions of years) and numerical errors accumulate as integration time increases. There are a number of techniques to reduce errors in numerical integration. Local coordinate systems are used to deal with widely differing scales in some problems, for example an Earth–Moon coordinate system in the context of a solar system simulation. Variational methods and perturbation theory can yield approximate analytic trajectories upon which the numerical integration can be a correction. The use of a symplectic integrator ensures that the simulation obeys Hamilton's equations to a high degree of accuracy and in particular that energy is conserved. Many bodies Direct methods using numerical integration require on the order of $n^2/2$ computations to evaluate the potential energy over all pairs of particles, and thus have a time complexity of $O(n^2)$. For simulations with many particles, the $O(n^2)$ factor makes large-scale calculations especially time-consuming. A number of approximate methods have been developed that reduce the time complexity relative to direct methods: Tree code methods, such as a Barnes–Hut simulation, are spatially hierarchical methods used when distant particle contributions do not need to be computed to high accuracy. The potential of a distant group of particles is computed using a multipole expansion or other approximation of the potential. This allows for a reduction in complexity to $O(n \log n)$. Fast multipole methods take advantage of the fact that the multipole-expanded forces from distant particles are similar for particles close to each other, and use local expansions of far-field forces to reduce computational effort. It is claimed that this further approximation reduces the complexity to $O(n)$. Particle mesh methods divide up simulation space into a three-dimensional grid onto which the mass density of the particles is interpolated. Then calculating the potential becomes a matter of solving a Poisson equation on the grid, which can be computed in $O(n \log n)$ time using fast Fourier transform or $O(n)$ time using multigrid techniques. This can provide fast solutions at the cost of higher error for short-range forces. Adaptive mesh refinement can be used to increase accuracy in regions with large numbers of particles. P3M and PM-tree methods are hybrid methods that use the particle mesh approximation for distant particles, but use more accurate methods for close particles (within a few grid intervals). P3M stands for particle–particle, particle–mesh and uses direct methods with softened potentials at close range. PM-tree methods instead use tree codes at close range. As with particle mesh methods, adaptive meshes can increase computational efficiency.
Mean field methods approximate the system of particles with a time-dependent Boltzmann equation representing the mass density that is coupled to a self-consistent Poisson equation representing the potential. It is a type of smoothed-particle hydrodynamics approximation suitable for large systems. Strong gravitation In astrophysical systems with strong gravitational fields, such as those near the event horizon of a black hole, n-body simulations must take into account general relativity; such simulations are the domain of numerical relativity. Numerically simulating the Einstein field equations is extremely challenging and a parameterized post-Newtonian formalism (PPN), such as the Einstein–Infeld–Hoffmann equations, is used if possible. The two-body problem in general relativity is analytically solvable only for the Kepler problem, in which one mass is assumed to be much larger than the other. Other n-body problems Most work done on the n-body problem has been on the gravitational problem. But there exist other systems for which n-body mathematics and simulation techniques have proven useful. In large-scale electrostatics problems, such as the simulation of proteins and cellular assemblies in structural biology, the Coulomb potential has the same form as the gravitational potential, except that charges may be positive or negative, leading to repulsive as well as attractive forces. Fast Coulomb solvers are the electrostatic counterpart to fast multipole method simulators. These are often used with periodic boundary conditions on the region simulated, and Ewald summation techniques are used to speed up computations. In statistics and machine learning, some models have loss functions of a form similar to that of the gravitational potential: a sum of kernel functions over all pairs of objects, where the kernel function depends on the distance between the objects in parameter space. Example problems that fit into this form include all-nearest-neighbors in manifold learning, kernel density estimation, and kernel machines. Alternative optimizations to reduce the $O(n^2)$ time complexity to $O(n)$ have been developed, such as dual tree algorithms, that have applicability to the gravitational n-body problem as well. A technique in computational fluid dynamics called vortex methods sees the vorticity in a fluid domain discretized onto particles which are then advected with the velocity at their centers. Because the fluid velocity and vorticity are related via a Poisson equation, the velocity can be solved in the same manner as gravitation and electrostatics: as an n-body summation over all vorticity-containing particles. The summation uses the Biot–Savart law, with vorticity taking the place of electrical current. In the context of particle-laden turbulent multiphase flows, determining an overall disturbance field generated by all particles is an n-body problem. If the particles translating within the flow are much smaller than the flow's Kolmogorov scale, their linear Stokes disturbance fields can be superposed, yielding a system of 3 equations for 3 components of disturbance velocities at the location of particles. See also Celestial mechanics Gravitational two-body problem Jacobi integral Lunar theory Natural units Numerical model of the Solar System Stability of the Solar System Few-body systems N-body simulation, a method for numerically obtaining trajectories of bodies in an N-body system. Notes References English translation of the 3rd (1726) edition by I. Bernard Cohen and Anne Whitman (Berkeley, CA, 1999).
Further reading nbody*.zip is available at https://web.archive.org/web/19990221123102/http://ftp.cica.indiana.edu/: see external links. External links Three-Body Problem at Scholarpedia More detailed information on the three-body problem Regular Keplerian motions in classical many-body systems Applet demonstrating chaos in restricted three-body problem Applets demonstrating many different three-body motions On the integration of the n-body equations Java applet simulating Solar System Java applet simulating a stable solution to the equi-mass 3-body problem A java applet to simulate the 3D movement of set of particles under gravitational interaction Javascript Simulation of our Solar System The Lagrange Points – with links to the original papers of Euler and Lagrange, and to translations, with discussion Parallel GPU N-body simulation program with fast stackless particles tree traversal Concepts in astrophysics Classical mechanics Computational problems Computational physics Gravity Orbits
N-body problem
[ "Physics", "Mathematics" ]
5,697
[ "Concepts in astrophysics", "Classical mechanics", "Astrophysics", "Computational physics", "Computational problems", "Mechanics", "Mathematical problems" ]
125,024
https://en.wikipedia.org/wiki/Tevatron
The Tevatron was a circular particle accelerator (active until 2011) in the United States, at the Fermi National Accelerator Laboratory (called Fermilab), east of Batavia, Illinois, and was the highest energy particle collider until the Large Hadron Collider (LHC) of the European Organization for Nuclear Research (CERN) was built near Geneva, Switzerland. The Tevatron was a synchrotron that accelerated protons and antiprotons in a 6.3 km circumference ring to energies of up to 1 TeV, hence its name. The Tevatron was completed in 1983 at a cost of $120 million, and significant upgrade investments were made during its active years of 1983–2011. The main achievement of the Tevatron was the discovery in 1995 of the top quark—the last fundamental fermion predicted by the Standard Model of particle physics. On July 2, 2012, scientists of the CDF and DØ collider experiment teams at Fermilab announced the findings from the analysis of around 500 trillion collisions produced from the Tevatron collider since 2001, and found that the existence of the suspected Higgs boson was highly likely with a confidence of 99.8%, later improved to over 99.9%. The Tevatron ceased operations on 30 September 2011, due to budget cuts and because of the completion of the LHC, which began operations in early 2010 and is far more powerful (planned energies were two 7 TeV beams at the LHC compared to 1 TeV at the Tevatron). The main ring of the Tevatron will probably be reused in future experiments, and its components may be transferred to other particle accelerators. History December 1, 1968, saw the breaking of ground for the linear accelerator (linac). The construction of the Main Accelerator Enclosure began on October 3, 1969, when the first shovel of earth was turned by Robert R. Wilson, NAL's director. This would become Fermilab's 6.3 km circumference Main Ring. The linac produced its first 200 MeV beam on December 1, 1970. The Booster's first 8 GeV beam followed on May 20, 1971. On June 30, 1971, a proton beam was guided for the first time through the entire National Accelerator Laboratory accelerator system, including the Main Ring. The beam was accelerated to only 7 GeV. Back then, the Booster Accelerator took 200 MeV protons from the Linac and "boosted" their energy to 8 billion electron volts. They were then injected into the Main Accelerator. In the same year, before the completion of the Main Ring, Wilson testified to the Joint Committee on Atomic Energy on March 9, 1971, that it was feasible to achieve a higher energy by using superconducting magnets. He also suggested that it could be done by using the same tunnel as the main ring, with the new magnets installed in the same locations to be operated in parallel to the existing magnets of the Main Ring. That was the starting point of the Tevatron project. The Tevatron was in its research and development phase between 1973 and 1979, while acceleration at the Main Ring continued to be enhanced. A series of milestones saw acceleration rise to 20 GeV on January 22, 1972, to 53 GeV on February 4 and to 100 GeV on February 11. On March 1, 1972, the then NAL accelerator system accelerated for the first time a beam of protons to its design energy of 200 GeV. By the end of 1973, NAL's accelerator system operated routinely at 300 GeV. On 14 May 1976 Fermilab took its protons all the way to 500 GeV. This achievement provided the opportunity to introduce a new energy scale, the teraelectronvolt (TeV), equal to 1000 GeV.
On 17 June of that year, the European Super Proton Synchrotron accelerator (SPS) had achieved an initial circulating proton beam (with no accelerating radio-frequency power) of only 400 GeV. The conventional magnet Main Ring was shut down in 1981 for installation of superconducting magnets underneath it. The Main Ring continued to serve as an injector for the Tevatron until the Main Injector was completed west of the Main Ring in 2000. The 'Energy Doubler', as it was known then, produced its first accelerated beam—512 GeV—on July 3, 1983. Its initial energy of 800 GeV was achieved on February 16, 1984. On October 21, 1986, acceleration at the Tevatron was pushed to 900 GeV, providing a first proton–antiproton collision at 1.8 TeV on November 30, 1986. The Main Injector, which replaced the Main Ring, was the most substantial addition, built over six years from 1993 at a cost of $290 million. Tevatron collider Run II began on March 1, 2001, after successful completion of that facility upgrade. From then on, the beam was capable of delivering an energy of 980 GeV. On July 16, 2004, the Tevatron achieved a new peak luminosity, breaking the record previously held by the old European Intersecting Storage Rings (ISR) at CERN. That very Fermilab record was doubled on September 9, 2006, then a bit more than tripled on March 17, 2008, and ultimately multiplied by a factor of 4 over the previous 2004 record on April 16, 2010 (up to 4×10³² cm⁻² s⁻¹). The Tevatron ceased operations on 30 September 2011. By the end of 2011, the Large Hadron Collider (LHC) at CERN had achieved a luminosity almost ten times higher than the Tevatron's (at 3.65×10³³ cm⁻² s⁻¹) and a beam energy of 3.5 TeV each (doing so since March 18, 2010), already ~3.6 times the capabilities of the Tevatron (at 0.98 TeV). Mechanics The acceleration occurred in a number of stages. The first stage was the 750 keV Cockcroft–Walton pre-accelerator, which ionized hydrogen gas and accelerated the negative ions created using a positive voltage. The ions then passed into the 150-meter-long linear accelerator (linac), which used oscillating electrical fields to accelerate the ions to 400 MeV. The ions then passed through a carbon foil, to remove the electrons, and the resulting protons then moved into the Booster. The Booster was a small circular synchrotron, around which the protons passed up to 20,000 times to attain an energy of around 8 GeV. From the Booster the particles were fed into the Main Injector, which had been completed in 1999 to perform a number of tasks. It could accelerate protons up to 150 GeV; produce 120 GeV protons for antiproton creation; increase antiproton energy to 150 GeV; and inject protons or antiprotons into the Tevatron. The antiprotons were created by the Antiproton Source. 120 GeV protons were collided with a nickel target, producing a range of particles including antiprotons, which could be collected and stored in the accumulator ring. The ring could then pass the antiprotons to the Main Injector. The Tevatron could accelerate the particles from the Main Injector up to 980 GeV. The protons and antiprotons were accelerated in opposite directions, crossing paths in the CDF and DØ detectors to collide at 1.96 TeV. To hold the particles on track the Tevatron used 774 niobium–titanium superconducting dipole magnets cooled in liquid helium, producing a field strength of 4.2 tesla. The field ramped over about 20 seconds as the particles accelerated. Another 240 NbTi quadrupole magnets were used to focus the beam.
The initial design luminosity of the Tevatron was 10³⁰ cm⁻² s⁻¹; however, following upgrades, the accelerator had been able to deliver luminosities up to 4×10³² cm⁻² s⁻¹. On September 27, 1993, the cryogenic cooling system of the Tevatron Accelerator was named an International Historic Landmark by the American Society of Mechanical Engineers. The system, which provided cryogenic liquid helium to the Tevatron's superconducting magnets, was the largest low-temperature system in existence upon its completion in 1978. It kept the coils of the magnets, which bent and focused the particle beam, in a superconducting state, so that they consumed only ⅓ of the power they would have required at normal temperatures. Discoveries The Tevatron confirmed the existence of several subatomic particles that were predicted by theoretical particle physics, or gave suggestions to their existence. In 1995, the CDF experiment and DØ experiment collaborations announced the discovery of the top quark, and by 2007 they measured its mass (172 GeV) to a precision of nearly 1%. In 2006, the CDF collaboration reported the first measurement of Bs oscillations, and observation of two types of sigma baryons. In 2007, the DØ and CDF collaborations reported direct observation of the "Cascade B" (Ξb−) Xi baryon. In September 2008, the DØ collaboration reported detection of the Ωb−, a "double strange" Omega baryon, with the measured mass significantly higher than the quark model prediction. In May 2009 the CDF collaboration made public their results on the search for the Ωb−, based on analysis of a data sample roughly four times larger than the one used by the DØ experiment. The mass measurement from the CDF experiment was in excellent agreement with Standard Model predictions, and no signal was observed at the previously reported value from the DØ experiment. The two inconsistent results from DØ and CDF differ by 6.2 standard deviations. Due to the excellent agreement between the mass measured by CDF and the theoretical expectation, it is a strong indication that the particle discovered by CDF is indeed the Ωb−. It is anticipated that new data from LHC experiments will clarify the situation in the near future. On July 2, 2012, two days before a scheduled announcement at the Large Hadron Collider (LHC), scientists at the Tevatron collider from the CDF and DØ collaborations announced their findings from the analysis of around 500 trillion collisions produced since 2001: They found that the existence of the Higgs boson was likely with a mass in the region of 115 to 135 GeV. The statistical significance of the observed signs was 2.9 sigma, which meant that there is only a 1-in-550 chance that a signal of that magnitude would have occurred if no particle in fact existed with those properties. The final analysis of data from the Tevatron did not, however, settle the question of whether the Higgs particle exists. Only when the scientists from the Large Hadron Collider announced the more precise LHC results on July 4, 2012, with a mass of 125.3 ± 0.4 GeV (CMS) or 126 ± 0.4 GeV (ATLAS) respectively, was there strong evidence through consistent measurements by the LHC and the Tevatron for the existence of a Higgs particle at that mass range. Disruptions due to earthquakes Even from thousands of miles away, earthquakes caused strong enough movements in the magnets to negatively affect the quality of particle beams and even disrupt them.
Disruptions due to earthquakes
Even from thousands of miles away, earthquakes caused movements in the magnets strong enough to negatively affect the quality of the particle beams, and even to disrupt them. Therefore, tiltmeters were installed on the Tevatron's magnets to monitor minute movements and to help identify the cause of problems quickly. The first known earthquake to disrupt the beam was the 2002 Denali earthquake, with another collider shutdown caused by a moderate local quake on June 28, 2004. Since then, minute seismic vibrations from more than 20 earthquakes were detected at the Tevatron without causing a shutdown, including the 2004 Indian Ocean earthquake, the 2005 Nias–Simeulue earthquake, New Zealand's 2007 Gisborne earthquake, the 2010 Haiti earthquake and the 2010 Chile earthquake.

See also
Bevatron
Large Hadron Collider
Superconducting Super Collider
Ultra-high-energy cosmic ray
Relativistic Heavy Ion Collider

References

Further reading

External links
Live Tevatron status
FermiLab page for Tevatron – with labelled components
The Hunt for the Higgs at Tevatron
Technical details of the accelerators

Particle accelerators
Fermilab
Buildings and structures in DuPage County, Illinois
Buildings and structures in Kane County, Illinois
Physics beyond the Standard Model
1983 establishments in Illinois
Tevatron
[ "Physics" ]
2,591
[ "Unsolved problems in physics", "Particle physics", "Physics beyond the Standard Model" ]
125,297
https://en.wikipedia.org/wiki/Dynamic%20programming
Dynamic programming is both a mathematical optimization method and an algorithmic paradigm. The method was developed by Richard Bellman in the 1950s and has found applications in numerous fields, from aerospace engineering to economics. In both contexts it refers to simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner. While some decision problems cannot be taken apart this way, decisions that span several points in time do often break apart recursively. Likewise, in computer science, if a problem can be solved optimally by breaking it into sub-problems and then recursively finding the optimal solutions to the sub-problems, then it is said to have optimal substructure. If sub-problems can be nested recursively inside larger problems, so that dynamic programming methods are applicable, then there is a relation between the value of the larger problem and the values of the sub-problems. In the optimization literature this relationship is called the Bellman equation.

Overview

Mathematical optimization
In terms of mathematical optimization, dynamic programming usually refers to simplifying a decision by breaking it down into a sequence of decision steps over time. This is done by defining a sequence of value functions V1, V2, ..., Vn taking y as an argument representing the state of the system at times i from 1 to n. The definition of Vn(y) is the value obtained in state y at the last time n. The values Vi at earlier times i = n − 1, n − 2, ..., 2, 1 can be found by working backwards, using a recursive relationship called the Bellman equation. For i = 2, ..., n, Vi−1 at any state y is calculated from Vi by maximizing a simple function (usually the sum) of the gain from a decision at time i − 1 and the function Vi at the new state of the system if this decision is made. Since Vi has already been calculated for the needed states, the above operation yields Vi−1 for those states. Finally, V1 at the initial state of the system is the value of the optimal solution. The optimal values of the decision variables can be recovered, one by one, by tracking back the calculations already performed.

Control theory
In control theory, a typical problem is to find an admissible control u*(t) which causes the system x′(t) = g(x(t), u(t), t) to follow an admissible trajectory x*(t) on a continuous time interval t0 ≤ t ≤ t1 that minimizes a cost function

J = b(x(t1), t1) + integral from t0 to t1 of f(x(t), u(t), t) dt

The solution to this problem is an optimal control law or policy u* = h(x(t), t), which produces an optimal trajectory x*(t) and a cost-to-go function J*(x, t). The latter obeys the fundamental equation of dynamic programming, a partial differential equation known as the Hamilton–Jacobi–Bellman equation:

−∂J*/∂t = min over u of { f(x, u, t) + (∂J*/∂x)·g(x, u, t) }

One minimizes the right-hand side in terms of u, x, t and the unknown function ∂J*/∂x, and then substitutes the result into the Hamilton–Jacobi–Bellman equation to get the partial differential equation to be solved with boundary condition J*(x, t1) = b(x(t1), t1). In practice, this generally requires numerical techniques for some discrete approximation to the exact optimization relationship. Alternatively, the continuous process can be approximated by a discrete system, which leads to the following recurrence relation, analogous to the Hamilton–Jacobi–Bellman equation:

J*k(x) = min over u of { f̂(x, u) + J*k−1(ĝ(x, u)) }

at the k-th stage of n equally spaced discrete time intervals, where f̂ and ĝ denote discrete approximations to f and g. This functional equation is known as the Bellman equation, which can be solved for an exact solution of the discrete approximation of the optimization equation.
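The discrete Bellman recursion translates directly into code: tabulate the terminal values, then sweep backwards one stage at a time. A minimal sketch over a finite state and control grid (the state space, transition ĝ and stage cost f̂ below are toy stand-ins chosen for illustration, not taken from the article):

# Finite-horizon backward induction on a small discrete control problem.
# States: integers 0..4; controls: move -1, 0 or +1 (clipped to the grid).
states = range(5)
controls = (-1, 0, 1)
n_stages = 10

def g_hat(x, u):          # toy transition: drift in the chosen direction
    return min(4, max(0, x + u))

def f_hat(x, u):          # toy stage cost: stay near state 2, penalize effort
    return (x - 2) ** 2 + abs(u)

# Terminal values J*_0(x) = 0, then compute J*_k from J*_{k-1}.
J = {x: 0.0 for x in states}
policy = []
for k in range(1, n_stages + 1):
    J_new, act = {}, {}
    for x in states:
        costs = {u: f_hat(x, u) + J[g_hat(x, u)] for u in controls}
        act[x] = min(costs, key=costs.get)
        J_new[x] = costs[act[x]]
    J, policy = J_new, [act] + policy   # policy[t][x] = optimal u at stage t

print(J[0])          # optimal cost-to-go from state 0 with 10 stages remaining
print(policy[0][0])  # first optimal move from state 0 (here +1, toward state 2)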
Example from economics: Ramsey's problem of optimal saving
In economics, the objective is generally to maximize (rather than minimize) some dynamic social welfare function. In Ramsey's problem, this function relates amounts of consumption to levels of utility. Loosely speaking, the planner faces the trade-off between contemporaneous consumption and future consumption (via investment in the capital stock that is used in production), known as intertemporal choice. Future consumption is discounted at a constant rate β. A discrete approximation to the transition equation of capital is given by k(t+1) = f(k(t)) − c(t), where c is consumption, k is capital, and f is a production function satisfying the Inada conditions. An initial capital stock k(0) > 0 is assumed. Let c(t) be consumption in period t, and assume consumption yields utility u(c(t)) = ln(c(t)) as long as the consumer lives. Assume the consumer is impatient, so that he discounts future utility by a factor b each period, where 0 < b < 1. Let k(t) be capital in period t. Assume initial capital is a given amount k(0) > 0, and suppose that this period's capital and consumption determine next period's capital as k(t+1) = A·k(t)^a − c(t), where A is a positive constant and 0 < a < 1. Assume capital cannot be negative. Then the consumer's decision problem can be written as follows: maximize the sum over t = 0, 1, ..., T of b^t·ln(c(t)), subject to k(t+1) = A·k(t)^a − c(t) ≥ 0 for all t = 0, 1, ..., T. Written this way, the problem looks complicated, because it involves solving for all the choice variables c(0), c(1), ..., c(T) and k(1), k(2), ..., k(T+1) simultaneously. (The capital k(0) is not a choice variable—the consumer's initial capital is taken as given.) The dynamic programming approach to solve this problem involves breaking it apart into a sequence of smaller decisions. To do so, we define a sequence of value functions Vt(k), for t = 0, 1, ..., T, T+1, which represent the value of having any amount of capital k at each time t. There is (by assumption) no utility from having capital after death: VT+1(k) = 0. The value of any quantity of capital at any previous time can be calculated by backward induction using the Bellman equation. In this problem, for each t = 0, 1, ..., T, the Bellman equation is

Vt(k) = max over c of { ln(c) + b·Vt+1(A·k^a − c) }, subject to A·k^a − c ≥ 0.

This problem is much simpler than the one we wrote down before, because it involves only two decision variables, c(t) and k(t+1). Intuitively, instead of choosing his whole lifetime plan at birth, the consumer can take things one step at a time. At time t, his current capital k(t) is given, and he only needs to choose current consumption c(t) and saving k(t+1). To actually solve this problem, we work backwards. For simplicity, the current level of capital is denoted as k. VT+1(k) is already known, so using the Bellman equation once we can calculate VT(k), and so on until we get to V0(k), which is the value of the initial decision problem for the whole lifetime. In other words, once we know VT−j+1(k), we can calculate VT−j(k), which is the maximum of ln(c) + b·VT−j+1(A·k^a − c), where c is the choice variable and A·k^a − c ≥ 0. Working backwards, it can be shown that the value function at time t = T − j is

VT−j(k) = a·(1 + ab + (ab)^2 + ... + (ab)^j)·ln(k) + vT−j

where each vT−j is a constant, and the optimal amount to consume at time t = T − j is

cT−j(k) = A·k^a / (1 + ab + (ab)^2 + ... + (ab)^j)

which can be simplified to

cT−j(k) = A·k^a·(1 − ab) / (1 − (ab)^(j+1)).

We see that it is optimal to consume a larger fraction of current wealth as one gets older, finally consuming all remaining wealth in period T, the last period of life.
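The closed-form policy is easy to sanity-check numerically: iterate the consumption rule forward from an arbitrary starting capital and confirm that everything is consumed in the last period. A small sketch (the parameter values A = 1.2, a = 0.33, b = 0.95, k0 = 1.0 and T = 4 are arbitrary choices for illustration):

# Forward simulation of the optimal consumption rule
# c_{T-j} = A*k**a * (1 - a*b) / (1 - (a*b)**(j+1)).
A, a, b = 1.2, 0.33, 0.95   # production scale, capital share, discount factor
T, k = 4, 1.0               # horizon and initial capital (arbitrary)

for t in range(T + 1):
    j = T - t                                       # periods remaining after t
    c = A * k**a * (1 - a*b) / (1 - (a*b)**(j + 1))
    k_next = A * k**a - c
    print(f"t={t}: consume {c:.4f}, save {k_next:.4f}")
    k = k_next
# In the final period (j = 0) the rule reduces to c = A*k**a, so saving is 0:
# all remaining wealth is consumed, matching the text.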
Computer science
There are two key attributes that a problem must have in order for dynamic programming to be applicable: optimal substructure and overlapping sub-problems. If a problem can be solved by combining optimal solutions to non-overlapping sub-problems, the strategy is called "divide and conquer" instead. This is why merge sort and quick sort are not classified as dynamic programming problems.

Optimal substructure means that the solution to a given optimization problem can be obtained by the combination of optimal solutions to its sub-problems. Such optimal substructures are usually described by means of recursion. For example, given a graph G=(V,E), the shortest path p from a vertex u to a vertex v exhibits optimal substructure: take any intermediate vertex w on this shortest path p. If p is truly the shortest path, then it can be split into sub-paths p1 from u to w and p2 from w to v such that these, in turn, are indeed the shortest paths between the corresponding vertices (by the simple cut-and-paste argument described in Introduction to Algorithms). Hence, one can easily formulate the solution for finding shortest paths in a recursive manner, which is what the Bellman–Ford algorithm or the Floyd–Warshall algorithm does.

Overlapping sub-problems means that the space of sub-problems must be small, that is, any recursive algorithm solving the problem should solve the same sub-problems over and over, rather than generating new sub-problems. For example, consider the recursive formulation for generating the Fibonacci sequence: Fi = Fi−1 + Fi−2, with base case F1 = F2 = 1. Then F43 = F42 + F41, and F42 = F41 + F40. Now F41 is being solved in the recursive sub-trees of both F43 and F42. Even though the total number of sub-problems is actually small (only 43 of them), we end up solving the same problems over and over if we adopt a naive recursive solution such as this. Dynamic programming takes account of this fact and solves each sub-problem only once. This can be achieved in either of two ways:

Top-down approach: This is the direct fall-out of the recursive formulation of any problem. If the solution to any problem can be formulated recursively using the solutions to its sub-problems, and if its sub-problems are overlapping, then one can easily memoize, i.e. store, the solutions to the sub-problems in a table (often an array or hash table in practice). Whenever we attempt to solve a new sub-problem, we first check the table to see if it is already solved. If a solution has been recorded, we can use it directly; otherwise we solve the sub-problem and add its solution to the table.

Bottom-up approach: Once the solution to a problem has been formulated recursively in terms of its sub-problems, we can try reformulating the problem in a bottom-up fashion: solve the sub-problems first and use their solutions to build up to, and arrive at, solutions to bigger sub-problems. This is also usually done in tabular form, by iteratively generating solutions to bigger and bigger sub-problems from the solutions to smaller sub-problems. For example, if we already know the values of F41 and F40, we can directly calculate the value of F42.

Some programming languages can automatically memoize the result of a function call with a particular set of arguments, in order to speed up call-by-name evaluation (this mechanism is referred to as call-by-need). Some languages make it possible portably (e.g. Scheme, Common Lisp, Perl or D). Some languages have automatic memoization built in, such as tabled Prolog and J, which supports memoization with the M. adverb. In any case, this is only possible for a referentially transparent function. Memoization is also encountered as an easily accessible design pattern within term-rewrite based languages such as Wolfram Language.
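In Python, the top-down approach described above is available as a library decorator: functools.lru_cache transparently stores results keyed by the call's arguments. A minimal illustration using the Fibonacci recurrence from the text:

from functools import lru_cache

@lru_cache(maxsize=None)   # unbounded memo table keyed by the argument n
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(43))   # 433494437, reached via only 44 distinct evaluations
                 # instead of well over a billion naive recursive calls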
Bioinformatics
Dynamic programming is widely used in bioinformatics for tasks such as sequence alignment, protein folding, RNA structure prediction and protein–DNA binding. The first dynamic programming algorithms for protein–DNA binding were developed in the 1970s independently by Charles DeLisi in the US and by Georgii Gurskii and Alexander Zasedatelev in the Soviet Union. Recently these algorithms have become very popular in bioinformatics and computational biology, particularly in the studies of nucleosome positioning and transcription factor binding.

Examples: computer algorithms

Dijkstra's algorithm for the shortest path problem
From a dynamic programming point of view, Dijkstra's algorithm for the shortest path problem is a successive approximation scheme that solves the dynamic programming functional equation for the shortest path problem by the Reaching method. In fact, Dijkstra's explanation of the logic behind the algorithm is a paraphrasing of Bellman's famous Principle of Optimality in the context of the shortest path problem.

Fibonacci sequence
Using dynamic programming in the calculation of the nth member of the Fibonacci sequence improves its performance greatly. Here is a naïve implementation, based directly on the mathematical definition:

function fib(n)
    if n <= 1 return n
    return fib(n − 1) + fib(n − 2)

Notice that if we call, say, fib(5), we produce a call tree that calls the function on the same value many different times:

fib(5)
fib(4) + fib(3)
(fib(3) + fib(2)) + (fib(2) + fib(1))
((fib(2) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))
(((fib(1) + fib(0)) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))

In particular, fib(2) was calculated three times from scratch. In larger examples, many more values of fib, or subproblems, are recalculated, leading to an exponential time algorithm. Now, suppose we have a simple map object, m, which maps each value of fib that has already been calculated to its result, and we modify our function to use it and update it. The resulting function requires only O(n) time instead of exponential time (but requires O(n) space):

var m := map(0 → 0, 1 → 1)
function fib(n)
    if key n is not in map m
        m[n] := fib(n − 1) + fib(n − 2)
    return m[n]

This technique of saving values that have already been calculated is called memoization; this is the top-down approach, since we first break the problem into subproblems and then calculate and store values. In the bottom-up approach, we calculate the smaller values of fib first, then build larger values from them. This method also uses O(n) time since it contains a loop that repeats n − 1 times, but it only takes constant (O(1)) space, in contrast to the top-down approach which requires O(n) space to store the map.

function fib(n)
    if n = 0
        return 0
    else
        var previousFib := 0, currentFib := 1
        repeat n − 1 times // loop is skipped if n = 1
            var newFib := previousFib + currentFib
            previousFib := currentFib
            currentFib := newFib
        return currentFib

In both examples, we only calculate fib(2) one time, and then use it to calculate both fib(4) and fib(3), instead of computing it every time either of them is evaluated.

A type of balanced 0–1 matrix
Consider the problem of assigning values, either zero or one, to the positions of an n × n matrix, with n even, so that each row and each column contains exactly n/2 zeros and n/2 ones. We ask how many different assignments there are for a given n.
For example, when n = 4, five possible solutions are

0 1 0 1      0 0 1 1      1 1 0 0      1 0 0 1      0 1 1 0
1 0 1 0      0 0 1 1      0 0 1 1      0 1 1 0      1 0 0 1
0 1 0 1      1 1 0 0      1 1 0 0      0 1 1 0      1 0 0 1
1 0 1 0      1 1 0 0      0 0 1 1      1 0 0 1      0 1 1 0

There are at least three possible approaches: brute force, backtracking, and dynamic programming. Brute force consists of checking all assignments of zeros and ones and counting those that have balanced rows and columns (n/2 zeros and n/2 ones). As there are 2^(n^2) possible assignments and C(n, n/2)^n sensible assignments (those with balanced rows), this strategy is not practical except maybe up to n = 6. Backtracking for this problem consists of choosing some order of the matrix elements and recursively placing ones or zeros, while checking that in every row and column the number of elements that have not been assigned plus the number of ones or zeros are both at least n/2. While more sophisticated than brute force, this approach will visit every solution once, making it impractical for n larger than six, since the number of solutions is already 116,963,796,250 for n = 8, as we shall see.

Dynamic programming makes it possible to count the number of solutions without visiting them all. Imagine backtracking values for the first row – what information would we require about the remaining rows, in order to be able to accurately count the solutions obtained for each first row value? We consider k × n boards, where 1 ≤ k ≤ n, whose rows contain n/2 zeros and n/2 ones. The function f to which memoization is applied maps vectors of n pairs of integers to the number of admissible boards (solutions). There is one pair for each column, and its two components indicate respectively the number of zeros and ones that have yet to be placed in that column. We seek the value of f((n/2, n/2), (n/2, n/2), ..., (n/2, n/2)) (n arguments, or one vector of n elements). The process of subproblem creation involves iterating over every one of the C(n, n/2) possible assignments for the top row of the board, and going through every column, subtracting one from the appropriate element of the pair for that column, depending on whether the assignment for the top row contained a zero or a one at that position. If any one of the results is negative, then the assignment is invalid and does not contribute to the set of solutions (recursion stops). Otherwise, we have an assignment for the top row of the k × n board and recursively compute the number of solutions to the remaining (k − 1) × n board, adding the numbers of solutions for every admissible assignment of the top row and returning the sum, which is being memoized. The base case is the trivial subproblem, which occurs for a 1 × n board. The number of solutions for this board is either zero or one, depending on whether the vector is a permutation of n/2 (0, 1) and n/2 (1, 0) pairs or not. For example, in the first two boards shown above the sequences of vectors would be

((2, 2) (2, 2) (2, 2) (2, 2))     ((2, 2) (2, 2) (2, 2) (2, 2))     k = 4
  0 1 0 1                          0 0 1 1
((1, 2) (2, 1) (1, 2) (2, 1))     ((1, 2) (1, 2) (2, 1) (2, 1))     k = 3
  1 0 1 0                          0 0 1 1
((1, 1) (1, 1) (1, 1) (1, 1))     ((0, 2) (0, 2) (2, 0) (2, 0))     k = 2
  0 1 0 1                          1 1 0 0
((0, 1) (1, 0) (0, 1) (1, 0))     ((0, 1) (0, 1) (1, 0) (1, 0))     k = 1
  1 0 1 0                          1 1 0 0
((0, 0) (0, 0) (0, 0) (0, 0))     ((0, 0) (0, 0) (0, 0) (0, 0))

The number of solutions for n = 2, 4, 6 and 8 is 2, 90, 297200 and 116,963,796,250, respectively (sequence A058527 in the OEIS). Links to the MAPLE implementation of the dynamic programming approach may be found among the external links.
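A compact Python rendering of the memoized counting scheme just described (the function and variable names are mine; the article's own MAPLE implementation is linked among the external links):

from functools import lru_cache
from itertools import combinations

def count_balanced(n):
    """Count n-by-n 0-1 matrices with exactly n/2 ones in every row and column."""
    start = tuple((n // 2, n // 2) for _ in range(n))  # (zeros, ones) left per column

    @lru_cache(maxsize=None)
    def f(cols, rows_left):
        if rows_left == 0:
            return 1 if all(c == (0, 0) for c in cols) else 0
        total = 0
        for ones in combinations(range(n), n // 2):    # positions of ones in this row
            new_cols, ok = [], True
            for j, (z, o) in enumerate(cols):
                z, o = (z, o - 1) if j in ones else (z - 1, o)
                if z < 0 or o < 0:          # over-filled column: invalid assignment
                    ok = False
                    break
                new_cols.append((z, o))
            if ok:
                total += f(tuple(new_cols), rows_left - 1)
        return total

    return f(start, n)

print(count_balanced(4))   # 90, matching the count quoted in the text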
Checkerboard
Consider a checkerboard with n × n squares and a cost function c(i, j) which returns a cost associated with square (i, j) (i being the row, j being the column). For instance, on a 5 × 5 checkerboard, we might have c(1, 3) = 5. Let us say there was a checker that could start at any square on the first rank (i.e., row) and you wanted to know the shortest path (the sum of the minimum costs at each visited rank) to get to the last rank, assuming the checker could move only diagonally left forward, diagonally right forward, or straight forward. That is, a checker on (1,3) can move to (2,2), (2,3) or (2,4).

This problem exhibits optimal substructure. That is, the solution to the entire problem relies on solutions to subproblems. Let us define a function q(i, j) as

q(i, j) = the minimum cost to reach square (i, j).

Starting at rank n and descending to rank 1, we compute the value of this function for all the squares at each successive rank. Picking the square that holds the minimum value at each rank gives us the shortest path between rank n and rank 1. The function q(i, j) is equal to the minimum cost to get to any of the three squares below it (since those are the only squares that can reach it) plus c(i, j). For instance:

q(A) = min(q(B), q(C), q(D)) + c(A)

Now, let us define q(i, j) in somewhat more general terms:

q(i, j) = infinity                                                        if j < 1 or j > n
q(i, j) = c(i, j)                                                         if i = 1
q(i, j) = min(q(i − 1, j − 1), q(i − 1, j), q(i − 1, j + 1)) + c(i, j)    otherwise

The first line of this equation deals with a board modeled as squares indexed on 1 at the lowest bound and n at the highest bound. The second line specifies what happens at the first rank, providing a base case. The third line, the recursion, is the important part. It represents the A, B, C, D terms in the example. From this definition we can derive straightforward recursive code for q(i, j). In the following pseudocode, n is the size of the board, c(i, j) is the cost function, and min() returns the minimum of a number of values:

function minCost(i, j)
    if j < 1 or j > n
        return infinity
    else if i = 1
        return c(i, j)
    else
        return min( minCost(i-1, j-1), minCost(i-1, j), minCost(i-1, j+1) ) + c(i, j)

This function only computes the path cost, not the actual path. We discuss the actual path below. This, like the Fibonacci-numbers example, is horribly slow because it too exhibits the overlapping sub-problems attribute. That is, it recomputes the same path costs over and over. However, we can compute it much faster in a bottom-up fashion if we store path costs in a two-dimensional array q[i, j] rather than using a function. This avoids recomputation; all the values needed for array q[i, j] are computed ahead of time only once. Precomputed values for (i, j) are simply looked up whenever needed.

We also need to know what the actual shortest path is. To do this, we use another array p[i, j], a predecessor array. This array records the path to any square s. The predecessor of s is modeled as an offset relative to the index (in q[i, j]) of the precomputed path cost of s. To reconstruct the complete path, we look up the predecessor of s, then the predecessor of that square, then the predecessor of that square, and so on recursively, until we reach the starting square. Consider the following pseudocode:

function computeShortestPathArrays()
    for x from 1 to n
        q[1, x] := c(1, x)
    for y from 1 to n
        q[y, 0] := infinity
        q[y, n + 1] := infinity
    for y from 2 to n
        for x from 1 to n
            m := min(q[y-1, x-1], q[y-1, x], q[y-1, x+1])
            q[y, x] := m + c(y, x)
            if m = q[y-1, x-1]
                p[y, x] := -1
            else if m = q[y-1, x]
                p[y, x] := 0
            else
                p[y, x] := 1

Now the rest is a simple matter of finding the minimum and printing it.
function computeShortestPath()
    computeShortestPathArrays()
    minIndex := 1
    min := q[n, 1]
    for i from 2 to n
        if q[n, i] < min
            minIndex := i
            min := q[n, i]
    printPath(n, minIndex)

function printPath(y, x)
    print(x)
    print("<-")
    if y = 2
        print(x + p[y, x])
    else
        printPath(y-1, x + p[y, x])

Sequence alignment
In genetics, sequence alignment is an important application where dynamic programming is essential. Typically, the problem consists of transforming one sequence into another using edit operations that replace, insert, or remove an element. Each operation has an associated cost, and the goal is to find the sequence of edits with the lowest total cost. The problem can be stated naturally as a recursion: a sequence A is optimally edited into a sequence B by either

inserting the first character of B, and performing an optimal alignment of A and the tail of B;
deleting the first character of A, and performing the optimal alignment of the tail of A and B; or
replacing the first character of A with the first character of B, and performing optimal alignments of the tails of A and B.

The partial alignments can be tabulated in a matrix, where cell (i,j) contains the cost of the optimal alignment of A[1..i] to B[1..j]. The cost in cell (i,j) can be calculated by adding the cost of the relevant operations to the cost of its neighboring cells, and selecting the optimum. Different variants exist; see the Smith–Waterman algorithm and the Needleman–Wunsch algorithm.

Tower of Hanoi puzzle
The Tower of Hanoi or Towers of Hanoi is a mathematical game or puzzle. It consists of three rods, and a number of disks of different sizes which can slide onto any rod. The puzzle starts with the disks in a neat stack in ascending order of size on one rod, the smallest at the top, thus making a conical shape. The objective of the puzzle is to move the entire stack to another rod, obeying the following rules:

Only one disk may be moved at a time.
Each move consists of taking the upper disk from one of the rods and sliding it onto another rod, on top of the other disks that may already be present on that rod.
No disk may be placed on top of a smaller disk.

The dynamic programming solution consists of solving the functional equation

S(n,h,t) = S(n-1,h, not(h,t)) ; S(1,h,t) ; S(n-1,not(h,t),t)

where n denotes the number of disks to be moved, h denotes the home rod, t denotes the target rod, not(h,t) denotes the third rod (neither h nor t), ";" denotes concatenation, and S(n, h, t) := solution to a problem consisting of n disks that are to be moved from rod h to rod t. For n = 1 the problem is trivial, namely S(1,h,t) = "move a disk from rod h to rod t" (there is only one disk left). The number of moves required by this solution is 2^n − 1. If the objective is to maximize the number of moves (without cycling) then the dynamic programming functional equation is slightly more complicated and 3^n − 1 moves are required.
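The functional equation for the Tower of Hanoi translates almost verbatim into code, with ";" (concatenation of sub-solutions) becoming list addition. A minimal sketch (the rod labels and the (from, to) move representation are arbitrary choices of mine):

def S(n, h, t):
    """Sequence of moves solving the n-disk problem from home rod h to target rod t."""
    if n == 1:
        return [(h, t)]                           # trivial case: one disk, one move
    other = ({'A', 'B', 'C'} - {h, t}).pop()      # not(h, t): the third rod
    return S(n - 1, h, other) + S(1, h, t) + S(n - 1, other, t)

moves = S(3, 'A', 'C')
print(len(moves))   # 7, i.e. 2**3 - 1
print(moves)        # [('A', 'C'), ('A', 'B'), ('C', 'B'), ('A', 'C'), ...]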
Egg dropping puzzle
The following is a description of the instance of this famous puzzle involving N = 2 eggs and a building with H = 36 floors: Suppose that we wish to know which stories in a 36-story building are safe to drop eggs from, and which will cause the eggs to break on landing (using U.S. English terminology, in which the first floor is at ground level). We make a few assumptions:

An egg that survives a fall can be used again.
A broken egg must be discarded.
The effect of a fall is the same for all eggs.
If an egg breaks when dropped, then it would break if dropped from a higher window.
If an egg survives a fall, then it would survive a shorter fall.
It is not ruled out that the first-floor windows break eggs, nor is it ruled out that eggs can survive the 36th-floor windows.

If only one egg is available and we wish to be sure of obtaining the right result, the experiment can be carried out in only one way. Drop the egg from the first-floor window; if it survives, drop it from the second-floor window. Continue upward until it breaks. In the worst case, this method may require 36 droppings. Suppose 2 eggs are available. What is the lowest number of egg-droppings that is guaranteed to work in all cases? To derive a dynamic programming functional equation for this puzzle, let the state of the dynamic programming model be a pair s = (n,k), where

n = number of test eggs available, n = 0, 1, 2, 3, ..., N − 1.
k = number of (consecutive) floors yet to be tested, k = 0, 1, 2, ..., H − 1.

For instance, s = (2,6) indicates that two test eggs are available and 6 (consecutive) floors are yet to be tested. The initial state of the process is s = (N,H), where N denotes the number of test eggs available at the commencement of the experiment. The process terminates either when there are no more test eggs (n = 0) or when k = 0, whichever occurs first. If termination occurs at state s = (0,k) and k > 0, then the test failed. Now, let

W(n,k) = minimum number of trials required to identify the value of the critical floor under the worst-case scenario, given that the process is in state s = (n,k).

Then it can be shown that

W(n,k) = 1 + min{max(W(n − 1, x − 1), W(n,k − x)): x = 1, 2, ..., k}

with W(n,0) = 0 for all n > 0 and W(1,k) = k for all k. It is easy to solve this equation iteratively by systematically increasing the values of n and k.

Faster DP solution using a different parametrization
Notice that the above solution takes O(n·k^2) time with a DP solution. This can be improved to O(n·k·log k) time by binary searching on the optimal x in the above recurrence, since W(n − 1, x − 1) is increasing in x while W(n, k − x) is decreasing in x, thus a local minimum of max(W(n − 1, x − 1), W(n, k − x)) is a global minimum. Also, by storing the optimal x for each cell in the DP table and referring to its value for the previous cell, the optimal x for each cell can be found in constant time, improving it to O(n·k) time. However, there is an even faster solution that involves a different parametrization of the problem: Let k be the total number of floors such that the eggs break when dropped from the k-th floor (the example above is equivalent to taking k = 36). Let m be the minimum floor from which the egg must be dropped to be broken. Let f(t, n) be the maximum number of values of m that are distinguishable using t tries and n eggs. Then f(t, 0) = f(0, n) = 1 for all t and n. Let a be the floor from which the first egg is dropped in the optimal strategy. If the first egg broke, m is from 1 to a and distinguishable using at most t − 1 tries and n − 1 eggs. If the first egg did not break, m is from a + 1 to k and distinguishable using t − 1 tries and n eggs. Therefore, f(t, n) = f(t − 1, n − 1) + f(t − 1, n). Then the problem is equivalent to finding the minimum t such that f(t, n) ≥ k + 1. To do so, we could compute the values f(t, n) in order of increasing t, which would take O(n·t) time. Thus, if we separately handle the case of n = 1, the algorithm would take O(n·√k) time. But the recurrence relation can in fact be solved, giving f(t, n) = Σ over i = 0, ..., n of C(t, i), which can be computed in O(n) time using the identity C(t, i + 1) = C(t, i)·(t − i)/(i + 1) for all i ≥ 0. Since f(t, n) is increasing in t for all n, we can binary search on t to find the minimum such t, giving an O(n·log k) algorithm.
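The W(n, k) recurrence is easy to tabulate directly. A small sketch that reproduces the classic answer for 2 eggs and 36 floors (memoized recursion rather than the iterative table described above, purely for brevity):

from functools import lru_cache

@lru_cache(maxsize=None)
def W(n, k):
    """Worst-case number of trials with n eggs and k consecutive floors to test."""
    if k == 0:
        return 0
    if n == 1:
        return k            # one egg: must scan floor by floor
    # Drop from floor x: egg breaks -> state (n-1, x-1); survives -> state (n, k-x).
    return 1 + min(max(W(n - 1, x - 1), W(n, k - x)) for x in range(1, k + 1))

print(W(2, 36))   # 8: eight drops suffice in the worst case for 2 eggs, 36 floors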
Matrix chain multiplication
Matrix chain multiplication is a well-known example that demonstrates the utility of dynamic programming. For example, engineering applications often have to multiply a chain of matrices. It is not surprising to find matrices of large dimensions, for example 100×100. Therefore, our task is to multiply the matrices A1 × A2 × ... × An. Matrix multiplication is not commutative, but is associative, and we can multiply only two matrices at a time. So, we can multiply this chain of matrices in many different ways, for example ((A1 × A2) × A3) × ... × An, or A1 × ((A2 × A3) × ... × An), and so on. There are numerous ways to multiply this chain of matrices. They will all produce the same final result; however, they will take more or less time to compute, based on which particular matrices are multiplied. If matrix A has dimensions m×n and matrix B has dimensions n×q, then matrix C = A×B will have dimensions m×q, and will require m·n·q scalar multiplications (using a simplistic matrix multiplication algorithm for purposes of illustration). For example, let us multiply matrices A, B and C. Let us assume that their dimensions are m×n, n×p, and p×s, respectively. The matrix A×B×C will be of size m×s and can be calculated in the two ways shown below:

A×(B×C): this order of matrix multiplication will require nps + mns scalar multiplications.
(A×B)×C: this order of matrix multiplication will require mnp + mps scalar multiplications.

Let us assume that m = 10, n = 100, p = 10 and s = 1000. So, the first way to multiply the chain will require 1,000,000 + 1,000,000 calculations. The second way will require only 10,000 + 100,000 calculations. Obviously, the second way is faster, and we should multiply the matrices using that arrangement of parentheses. Therefore, our conclusion is that the order of parentheses matters, and that our task is to find the optimal order of parentheses. At this point, we have several choices, one of which is to design a dynamic programming algorithm that will split the problem into overlapping problems and calculate the optimal arrangement of parentheses. The dynamic programming solution is presented below. Let's call m[i,j] the minimum number of scalar multiplications needed to multiply a chain of matrices from matrix i to matrix j (i.e. Ai × ... × Aj, where i ≤ j). We split the chain at some matrix k, such that i ≤ k < j, and try to find out which combination produces the minimum m[i,j]. The formula is:

if i = j, m[i,j] = 0
if i < j, m[i,j] = min over all k with i ≤ k < j of (m[i,k] + m[k+1,j] + p(i−1)·p(k)·p(j))

where p(i−1) is the row dimension of matrix i, p(k) is the column dimension of matrix k, and p(j) is the column dimension of matrix j. This formula can be coded as shown below, where the input parameter "chain" is the chain of matrices, i.e. A1, A2, ..., An:

function OptimalMatrixChainParenthesis(chain)
    n = length(chain)
    for i = 1, n
        m[i,i] = 0    // Since it takes no calculations to multiply one matrix
    for len = 2, n
        for i = 1, n - len + 1
            j = i + len - 1
            m[i,j] = infinity    // So that the first calculation updates
            for k = i, j-1
                q = m[i, k] + m[k+1, j] + p(i-1)·p(k)·p(j)
                if q < m[i, j]    // The new order of parentheses is better than what we had
                    m[i, j] = q    // Update
                    s[i, j] = k    // Record which k to split on, i.e. where to place the parenthesis

So far, we have calculated values for all possible m[i, j], the minimum number of calculations to multiply a chain from matrix i to matrix j, and we have recorded the corresponding "split point" s[i, j]. For example, if we are multiplying the chain A1 × A2 × A3 × A4, and it turns out that m[1, 3] = 100 and s[1, 3] = 2, that means that the optimal placement of parentheses for matrices 1 to 3 is (A1 × A2) × A3, and multiplying those matrices will require 100 scalar calculations. This algorithm will produce "tables" m[ , ] and s[ , ] that will have entries for all possible values of i and j. The final solution for the entire chain is m[1, n], with the corresponding split at s[1, n].
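A runnable rendering of the tabulation just described (here dims holds p(0)..p(n), so matrix Ai has dimensions dims[i-1] × dims[i]; the names are mine):

import math

def matrix_chain_order(dims):
    """dims[i-1] x dims[i] are the dimensions of matrix A_i, for i = 1..n."""
    n = len(dims) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]      # m[i][j]: min scalar mults
    s = [[0] * (n + 1) for _ in range(n + 1)]      # s[i][j]: optimal split k
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = math.inf
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

# The A x B x C example from the text: 10x100, 100x10 and 10x1000 matrices.
m, s = matrix_chain_order([10, 100, 10, 1000])
print(m[1][3], s[1][3])   # 110000 2 -> split after matrix 2, i.e. (A x B) x C,
                          # matching the 10,000 + 100,000 count derived above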
Unraveling the solution will be recursive, starting from the top and continuing until we reach the base case, i.e. multiplication of single matrices. Therefore, the next step is to actually split the chain, i.e. to place the parentheses where they (optimally) belong. For this purpose we could use the following algorithm:

function PrintOptimalParenthesis(s, i, j)
    if i = j
        print "A"i
    else
        print "("
        PrintOptimalParenthesis(s, i, s[i, j])
        PrintOptimalParenthesis(s, s[i, j] + 1, j)
        print ")"

Of course, this algorithm is not useful for actual multiplication. This algorithm is just a user-friendly way to see what the result looks like. To actually multiply the matrices using the proper splits, we need the following algorithm:

function MatrixChainMultiply(chain from 1 to n)       // returns the final matrix, i.e. A1×A2×...×An
    OptimalMatrixChainParenthesis(chain from 1 to n)  // this will produce s[ . ] and m[ . ] "tables"
    OptimalMatrixMultiplication(s, chain from 1 to n) // actually multiply

function OptimalMatrixMultiplication(s, i, j)   // returns the result of multiplying a chain of matrices from Ai to Aj in optimal way
    if i < j
        // keep on splitting the chain and multiplying the matrices in left and right sides
        LeftSide = OptimalMatrixMultiplication(s, i, s[i, j])
        RightSide = OptimalMatrixMultiplication(s, s[i, j] + 1, j)
        return MatrixMultiply(LeftSide, RightSide)
    else if i = j
        return Ai   // matrix at position i
    else
        print "error, i <= j must hold"

function MatrixMultiply(A, B)    // function that multiplies two matrices
    if columns(A) = rows(B)
        for i = 1, rows(A)
            for j = 1, columns(B)
                C[i, j] = 0
                for k = 1, columns(A)
                    C[i, j] = C[i, j] + A[i, k]*B[k, j]
        return C
    else
        print "error, incompatible dimensions."

History of the name
The term dynamic programming was originally used in the 1940s by Richard Bellman to describe the process of solving problems where one needs to find the best decisions one after another. By 1953, he had refined this to the modern meaning, referring specifically to nesting smaller decision problems inside larger decisions, and the field was thereafter recognized by the IEEE as a systems analysis and engineering topic. Bellman's contribution is remembered in the name of the Bellman equation, a central result of dynamic programming which restates an optimization problem in recursive form. Bellman explains the reasoning behind the term dynamic programming in his autobiography, Eye of the Hurricane: An Autobiography. The word dynamic was chosen by Bellman to capture the time-varying aspect of the problems, and because it sounded impressive; by his own account, it also had the advantage of being a word to which Charles Erwin Wilson, then Secretary of Defense, who according to Bellman harbored a strong dislike of the word "research", could not object. The word programming referred to the use of the method to find an optimal program, in the sense of a military schedule for training or logistics. This usage is the same as that in the phrases linear programming and mathematical programming, a synonym for mathematical optimization. The above explanation of the origin of the term may be inaccurate: according to Russell and Norvig, the above story "cannot be strictly true, because his first paper using the term (Bellman, 1952) appeared before Wilson became Secretary of Defense in 1953." Also, Harold J. Kushner stated in a speech that, "On the other hand, when I asked [Bellman] the same question, he replied that he was trying to upstage Dantzig's linear programming by adding dynamic. Perhaps both motivations were true."

See also

References

Further reading
External links
A Tutorial on Dynamic programming
MIT course on algorithms – includes 4 video lectures on DP, lectures 15–18
Applied Mathematical Programming by Bradley, Hax, and Magnanti, Chapter 11
More DP Notes
King, Ian, 2002 (1987), "A Simple Introduction to Dynamic Programming in Macroeconomic Models." An introduction to dynamic programming as an important tool in economic theory.
Dynamic Programming: from novice to advanced. A TopCoder.com article by Dumitru on Dynamic Programming
Algebraic Dynamic Programming – a formalized framework for dynamic programming, including an entry-level course to DP, University of Bielefeld
Dreyfus, Stuart, "Richard Bellman on the birth of Dynamic Programming."
Dynamic programming tutorial
A Gentle Introduction to Dynamic Programming and the Viterbi Algorithm
Tabled Prolog BProlog, XSB, SWI-Prolog
IFORS online interactive dynamic programming modules including shortest path, traveling salesman, knapsack, false coin, egg dropping, bridge and torch, replacement, chained matrix products, and critical path problem.

Optimization algorithms and methods
Equations
Systems engineering
Optimal control
Dynamic programming
[ "Mathematics", "Engineering" ]
8,664
[ "Systems engineering", "Mathematical objects", "Equations" ]
23,690,843
https://en.wikipedia.org/wiki/Absolute%20rotation
In physics, the concept of absolute rotation—rotation independent of any external reference—is a topic of debate about relativity, cosmology, and the nature of physical laws. For the concept of absolute rotation to be scientifically meaningful, it must be measurable. In other words, can an observer distinguish between the rotation of an observed object and their own rotation? Newton suggested two experiments to resolve this problem. One is the effects of centrifugal force upon the shape of the surface of water rotating in a bucket, equivalent to the phenomenon of rotational gravity used in proposals for human spaceflight. The second is the effect of centrifugal force upon the tension in a string joining two spheres rotating about their center of mass.

Classical mechanics

Newton's bucket argument
Newton suggested the shape of the surface of the water indicates the presence or absence of absolute rotation relative to absolute space: rotating water has a curved surface, while still water has a flat surface. Because rotating water has a concave surface, if the surface you see is concave, and the water does not seem to you to be rotating, then you are rotating with the water. Centrifugal force is needed to explain the concavity of the water in a co-rotating frame of reference (one that rotates with the water) because the water appears stationary in this frame, and so should have a flat surface. Thus, observers looking at the stationary water need the centrifugal force to explain why the water surface is concave and not flat. The centrifugal force pushes the water toward the sides of the bucket, where it piles up deeper and deeper. The pile-up is arrested when any further climb costs as much work against gravity as the energy gained from the centrifugal force, which is greater at larger radius. If you need a centrifugal force to explain what you see, then you are rotating. Newton's conclusion was that rotation is absolute. Other thinkers suggest that pure logic implies only relative rotation makes sense. For example, Bishop Berkeley and Ernst Mach (among others) suggested that it is relative rotation with respect to the fixed stars that matters, and that rotation of the fixed stars relative to an object has the same effect as rotation of the object with respect to the fixed stars. Newton's arguments do not settle this issue; his arguments may be viewed, however, as establishing centrifugal force as a basis for an operational definition of what we actually mean by absolute rotation.

Rotating spheres
Newton also proposed another experiment to measure one's rate of rotation: using the tension in a cord joining two spheres rotating about their center of mass. Non-zero tension in the string indicates rotation of the spheres, whether or not the observer thinks they are rotating. This experiment is simpler than the bucket experiment in principle, because it need not involve gravity. Beyond a simple "yes or no" answer to rotation, one may actually calculate one's rotation. To do that, one takes one's measured rate of rotation of the spheres and computes the tension appropriate to this observed rate. This calculated tension is then compared to the measured tension. If the two agree, one is in a stationary (non-rotating) frame. If the two do not agree, to obtain agreement, one must include a centrifugal force in the tension calculation; for example, if the spheres appear to be stationary, but the tension is non-zero, the entire tension is due to centrifugal force. From the necessary centrifugal force, one can determine one's speed of rotation; for example, if the calculated tension is greater than measured, one is rotating in the sense opposite to the spheres, and the larger the discrepancy the faster this rotation. The tension in the cord is the required centripetal force to sustain the rotation. What is experienced by the physically rotating observer is the centripetal force and the physical effect arising from his own inertia. The effect arising from inertia is referred to as reactive centrifugal force. Whether or not the effects from inertia are attributed to a fictitious centrifugal force is a matter of choice.
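The comparison between calculated and measured tension is a one-line formula: each sphere of mass m on a circle of radius r needs centripetal force m·ω²·r, supplied entirely by the cord's tension. A minimal sketch of the inference described above (the masses, radius and measured tension are made-up illustrative numbers):

import math

def tension(m, omega, r):
    """Cord tension (N) holding a sphere of mass m (kg) on a circle of radius r (m)."""
    return m * omega**2 * r     # centripetal force requirement

# The observer sees the spheres as stationary (apparent omega = 0), which would
# predict zero tension, yet a tension is measured:
m, r = 1.0, 0.5                 # assumed: 1 kg spheres, 0.5 m from the center
measured = 2.0                  # assumed measured tension, in newtons

# The discrepancy reveals the observer's own rotation rate about the spheres' axis:
omega_true = math.sqrt(measured / (m * r))
print(omega_true)               # 2.0 rad/s for these illustrative numbers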
Special relativity
In 1913, French physicist Georges Sagnac conducted an experiment similar to the Michelson–Morley experiment, intended to observe the effects of rotation. Sagnac set up this experiment to prove the existence of the luminiferous aether, which Einstein's 1905 theory of special relativity had discarded. The Sagnac experiment and later similar experiments showed that a stationary object on the surface of the Earth will rotate once every rotation of the Earth when using the stars as a stationary reference point. Rotation was thus concluded to be absolute rather than relative.

General relativity
Mach's principle is the name given by Einstein to a hypothesis often credited to the physicist and philosopher Ernst Mach. The idea is that the local motion of a rotating reference frame is determined by the large-scale distribution of matter in the universe. Mach's principle says that there is a physical law that relates the motion of the distant stars to the local inertial frame. If you see all the stars whirling around you, Mach suggests that there is some physical law which would make it so you would feel a centrifugal force. The principle is often stated in vague ways, like "mass out there influences inertia here". The example considered by Einstein was the rotating elastic sphere. Like a rotating planet bulging at the equator, a rotating sphere deforms into an oblate (squashed) spheroid to a degree that depends on its rate of rotation. In classical mechanics, an explanation of this deformation requires external causes in a frame of reference in which the spheroid is not rotating, and these external causes may be taken as "absolute rotation" in classical physics and special relativity. In general relativity, no external causes are invoked. The rotation is relative to the local geodesics, and since the local geodesics eventually channel information from the distant stars, there appears to be absolute rotation relative to these stars.

See also
Absolute time and space
Mach's principle
Foucault pendulum

References

Force
Rotation
Theory of relativity
Absolute rotation
[ "Physics", "Mathematics" ]
1,256
[ "Physical phenomena", "Force", "Physical quantities", "Quantity", "Mass", "Classical mechanics", "Rotation", "Motion (physics)", "Theory of relativity", "Wikipedia categories named after physical quantities", "Matter" ]
23,692,678
https://en.wikipedia.org/wiki/Quantum%20dot%20display
A quantum dot display is a display device that uses quantum dots (QD), semiconductor nanocrystals which can produce pure monochromatic red, green, and blue light. Photo-emissive quantum dot particles are used in LCD backlights or display color filters. Quantum dots are excited by the blue light from the display panel to emit pure basic colors, which reduces light losses and color crosstalk in color filters, improving display brightness and color gamut. Light travels through the QD layer film and traditional RGB filters made from color pigments, or through QD filters with red/green QD color converters and blue passthrough. Although the QD color filter technology is primarily used in LED-backlit LCDs, it is applicable to other display technologies which use color filters, such as blue/UV active-matrix organic light-emitting diode (AMOLED) or QNED/MicroLED display panels. LED-backlit LCDs are the main application of photo-emissive quantum dots, though blue organic light-emitting diode (OLED) panels with QD color filters are now coming to market. Electro-emissive or electroluminescent quantum dot displays are an experimental type of display based on quantum-dot light-emitting diodes (QD-LED; also EL-QLED, ELQD, QDEL). These displays are similar to AMOLED and MicroLED displays, in that light would be produced directly in each pixel by applying electric current to inorganic nano-particles. Manufacturers asserted that QD-LED displays could support large, flexible displays and would not degrade as readily as OLEDs, making them good candidates for flat-panel TV screens, digital cameras, mobile phones and handheld game consoles. To date, all commercial products, such as LCD TVs branded as QLED, employ quantum dots as photo-emissive particles; electro-emissive QD-LED TVs exist in laboratories only. Quantum dot displays are capable of displaying wider color gamuts, with some devices approaching full coverage of the BT.2020 color gamut. QD-OLED and QD-LED displays can achieve the same contrast as OLED/MicroLED displays, with "perfect" black levels in the off state, unlike LED-backlit LCDs.

Working principle
The idea of using quantum dots as a light source emerged in the 1990s. Early applications included imaging using QD infrared photodetectors, light-emitting diodes and single-color light-emitting devices. Starting in the early 2000s, scientists began to realize the potential of developing quantum dots for light sources and displays. QDs are either photo-emissive (photoluminescent) or electro-emissive (electroluminescent), allowing them to be readily incorporated into new emissive display architectures. Quantum dots naturally produce monochromatic light, so they are more efficient than white light sources when color filtered, and allow more saturated colors that reach nearly 100% of the Rec. 2020 color gamut.

Quantum dot enhancement layer
A widespread practical application is the use of a quantum dot enhancement film (QDEF) layer to improve the LED backlighting in LCD TVs. Light from a blue LED backlight is converted by QDs to relatively pure red and green, so that this combination of blue, green and red light incurs less blue-green crosstalk and light absorption in the color filters after the LCD screen, thereby increasing useful light throughput and providing a better color gamut. The first manufacturer shipping TVs of this kind was Sony in 2013, as Triluminos, Sony's trademark for the technology.
At the Consumer Electronics Show 2015, Samsung Electronics, LG Electronics, TCL Corporation and Sony showed QD-enhanced LED-backlighting of LCD TVs. At CES 2017, Samsung rebranded its 'SUHD' TVs as 'QLED'; later, in April 2017, Samsung formed the QLED Alliance with Hisense and TCL to produce and market QD-enhanced TVs. Quantum dot on glass (QDOG) replaces the QD film with a thin QD layer coated on top of the light-guide plate (LGP), reducing costs and improving efficiency. White LED backlights that use blue LEDs with on-chip or on-rail red-green QD structures are also being researched, though high operating temperatures negatively affect their lifespan.

Quantum dot color converter
QD color converter (QDCC) LED-backlit LCDs would use a QD film or an ink-printed QD layer with red/green sub-pixel-patterned (i.e. aligned to precisely match the red and green subpixels) quantum dots to produce pure red/green light; blue subpixels can be transparent to pass through the pure blue LED backlight, or can be made with blue-patterned quantum dots in the case of a UV-LED backlight. This configuration effectively replaces passive color filters, which incur substantial losses by filtering out 2/3 of the passing light, with photo-emissive QD structures, improving power efficiency and/or peak brightness, and enhancing color purity. Because quantum dots depolarize the light, the output polarizer (the analyzer) needs to be moved behind the color converter and embedded in-cell in the LCD glass; this would improve viewing angles as well. In-cell arrangement of the analyzer and/or the polarizer would also reduce depolarization effects in the LC layer, increasing the contrast ratio. To reduce self-excitation of the QD film and to improve efficiency, the ambient light can be blocked using traditional color filters, and reflective polarizers can direct light from the QDCC towards the viewer. As only blue or UV light passes through the liquid crystal layer, it can be made thinner, resulting in faster pixel response times. Nanosys made presentations of its photo-emissive color converter technology during 2017; commercial products were expected by 2019, though the in-cell polarizer remained a major challenge. As of December 2019, issues with the in-cell polarizer remained unresolved and no LCDs with QD color converters had appeared on the market. QD color converters can be used with OLED or micro-LED panels, improving their efficiency and color gamut. QD-OLED panels with blue emitters and red-green color converters are being researched by Samsung and TCL; as of May 2019, Samsung intended to start production in 2021. In October 2019, Samsung Display announced an investment of $10.8 billion in both research and production, with the aim of converting all its 8G panel factories to QD-OLED production during 2019–2025. Samsung Display presented 55" and 65" QD-OLED panels at CES 2022, with TVs from Samsung Electronics and Sony released later in 2022. QD-OLED displays show better color volume, covering 90% of the Rec. 2020 color gamut with a peak brightness of 1500 nits, while current OLED and LCD TVs cover 70–75% of Rec. 2020 (95–100% of DCI-P3).

QNED
A further development of QD-OLED displays is the quantum dot nanorod emitting diode (QNED) display, which replaces the blue OLED layer with InGaN/GaN blue nanorod LEDs. Nanorods have a larger emitting surface compared to planar LEDs, allowing increased efficiency and higher light emission.
The nanorod solution is ink-printed on the substrate, then the subpixels are aligned in place by electric current, and QD color converters are placed on top of the red/green subpixels. Samsung Display was expected to begin test production of QNED panels in 2021, with mass production in 2024–2025, but test production had been postponed as of May 2022. Starting in 2021, LG Electronics introduced a series of TVs branded as "QNED Mini LED". These TVs are based on LCD displays with mini-LED backlighting and do not use self-emissive technologies; LG explains that the acronym "QNED" in their case stands for "Quantum Nano-Emitting Diode". The following year LG launched "QNED" TVs that do not use mini-LED technology but still rely on LCD technology.

Self-emissive quantum dot diodes
Self-emissive quantum dot displays will use electroluminescent QD nanoparticles functioning as quantum-dot-based LEDs (QD-LED), arranged in either an active-matrix or a passive-matrix array. Rather than requiring a separate LED backlight for illumination and a TFT LCD to control the brightness of the color primaries, these QDEL displays would natively control the light emitted by individual color subpixels, greatly reducing pixel response times by eliminating the liquid crystal layer. This technology has also been called true QLED display, and electroluminescent quantum dots (ELQD, QDLE, QDEL, EL-QLED). The structure of a QD-LED is similar to the basic design of an OLED. The major difference is that the light-emitting devices are quantum dots, such as cadmium selenide (CdSe) nanocrystals. A layer of quantum dots is sandwiched between layers of electron-transporting and hole-transporting organic materials. An applied electric field causes electrons and holes to move into the quantum dot layer, where they are captured in the quantum dot and recombine, emitting photons. The demonstrated color gamut from QD-LEDs exceeds the performance of both LCD and OLED display technologies. To realize an all-QD LED, the challenge that must be overcome is the currently poor electrical conduction in the emitting QD layers. As cadmium-based materials cannot be used in lighting applications due to their environmental impact, InP (indium phosphide) ink-jet solutions are being researched by Nanosys, Nanoco, Nanophotonica, OSRAM OLED, Fraunhofer IAP, Merck, and Seoul National University, among others. As of 2019, InP-based materials were still not ready for commercial production due to limited lifetime. Mass production of active-matrix QLED displays using ink-jet printing was expected to begin in 2020–2021, but as of 2024, longevity issues are not resolved and the technology remains at the prototyping stage. Nanosys expects its QD electroluminescent technology to be available for production by 2026. At CES 2024, Sharp NEC Display privately demonstrated prototypes of 12" and 30" display panels.

Optical properties of quantum dots
The performance of QDs is determined by the size and/or composition of the QD structures. Unlike simple atomic structures, a quantum dot structure has the unusual property that its energy levels are strongly dependent on the structure's size. For example, CdSe quantum dot light emission can be tuned from red (5 nm diameter) to the violet region (1.5 nm dot). The physical reason for QD coloration is the quantum confinement effect, and it is directly related to their energy levels. The bandgap energy that determines the energy (and hence color) of the fluorescent light is inversely proportional to the square of the size of the quantum dot. Larger QDs have more energy levels that are more closely spaced, allowing the QD to emit (or absorb) photons of lower energy (redder color). In other words, the emitted photon energy increases as the dot size decreases, because greater energy is required to confine the semiconductor excitation to a smaller volume.
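As a rough illustration of this inverse-square relationship, one can interpolate between the two CdSe data points mentioned above with a law of the form E(d) = E0 + A/d². The fitted constants below follow only from those two illustrative endpoints (taking red ≈ 620 nm at 5 nm and violet ≈ 420 nm at 1.5 nm, both assumed round numbers), not from measured CdSe material parameters:

# Toy model: emission energy E(d) = E0 + A/d**2, fitted to the article's
# two CdSe endpoints (assumed: 5 nm -> ~620 nm red, 1.5 nm -> ~420 nm violet).
HC = 1239.84        # eV*nm; photon energy-wavelength conversion E = HC / wavelength

e_red, e_violet = HC / 620, HC / 420       # ~2.0 eV and ~2.95 eV
d1, d2 = 5.0, 1.5                          # dot diameters in nm
A = (e_violet - e_red) / (1 / d2**2 - 1 / d1**2)
E0 = e_red - A / d1**2

def emission_wavelength(d_nm):
    """Estimated emission wavelength (nm) for a dot of diameter d_nm, per the fit."""
    return HC / (E0 + A / d_nm**2)

for d in (1.5, 2.5, 3.5, 5.0):
    print(d, round(emission_wavelength(d)))   # sweeps from violet through green to red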
Newer quantum dot structures employ indium instead of cadmium, as the latter is not exempted for use in lighting by the European Commission RoHS directive, and also because of cadmium's toxicity. QD-LEDs are characterized by pure and saturated emission colors with narrow bandwidth, with an FWHM (full width at half maximum) in the range of 20–40 nm. Their emission wavelength is easily tuned by changing the size of the quantum dots. Moreover, QD-LEDs offer high color purity and durability combined with the efficiency, flexibility, and low processing cost of comparable organic light-emitting devices. QD-LED emission can be tuned over the entire visible wavelength range from 460 nm (blue) to 650 nm (red) (the human eye can detect light from 380 to 750 nm). The emission wavelengths have been continuously extended to the UV and NIR ranges by tailoring the chemical composition of the QDs and the device structure.

Fabrication process
Quantum dots are solution-processable and suitable for wet processing techniques. The two major fabrication techniques for QD-LEDs are called phase separation and contact printing.

Phase separation
Phase separation is suitable for forming large-area ordered QD monolayers. A single QD layer is formed by spin-casting a mixed solution of QDs and an organic semiconductor such as TPD (N,N′-Bis(3-methylphenyl)-N,N′-diphenylbenzidine). This process simultaneously yields QD monolayers self-assembled into hexagonally close-packed arrays and places this monolayer on top of a co-deposited contact. During solvent drying, the QDs phase-separate from the organic under-layer material (TPD) and rise towards the film's surface. The resulting QD structure is affected by many parameters: solution concentration, solvent ratio, QD size distribution and QD aspect ratio. Also important is the purity of the QD solution and of the organic solvent. Although phase separation is relatively simple, it is not suitable for display device applications. Since spin-casting does not allow lateral patterning of different-sized QDs (RGB), phase separation cannot create a multi-color QD-LED. Moreover, it is not ideal to have an organic under-layer material for a QD-LED; an organic under-layer must be homogeneous, a constraint which limits the number of applicable device designs.

Contact printing
The contact printing process for forming QD thin films is a solvent-free, water-based suspension method, which is simple and cost-efficient with high throughput. During the process, the device structure is not exposed to solvents. Since the charge transport layers in QD-LED structures are solvent-sensitive organic thin films, avoiding solvents during the process is a major benefit. This method can produce RGB-patterned electroluminescent structures with 1000 ppi (pixels-per-inch) resolution. The overall process of contact printing:

Polydimethylsiloxane (PDMS) is molded using a silicon master.
The top side of the resulting PDMS stamp is coated with a thin film of parylene-C, a chemical-vapor-deposited (CVD) aromatic organic polymer.
The parylene-C-coated stamp is inked via spin-casting of a solution of colloidal QDs suspended in an organic solvent.
After the solvent evaporates, the formed QD monolayer is transferred to the substrate by contact printing.
The array of quantum dots is manufactured by self-assembly in a process known as spin casting: a solution of quantum dots in an organic material is poured onto a substrate, which is then set spinning to spread the solution evenly.

Contact printing allows the fabrication of multi-color QD-LEDs. A QD-LED was fabricated with an emissive layer consisting of 25-μm-wide stripes of red, green and blue QD monolayers. Contact printing methods also minimize the amount of QDs required, reducing costs.

Comparison
Nanocrystal displays would render as much as a 30% increase in the visible spectrum, while using 30 to 50% less power than LCDs, in large part because nanocrystal displays would not need backlighting. QD-LEDs are 50–100 times brighter than CRT and LC displays, emitting 40,000 nits (cd/m2). QDs are dispersible in both aqueous and non-aqueous solvents, which provides for printable and flexible displays of all sizes, including large-area TVs. QDs can be inorganic, offering the potential for improved lifetimes compared to OLEDs (however, since many parts of a QD-LED are often made of organic materials, further development is required to improve the functional lifetime). In addition to OLED displays, pick-and-place microLED displays are emerging as competing technologies to nanocrystal displays. Samsung has developed a method for making self-emissive quantum dot diodes with a lifetime of 1 million hours.

Other advantages include better-saturated green colors, manufacturability on polymers, thinner displays, and the use of the same material to generate different colors. One disadvantage is that blue quantum dots require highly precise timing control during the reaction, because blue quantum dots are only slightly above the minimum size.

Since sunlight contains roughly equal luminosities of red, green and blue across the visible spectrum, a display also needs to produce roughly equal luminosities of red, green and blue to achieve the pure white defined by CIE Standard Illuminant D65. However, the blue component in the display can have relatively lower color purity and/or precision (dynamic range) in comparison to green and red, because the human eye is three to five times less sensitive to blue in daylight conditions according to the CIE luminosity function. In contrast to traditional LCD panels and quantum dot LCD panels, QD-OLEDs suffer from the same screen burn-in effect as normal OLED panels.

See also
Bandwidth (signal processing)
Electron hole
Energy level
Nanotechnology
Organic light-emitting diode
Potential well
Quantum dot
Spectral linewidth

Notes

References

External links
Quantum Dots: Technical Status and Market Prospects
Quantum dots that produce white light could be the light bulb’s successor

Quantum electronics
Display technology
Quantum dots
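The claim about the eye's relative insensitivity to blue can be made concrete with the standard Rec. 709 luminance coefficients, which are derived from the CIE luminosity function. A minimal sketch:

```python
# Relative luminance contributions of the sRGB / Rec. 709 primaries,
# derived from the CIE photopic luminosity function. These coefficients
# are the standard Rec. 709 values.
COEFFS = {"red": 0.2126, "green": 0.7152, "blue": 0.0722}

def luminance(r, g, b):
    """Luminance of a linear-light RGB color."""
    return COEFFS["red"] * r + COEFFS["green"] * g + COEFFS["blue"] * b

print(luminance(1.0, 1.0, 1.0))          # full white -> 1.0 by construction
print(COEFFS["red"] / COEFFS["blue"])    # red carries ~3x blue's weight
print(COEFFS["green"] / COEFFS["blue"])  # green carries ~10x blue's weight
```

The blue primary contributes only about 7% of white's luminance, which is why a display can tolerate lower precision in its blue channel than in green or red.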
Quantum dot display
[ "Physics", "Materials_science", "Engineering" ]
3,682
[ "Quantum electronics", "Quantum mechanics", "Electronic engineering", "Condensed matter physics", "Display technology", "Nanotechnology" ]
19,696,006
https://en.wikipedia.org/wiki/Copper%20cladding
There are four main techniques used today in the UK and mainland Europe for copper cladding a building:
seamed cladding (typically 0.7 mm thick copper sheet on the facade): max 600 mm by 4000 mm 'seam centres'.
shingle cladding (typically made from 0.7 mm thick copper sheet): max 600 mm by 4000 mm 'seam centres'.
slot-in panels (typically made from 1.0 mm thick copper sheet): max 350 mm wide for 1.0 mm, by nominal 4 m length.
cassettes (typically made from 1.0 mm up to 1.5 mm thick copper sheet): the largest-format cladding elements, requiring more subframing: can be 900 mm by nominal 4000 mm length.

When selecting the size of a cladding element, wind loadings should be taken into account, as should the standard sizes in which the sheet (or coil) pre-material is available, in order to minimise material wastage through off-cuts and thereby reduce costs (a simple wastage estimate is sketched below). The choice of which system to use depends on the aesthetic effect required, and the building geometry can also influence the choice.

Copper cladding is very durable, lightweight compared to other materials and techniques, and is 100% recyclable at the end of the building's life. Depending on metal prices, copper may be a very cost-effective cladding and roofing material. With good building design, materials choice and craftsmanship, copper roofing or facade cladding may be cheaper than slates or concrete tiles, especially when one takes into account the lasting colour, durability, low maintenance and light weight of the cladding.

Because the UK code of practice for "hard metal" cladding (as opposed to lead cladding) is quite old – CP143: part 12 (1970) – the major manufacturers have to provide detailed technical advice and information for architects, designers and builders, and to cultivate skilled installers with years of experience to draw on. Typically, an installer of hard metal roofing and cladding needs around 8–10 years of on-the-job experience.

See also
Copper in architecture: Wall cladding

References

Copper
Building materials
Roofing materials
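As a toy illustration of the off-cut argument above, the following sketch estimates the wastage when only whole panels are cut from a standard coil. The coil and panel dimensions are hypothetical example values chosen for the illustration, not figures from this article.

```python
# Hypothetical off-cut wastage estimate for cladding panels cut from a
# standard coil. All dimensions are made-up example values.
def wastage_fraction(coil_width_mm, coil_length_mm, panel_w_mm, panel_l_mm):
    """Fraction of the coil area lost when cutting whole panels only."""
    across = coil_width_mm // panel_w_mm      # panels fitting across the coil
    along = coil_length_mm // panel_l_mm      # panels fitting along the coil
    used = across * along * panel_w_mm * panel_l_mm
    total = coil_width_mm * coil_length_mm
    return 1 - used / total

# 600 mm wide panels from a hypothetical 1000 mm coil waste a 400 mm strip:
print(wastage_fraction(1000, 12000, 600, 4000))   # 0.4 -> 40% waste
# 350 mm wide panels nest two across the same coil, wasting less:
print(wastage_fraction(1000, 12000, 350, 4000))   # 0.3 -> 30% waste
```

The point of the exercise is simply that matching panel widths to the available coil width can change the wastage fraction substantially, which is why the standard pre-material sizes matter at the design stage.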
Copper cladding
[ "Physics", "Engineering" ]
463
[ "Building engineering", "Construction", "Materials", "Building materials", "Matter", "Architecture" ]
19,696,519
https://en.wikipedia.org/wiki/Polyhedral%20combinatorics
Polyhedral combinatorics is a branch of mathematics, within combinatorics and discrete geometry, that studies the problems of counting and describing the faces of convex polyhedra and higher-dimensional convex polytopes.

Research in polyhedral combinatorics falls into two distinct areas. Mathematicians in this area study the combinatorics of polytopes; for instance, they seek inequalities that describe the relations between the numbers of vertices, edges, and faces of higher dimensions in arbitrary polytopes or in certain important subclasses of polytopes, and they study other combinatorial properties of polytopes such as their connectivity and diameter (the number of steps needed to reach any vertex from any other vertex). Additionally, many computer scientists use the phrase “polyhedral combinatorics” to describe research into precise descriptions of the faces of certain specific polytopes (especially 0-1 polytopes, whose vertices are a subset of the vertices of a hypercube) arising from integer programming problems.

Faces and face-counting vectors
A face of a convex polytope P may be defined as the intersection of P and a closed halfspace H such that the boundary of H contains no interior point of P. The dimension of a face is the dimension of its affine hull. The 0-dimensional faces are the vertices themselves, and the 1-dimensional faces (called edges) are line segments connecting pairs of vertices. Note that this definition also includes as faces the empty set and the whole polytope P. If P itself has dimension d, the faces of P with dimension d − 1 are called facets of P and the faces with dimension d − 2 are called ridges. The faces of P may be partially ordered by inclusion, forming a face lattice that has as its top element P itself and as its bottom element the empty set.

A key tool in polyhedral combinatorics is the ƒ-vector of a polytope, the vector (f0, f1, ..., fd−1) where fi is the number of i-dimensional faces of the polytope. For instance, a cube has eight vertices, twelve edges, and six facets, so its ƒ-vector is (8,12,6). The dual polytope has a ƒ-vector with the same numbers in the reverse order; thus, for instance, the regular octahedron, the dual to a cube, has the ƒ-vector (6,12,8). Configuration matrices include the f-vectors of regular polytopes as diagonal elements.

The extended ƒ-vector is formed by concatenating the number one at each end of the ƒ-vector, counting the number of objects at all levels of the face lattice; on the left side of the vector, f−1 = 1 counts the empty set as a face, while on the right side, fd = 1 counts P itself. For the cube the extended ƒ-vector is (1,8,12,6,1) and for the octahedron it is (1,6,12,8,1). Although the vectors for these example polyhedra are unimodal (the coefficients, taken in left to right order, increase to a maximum and then decrease), there are higher-dimensional polytopes for which this is not true.

For simplicial polytopes (polytopes in which every facet is a simplex), it is often convenient to transform these vectors, producing a different vector called the h-vector. If we interpret the terms of the extended ƒ-vector (omitting the final 1) as coefficients of a polynomial ƒ(x) = Σ fi x^(d−i−1) (for instance, for the octahedron this gives the polynomial ƒ(x) = x^3 + 6x^2 + 12x + 8), then the h-vector lists the coefficients of the polynomial h(x) = ƒ(x − 1) (again, for the octahedron, h(x) = x^3 + 3x^2 + 3x + 1); this transformation is carried out by the short program below.
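The transformation just described is easy to carry out mechanically. A minimal sketch, using numpy for the polynomial substitution and verified here on the octahedron example from the text:

```python
# Compute the h-vector of a simplicial polytope from its f-vector via
# h(x) = f(x - 1), exactly as in the construction described above.
import numpy as np

def h_vector(f_coeffs):
    """f_coeffs: the extended f-vector minus its final 1, e.g. (1, f0,
    ..., f_{d-1}), read as coefficients of f(x) from highest degree down.
    Returns the h-vector as the coefficients of h(x) = f(x - 1)."""
    f_poly = np.polynomial.Polynomial(f_coeffs[::-1])   # low-degree first
    shift = np.polynomial.Polynomial([-1, 1])           # the polynomial x - 1
    h_poly = f_poly(shift)                              # substitute x -> x - 1
    return [int(round(c)) for c in h_poly.coef[::-1]]

# Octahedron: extended f-vector (1, 6, 12, 8, 1); drop the final 1.
print(h_vector([1, 6, 12, 8]))   # [1, 3, 3, 1]
```

The output (1, 3, 3, 1) is symmetric, as the Dehn–Sommerville equations discussed below require for simplicial polytopes.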
As Ziegler writes, “for various problems about simplicial polytopes, h-vectors are a much more convenient and concise way to encode the information about the face numbers than ƒ-vectors.”

Equalities and inequalities
The most important relation among the coefficients of the ƒ-vector of a polytope is Euler's formula Σ (−1)^i fi = 0, where the terms of the sum range over the coefficients of the extended ƒ-vector. In three dimensions, moving the two 1's at the left and right ends of the extended ƒ-vector (1, v, e, f, 1) to the right hand side of the equation transforms this identity into the more familiar form v − e + f = 2. From the fact that each facet of a three-dimensional polyhedron has at least three edges, it follows by double counting that 2e ≥ 3f, and using this inequality to eliminate e and f from Euler's formula leads to the further inequalities e ≤ 3v − 6 and f ≤ 2v − 4. By duality, e ≤ 3f − 6 and v ≤ 2f − 4. It follows from Steinitz's theorem that any 3-dimensional integer vector satisfying these equalities and inequalities is the ƒ-vector of a convex polyhedron; a feasibility test based on these conditions is sketched below.

In higher dimensions, other relations among the numbers of faces of a polytope become important as well, including the Dehn–Sommerville equations which, expressed in terms of h-vectors of simplicial polytopes, take the simple form hk = hd−k for all k. The instance of these equations with k = 0 is equivalent to Euler's formula, but for d > 3 the other instances of these equations are linearly independent of each other and constrain the h-vectors (and therefore also the ƒ-vectors) in additional ways.

Another important inequality on polytope face counts is given by the upper bound theorem, first proven by McMullen, which states that a d-dimensional polytope with n vertices can have at most as many faces of any other dimension as a neighborly polytope with the same number of vertices. Asymptotically, this implies that there are at most O(n^⌊d/2⌋) faces of all dimensions. Even in four dimensions, the set of possible ƒ-vectors of convex polytopes does not form a convex subset of the four-dimensional integer lattice, and much remains unknown about the possible values of these vectors.

Graph-theoretic properties
Along with investigating the numbers of faces of polytopes, researchers have studied other combinatorial properties of them, such as descriptions of the graphs obtained from the vertices and edges of polytopes (their 1-skeleta). Balinski's theorem states that the graph obtained in this way from any d-dimensional convex polytope is d-vertex-connected. In the case of three-dimensional polyhedra, this property and planarity may be used to exactly characterize the graphs of polyhedra: Steinitz's theorem states that G is the skeleton of a three-dimensional polyhedron if and only if G is a 3-vertex-connected planar graph.

A theorem of Blind and Mani-Levitska (previously conjectured by Micha Perles) states that one can reconstruct the face structure of a simple polytope from its graph. That is, if a given undirected graph is the skeleton of a simple polytope, there is only one polytope (up to combinatorial equivalence) for which this is true. This is in sharp contrast with (non-simple) neighborly polytopes whose graph is a complete graph; there can be many different neighborly polytopes for the same graph.
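The three-dimensional conditions above, which by Steinitz's theorem are both necessary and sufficient, translate directly into a feasibility test. A minimal sketch:

```python
# Feasibility test for 3-dimensional f-vectors: by Steinitz's theorem,
# a positive integer vector (v, e, f) is the f-vector of some convex
# polyhedron exactly when it satisfies Euler's formula and the two
# edge-counting inequalities derived above.
def is_3polytope_fvector(v, e, f):
    return (v - e + f == 2          # Euler's formula
            and e <= 3 * v - 6      # every facet has at least 3 edges
            and e <= 3 * f - 6)     # dual: every vertex has degree >= 3

print(is_3polytope_fvector(8, 12, 6))    # True: the cube
print(is_3polytope_fvector(5, 8, 5))     # True: the square pyramid
print(is_3polytope_fvector(7, 20, 15))   # False: violates e <= 3v - 6
```

Note that the remaining inequalities quoted in the text (f ≤ 2v − 4 and v ≤ 2f − 4) follow from these three conditions, so they need not be checked separately.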
A further proof of this reconstruction theorem for simple polytopes, based on unique sink orientations, was later given, and the theorem has been used to derive a polynomial time algorithm for reconstructing the face lattices of simple polytopes from their graphs. However, testing whether a given graph or lattice can be realized as the face lattice of a simple polytope is equivalent (by polarity) to the realization of simplicial polytopes, which has been shown to be complete for the existential theory of the reals.

In the context of the simplex method for linear programming, it is important to understand the diameter of a polytope, the minimum number of edges needed to reach any vertex by a path from any other vertex. The system of linear inequalities of a linear program defines facets of a polytope representing all feasible solutions to the program, and the simplex method finds the optimal solution by following a path in this polytope. Thus, the diameter provides a lower bound on the number of steps this method may require. The Hirsch conjecture, now disproved, suggested a strong (linear) bound on how large the diameter of a polytope with fixed dimension and number of facets could be. Weaker upper bounds on the diameter, quasi-polynomial in the dimension and the number of facets, are known, as are proofs of the Hirsch conjecture for special classes of polytopes.

Computational properties
Deciding whether the number of vertices of a given polytope is bounded by some natural number k is a computationally difficult problem, complete for the complexity class PP.

Facets of 0-1 polytopes
It is important in the context of cutting-plane methods for integer programming to be able to describe accurately the facets of polytopes that have vertices corresponding to the solutions of combinatorial optimization problems. Often, these problems have solutions that can be described by binary vectors, and the corresponding polytopes have vertex coordinates that are all zero or one.

As an example, consider the Birkhoff polytope, the set of n × n matrices that can be formed from convex combinations of permutation matrices. Equivalently, its vertices can be thought of as describing all perfect matchings in a complete bipartite graph, and a linear optimization problem on this polytope can be interpreted as a bipartite minimum weight perfect matching problem. The Birkhoff–von Neumann theorem states that this polytope can be described by two types of linear inequality or equality. First, for each matrix cell, there is a constraint that this cell has a non-negative value. And second, for each row or column of the matrix, there is a constraint that the sum of the cells in that row or column equals one. The row and column constraints define a linear subspace of dimension n^2 − 2n + 1 in which the Birkhoff polytope lies, and the non-negativity constraints define facets of the Birkhoff polytope within that subspace. (A constructive sketch of the underlying decomposition into permutation matrices follows below.) However, the Birkhoff polytope is unusual in that a complete description of its facets is available. For many other 0-1 polytopes, there are exponentially many or superexponentially many facets, and only partial descriptions of their facets are available.

See also
Abstract polytope
Combinatorial commutative algebra
Matroid polytope
Order polytope
Simplicial sphere
Stable matching polytope

Notes

References

External links
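A constructive counterpart of the Birkhoff–von Neumann theorem is the decomposition of a doubly stochastic matrix into a convex combination of permutation matrices. The sketch below is one simple way to do this, assuming scipy's assignment solver; it is illustrative, not an optimized implementation.

```python
# Birkhoff-von Neumann decomposition sketch: write a doubly stochastic
# matrix as a convex combination of permutation matrices. Each round
# finds a permutation supported on positive entries (one exists by the
# Birkhoff-von Neumann theorem) and subtracts as much of it as possible.
import numpy as np
from scipy.optimize import linear_sum_assignment

def birkhoff_decomposition(M, tol=1e-9):
    M = M.astype(float).copy()
    terms = []                         # list of (weight, permutation) pairs
    while M.max() > tol:
        # Cost 1 for near-zero cells; a cost-0 assignment avoids them all.
        cost = (M <= tol).astype(float)
        rows, cols = linear_sum_assignment(cost)
        weight = M[rows, cols].min()   # largest multiple we can subtract
        P = np.zeros_like(M)
        P[rows, cols] = 1.0
        M -= weight * P                # at least one entry drops to zero
        terms.append((weight, P))
    return terms

# A doubly stochastic 3x3 example:
M = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
for w, P in birkhoff_decomposition(M):
    print(w)
    print(P)
```

Each iteration zeroes at least one matrix entry, so the loop terminates after at most n² rounds; in fact a more careful argument shows that far fewer permutation matrices always suffice.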
Polyhedral combinatorics
[ "Mathematics" ]
2,358
[ "Polyhedral combinatorics", "Combinatorics" ]
19,714,020
https://en.wikipedia.org/wiki/Digital%20test%20controller
Digital test controllers are devices (usually computer-based) that provide motion control by processing digital signals. Typically a controller has inputs connected to sensors on the device it controls, which measure its current state (for example, its current position) as feedback. The controller processes this feedback signal to provide an output to a hydraulic, electric, or other type of servomechanism driving the controlled device, with the aim of matching a control signal.

A good example is an elevator. The control signal is given by the button with which the passenger selects the desired floor. The elevator's controller compares the floor the elevator is currently on (its current position) with the selected floor, and from this comparison derives a signal that drives a servo (either hydraulic or electric) to make the elevator move until the right floor is reached.

Test controllers were once usually analog, but with the rapid developments in digital signal processing and computer technology, they are now almost exclusively digital devices. This offers many advantages, because it allows the user to execute all kinds of additional operations on the digital signals, in addition to the standard PID control loop (a minimal sketch of such a loop is given below). Digital test controllers offered by Moog provide advantages for this type of system control.

References

Control devices
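As a minimal illustration of the standard PID loop mentioned above, the following sketch closes a position-control loop around a toy plant. The gains and the simple integrator plant are illustrative assumptions for the example, not Moog specifications.

```python
# Minimal discrete PID position-control loop of the kind a digital test
# controller implements. Gains and the plant model are illustrative.
def pid_step(setpoint, measurement, state, kp=2.0, ki=0.5, kd=0.1, dt=0.01):
    """One controller update; `state` carries (integral, previous error)."""
    integral, prev_error = state
    error = setpoint - measurement
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# Drive a toy plant (velocity proportional to the actuator command) to
# position 3.0, mimicking an elevator moving to a selected floor.
position, state = 0.0, (0.0, 0.0)
for _ in range(1000):
    command, state = pid_step(3.0, position, state)
    position += command * 0.01          # integrate plant dynamics over dt
print(round(position, 3))               # converges near 3.0
```

A digital implementation like this is exactly what makes it easy to add further operations on the signals (filtering, limiting, logging) that an analog controller could not easily provide.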
Digital test controller
[ "Engineering" ]
254
[ "Control devices", "Control engineering" ]
19,714,534
https://en.wikipedia.org/wiki/Technetium%20%2899mTc%29%20votumumab
Technetium (99mTc) votumumab (trade name HumaSPECT) is a human monoclonal antibody labelled with the radionuclide technetium-99m. It was developed for the detection of colorectal tumors, but has never been marketed. The target of votumumab is CTAA16.88, a complex of cytokeratin polypeptides in the molecular weight range of 35 to 43 kDa, which is expressed in colorectal tumors.

References

Radiopharmaceuticals
Technetium-99m
Abandoned drugs
Technetium compounds
Antibody-drug conjugates
Technetium (99mTc) votumumab
[ "Chemistry", "Biology" ]
154
[ "Medicinal radiochemistry", "Antibody-drug conjugates", "Drug safety", "Radiopharmaceuticals", "Chemicals in medicine", "Abandoned drugs" ]
8,245,866
https://en.wikipedia.org/wiki/Alpha-particle%20spectroscopy
Alpha spectrometry (also known as alpha(-particle) spectroscopy) is the quantitative study of the energy of alpha particles emitted by a radioactive nuclide that is an alpha emitter. As emitted alpha particles are mono-energetic (i.e., not emitted with a continuous spectrum of energies, as in beta decay), with energies often characteristic of the particular decay, they can be used to identify the radionuclide from which they originated.

Experimental methods

Counting with a source deposited onto a metal disk
It is common to place a drop of the test solution on a metal disk, which is then dried out to give a uniform coating on the disk. This is then used as the test sample. If the layer formed on the disk is too thick, the lines of the spectrum are broadened towards lower energies. This is because some of the energy of the alpha particles is lost during their movement through the layer of active material.

Liquid scintillation
An alternative method is to use liquid scintillation counting (LSC), where the sample is directly mixed with a scintillation cocktail. When the individual light emission events are counted, the LSC instrument records the amount of light energy per radioactive decay event. The alpha spectra obtained by liquid scintillation counting are broadened because of the two main intrinsic limitations of the LSC method: (1) random quenching reduces the number of photons emitted per radioactive decay, and (2) the emitted photons can be absorbed by cloudy or coloured samples (Lambert-Beer law). The liquid scintillation spectra are subject to Gaussian broadening, rather than to the distortion caused by the absorption of alpha particles within the sample when the layer of active material deposited onto a disk is too thick.

Alpha spectra
In a typical multi-nuclide spectrum, from left to right the peaks are due to 209Po, 239Pu, 210Po and 241Am. The fact that isotopes such as 239Pu and 241Am have more than one alpha line indicates that the daughter nucleus can be left in different discrete energy levels.

Calibration: A multichannel analyzer (MCA) does not work on energy; it works on voltage. To relate energy to voltage, one must calibrate the detection system: different alpha-emitting sources of known energy are placed under the detector and the full-energy peak of each is recorded.

Measurement of thickness of thin foils: The energies of alpha particles from radioactive sources are measured before and after passing through the thin films. By measuring the energy difference and using stopping-and-range software such as SRIM, the thickness of the foil can be determined.

Kinematics of alpha decay
The decay energy, Q (also called the Q-value of the reaction), corresponds to a disappearance of mass. For the alpha decay nuclear reaction
^{A}_{Z}P -> ^{(A-4)}_{(Z-2)}D + \alpha
(where P is the parent nuclide and D the daughter), the decay energy is Q = −ΔM c^2, or, in the more commonly used units,
Q (MeV) = −931.5 ΔM (Da),
where ΔM = ΣM(products) − ΣM(reactants).

When the daughter nuclide and the alpha particle formed are in their ground states (common for alpha decay), the total decay energy is divided between the two as kinetic energy (T):
Q = T_D + T_α.
The share of each product depends on the ratio of the masses of the products: due to the conservation of momentum (the parent's momentum is zero at the moment of decay),
T_α = (m_D / m_P) Q and T_D = (m_α / m_P) Q.
The alpha particle, or 4He nucleus, is an especially strongly bound particle.
This, combined with the fact that the binding energy per nucleon has a maximum value near A = 56 and systematically decreases for heavier nuclei, creates the situation that nuclei with A > 150 have positive Qα-values for the emission of alpha particles. For example, for one of the heaviest naturally occurring isotopes, ^238U -> ^234Th + ^4He (ignoring charges):
Qα = −931.5 (234.043 601 + 4.002 603 254 13 − 238.050 788 2) = 4.2699 MeV
Note that the decay energy will be divided between the alpha particle and the heavy recoiling daughter, so that the kinetic energy of the alpha particle (Tα) will be slightly less:
Tα = (234.043 601 / 238.050 788 2) × 4.2699 = 4.198 MeV
(note this is for the 238gU to 234gTh reaction, which in this case has a branching ratio of 79%).

The kinetic energy of the recoiling 234Th daughter nucleus is
TD = (mα / mP) Qα = (4.002 603 254 13 / 238.050 788 2) × 4.2699 = 0.0718 MeV, or 71.8 keV,
which, whilst much smaller, is still substantially bigger than the energy of chemical bonds (<10 eV), meaning the daughter nuclide will break away from whatever chemical environment the parent had been in. The recoil energy is also the reason that alpha spectrometers, whilst run under reduced pressure, are not operated at too low a pressure: the remaining air helps stop the recoiling daughter from moving completely out of the original alpha source, which would cause serious contamination problems if the daughters are themselves radioactive.

The Qα-values generally increase with increasing atomic number, but the variation in the mass surface due to shell effects can overwhelm the systematic increase. The sharp peaks near A = 214 are due to the effects of the N = 126 shell.

References

Spectroscopy
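The worked numbers above can be checked with a few lines of code, using the quoted atomic masses and the 931.5 MeV/Da conversion:

```python
# Worked check of the 238U -> 234Th + 4He numbers above, using the
# atomic masses quoted in the text and 931.5 MeV per dalton.
M_U238 = 238.050_788_2        # parent mass, Da
M_TH234 = 234.043_601         # daughter mass, Da
M_ALPHA = 4.002_603_254_13    # 4He mass, Da

q_alpha = -931.5 * (M_TH234 + M_ALPHA - M_U238)   # decay energy, MeV
t_alpha = (M_TH234 / M_U238) * q_alpha            # alpha kinetic energy
t_daughter = (M_ALPHA / M_U238) * q_alpha         # daughter recoil energy

print(f"Q_alpha = {q_alpha:.4f} MeV")             # ~4.2699 MeV
print(f"T_alpha = {t_alpha:.3f} MeV")             # ~4.198 MeV
print(f"T_D     = {t_daughter * 1000:.1f} keV")   # ~71.8 keV
```

Note that T_alpha + T_D reproduces Q_alpha, as momentum and energy conservation require.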
Alpha-particle spectroscopy
[ "Physics", "Chemistry" ]
1,155
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
8,248,238
https://en.wikipedia.org/wiki/Dortmund%20Data%20Bank
The Dortmund Data Bank (abbreviated DDB) is a factual data bank for thermodynamic and thermophysical data. Its main use is the supply of data for process simulation, where experimental data are the basis for the design, analysis, synthesis, and optimization of chemical processes. The DDB is used for fitting parameters for thermodynamic models like NRTL or UNIQUAC and for many different equations describing pure component properties, e.g., the Antoine equation for vapor pressures (a short usage example is given below). The DDB is also used for the development and revision of predictive methods like UNIFAC and PSRK.

Contents

Mixture properties
Phase equilibria data (vapor–liquid, liquid–liquid, solid–liquid), data on azeotropy and zeotropy
Mixing enthalpies
Gas solubilities
Activity coefficients at infinite dilution
Heat capacities and excess heat capacities
Volumes, densities, and excess volumes (volume effect of mixing)
Salt solubilities
Octanol–water partition coefficients
Critical data

The mixture data banks contain approx. 308,000 data sets with 2,157,000 data points for 10,750 components, forming 84,870 different binary, ternary, and higher systems.

Pure component properties
Saturated vapor pressures
Saturated densities
Viscosities
Thermal conductivities
Critical data (Tc, Pc, Vc)
Triple points
Melting points
Heat capacities
Heats of fusion, sublimation and vaporization
Heats of formation and combustion
Heats and temperatures of transitions for solids
Speed of sound
P-v-T data including virial coefficients
Energy functions
Enthalpies and entropies
Surface tensions

The pure component properties data bank contains approx. 157,000 data sets with 1,080,000 data points for 16,700 different components.

Data sources
The DDB is a collection of experimental data published by the original authors. All data are referenced, and a quite large literature data bank is part of the DDB, currently containing more than 92,000 articles, books, private communications, deposited documents from Russia (VINITI), Ukraine (Ukrniiti) and other former USSR states, company reports (mainly from former GDR companies), theses, patents, and conference contributions. Secondary sources like data collections are normally neglected and used only as literature sources. Derived data are also not collected, with the main exception of the azeotropic data bank, which is built partly from evaluated vapor–liquid equilibrium data.

History
The Dortmund Data Bank was founded in the 1970s at the University of Dortmund in Germany. The original reason for starting a vapor–liquid phase equilibria data collection was the development of the group contribution method UNIFAC, which allows the estimation of vapor pressures of mixtures. The DDB has since been extended to many other properties and has increased dramatically in size, partly because of intensive German government aid. The funding has ended, and further development and maintenance are performed by DDBST GmbH, a company founded by members of the industrial chemistry chair of the Carl von Ossietzky University of Oldenburg, Germany. Additional contributors are the DECHEMA, the FIZ CHEMIE (Berlin), the Technical University in Tallinn, and others.

Availability
The Dortmund Data Bank is distributed by DDBST GmbH as in-house software. Many parts of the Dortmund Data Bank are also distributed as part of the DETHERM data bank, which is also available online.
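As an illustration of the kind of pure-component correlation for which the DDB stores fitted parameters, here is the Antoine equation in use. The coefficients below are common literature values for water (P in mmHg, T in °C, valid roughly 1–100 °C), quoted only as an example and not taken from the DDB itself.

```python
# The Antoine equation, log10(P) = A - B / (C + T), with example
# coefficients for water (P in mmHg, T in degrees Celsius; common
# literature values, used here purely as an illustration).
A, B, C = 8.07131, 1730.63, 233.426

def vapor_pressure_mmhg(t_celsius):
    return 10 ** (A - B / (C + t_celsius))

print(round(vapor_pressure_mmhg(25.0), 1))    # ~23.7 mmHg at room temperature
print(round(vapor_pressure_mmhg(100.0), 1))   # ~760 mmHg at the boiling point
```

Fitting A, B, and C to measured vapor-pressure data, for thousands of components, is precisely the sort of regression task for which the DDB supplies the experimental input.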
See also
Beilstein database
Elektrolytdatenbank Regensburg

References

External links
DDBST GmbH
DDB Online Search
DECHEMA
Topological Analysis of the Gibbs Energy Function (Liquid-Liquid Equilibrium Correlation Data), Including a Thermodynamic Review and a Graphical User Interface (GUI) for Surfaces/Tie-lines/Hessian matrix analysis - University of Alicante

Thermodynamics
Chemical databases
Technical University of Dortmund
University of Oldenburg
Thermodynamics databases
Dortmund Data Bank
[ "Physics", "Chemistry", "Mathematics" ]
794
[ "Thermodynamics databases", "Chemical databases", "Thermodynamics", "Dynamical systems" ]
8,249,556
https://en.wikipedia.org/wiki/Parametric%20array
A parametric array, in the field of acoustics, is a nonlinear transduction mechanism that generates narrow, nearly side lobe-free beams of low frequency sound through the mixing and interaction of high frequency sound waves, effectively overcoming the diffraction limit (a kind of spatial 'uncertainty principle') associated with linear acoustics. The main side lobe-free beam of low frequency sound is created as a result of nonlinear mixing of two high frequency sound beams at their difference frequency. Parametric arrays can be formed in water, air, and earth materials/rock.

History
Priority for the discovery and explanation of the parametric array is credited to Peter J. Westervelt, winner of the Lord Rayleigh Medal (currently Professor Emeritus at Brown University), although important experimental work was contemporaneously underway in the former Soviet Union. According to Muir and Albers, the concept for the parametric array occurred to Dr. Westervelt while he was stationed at the London, England, branch office of the Office of Naval Research in 1951. According to Albers, it was there that Westervelt first observed an accidental generation of low frequency sound in air by Captain H.J. Round (British pioneer of the superheterodyne receiver) via the parametric array mechanism. The phenomenon of the parametric array, seen first experimentally by Westervelt in the 1950s, was later explained theoretically in 1960 at a meeting of the Acoustical Society of America. A few years after this, a full paper was published as an extension of Westervelt's classic work on the nonlinear scattering of sound by sound.

Foundations
The foundation for Westervelt's theory of sound generation and scattering in nonlinear acoustic media is an application of Lighthill's equation for fluid particle motion. The application of Lighthill's theory to the nonlinear acoustic realm yields the Westervelt–Lighthill Equation (WLE). Solutions to this equation have been developed using Green's functions and Parabolic Equation (PE) methods, most notably via the Khokhlov–Zabolotskaya–Kuznetsov (KZK) equation. An alternate mathematical formalism using Fourier operator methods in wavenumber space was also developed and generalized by Westervelt. The solution method is formulated in Fourier (wavenumber) space in a representation related to the beam patterns of the primary fields generated by linear sources in the medium. This formalism has been applied not only to parametric arrays, but also to other nonlinear acoustic effects, such as the absorption of sound by sound and the equilibrium distribution of sound intensity spectra in cavities.

Applications
Practical applications are numerous and include:
underwater sound
sonar
depth sounding
sub-bottom profiling
non-destructive testing and 'see through walls' sensing
remote ocean sensing
medical ultrasound and tomography
underground seismic prospecting
active noise control
directional high-fidelity commercial audio systems (Sound from ultrasound)

Parametric receiving arrays can also be formed for directional reception. In 2005, Elwood Norris won the $500,000 Lemelson–MIT Prize for his application of the parametric array to commercial high-fidelity loudspeakers.

References

Further reading
Harvey C. Woodsum, "Analytical and Numerical Solutions to the 'General Theory for the Scattering of Sound by Sound'", J. Acoust. Soc. Am. Vol. 95, No. 5, Part 2 (2PA14), June, 1994 (Program of the 134th Meeting of the Acoustical Society of America, Cambridge, Massachusetts)
H.O. Berktay and D.J.
Leahy, Journal of the Acoustical Society of America, 55, p. 539 (1974)
M.J. Lighthill, "On Sound Generated Aerodynamically", Proc. R. Soc. Lond. A211, 564-587 (1952)
M.J. Lighthill, "On Sound Generated Aerodynamically", Proc. R. Soc. Lond. A222, 1-32 (1954)
M.J. Lighthill, Math. Revs. 19, 915 (1958)
H.C. Woodsum, Bull. of Am. Phys. Soc., Fall 1980; "A Boundary Condition Operator for Nonlinear Acoustics"
Nonlinear Parameter Imaging Computed Tomography by Parametric Acoustic Array, Y. Nakagawa; M. Nakagawa; M. Yoneyama; M. Kikuchi. IEEE 1984 Ultrasonics Symposium, 1984, pp. 673–676
Active Nonlinear Acoustic Sensing of an Object with Sum or Difference Frequency Fields. Zhang, W.; Liu, Y.; Ratilal, P.; Cho, B.; Makris, N.C.; Remote Sens. 2017, 9, 954. https://doi.org/10.3390/rs9090954

Sound
Acoustics
Nonlinear systems
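The difference-frequency mechanism at the heart of the parametric array can be illustrated numerically: passing two closely spaced ultrasonic tones through a weak quadratic nonlinearity creates a component at their difference frequency. This toy model demonstrates only the mixing mechanism, not the full Westervelt/KZK beam physics; the frequencies and nonlinearity strength are arbitrary example values.

```python
# Difference-frequency generation from a weak quadratic nonlinearity:
# two primary tones at f1 and f2 produce energy at f2 - f1 (and at
# 2f1, 2f2, f1 + f2). A toy illustration of the parametric mechanism.
import numpy as np

fs = 1_000_000                        # sample rate, Hz
t = np.arange(0, 0.02, 1 / fs)        # 20 ms of signal
f1, f2 = 40_000.0, 41_000.0           # primary ultrasonic tones, Hz

primary = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
nonlinear = primary + 0.1 * primary**2   # weak quadratic nonlinearity

spectrum = np.abs(np.fft.rfft(nonlinear))
freqs = np.fft.rfftfreq(len(nonlinear), 1 / fs)

# The strongest new low-frequency component sits at f2 - f1 = 1 kHz.
low = (freqs > 100) & (freqs < 5_000)    # exclude the DC term
print(freqs[low][np.argmax(spectrum[low])])   # ~1000.0
```

The narrow directivity of the real device arises because the difference-frequency sound is generated all along the collimated primary beams, forming an end-fire virtual array in the medium itself.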
Parametric array
[ "Physics", "Mathematics" ]
1,013
[ "Nonlinear systems", "Classical mechanics", "Acoustics", "Dynamical systems" ]
4,790,683
https://en.wikipedia.org/wiki/Engineering%20ethics
Engineering ethics is the field of study concerned with the system of moral principles that apply to the practice of engineering. The field examines and sets out the obligations of engineers to society, to their clients, and to the profession. As a scholarly discipline, it is closely related to subjects such as the philosophy of science, the philosophy of engineering, and the ethics of technology.

Background and origins

Up to the 19th century and growing concerns
As engineering rose as a distinct profession during the 19th century, engineers saw themselves as either independent professional practitioners or technical employees of large enterprises. There was considerable tension between the two sides as large industrial employers fought to maintain control of their employees.

In the United States, growing professionalism gave rise to the development of four founding engineering societies: the American Society of Civil Engineers (ASCE) (1851), the American Institute of Electrical Engineers (AIEE) (1884), the American Society of Mechanical Engineers (ASME) (1880), and the American Institute of Mining Engineers (AIME) (1871). ASCE and AIEE were more closely identified with the engineer as learned professional, while ASME, to an extent, and AIME almost entirely, identified with the view that the engineer is a technical employee. Even so, at that time ethics was viewed as a personal rather than a broad professional concern.

Turn of the 20th century and turning point
As the 19th century drew to a close and the 20th century began, there had been a series of significant structural failures, including some spectacular bridge failures, notably the Ashtabula River Railroad Disaster (1876), the Tay Bridge Disaster (1879), and the Quebec Bridge collapse (1907). These had a profound effect on engineers and forced the profession to confront shortcomings in technical and construction practice, as well as in ethical standards.

One response was the development of formal codes of ethics by three of the four founding engineering societies. AIEE adopted theirs in 1912; ASCE and ASME did so in 1914. AIME never adopted a code of ethics in its history.

Concerns for professional practice and protecting the public highlighted by these bridge failures, as well as by the Boston molasses disaster (1919), provided impetus for another movement that had been underway for some time: to require formal credentials (Professional Engineering licensure in the US) as a requirement to practice. This involves meeting some combination of educational, experience, and testing requirements.

In 1950, the Association of German Engineers developed an oath for all its members titled 'The Confession of the Engineers', directly hinting at the role of engineers in the atrocities committed during World War II.

Over the following decades most American states and Canadian provinces either required engineers to be licensed, or passed special legislation reserving title rights to organizations of professional engineers. The Canadian model is to require all persons working in fields of engineering that pose a risk to life, health, property, the public welfare and the environment to be licensed, and all provinces required licensing by the 1950s.
The US model has generally been to require licensing only of practicing engineers offering engineering services that impact the public welfare, safety, safeguarding of life, health, or property, while engineers working in private industry without a direct offering of engineering services to the public or other businesses, in education, and in government need not be licensed. This has perpetuated the split between professional engineers and those in private industry. Professional societies have adopted generally uniform codes of ethics.

Recent developments
Efforts to promote ethical practice continue. In addition to the efforts of professional societies and chartering organizations with their members, the Canadian Iron Ring and the American Order of the Engineer trace their roots to the 1907 Quebec Bridge collapse. Both require members to swear an oath to uphold ethical practice and to wear a symbolic ring as a reminder.

In the United States, the National Society of Professional Engineers released its Canons of Ethics for Engineers and Rules of Professional Conduct in 1946, which evolved into the current Code of Ethics, adopted in 1964. Members' requests for ethics guidance ultimately led to the creation of the Board of Ethical Review in 1954. Ethics cases rarely have easy answers, but the BER's nearly 500 advisory opinions have helped bring clarity to the ethical issues engineers face daily.

Currently, bribery and political corruption are being addressed very directly by several professional societies and business groups around the world. However, new issues have arisen, such as offshoring, sustainable development, and environmental protection, that the profession is having to consider and address.

General principles
Codes of engineering ethics identify a specific precedence with respect to the engineer's consideration for the public, clients, employers, and the profession.

Many engineering professional societies have prepared codes of ethics. Some date to the early decades of the twentieth century. These have been incorporated to a greater or lesser degree into the regulatory laws of several jurisdictions. While these statements of general principles serve as a guide, engineers still require sound judgment to interpret how a code applies to specific circumstances.

The general principles of the codes of ethics are largely similar across the various engineering societies and chartering authorities of the world, which further extend the codes and publish specific guidance. The following is an example from the American Society of Civil Engineers:

Engineers shall hold paramount the safety, health and welfare of the public and shall strive to comply with the principles of sustainable development in the performance of their professional duties.
Engineers shall perform services only in areas of their competence.
Engineers shall issue public statements only in an objective and truthful manner.
Engineers shall act in professional matters for each employer or client as faithful agents or trustees, and shall avoid conflicts of interest.
Engineers shall build their professional reputation on the merit of their services and shall not compete unfairly with others.
Engineers shall act in such a manner as to uphold and enhance the honor, integrity, and dignity of the engineering profession and shall act with zero-tolerance for bribery, fraud, and corruption.
Engineers shall continue their professional development throughout their careers, and shall provide opportunities for the professional development of those engineers under their supervision.
Engineers shall, in all matters related to their profession, treat all persons fairly and encourage equitable participation without regard to gender or gender identity, race, national origin, ethnicity, religion, age, sexual orientation, disability, political affiliation, or family, marital, or economic status.

In 1990, EPFL students drew up the Archimedean Oath, an ethical code of practice for engineers and technicians similar to the Hippocratic Oath used in the medical world.

Obligation to society
The paramount value recognized by engineers is the safety and welfare of the public. As demonstrated by the following selected excerpts, this is the case for professional engineering organizations in nearly every jurisdiction and engineering discipline:

Institute of Electrical and Electronics Engineers: "We, the members of the IEEE, … do hereby commit ourselves to the highest ethical and professional conduct and agree: 1. to accept responsibility in making decisions consistent with the safety, health and welfare of the public, and to disclose promptly factors that might endanger the public or the environment;"

Institution of Civil Engineers: "Members of the ICE should always be aware of their overriding responsibility to the public good. A member’s obligations to the client can never override this, and members of the ICE should not enter undertakings which compromise this responsibility. The ‘public good’ encompasses care and respect for the environment, and for humanity's cultural, historical and archaeological heritage, as well as the primary responsibility members have to protect the health and well-being of present and future generations."

Professional Engineers Ontario: "A practitioner shall regard the practitioner's duty to public welfare as paramount."

National Society of Professional Engineers: "Engineers, in the fulfillment of their professional duties, shall: Hold paramount the safety, health, and welfare of the public."

American Society of Mechanical Engineers: "Engineers shall hold paramount the safety, health and welfare of the public in the performance of their professional duties."

Institute of Industrial Engineers: "Engineers uphold and advance the integrity, honor and dignity of the engineering profession by: 2. Being honest and impartial, and serving with fidelity the public, their employers and clients."

American Institute of Chemical Engineers: "To achieve these goals, members shall hold paramount the safety, health and welfare of the public and protect the environment in performance of their professional duties."

American Nuclear Society: "ANS members uphold and advance the integrity and honor of their professions by using their knowledge and skill for the enhancement of human welfare and the environment; being honest and impartial; serving with fidelity the public, their employers, and their clients; and striving to continuously improve the competence and prestige of their various professions."
Society of Fire Protection Engineers: "In the practice of their profession, fire protection engineers must maintain and constantly improve their competence and perform under a standard of professional behavior which requires adherence to the highest principles of ethical conduct with balanced regard for the interests of the public, clients, employers, colleagues, and the profession."

Responsibility of engineers
Engineers recognize that the greatest merit is work, and so they exercise their profession committed to serving society, attending to the welfare and progress of the majority. By transforming nature for the benefit of mankind, engineers must increase their awareness of the world as the abode of humanity, their interest in the universe as a guarantee of overcoming their spirit, and their knowledge of reality to make the world fairer and happier.

Engineers should reject any work that is intended to harm the general interest, thus avoiding situations that might be hazardous or threatening to the environment, life, health, or other rights of human beings.

It is an inescapable duty of the engineer to uphold the prestige of the profession, to ensure its proper discharge, and to maintain a professional demeanor rooted in ability, honesty, fortitude, temperance, magnanimity, modesty, and justice, with the consciousness of individual well-being subordinate to the social good.

Engineers and their employers must ensure the continuous improvement of their knowledge, particularly of their profession; disseminate their knowledge; share their experience; provide opportunities for the education and training of workers; and provide recognition and moral and material support to the schools where they studied, thus returning the benefits and opportunities they and their employers have received.

It is the responsibility of engineers to carry out their work efficiently and to support the law. In particular, they must ensure compliance with the standards of worker protection as provided by the law. As professionals, engineers are expected to commit themselves to high standards of conduct (NSPE).

Duty to Report (Whistleblowing)
A basic ethical dilemma is that an engineer has the duty to report to the appropriate authority a possible risk to others from a client or employer failing to follow the engineer's directions. According to first principles, this duty overrides the duty to a client and/or employer. An engineer may be disciplined, or have their license revoked, even if the failure to report such a danger does not result in the loss of life or health.

If an engineer is overruled by a non-technical or a technical authority, the engineer must inform the authority in writing of the reasons for the advice given and of the consequences of deviating from it. In many cases, this duty can be discharged by advising the client of the consequences in a forthright manner and ensuring the client takes the engineer's advice. In very rare cases, where even a governmental authority may not take appropriate action, the engineer can only discharge the duty by making the situation public. As a result, whistleblowing by professional engineers is not an unusual event, and courts have often sided with engineers in such cases, overruling duties to employers and confidentiality considerations that otherwise would have prevented the engineer from speaking out.

Conduct
There are several other ethical issues that engineers may face.
Some have to do with technical practice, but many others have to do with broader considerations of business conduct. These include:
Relationships with clients, consultants, competitors, and contractors
Ensuring legal compliance by clients, clients' contractors, and others
Conflict of interest
Bribery and kickbacks, which may also include gifts, meals, services, and entertainment
Treatment of confidential or proprietary information
Consideration of the employer's assets
Outside employment/activities (moonlighting)

Some engineering societies are addressing environmental protection as a stand-alone question of ethics. The field of business ethics often overlaps with and informs ethical decision making for engineers.

Case studies and key individuals
Petroski notes that most engineering failures are much more involved than simple technical miscalculations and involve failures of the design process or of management culture. However, not all engineering failures involve ethical issues. The infamous collapse of the first Tacoma Narrows Bridge and the losses of the Mars Polar Lander and Mars Climate Orbiter were technical and design process failures. Nor are all engineering ethics issues necessarily engineering failures per se; Northwestern University instructor Sheldon Epstein cited the Holocaust as an example of a breach in engineering ethics despite (and because of) the engineers' creations being successful at carrying out the Nazis' mission of genocide. There is also the ethical issue of whether engineers should consider vulnerability to hostile intent, for example in governmental buildings or industrial sites, in the same way that weather is considered, regardless of the project specifications. Lysenkoism is a specific form of ethical failure, occurring when engineers (or scientists) allow political agendas to take precedence over professional ethics.

These episodes of engineering failure include ethical as well as technical issues:
Titan submersible implosion (2023)
General Motors ignition switch recalls (2014)
Deepwater Horizon oil spill (2010)
Space Shuttle Columbia disaster (2003)
Space Shuttle Challenger disaster (1986)
Therac-25 accidents (1985 to 1987)
Chernobyl disaster (1986)
Bhopal disaster (1984)
Kansas City Hyatt Regency walkway collapse (1981)
Love Canal (1980), Lois Gibbs
Three Mile Island accident (1979)
Citigroup Center (1978)
Ford Pinto safety problems (1970s)
Minamata disease (1908–1973)
Aberfan disaster (1966)
Chevrolet Corvair safety problems (1960s), Ralph Nader, and Unsafe at Any Speed
Boston molasses disaster (1919)
Quebec Bridge collapse (1907), Theodore Cooper
Johnstown Flood (1889), South Fork Fishing and Hunting Club
Tay Bridge Disaster (1879), Thomas Bouch, William Henry Barlow, and William Yolland
Ashtabula River Railroad Disaster (1876), Amasa Stone

Notes

References

Further reading
Alford, C.F. (2002). Whistleblowers: Broken Lives and Organizational Power, Cornell University Press. , 192 pp.
Fleddermann, C.B. (2011). Engineering Ethics, Prentice Hall, 4th edition. , 192 pp.
Glazer, M.P. (1991). Whistleblower, New York, NY: Basic Books. , 306 pp.
Harris, C.E., M.S. Pritchard, and M.J. Rabins (2008). Engineering Ethics: Concepts and Cases, Wadsworth Publishing, 4th edition. , 332 pp.
Peterson, Martin (2020). Ethics for Engineers, Oxford University Press. , 256 pp.
Huesemann, Michael H., and Joyce A. Huesemann (2011).
Technofix: Why Technology Won’t Save Us or the Environment, Chapter 14, “Critical Science and Social Responsibility”, New Society Publishers, Gabriola Island, British Columbia, Canada, , 464 pp.
Martin, M.W., and R. Schinzinger (2004). Ethics in Engineering, McGraw-Hill, 4th edition. , 432 pp.
Van de Poel, I., and L. Royakkers (2011). Ethics, Technology, and Engineering: An Introduction, Wiley-Blackwell. , 376 pp.

External links

Australia
Association of Professional Engineers, Scientists and Managers, Australia: Ethical Decision Making
Engineers Australia Code of Ethics

Canada
Association of Professional Engineers and Geoscientists of British Columbia (APEGBC) Act, Bylaws and Code of Ethics
Association of Professional Engineers and Geoscientists of Alberta (APEGA) EGGP Code of Ethics
Association of Professional Engineers and Geoscientists of Manitoba (APEGM) Code of Ethics
Professional Engineers Ontario (PEO) Code of Ethics (see link on front page)
L'Ordre des ingénieurs du Québec (OIQ) Code of Ethics of Engineers
Iron Ring: The Ritual of the Calling of an Engineer
University of Western Ontario: Software Ethics - A Guide to the Ethical and Legal Use of Software for Members of the University Community of the University of Western Ontario

Germany
Verein Deutscher Ingenieure: Ethical principles of the engineering profession

Ireland
Engineers Ireland Code of Ethics

Sri Lanka
Institution of Engineers, Sri Lanka Code of Ethics

Turkey
Union of Chambers of Turkish Engineers and Architects: Professional Behavior Principles

United Kingdom
Association for Consultancy and Engineering (ACE) Anti-Corruption Action Statement
Engineering Professors' Council (EPC) Engineering Ethics Toolkit; Ethics Explorer
Institution of Civil Engineers (ICE) Royal Charter, By-laws, Regulations and Rules
Institution of Engineering and Technology (IET) Professional ethics and the IET
Engineering Council (EC) Joint Statement of Ethical Principles

United States
National Academy of Engineering: Online Ethics Center of the National Academy of Engineering; list of links to various professional and scientific societies' codes of ethics (Onlineethics.org)
National Institute for Engineering Ethics (NIEE)
National Society of Professional Engineers (NSPE): Code of Ethics; Board of Ethical Review and BER Cases; Ethics Resources and References
American Institute of Chemical Engineers (AIChE) Code of Ethics
American Society of Civil Engineers (ASCE): Code of Ethics; Standards of Professional Conduct for Civil Engineers
American Society of Mechanical Engineers (ASME) Code of Ethics
Institute of Electrical and Electronics Engineers (IEEE) Code of Ethics
The Order of the Engineer: The Obligation of an Engineer
Society of Manufacturing Engineers (SME) Code of Ethics

International
Global Infrastructure Anti-Corruption Centre
Transparency International

Ethics of science and technology
Philosophy of engineering
Professional ethics
ethics
Engineering ethics
[ "Technology" ]
3,635
[ "Ethics of science and technology" ]
4,791,570
https://en.wikipedia.org/wiki/W-algebra
In conformal field theory and representation theory, a W-algebra is an associative algebra that generalizes the Virasoro algebra. W-algebras were introduced by Alexander Zamolodchikov, and the name "W-algebra" comes from the fact that Zamolodchikov used the letter W for one of the elements of one of his examples.

Definition
A W-algebra is an associative algebra that is generated by the modes of a finite number of meromorphic fields, including the energy-momentum tensor. Apart from the energy-momentum tensor, each generating field is a primary field of some conformal dimension. The generators of the algebra are related to the meromorphic fields by their mode expansions. The commutation relations of the modes of the energy-momentum tensor are given by the Virasoro algebra, which is parameterized by a central charge; this number is also called the central charge of the W-algebra. The commutation relations of the Virasoro modes with the other generators are equivalent to the assumption that the other generating fields are primary, of their respective dimensions. The rest of the commutation relations can in principle be determined by solving the Jacobi identities.

Given a finite set of conformal dimensions (not necessarily all distinct), the number of W-algebras generated by fields of these dimensions may be zero, one or more. The resulting W-algebras may exist for all values of the central charge, or only for some specific values. A W-algebra is called freely generated if its generators obey no other relations than the commutation relations. Most commonly studied W-algebras are freely generated, including the W(N) algebras. In this article, the sections on representation theory and correlation functions apply to freely generated W-algebras.

Constructions
While it is possible to construct W-algebras by assuming the existence of a number of meromorphic fields and solving the Jacobi identities, there also exist systematic constructions of families of W-algebras.

Drinfeld-Sokolov reduction
From a finite-dimensional Lie algebra, together with an embedding of sl2 into it, a W-algebra may be constructed from the universal enveloping algebra of the corresponding affine Lie algebra by a kind of BRST construction. The central charge of the W-algebra is then a function of the level of the affine Lie algebra.

Coset construction
Given a finite-dimensional Lie algebra together with a subalgebra, a W-algebra may be constructed from the corresponding affine Lie algebras. The fields that generate the W-algebra are the polynomials in the currents of the larger algebra and their derivatives that commute with the currents of the subalgebra. The central charge of the W-algebra is the difference of the central charges of the two affine Lie algebras, which are themselves given in terms of their levels by the Sugawara construction.

Commutator of a set of screenings
Given a holomorphic field with values in a vector space, and a set of vectors in that space, a W-algebra may be defined as the set of polynomials of the field and its derivatives that commute with the associated screening charges. If the vectors are the simple roots of a Lie algebra, the resulting W-algebra coincides with the algebra that is obtained from that Lie algebra by Drinfeld-Sokolov reduction.

The W(N) algebras
For any integer N ≥ 2, the W(N) algebra is a W-algebra which is generated by meromorphic fields of dimensions 2, 3, ..., N. The W(2) algebra coincides with the Virasoro algebra.

Construction
The W(N) algebra is obtained by Drinfeld-Sokolov reduction of the affine Lie algebra of sl(N). The embeddings of sl2 into sl(N) are parametrized by the integer partitions of N, interpreted as decompositions of the fundamental representation of sl(N) into representations of sl2. The set of dimensions of the generators of the resulting W-algebra is determined by this decomposition.
The trivial partition N = N corresponds to the W(N) algebra itself, while the partition N = 1 + 1 + ⋯ + 1 corresponds to the affine Lie algebra. In the case N = 3, the partition 3 = 2 + 1 leads to the Bershadsky-Polyakov algebra, whose generating fields have the dimensions 2, 3/2, 3/2 and 1.

Properties
The central charge of the W(N) algebra is given by an explicit rational function of the level of the affine Lie algebra, in notations where the central charge of the affine Lie algebra itself is given by the Sugawara construction. It is possible to choose a basis such that the commutation relations are invariant under the automorphism that multiplies each generator of dimension h by (−1)^h. While the Virasoro algebra is a subalgebra of the universal enveloping algebra of the affine Lie algebra of sl(2), the W(N) algebra with N ≥ 3 is not a subalgebra of the universal enveloping algebra of the affine Lie algebra of sl(N).

Example of the W(3) algebra
The W(3) algebra is generated by the generators Ln of the Virasoro algebra, plus another infinite family of generators Wn. The commutation relations involve the central charge c and, in the commutator of two W generators, a normal-ordered quadratic combination of the Virasoro generators. The field W(z) built from the generators Wn has conformal dimension 3.

Representation theory

Highest weight representations
A highest weight representation of a W-algebra is a representation that is generated by a primary state: a vector that is annihilated by the positive modes of all generating fields and is an eigenvector of the zero modes, whose eigenvalues are called the charges, including the conformal dimension. Given a set of charges, the corresponding Verma module is the largest highest-weight representation that is generated by a primary state with these charges. A basis of the Verma module is obtained by acting on the primary state with ordered products of the negative modes of the generators. Except for the primary state itself, the elements of this basis are called descendant states, and their linear combinations are also called descendant states.

For generic values of the charges, the Verma module is the only highest weight representation. For special values of the charges that depend on the algebra's central charge, there exist other highest weight representations, called degenerate representations. Degenerate representations exist if the Verma module is reducible, and they are quotients of the Verma module by its nontrivial submodules.

Degenerate representations
If a Verma module is reducible, any indecomposable submodule is itself a highest weight representation, and is generated by a state that is both descendant and primary, called a null state or null vector. A degenerate representation is obtained by setting one or more null vectors to zero. Setting all the null vectors to zero leads to an irreducible representation. The structures and characters of irreducible representations can be deduced by Drinfeld-Sokolov reduction from representations of affine Lie algebras.

The existence of null vectors is possible only under constraints on the charges that depend on the central charge. A Verma module can have only finitely many null vectors that are not descendants of other null vectors. If we start from a Verma module that has a maximal number of null vectors, and set all these null vectors to zero, we obtain an irreducible representation called a fully degenerate representation. For example, in the case of the algebra W(3), the Verma module with vanishing charges has three null vectors, at levels 1, 1 and 2. Setting these null vectors to zero yields a fully degenerate representation called the vacuum module. The simplest nontrivial fully degenerate representation of W(3) has vanishing null vectors at levels 1, 2 and 3, whose expressions are explicitly known.

An alternative characterization of a fully degenerate representation is that its fusion product with any Verma module is a sum of finitely many indecomposable representations.
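Several display formulas in the Definition section above were lost in extraction. For reference, the standard mode expansions and Virasoro commutation relations, in the conventional normalizations that such treatments assume, read:

```latex
% Standard forms assumed in the Definition section (conventional
% normalizations; supplied here as a reference reconstruction).
\begin{aligned}
W^{(h)}(z) &= \sum_{n \in \mathbb{Z}} \frac{W^{(h)}_n}{z^{n+h}},
\qquad T(z) = \sum_{n \in \mathbb{Z}} \frac{L_n}{z^{n+2}}, \\
[L_m, L_n] &= (m-n)\, L_{m+n} + \frac{c}{12}\, m (m^2 - 1)\, \delta_{m+n,0}.
\end{aligned}
```

Here W^(h)(z) denotes a generating field of conformal dimension h, with T = W^(2) the energy-momentum tensor whose modes L_n satisfy the Virasoro relations with central charge c.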
Case of W(N) It is convenient to parametrize highest-weight representations not by the set of charges , but by an element of the weight space of , called the momentum. Let be the simple roots of , with a scalar product given by the Cartan matrix of , whose nonzero elements are . The positive roots are sums of any number of consecutive simple roots, and the Weyl vector is their half-sum , which obeys . The fundamental weights are defined by . Then the momentum is a vector The charges are functions of the momentum and the central charge, invariant under the action of the Weyl group. In particular, is a polynomial of the momentum of degree , which under the Dynkin diagram automorphism behaves as . The conformal dimension is Let us parametrize the central charge in terms of a number such that If there is a positive root and two integers such that then the Verma module of momentum has a null vector at level . This null vector is itself a primary state of momentum or equivalently (by a Weyl reflection) . The number of independent null vectors is the number of positive roots such that (up to a Weyl reflection). The maximal number of null vectors is the number of positive roots . The corresponding momentums are of the type where are integral dominant weights, i.e. elements of , which are highest weights of irreducible finite-dimensional representations of . Let us call the corresponding fully degenerate representation of the W(N) algebra. The irreducible finite-dimensional representation of of highest weight has a finite set of weights , with . Its tensor product with a Verma module of weight is . The fusion product of the fully degenerate representation of W(N) with a Verma module of momentum is then Correlation functions Primary fields To a primary state of charge , the state-field correspondence associates a primary field , whose operator product expansions with the fields are On any field , the mode of the energy-momentum tensor acts as a derivative, . Ward identities On the Riemann sphere, if there is no field at infinity, we have . For , the identity may be inserted in any correlation function. Therefore, the field gives rise to global Ward identities. Local Ward identities are obtained by inserting , where is a meromorphic function such that . In a correlation function of primary fields, local Ward identities determine the action of with in terms of the action of with . For example, in the case of a three-point function on the sphere of W(3)-primary fields, local Ward identities determine all the descendant three-point functions as linear combinations of descendant three-point functions that involve only . Global Ward identities further reduce the problem to determining three-point functions of the type for . In the W(3) algebra, as in generic W-algebras, correlation functions of descendant fields can therefore not be deduced from correlation functions of primary fields using Ward identities, in contrast with the case of the Virasoro algebra. A W(3)-Verma module appears in the fusion product of two other W(3)-Verma modules with a multiplicity that is in general infinite. Differential equations A correlation function may obey a differential equation that generalizes the BPZ equations if the fields have sufficiently many vanishing null vectors. A four-point function of W(N)-primary fields on the sphere with one fully degenerate field obeys a differential equation if but not if . In the latter case, for a differential equation to exist, one of the other fields must have vanishing null vectors.
For example, a four-point function with two fields of momentums (fully degenerate) and with (almost fully degenerate) obeys a differential equation whose solutions are generalized hypergeometric functions of type . Applications to conformal field theory W-minimal models W-minimal models are generalizations of Virasoro minimal models based on a W-algebra. Their spaces of states are made of finitely many fully degenerate representations. They exist for certain rational values of the central charge: in the case of the W(N) algebra, values of the type A W(N)-minimal model with central charge may be constructed as a coset of Wess-Zumino-Witten models . For example, the two-dimensional critical three-state Potts model has central charge . Spin observables of the model may be described in terms of the D-series non-diagonal Virasoro minimal model with , or in terms of the diagonal W(3)-minimal model with . Conformal Toda theory Conformal Toda theory is a generalization of Liouville theory that is based on a W-algebra. Given a simple Lie algebra , the Lagrangian is a functional of a field which belongs to the root space of , with one interaction term for each simple root: This depends on the cosmological constant , which plays no meaningful role, and on the parameter , which is related to the central charge. The resulting field theory is a conformal field theory, whose chiral symmetry algebra is a W-algebra constructed from by Drinfeld-Sokolov reduction. For the preservation of conformal symmetry in the quantum theory, it is crucial that there are no more interaction terms than components of the vector . The methods that lead to the solution of Liouville theory may be applied to W(N)-conformal Toda theory, but they only lead to the analytic determination of a particular class of three-point structure constants, and W(N)-conformal Toda theory with has not been solved. Logarithmic conformal field theory At central charge , the Virasoro algebra can be extended by a triplet of generators of dimension , thus forming a W-algebra with the set of dimensions . Then it is possible to build a rational conformal field theory based on this W-algebra, which is logarithmic. The simplest case is obtained for , has central charge , and has been particularly well studied, including in the presence of a boundary. Related concepts Classical W-algebras Finite W-algebras Finite W-algebras are certain associative algebras associated to nilpotent elements of semisimple Lie algebras. The original definition, provided by Alexander Premet, starts with a pair consisting of a reductive Lie algebra over the complex numbers and a nilpotent element e. By the Jacobson-Morozov theorem, e is part of a sl2 triple (e, h, f). The eigenspace decomposition of ad(h) induces a -grading on : Define a character (i.e. a homomorphism from to the trivial 1-dimensional Lie algebra) by the rule , where denotes the Killing form. This induces a non-degenerate anti-symmetric bilinear form on the −1 graded piece by the rule: After choosing any Lagrangian subspace , we may define the following nilpotent subalgebra which acts on the universal enveloping algebra by the adjoint action. The left ideal of the universal enveloping algebra generated by is invariant under this action. It follows from a short calculation that the invariants in under ad inherit the associative algebra structure from . The invariant subspace is called the finite W-algebra constructed from , and is usually denoted . 
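Returning to the W-minimal models above: their rational central charges can be evaluated numerically. The sketch below assumes the standard formula c = (N − 1)(1 − N(N + 1)(p − q)²/(pq)) for coprime integers p, q, which is the usual form of these values in the literature (it reduces to the Virasoro minimal-model formula at N = 2); it reproduces the three-state Potts value c = 4/5 from the diagonal W(3) minimal model with (p, q) = (4, 5):

```python
from math import gcd

def w_minimal_central_charge(N, p, q):
    """Central charge of the W(N) minimal model labeled by coprime (p, q),
    assuming c = (N - 1) * (1 - N*(N + 1)*(p - q)**2 / (p*q)), the standard
    generalization of the Virasoro minimal-model formula (recovered at N = 2)."""
    assert gcd(p, q) == 1 and p >= 2 and q >= 2, "p, q must be coprime integers >= 2"
    return (N - 1) * (1 - N * (N + 1) * (p - q) ** 2 / (p * q))

# Virasoro (N = 2) check, the Ising model: c = 1/2
print(w_minimal_central_charge(2, 3, 4))   # 0.5
# W(3) minimal model describing the critical three-state Potts model: c = 4/5
print(w_minimal_central_charge(3, 4, 5))   # 0.8
```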
References Further reading Conformal field theory Integrable systems Representation theory
W-algebra
[ "Physics", "Mathematics" ]
2,996
[ "Integrable systems", "Representation theory", "Fields of abstract algebra", "Theoretical physics" ]
4,792,574
https://en.wikipedia.org/wiki/Supermathematics
Supermathematics is the branch of mathematical physics which applies the mathematics of Lie superalgebras to the behaviour of bosons and fermions. The driving force in its formation in the 1960s and 1970s was Felix Berezin. Objects of study include superalgebras (such as super Minkowski space and super-Poincaré algebra), superschemes, supermetrics/supersymmetry, supermanifolds, supergeometry, and supergravity, notably in the context of superstring theory. References "The importance of Lie algebras", Professor Isaiah Kantor, Lund University External links Felix Berezin, The Life and Death of the Mastermind of Supermathematics, edited by Mikhail Shifman, World Scientific, Singapore, 2007 Mathematical physics Supersymmetry Lie algebras String theory
Supermathematics
[ "Physics", "Astronomy", "Mathematics" ]
177
[ "Astronomical hypotheses", "Applied mathematics", "Theoretical physics", "Unsolved problems in physics", "Physics beyond the Standard Model", "String theory", "Mathematical physics", "Supersymmetry", "Symmetry" ]
22,349,086
https://en.wikipedia.org/wiki/Bloch%E2%80%93Siegert%20shift
The Bloch–Siegert shift is a phenomenon in quantum physics that becomes important for driven two-level systems when the driving gets strong (e.g. atoms driven by a strong laser drive or nuclear spins in NMR, driven by a strong oscillating magnetic field). When the rotating-wave approximation (RWA) is invoked, the resonance between the driving field and a pseudospin occurs when the field frequency is identical to the spin's transition frequency . The RWA is, however, an approximation. In 1940 Felix Bloch and Arnold Siegert showed that the dropped parts oscillating rapidly can give rise to a shift in the true resonance frequency of the dipoles. The Bloch–Siegert shift has been used for practical purposes in both NMR and MRI, including power calibration, image encoding, and magnetic field mapping. Rotating wave approximation In the RWA, when the perturbation to the two-level system is , a linearly polarized field is considered as a superposition of two circularly polarized fields of the same amplitude rotating in opposite directions with frequencies . Then, in the rotating frame, we can neglect the counter-rotating field and the Rabi frequency is where is the on-resonance Rabi frequency. Bloch–Siegert shift Consider the effect due to the counter-rotating field. In the counter-rotating frame, the effective detuning is and the counter-rotating field adds a driving component perpendicular to the detuning, with equal amplitude . The counter-rotating field effectively dresses the system, where we can define a new quantization axis slightly tilted from the original one, with an effective detuning Therefore, the resonance frequency of the system dressed by the counter-rotating field is away from our frame of reference, which is rotating at and there are two solutions for and The shift from the RWA of the first solution is dominant, and the correction to is known as the Bloch–Siegert shift: The counter-rotating frequency gives rise to a population oscillation at , with amplitude proportional to , and phase that depends on the phase of the driving field. Such Bloch–Siegert oscillation may become relevant in spin flipping operations at high rates. This effect can be suppressed by using an off-resonant Λ transition. Applications NMR When homonuclear nuclear magnetic resonance decoupling is performed, Bloch–Siegert shifts may become significant due to the strength of the homonuclear decoupling field. Direct measurement of the homonuclear decoupling mean field strength can be achieved by measuring the resulting Bloch–Siegert shift. MRI The Bloch–Siegert shift is currently being widely investigated as a potential encoding mechanism for MRI. The first significant use of the phenomenon in the MR imaging community was to perform mapping of the RF transmit field, by using the imaging system to measure the spatial phase accrual produced by an off-resonant RF pulse. Since then, it has been recognized that Bloch–Siegert shifts can be used in MRI sequences within imaging systems with a transmit field gradient to provide slice selection, phase encoding, and frequency encoding. The motivation for this research is to provide an alternative to conventional gradient encoding, which is currently used in clinical imaging systems but produces undesirable acoustic noise, peripheral nerve stimulation, and spatial design constraints. AC-Stark shift The AC-Stark shift is a similar shift in the resonance frequency, caused by a non-resonant field of the form perturbing the spin.
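For a sense of the magnitudes involved, the sketch below evaluates the Bloch–Siegert shift assuming the standard lowest-order result, a shift of f₁²/(4f₀) in the resonance frequency, where f₁ is the on-resonance Rabi frequency of the linearly polarized drive and f₀ the transition frequency (factors of 2π cancel in the ratio); the parameter values are illustrative, not taken from the text:

```python
def bloch_siegert_shift(f0_hz, f1_hz):
    """Lowest-order Bloch-Siegert shift (Hz) of a two-level resonance at
    transition frequency f0, driven by a linearly polarized field whose
    on-resonance Rabi frequency is f1. Assumes the standard lowest-order
    result df = f1**2 / (4 * f0), valid for f1 << f0."""
    return f1_hz ** 2 / (4.0 * f0_hz)

# Proton NMR at 500 MHz with a 25 kHz decoupling (B1) field:
print(bloch_siegert_shift(500e6, 25e3))   # ~0.31 Hz
# The same B1 field at a much lower Larmor frequency of 20 MHz:
print(bloch_siegert_shift(20e6, 25e3))    # ~7.8 Hz
```

The quadratic dependence on the drive strength is why the shift only matters for strong driving, as noted above, and why it grows rapidly when the transition frequency is low compared with the drive amplitude.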
The AC-Stark shift can be derived using a treatment similar to the one above, invoking the RWA on the off-resonant field. The resulting AC-Stark shift is: , with . References L. Allen and J. H. Eberly, Optical Resonance and Two-Level Atoms, Dover Publications, 1987. Wave mechanics
Bloch–Siegert shift
[ "Physics" ]
794
[ "Wave mechanics", "Waves", "Physical phenomena", "Classical mechanics" ]
22,350,680
https://en.wikipedia.org/wiki/Free%20carrier%20absorption
Free carrier absorption occurs when a material absorbs a photon, and a carrier (electron or hole) is excited from an already-excited state to another, unoccupied state in the same band (but possibly a different subband). This intraband absorption is different from interband absorption because the excited carrier is already in an excited band, such as an electron in the conduction band or a hole in the valence band, where it is free to move. In interband absorption, the carrier starts in a fixed, nonconducting band and is excited to a conducting one. In the simplest approximation, the Drude model, free carrier absorption is proportional to the square of the wavelength. Quantum mechanical approach It is well known that the optical transitions of electrons and holes in the solid state are a useful clue for understanding the physical properties of the material. However, the dynamics of the carriers are affected by other carriers and not only by the periodic lattice potential. Moreover, the thermal fluctuation of each electron should be taken into account. Therefore, a statistical approach is needed. To predict the optical transition with appropriate precision, one chooses an approximation, called the assumption of quasi-thermal distributions, of the electrons in the conduction band and of the holes in the valence band. In this case, the diagonal components of the density matrix become negligible after introducing the thermal distribution function, This is the Fermi–Dirac distribution for the distribution of electron energies . Thus, summing over all possible states (l and k) yields the total number of carriers N. The optical susceptibility Using the above distribution function, the time evolution of the density matrix can be ignored, which greatly simplifies the analysis. The optical polarization is With this relation and after adjusting the Fourier transformation, the optical susceptibility is Absorption coefficient The transition amplitude corresponds to the absorption of energy, and the absorbed energy is proportional to the optical conductivity, which is the imaginary part of the optical susceptibility multiplied by the frequency. Therefore, in order to obtain the absorption coefficient, which is a crucial quantity for the investigation of electronic structure, we can use the optical susceptibility. The energy of free carriers is proportional to the square of the momentum. Using the band gap energy and the electron-hole distribution function, we can obtain the absorption coefficient with some mathematical calculation. The final result is This result is important for understanding the optical measurement data and the electronic properties of metals and semiconductors. The absorption coefficient is negative when the material supports stimulated emission, which is the basis for the operation of lasers, particularly semiconductor lasers. References 1. H. Haug and S. W. Koch, World Scientific (1994), sec. 5.4. Quantum mechanics
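The λ² scaling claimed for the Drude model above can be sketched numerically. The following assumes the simple Drude AC conductivity Re σ(ω) = (Ne²τ/m)/(1 + ω²τ²) and the weak-absorption relation α = Re σ/(ε₀ c n_r), with ωτ ≫ 1; all parameter values are illustrative, not taken from the text:

```python
import numpy as np

e    = 1.602e-19          # C
eps0 = 8.854e-12          # F/m
c    = 2.998e8            # m/s
m    = 0.07 * 9.109e-31   # effective mass, kg (illustrative, GaAs-like)
N    = 1e24               # carrier density, m^-3 (illustrative doping)
tau  = 1e-13              # scattering time, s
n_r  = 3.5                # refractive index

def alpha_drude(lam):
    """Free-carrier absorption coefficient (1/m) at vacuum wavelength lam (m),
    from the Drude AC conductivity and alpha = Re(sigma) / (eps0 * c * n_r)."""
    omega = 2 * np.pi * c / lam
    re_sigma = (N * e**2 * tau / m) / (1 + (omega * tau) ** 2)
    return re_sigma / (eps0 * c * n_r)

for lam_um in (2.0, 4.0, 8.0):
    lam = lam_um * 1e-6
    print(f"{lam_um:4.1f} um : alpha = {alpha_drude(lam):9.1f} 1/m")
# Doubling the wavelength roughly quadruples alpha, as long as omega*tau >> 1.
```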
Free carrier absorption
[ "Physics" ]
562
[ "Theoretical physics", "Quantum mechanics" ]
22,354,099
https://en.wikipedia.org/wiki/Long%20code%20%28mathematics%29
In theoretical computer science and coding theory, the long code is an error-correcting code that is locally decodable. Long codes have an extremely poor rate, but play a fundamental role in the theory of hardness of approximation. Definition Let for be the list of all functions from . Then the long code encoding of a message is the string where denotes concatenation of strings. This string has length . The Walsh-Hadamard code is a subcode of the long code, and can be obtained by only using functions that are linear functions when interpreted as functions on the finite field with two elements. Since there are only such functions, the block length of the Walsh-Hadamard code is . An equivalent definition of the long code is as follows: The Long code encoding of is defined to be the truth table of the Boolean dictatorship function on the th coordinate, i.e., the truth table of with . Thus, the Long code encodes a -bit string as a -bit string. Properties The long code does not contain repetitions, in the sense that the function computing the th bit of the output is different from any function computing the th bit of the output for . Among all codes that do not contain repetitions, the long code has the longest possible output. Moreover, it contains all non-repeating codes as a subcode. References Coding theory Error detection and correction
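A minimal sketch of the long code encoding described above, with the message taken as a k-bit string and each Boolean function on k bits identified with its truth table (the function name is illustrative):

```python
from itertools import product

def long_code_encode(message_bits):
    """Encode a k-bit message as a 2**(2**k)-bit string: one bit f(m) for
    every Boolean function f on k bits (each f identified with its truth table)."""
    k = len(message_bits)
    inputs = list(product([0, 1], repeat=k))       # the 2**k input points
    m_index = inputs.index(tuple(message_bits))    # position of the message
    n_funcs = 2 ** (2 ** k)                        # number of Boolean functions
    # The truth table of function number t is just the bits of t, so f(m) is
    # the bit of t at the message's position.
    return [(t >> m_index) & 1 for t in range(n_funcs)]

codeword = long_code_encode([1, 0])   # k = 2: a 16-bit codeword
print(codeword)
```

Already for k = 2 the 2-bit message becomes a 16-bit codeword, which illustrates the extremely poor rate noted above.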
Long code (mathematics)
[ "Mathematics", "Engineering" ]
281
[ "Discrete mathematics", "Coding theory", "Reliability engineering", "Error detection and correction" ]
22,355,848
https://en.wikipedia.org/wiki/Consed
Consed is a program for viewing, editing, and finishing DNA sequence assemblies. Originally developed for sequence assemblies created with phrap, recent versions also support other sequence assembly programs like Newbler. History Consed was originally developed as a contig editing and finishing tool for large-scale cosmid shotgun sequencing in the Human Genome Project. At genome sequencing centers, Consed was used to check assemblies generated by phrap, to solve assembly problems like those caused by highly identical repeats, and to perform finishing tasks like primer picking and gap closure. Development of Consed has continued after the completion of the Human Genome Project. Current Consed versions support very large projects with millions of reads, enabling its use with newer sequencing methods like 454 sequencing and Solexa sequencing. Consed also has advanced tools for finishing tasks like automated primer picking. See also Phred Phrap References External links Consed homepage Bioinformatics software Computational science
Consed
[ "Chemistry", "Mathematics", "Biology" ]
190
[ "Bioinformatics stubs", "Bioinformatics software", "Applied mathematics", "Biotechnology stubs", "Biochemistry stubs", "Computational science", "Bioinformatics" ]
474,589
https://en.wikipedia.org/wiki/Ultra-high-energy%20cosmic%20ray
In astroparticle physics, an ultra-high-energy cosmic ray (UHECR) is a cosmic ray with an energy greater than 1 EeV (10¹⁸ electronvolts, approximately 0.16 joules), far beyond both the rest mass and energies typical of other cosmic ray particles. The origin of these highest-energy cosmic rays is not known. These particles are extremely rare; between 2004 and 2007, the initial runs of the Pierre Auger Observatory (PAO) detected 27 events with estimated arrival energies above , that is, about one such event every four weeks in the area surveyed by the observatory. Observational history The first observation of a cosmic ray particle with an energy exceeding (16 J) was made by John Linsley and Livio Scarsi at the Volcano Ranch experiment in New Mexico in 1962. Cosmic ray particles with even higher energies have since been observed. Among them was the Oh-My-God particle observed by the University of Utah's Fly's Eye experiment on the evening of 15 October 1991 over Dugway Proving Ground, Utah. Its observation was shocking to astrophysicists, who estimated its energy at approximately (50 J), essentially an atomic nucleus with kinetic energy equal to that of a baseball traveling at about . The energy of this particle is some 40 million times that of the highest-energy protons that have been produced in any terrestrial particle accelerator. However, only a small fraction of this energy would be available for an interaction with a proton or neutron on Earth, with most of the energy remaining in the form of kinetic energy of the products of the interaction (see ). The effective energy available for such a collision is the square root of double the product of the particle's energy and the mass energy of the proton, which for this particle gives , roughly 50 times the collision energy of the Large Hadron Collider. Since the first observation, by the University of Utah's Fly's Eye Cosmic Ray Detector, at least fifteen similar events have been recorded, confirming the phenomenon. These very-high-energy cosmic ray particles are very rare; the energy of most cosmic ray particles is between 10 MeV and 10 GeV. Ultra-high-energy cosmic ray observatories AGASA – Akeno Giant Air Shower Array in Japan Antarctic Impulse Transient Antenna (ANITA) detects ultra-high-energy cosmic neutrinos believed to be caused by ultra-high-energy cosmic ray particles Extreme Universe Space Observatory GRAPES-3 (Gamma Ray Astronomy PeV EnergieS 3rd establishment) is a project for cosmic ray study with an air shower detector array and large-area muon detectors at Ooty in southern India. High Resolution Fly's Eye Cosmic Ray Detector (HiRes) MARIACHI – Mixed Apparatus for Radar Investigation of Cosmic-rays of High Ionization located on Long Island, USA. Pierre Auger Observatory Telescope Array Project Yakutsk Extensive Air Shower Array Tunka experiment The COSMICi project at Florida A&M University is developing technology for a distributed network of low-cost detectors for UHECR showers in collaboration with MARIACHI. Cosmic-Ray Extremely Distributed Observatory (CREDO) Pierre Auger Observatory The Pierre Auger Observatory is an international cosmic ray observatory designed to detect ultra-high-energy cosmic ray particles (with energies beyond 10²⁰ eV). These high-energy particles have an estimated arrival rate of just 1 per square kilometer per century; therefore, in order to record a large number of these events, the Auger Observatory has created a detection area of 3,000 km² (the size of Rhode Island) in Mendoza Province, western Argentina.
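The "square root of double the product" relation quoted above is easy to evaluate. The sketch below uses the commonly reported estimate of about 3.2 × 10²⁰ eV for the Oh-My-God particle (an assumed input value, consistent with the ~50 J figure above) and reproduces the "roughly 50 times the LHC" comparison:

```python
from math import sqrt

# Effective center-of-mass energy when an ultra-high-energy proton hits a
# proton at rest: E_cm = sqrt(2 * E * m_p c^2). All energies in eV.
m_p = 0.938e9    # proton rest energy, ~0.938 GeV
E_omg = 3.2e20   # Oh-My-God particle, reported estimate ~3.2e20 eV

E_cm = sqrt(2 * E_omg * m_p)
print(f"E_cm ~ {E_cm:.2e} eV  (~{E_cm / 1e12:.0f} TeV)")
print(f"ratio to 14 TeV LHC collisions: ~{E_cm / 14e12:.0f}x")
```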
The Pierre Auger Observatory, in addition to obtaining directional information from the cluster of water-Cherenkov tanks used to observe the cosmic-ray-shower components, also has four telescopes trained on the night sky to observe fluorescence of the nitrogen molecules as the shower particles traverse the sky, giving further directional information on the original cosmic ray particle. In September 2017, data from 12 years of observations from PAO supported an extragalactic source (outside of Earth's galaxy) for the origin of extremely high energy cosmic rays. Suggested origins The origin of these rare highest-energy cosmic rays is not known. Since observations find no correlation with the Galactic plane and Galactic magnetic fields are not strong enough to accelerate particles to these energies, these cosmic rays are believed to have an extra-galactic origin. Neutron stars One suggested source of UHECR particles is neutron stars. In young neutron stars with spin periods of <10 ms, the magnetohydrodynamic (MHD) forces from the quasi-neutral fluid of superconducting protons and electrons existing in a neutron superfluid accelerate iron nuclei to UHECR velocities. The neutron superfluid in rapidly rotating stars creates a magnetic field of 10⁸ to 10¹¹ teslas, at which point the neutron star is classified as a magnetar. This magnetic field is the strongest stable field in the observed universe and creates the relativistic MHD wind believed to accelerate iron nuclei remaining from the supernova to the necessary energy. Another hypothesized source of UHECRs from neutron stars is during neutron star to strange star combustion. This hypothesis relies on the assumption that strange matter is the ground state of matter, an assumption which has no experimental or observational data to support it. Due to the immense gravitational pressures within the neutron star, it is believed that small pockets of matter form, consisting of up, down, and strange quarks in equilibrium and acting as a single hadron (as opposed to a number of baryons). These pockets will then combust the entire star to strange matter, at which point the neutron star becomes a strange star and its magnetic field breaks down; this breakdown occurs because the protons and neutrons in the quasi-neutral fluid have become strangelets. This magnetic field breakdown releases large-amplitude electromagnetic waves (LAEMWs). The LAEMWs accelerate light ion remnants from the supernova to UHECR energies. "Ultra-high-energy cosmic ray electrons" (defined as electrons with energies of ≥10¹⁴ eV) might be explained by the centrifugal mechanism of acceleration in the magnetospheres of Crab-like pulsars. The feasibility of electron acceleration to this energy scale in the Crab pulsar magnetosphere is supported by the 2019 observation of ultra-high-energy gamma rays coming from the Crab Nebula, which hosts a young pulsar with a spin period of 33 ms. Active galactic cores Interactions with blue-shifted cosmic microwave background radiation limit the distance that these particles can travel before losing energy; this is known as the Greisen–Zatsepin–Kuzmin limit or GZK limit. The source of such high-energy particles has been a mystery for many years. Recent results from the Pierre Auger Observatory show that ultra-high-energy cosmic ray arrival directions appear to be correlated with extragalactic supermassive black holes at the center of nearby galaxies called active galactic nuclei (AGN).
However, since the angular correlation scale used is fairly large (3.1°), these results do not unambiguously identify the origins of such cosmic ray particles. The AGN could merely be closely associated with the actual sources, for example in galaxies or other astrophysical objects that are clumped with matter on large scales within 100 megaparsecs. Some of the supermassive black holes in AGN are known to be rotating, as in the Seyfert galaxy MCG 6-30-15, with time variability in their inner accretion disks. Black hole spin is a potentially effective agent to drive UHECR production, provided ions are suitably launched to circumvent limiting factors deep within the galactic nucleus, notably curvature radiation and inelastic scattering with radiation from the inner disk. Low-luminosity, intermittent Seyfert galaxies may meet the requirements with the formation of a linear accelerator several light years away from the nucleus, yet within their extended ion tori whose UV radiation ensures a supply of ionic contaminants. The corresponding electric fields are small, on the order of 10 V/cm, whereby the observed UHECRs are indicative of the astronomical size of the source. Improved statistics from the Pierre Auger Observatory will be instrumental in identifying the presently tentative association of UHECRs (from the Local Universe) with Seyferts and LINERs. Other possible sources of the particles In addition to neutron stars and active galactic nuclei, the best candidate sources of UHECRs are: supernova remnants, intergalactic shocks created during the epoch of galaxy formation, gamma-ray bursts, and relativistic supernovae. Relation with dark matter It is hypothesized that active galactic nuclei are capable of converting dark matter into high-energy protons. Yuri Pavlov and Andrey Grib at the Alexander Friedmann Laboratory for Theoretical Physics in Saint Petersburg hypothesize that dark matter particles are about 15 times heavier than protons, and that they can decay into pairs of heavier virtual particles of a type that interacts with ordinary matter. Near an active galactic nucleus, one of these particles can fall into the black hole, while the other escapes, as described by the Penrose process. Some of those particles will collide with incoming particles; these are very high energy collisions which, according to Pavlov, can form ordinary visible protons with very high energy. Pavlov then claims that ultra-high-energy cosmic ray particles are evidence of such processes. Propagation Ultra-high-energy particles can interact with the photons in the cosmic microwave background while traveling over cosmic distances. This leads to a predicted high-energy cutoff for those cosmic rays, known as the Greisen–Zatsepin–Kuzmin limit (GZK limit), which matches observed cosmic ray spectra. The propagation of particles can also be affected by cosmic magnetic fields. While there are some studies of galactic magnetic fields, the origin and scale of extragalactic magnetic fields are poorly understood. See also References Further reading External links The Highest Energy Particle Ever Recorded The details of the event from the official site of the Fly's Eye detector. John Walker's lively analysis of the 1991 event, published in 1994 Origin of energetic space particles pinpointed, by Mark Peplow for news@nature.com, published January 13, 2005. Subatomic particles Particle physics Astroparticle physics Cosmic rays Unsolved problems in astronomy Unexplained phenomena
Ultra-high-energy cosmic ray
[ "Physics", "Astronomy" ]
2,127
[ "Physical phenomena", "Matter", "Unsolved problems in astronomy", "Concepts in astronomy", "Astroparticle physics", "Astrophysics", "Radiation", "Subatomic particles", "Particle physics", "Astronomical controversies", "Nuclear physics", "Atoms", "Cosmic rays" ]
474,760
https://en.wikipedia.org/wiki/Legendre%20transform%20%28integral%20transform%29
In mathematics, the Legendre transform is an integral transform named after the mathematician Adrien-Marie Legendre, which uses Legendre polynomials as kernels of the transform. The Legendre transform is a special case of the Jacobi transform. The Legendre transform of a function is The inverse Legendre transform is given by Associated Legendre transform The associated Legendre transform is defined as The inverse Legendre transform is given by Some Legendre transform pairs References Integral transforms Mathematical physics
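The transform pair can be checked numerically with Gauss–Legendre quadrature. The sketch below assumes the common convention J_n = ∫₋₁¹ P_n(x) f(x) dx with inverse f(x) = Σ_n (2n + 1)/2 · J_n · P_n(x), which follows from the orthogonality relation ∫₋₁¹ P_n² dx = 2/(2n + 1); the conventions used above may differ by constant factors:

```python
import numpy as np
from numpy.polynomial import legendre as L

def legendre_transform(f, n_max, n_quad=64):
    """J_n = integral_{-1}^{1} P_n(x) f(x) dx for n = 0..n_max,
    computed with Gauss-Legendre quadrature."""
    x, w = L.leggauss(n_quad)                       # quadrature nodes and weights
    return np.array([np.sum(w * L.legval(x, np.eye(n + 1)[n]) * f(x))
                     for n in range(n_max + 1)])

def inverse_legendre_transform(J, x):
    """f(x) = sum_n (2n+1)/2 * J_n * P_n(x)."""
    coeffs = np.array([(2 * n + 1) / 2 * Jn for n, Jn in enumerate(J)])
    return L.legval(x, coeffs)

f = lambda x: np.exp(x)
J = legendre_transform(f, n_max=10)
x = np.linspace(-1, 1, 5)
print(np.max(np.abs(inverse_legendre_transform(J, x) - f(x))))   # small (truncation error)
```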
Legendre transform (integral transform)
[ "Physics", "Mathematics" ]
92
[ "Applied mathematics", "Theoretical physics", "Mathematical physics" ]
474,781
https://en.wikipedia.org/wiki/Particle%20in%20a%20ring
In quantum mechanics, the case of a particle in a one-dimensional ring is similar to the particle in a box. The Schrödinger equation for a free particle which is restricted to a ring (technically, whose configuration space is the circle ) is Wave function Using polar coordinates on the 1-dimensional ring of radius R, the wave function depends only on the angular coordinate, and so Requiring that the wave function be periodic in with a period (from the demand that the wave functions be single-valued functions on the circle), and that they be normalized, leads to the conditions , and Under these conditions, the solution to the Schrödinger equation is given by Energy eigenvalues The energy eigenvalues are quantized because of the periodic boundary conditions, and they are required to satisfy , or The eigenfunctions and eigenenergies are where Therefore, there are two degenerate quantum states for every value of (corresponding to ). Thus there are states with energies up to an energy indexed by the number . The case of a particle in a one-dimensional ring is an instructive example when studying the quantization of angular momentum for, say, an electron orbiting the nucleus. The azimuthal wave functions in that case are identical to the energy eigenfunctions of the particle on a ring. The statement that any wavefunction for the particle on a ring can be written as a superposition of energy eigenfunctions is exactly identical to the Fourier theorem about the development of any periodic function in a Fourier series. This simple model can be used to find approximate energy levels of some ring molecules, such as benzene. Application In organic chemistry, aromatic compounds contain atomic rings, such as benzene rings (the Kekulé structure), consisting of five or six, usually carbon, atoms. So does the surface of "buckyballs" (buckminsterfullerene). This ring behaves like a circular waveguide, with the valence electrons orbiting in both directions. To fill all energy levels up to n requires electrons, since electrons additionally have two possible spin orientations. This gives exceptional stability ("aromatic"), and is known as Hückel's rule. Further, in rotational spectroscopy, this model may be used as an approximation of rotational energy levels. See also Angular momentum Harmonic analysis One-dimensional periodic case Semicircular potential well Spherical potential well References Quantum models
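A short numerical sketch of the spectrum and the electron counting discussed above, assuming the standard result E_n = ħ²n²/(2mR²) with each |n| > 0 doubly degenerate (so filling through |n| = k takes 2 + 4k electrons, the Hückel series 2, 6, 10, ...); the benzene ring radius is an illustrative value:

```python
import numpy as np

hbar = 1.0546e-34   # J s
m_e  = 9.109e-31    # kg

def ring_energies(R, n_max):
    """Energy levels E_n = hbar^2 n^2 / (2 m R^2) of an electron on a ring
    of radius R; each level with |n| > 0 is doubly degenerate (n and -n)."""
    n = np.arange(0, n_max + 1)
    return hbar**2 * n**2 / (2 * m_e * R**2)

def closed_shell_counts(k_max):
    """Electrons needed to fill all levels up to |n| = k, with two spin
    orientations per state: 2 + 4k, i.e. the Hueckel series 2, 6, 10, 14, ..."""
    return [2 + 4 * k for k in range(k_max + 1)]

R_benzene = 1.39e-10   # m, ~benzene ring radius (equal to the C-C bond length)
E = ring_energies(R_benzene, 3)
print((E[1] - E[0]) / 1.602e-19, "eV for the n = 0 -> 1 gap")  # ~2 eV
print(closed_shell_counts(3))                                  # [2, 6, 10, 14]
```

The ~2 eV level spacing is the right order of magnitude for the visible/UV absorption of ring molecules, which is why this crude model is still instructive.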
Particle in a ring
[ "Physics" ]
500
[ "Quantum models", "Quantum mechanics" ]
474,908
https://en.wikipedia.org/wiki/Structural%20alignment
Structural alignment attempts to establish homology between two or more polymer structures based on their shape and three-dimensional conformation. This process is usually applied to protein tertiary structures but can also be used for large RNA molecules. In contrast to simple structural superposition, where at least some equivalent residues of the two structures are known, structural alignment requires no a priori knowledge of equivalent positions. Structural alignment is a valuable tool for the comparison of proteins with low sequence similarity, where evolutionary relationships between proteins cannot be easily detected by standard sequence alignment techniques. Structural alignment can therefore be used to imply evolutionary relationships between proteins that share very little common sequence. However, caution should be used in using the results as evidence for shared evolutionary ancestry because of the possible confounding effects of convergent evolution by which multiple unrelated amino acid sequences converge on a common tertiary structure. Structural alignments can compare two sequences or multiple sequences. Because these alignments rely on information about all the query sequences' three-dimensional conformations, the method can only be used on sequences where these structures are known. These are usually found by X-ray crystallography or NMR spectroscopy. It is possible to perform a structural alignment on structures produced by structure prediction methods. Indeed, evaluating such predictions often requires a structural alignment between the model and the true known structure to assess the model's quality. Structural alignments are especially useful in analyzing data from structural genomics and proteomics efforts, and they can be used as comparison points to evaluate alignments produced by purely sequence-based bioinformatics methods. The outputs of a structural alignment are a superposition of the atomic coordinate sets and a minimal root mean square deviation (RMSD) between the structures. The RMSD of two aligned structures indicates their divergence from one another. Structural alignment can be complicated by the existence of multiple protein domains within one or more of the input structures, because changes in relative orientation of the domains between two structures to be aligned can artificially inflate the RMSD. Data produced by structural alignment The minimum information produced from a successful structural alignment is a set of residues that are considered equivalent between the structures. This set of equivalences is then typically used to superpose the three-dimensional coordinates for each input structure. (Note that one input element may be fixed as a reference and therefore its superposed coordinates do not change.) The fitted structures can be used to calculate mutual RMSD values, as well as other more sophisticated measures of structural similarity such as the global distance test (GDT, the metric used in CASP). The structural alignment also implies a corresponding one-dimensional sequence alignment from which a sequence identity, or the percentage of residues that are identical between the input structures, can be calculated as a measure of how closely the two sequences are related. 
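The superposition and RMSD computation described above is conventionally done with the Kabsch algorithm (an SVD-based least-squares fit). Here is a minimal numpy sketch, a generic implementation rather than that of any particular package:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """Optimal-superposition RMSD between two (n, 3) coordinate arrays whose
    rows are already matched up (e.g. equivalent C-alpha atoms)."""
    P = P - P.mean(axis=0)               # remove translation
    Q = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)    # Kabsch: SVD of the covariance matrix
    d = np.sign(np.linalg.det(U @ Vt))   # avoid improper rotations (reflections)
    R = (U * [1, 1, d]) @ Vt             # optimal rotation (applied as P @ R)
    return np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1)))

# Sanity check: a rotated, translated copy should give RMSD ~ 0.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
print(kabsch_rmsd(P, P @ Rz.T + 5.0))   # ~1e-15
```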
Types of comparisons Because protein structures are composed of amino acids whose side chains are linked by a common protein backbone, a number of different possible subsets of the atoms that make up a protein macromolecule can be used in producing a structural alignment and calculating the corresponding RMSD values. When aligning structures with very different sequences, the side chain atoms generally are not taken into account because their identities differ between many aligned residues. For this reason it is common for structural alignment methods to use by default only the backbone atoms included in the peptide bond. For simplicity and efficiency, often only the alpha carbon positions are considered, since the peptide bond has a minimally variant planar conformation. Only when the structures to be aligned are highly similar or even identical is it meaningful to align side-chain atom positions, in which case the RMSD reflects not only the conformation of the protein backbone but also the rotameric states of the side chains. Other comparison criteria that reduce noise and bolster positive matches include secondary structure assignment, native contact maps or residue interaction patterns, measures of side chain packing, and measures of hydrogen bond retention. Structural superposition The most basic possible comparison between protein structures makes no attempt to align the input structures and requires a precalculated alignment as input to determine which of the residues in the sequence are intended to be considered in the RMSD calculation. Structural superposition is commonly used to compare multiple conformations of the same protein (in which case no alignment is necessary, since the sequences are the same) and to evaluate the quality of alignments produced using only sequence information between two or more sequences whose structures are known. This method traditionally uses a simple least-squares fitting algorithm, in which the optimal rotations and translations are found by minimizing the sum of the squared distances among all structures in the superposition. More recently, maximum likelihood and Bayesian methods have greatly increased the accuracy of the estimated rotations, translations, and covariance matrices for the superposition. Algorithms based on multidimensional rotations and modified quaternions have been developed to identify topological relationships between protein structures without the need for a predetermined alignment. Such algorithms have successfully identified canonical folds such as the four-helix bundle. The SuperPose method is sufficiently extensible to correct for relative domain rotations and other structural pitfalls. Evaluating similarity Often the purpose of seeking a structural superposition is not so much the superposition itself, but an evaluation of the similarity of two structures or a confidence in a remote alignment. A subtle but important distinction from maximal structural superposition is the conversion of an alignment to a meaningful similarity score. Most methods output some sort of "score" indicating the quality of the superposition. However, what one actually wants is not merely an estimated "Z-score" or an estimated E-value of seeing the observed superposition by chance but instead one desires that the estimated E-value is tightly correlated to the true E-value. 
Critically, even if a method's estimated E-value is precisely correct on average, a large standard deviation in its estimation process will cause the rank ordering of the relative similarities of a query protein to a comparison set to rarely agree with the "true" ordering. Different methods will superimpose different numbers of residues because they use different quality assurances and different definitions of "overlap"; some only include residues meeting multiple local and global superposition criteria and others are more greedy, flexible, and promiscuous. A greater number of atoms superposed can mean more similarity, but it may not always produce the best E-value quantifying the unlikeliness of the superposition, and is thus not always as useful for assessing similarity, especially in remote homologs. Algorithmic complexity Optimal solution The optimal "threading" of a protein sequence onto a known structure and the production of an optimal multiple sequence alignment have been shown to be NP-complete. However, this does not imply that the structural alignment problem is NP-complete. Strictly speaking, an optimal solution to the protein structure alignment problem is only known for certain protein structure similarity measures, such as the measures used in protein structure prediction experiments, GDT_TS and MaxSub. These measures can be rigorously optimized using an algorithm capable of maximizing the number of atoms in two proteins that can be superimposed under a predefined distance cutoff. Unfortunately, the algorithm for the optimal solution is not practical, since its running time depends not only on the lengths but also on the intrinsic geometry of the input proteins. Approximate solution Approximate polynomial-time algorithms for structural alignment that produce a family of "optimal" solutions within an approximation parameter for a given scoring function have been developed. Although these algorithms theoretically classify the approximate protein structure alignment problem as "tractable", they are still computationally too expensive for large-scale protein structure analysis. As a consequence, practical algorithms that converge to the global solutions of the alignment, given a scoring function, do not exist. Most algorithms are, therefore, heuristic, but algorithms that guarantee the convergence to at least local maximizers of the scoring functions, and are practical, have been developed. Representation of structures Protein structures have to be represented in some coordinate-independent space to make them comparable. This is typically achieved by constructing a sequence-to-sequence matrix or series of matrices that encompass comparative metrics rather than absolute distances relative to a fixed coordinate space. An intuitive representation is the distance matrix, which is a two-dimensional matrix containing all pairwise distances between some subset of the atoms in each structure (such as the alpha carbons). The matrix increases in dimensionality as the number of structures to be simultaneously aligned increases. Reducing the protein to a coarse metric such as secondary structure elements (SSEs) or structural fragments can also produce sensible alignments, despite the loss of information from discarding distances, as noise is also discarded. Choosing a representation to facilitate computation is critical to developing an efficient alignment mechanism.
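The distance-matrix representation just described is only a few lines of code; the sketch below also demonstrates the coordinate independence that makes it useful (the coordinates are illustrative random points):

```python
import numpy as np

def distance_matrix(ca_coords):
    """All pairwise C-alpha distances for an (n, 3) coordinate array -- the
    coordinate-independent representation used by DALI-style methods."""
    diff = ca_coords[:, None, :] - ca_coords[None, :, :]
    return np.linalg.norm(diff, axis=-1)

# Two structures differing only by a rigid motion have identical matrices.
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 3))
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
B = A @ Rz.T + np.array([1.0, 2.0, 3.0])
print(np.allclose(distance_matrix(A), distance_matrix(B)))   # True
```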
Methods Structural alignment techniques have been used in comparing individual structures or sets of structures as well as in the production of "all-to-all" comparison databases that measure the divergence between every pair of structures present in the Protein Data Bank (PDB). Such databases are used to classify proteins by their fold. DALI A common and popular structural alignment method is the DALI, or Distance-matrix ALIgnment method, which breaks the input structures into hexapeptide fragments and calculates a distance matrix by evaluating the contact patterns between successive fragments. Secondary structure features that involve residues that are contiguous in sequence appear on the matrix's main diagonal; other diagonals in the matrix reflect spatial contacts between residues that are not near each other in the sequence. When these diagonals are parallel to the main diagonal, the features they represent are parallel; when they are perpendicular, their features are antiparallel. This representation is memory-intensive because the features in the square matrix are symmetrical (and thus redundant) about the main diagonal. When two proteins' distance matrices share the same or similar features in approximately the same positions, they can be said to have similar folds with similar-length loops connecting their secondary structure elements. DALI's actual alignment process requires a similarity search after the two proteins' distance matrices are built; this is normally conducted via a series of overlapping submatrices of size 6x6. Submatrix matches are then reassembled into a final alignment via a standard score-maximization algorithm — the original version of DALI used a Monte Carlo simulation to maximize a structural similarity score that is a function of the distances between putative corresponding atoms. In particular, more distant atoms within corresponding features are exponentially downweighted to reduce the effects of noise introduced by loop mobility, helix torsions, and other minor structural variations. Because DALI relies on an all-to-all distance matrix, it can account for the possibility that structurally aligned features might appear in different orders within the two sequences being compared. The DALI method has also been used to construct a database known as FSSP (Fold classification based on Structure-Structure alignment of Proteins, or Families of Structurally Similar Proteins) in which all known protein structures are aligned with each other to determine their structural neighbors and fold classification. There is a searchable database based on DALI as well as a downloadable program and web search based on a standalone version known as DaliLite. Combinatorial extension The combinatorial extension (CE) method is similar to DALI in that it too breaks each structure in the query set into a series of fragments that it then attempts to reassemble into a complete alignment. A series of pairwise combinations of fragments called aligned fragment pairs, or AFPs, are used to define a similarity matrix through which an optimal path is generated to identify the final alignment. Only AFPs that meet given criteria for local similarity are included in the matrix as a means of reducing the necessary search space and thereby increasing efficiency. 
A number of similarity metrics are possible; the original definition of the CE method included only structural superpositions and inter-residue distances but has since been expanded to include local environmental properties such as secondary structure, solvent exposure, hydrogen-bonding patterns, and dihedral angles. An alignment path is calculated as the optimal path through the similarity matrix by linearly progressing through the sequences and extending the alignment with the next possible high-scoring AFP pair. The initial AFP pair that nucleates the alignment can occur at any point in the sequence matrix. Extensions then proceed with the next AFP that meets given distance criteria restricting the alignment to low gap sizes. The size of each AFP and the maximum gap size are required input parameters but are usually set to empirically determined values of 8 and 30 respectively. Like DALI and SSAP, CE has been used to construct an all-to-all fold classification database from the known protein structures in the PDB. The RCSB PDB has recently released an updated version of CE, Mammoth, and FATCAT as part of the RCSB PDB Protein Comparison Tool. It provides a new variation of CE that can detect circular permutations in protein structures. Mammoth MAMMOTH approaches the alignment problem with a different objective than almost all other methods. Rather than trying to find an alignment that maximally superimposes the largest number of residues, it seeks the subset of the structural alignment least likely to occur by chance. To do this, it marks a local motif alignment with flags to indicate which residues simultaneously satisfy more stringent criteria: 1) local structure overlap, 2) regular secondary structure, 3) 3D superposition, and 4) the same ordering in the primary sequence. It converts the statistics of the number of residues with high-confidence matches, together with the size of the protein, into an expectation value for the outcome by chance. It excels at matching remote homologs, particularly structures generated by ab initio structure prediction, to structure families such as SCOP, because it emphasizes extracting a statistically reliable subalignment rather than achieving the maximal sequence alignment or maximal 3D superposition. For every overlapping window of 7 consecutive residues, it computes the set of displacement-direction unit vectors between adjacent C-alpha residues. Local motifs are compared all-against-all based on the URMS score. These values become the pairwise alignment score entries for dynamic programming, which produces a seed pairwise residue alignment. The second phase uses a modified MaxSub algorithm: a single 7-residue aligned pair in each protein is used to orient the two full-length protein structures so as to maximally superimpose just these 7 C-alphas; then, in this orientation, it scans for any additional aligned pairs that are close in 3D. It re-orients the structures to superimpose this expanded set and iterates until no more pairs coincide in 3D. This process is restarted for every 7-residue window in the seed alignment. The output is the maximal number of atoms found from any of these initial seeds. This statistic is converted to a calibrated E-value for the similarity of the proteins. Mammoth makes no attempt to re-iterate the initial alignment or extend the high-quality subset. Therefore, the seed alignment it displays cannot be fairly compared to those of DALI or TM-align, as it was formed simply as a heuristic to prune the search space.
(It can be used if one wants an alignment based solely on local structure-motif similarity, agnostic of long-range rigid-body atomic alignment.) Because of that same parsimony, it is well over ten times faster than DALI, CE, and TM-align. It is often used in conjunction with these slower tools to pre-screen large databases and extract just the structures with the best E-values for more exhaustive superposition or expensive calculations. It has been particularly successful at analyzing "decoy" structures from ab initio structure prediction. These decoys are notorious for getting local fragment motif structure correct, and for forming some kernels of correct 3D tertiary structure, while getting the full-length tertiary structure wrong. In this twilight remote-homology regime, Mammoth's E-values for the CASP protein structure prediction evaluation have been shown to be significantly more correlated with human ranking than those of SSAP or DALI. Mammoth's ability to extract multi-criteria partial overlaps with proteins of known structure and to rank these with proper E-values, combined with its speed, facilitates scanning vast numbers of decoy models against the PDB database to identify the most likely correct decoys based on their remote homology to known proteins. SSAP The SSAP (Sequential Structure Alignment Program) method uses double dynamic programming to produce a structural alignment based on atom-to-atom vectors in structure space. Instead of the alpha carbons typically used in structural alignment, SSAP constructs its vectors from the beta carbons for all residues except glycine, a method which thus takes into account the rotameric state of each residue as well as its location along the backbone. SSAP works by first constructing a series of inter-residue distance vectors between each residue and its nearest non-contiguous neighbors on each protein. A series of matrices are then constructed containing the vector differences between neighbors for each pair of residues for which vectors were constructed. Dynamic programming applied to each resulting matrix determines a series of optimal local alignments, which are then summed into a "summary" matrix, to which dynamic programming is applied again to determine the overall structural alignment. SSAP originally produced only pairwise alignments but has since been extended to multiple alignments as well. It has been applied in an all-to-all fashion to produce a hierarchical fold classification scheme known as CATH (Class, Architecture, Topology, Homology), which has been used to construct the CATH Protein Structure Classification database. Recent developments Improvements in structural alignment methods constitute an active area of research, and new or modified methods are often proposed that are claimed to offer advantages over the older and more widely distributed techniques. A recent example, TM-align, uses a novel method for weighting its distance matrix, to which standard dynamic programming is then applied. The weighting is proposed to accelerate the convergence of dynamic programming and correct for effects arising from alignment lengths. In a benchmarking study, TM-align has been reported to improve in both speed and accuracy over DALI and CE. Other promising methods of structural alignment are local structural alignment methods. These provide comparison of pre-selected parts of proteins (e.g. binding sites, user-defined structural motifs) against binding sites or whole-protein structural databases.
The MultiBind and MAPPIS servers allow the identification of common spatial arrangements of physicochemical properties such as H-bond donors, acceptors, aliphatic, aromatic or hydrophobic groups in a set of user-provided protein binding sites defined by interactions with small molecules (MultiBind) or in a set of user-provided protein–protein interfaces (MAPPIS). Others provide comparison of entire protein structures against a number of user-submitted structures or against a large database of protein structures in reasonable time (ProBiS). Unlike global alignment approaches, local structural alignment approaches are suited to the detection of locally conserved patterns of functional groups, which often appear in binding sites and have significant involvement in ligand binding. As an example, consider comparing G-Losa, a local structure alignment tool, with TM-align, a global structure alignment method: while G-Losa predicts drug-like ligands' positions in single-chain protein targets more precisely than TM-align, the overall success rate of TM-align is better. However, as algorithmic improvements and computer performance have erased purely technical deficiencies in older approaches, it has become clear that there is no one universal criterion for the 'optimal' structural alignment. TM-align, for instance, is particularly robust in quantifying comparisons between sets of proteins with great disparities in sequence lengths, but it only indirectly captures hydrogen bonding or secondary structure order conservation, which might be better metrics for alignment of evolutionarily related proteins. Thus recent developments have focused on optimizing particular attributes such as speed, quantification of scores, correlation to alternative gold standards, or tolerance of imperfection in structural data or ab initio structural models. An alternative methodology that is gaining popularity is to use the consensus of various methods to ascertain protein structural similarities. RNA structural alignment Structural alignment techniques have traditionally been applied exclusively to proteins, as the primary biological macromolecules that assume characteristic three-dimensional structures. However, large RNA molecules also form characteristic tertiary structures, which are mediated primarily by hydrogen bonds formed between base pairs as well as by base stacking. Functionally similar noncoding RNA molecules can be especially difficult to extract from genomics data because structure is more strongly conserved than sequence in RNA as well as in proteins, and the more limited alphabet of RNA decreases the information content of any given nucleotide at any given position. However, because of the increasing interest in RNA structures and the growing number of experimentally determined 3D RNA structures, a few RNA structure similarity methods have been developed recently. One of those methods is SETTER, which decomposes each RNA structure into smaller parts called general secondary structure units (GSSUs). GSSUs are subsequently aligned, and these partial alignments are merged into the final RNA structure alignment and scored. The method has been implemented in the SETTER webserver. A recent method for pairwise structural alignment of RNA sequences with low sequence identity has been published and implemented in the program FOLDALIGN.
However, this method is not truly analogous to protein structural alignment techniques because it computationally predicts the structures of the RNA input sequences rather than requiring experimentally determined structures as input. Although computational prediction of the protein folding process has not been particularly successful to date, RNA structures without pseudoknots can often be sensibly predicted using free energy-based scoring methods that account for base pairing and stacking. Software Choosing a software tool for structural alignment can be a challenge due to the large variety of available packages that differ significantly in methodology and reliability. A partial solution to this problem was presented and made publicly accessible through the ProCKSI webserver. A more complete list of currently available and freely distributed structural alignment software can be found in structural alignment software. Properties of some structural alignment servers and software packages are summarized and tested with examples at Structural Alignment Tools in Proteopedia.Org. See also Multiple sequence alignment List of sequence alignment software Structural Classification of Proteins SuperPose Protein superfamily References Further reading Bourne PE, Shindyalov IN (2003). Structure Comparison and Alignment. In: Bourne PE, Weissig H (eds): Structural Bioinformatics. Hoboken, NJ: Wiley-Liss. Yuan X, Bystroff C (2004). "Non-sequential Structure-based Alignments Reveal Topology-independent Core Packing Arrangements in Proteins", Bioinformatics, Nov 5, 2004. Protein methods NP-complete problems
Structural alignment
[ "Chemistry", "Mathematics", "Biology" ]
4,623
[ "Biochemistry methods", "Protein methods", "Protein biochemistry", "Computational problems", "Mathematical problems", "NP-complete problems" ]
474,931
https://en.wikipedia.org/wiki/Diving%20physics
Diving physics, or the physics of underwater diving, comprises the basic aspects of physics that describe the effects of the underwater environment on the underwater diver and their equipment, and the effects of blending, compressing, and storing breathing gas mixtures, and supplying them for use at ambient pressure. These effects are mostly consequences of immersion in water, the hydrostatic pressure of depth, and the effects of pressure and temperature on breathing gases. An understanding of the physics involved is useful when considering the physiological effects of diving, breathing gas planning and management, diver buoyancy control and trim, and the hazards and risks of diving. Changes in the density of breathing gas affect the ability of the diver to breathe effectively, and variations in the partial pressure of breathing gas constituents have profound effects on the health of the diver and their ability to function underwater. Aspects of physics with particular relevance to diving The main laws of physics that describe the influence of the underwater diving environment on the diver and diving equipment include: Buoyancy Archimedes' principle (Buoyancy) - Ignoring the minor effect of surface tension, an object, wholly or partially immersed in a fluid, is buoyed up by a force equal to the weight of the fluid displaced by the object. Thus, when in water, the weight of the volume of water displaced, as compared to the weight of the diver's body and the diver's equipment, determines whether the diver floats or sinks. Buoyancy control, and being able to maintain neutral buoyancy in particular, is an important safety skill. The diver needs to understand buoyancy to effectively and safely operate drysuits, buoyancy compensators, diving weighting systems and lifting bags. Pressure The concept of pressure as force distributed over area, and the variation of pressure with immersed depth, are central to the understanding of the physiology of diving, particularly the physiology of decompression and of barotrauma. The absolute pressure on an ambient-pressure diver is the sum of the local atmospheric pressure and the hydrostatic pressure. Hydrostatic pressure is the component of ambient pressure due to the weight of the water column above the depth, and is commonly described in terms of metres or feet of sea water. The partial pressures of the component gases in a breathing gas mixture control the rate of diffusion into and out of the blood in the lungs, and their concentration in the arterial blood, and the concentration of blood gases affects their physiological effects in the body tissues. Partial pressure calculations are used in breathing gas blending and analysis. A class of diving hazards commonly referred to as delta-P hazards are caused by a pressure difference other than the variation of ambient pressure with depth. This pressure difference causes flow that may entrain the diver and carry them to places where injury could occur, such as the intake to a marine thruster or a sluice gate. Gas property changes Gas equations of state, which may be expressed in combination as the Combined gas law, or the Ideal gas law within the range of pressures normally encountered by divers, or as the traditionally expressed gas laws relating the relationships between two properties when the others are held constant, are used to calculate variations of pressure, volume and temperature, such as: Boyle's law, which describes the change in volume with a change in pressure at a constant temperature.
For example, the volume of gas in a non-rigid container (such as a diver's lungs or buoyancy compensation device) decreases as external pressure increases while the diver descends in the water. Likewise, the volume of gas in such non-rigid containers increases on the ascent. Changes in the volume of gases in the diver and the diver's equipment affect buoyancy. This creates a positive feedback loop on both ascent and descent. The quantity of open-circuit gas breathed by a diver increases with pressure and depth. Charles's law, which describes the change in volume with a change in temperature at a fixed pressure, Gay-Lussac's second law, which describes the change of pressure with a change of temperature for a fixed volume (originally described by Guillaume Amontons, and sometimes called Amontons's law). This explains why a diver who enters cold water with a warm diving cylinder, for instance after a recent quick fill, finds that the gas pressure of the cylinder drops by an unexpectedly large amount during the early part of the dive as the gas in the cylinder cools. In mixtures of breathing gases the concentration of the individual components of the gas mix is proportional to their partial pressures and volumetric gas fractions. The gas fraction is constant for the components of a mixture, but the partial pressure changes in proportion to changes in the total pressure. Partial pressure is a useful measure for expressing limits for avoiding nitrogen narcosis and oxygen toxicity. Dalton's law describes the combination of partial pressures to form the total pressure of the mixture. Gases are highly compressible, but liquids are almost incompressible. Gas spaces in the diver's body and gas held in flexible equipment contract as the diver descends and expand as the diver ascends. When constrained from free expansion and contraction, gases will exert unbalanced pressure on the walls of their containment, which can cause damage or injury known as barotrauma if excessive. Solubility of gases and diffusion Henry's law describes how, as pressure increases, the quantity of gas that can be dissolved in the tissues of the body increases. This effect is involved in nitrogen narcosis, oxygen toxicity and decompression sickness. The concentration of gases dissolved in the body tissues affects a number of physiological processes and is influenced by diffusion rates, the solubility of the components of the breathing gas in the tissues of the body, and pressure. Given sufficient time under a specific pressure, tissues will saturate with the gases, and no more will be absorbed until the pressure increases. When the pressure decreases faster than the dissolved gas can be eliminated, the concentration rises and supersaturation occurs, and pre-existing bubble nuclei may grow. Bubble formation and growth in decompression sickness is affected by the surface tension of the bubbles, as well as by pressure changes and supersaturation. Density effects The density of the breathing gas is proportional to the absolute pressure, and affects the breathing performance of regulators and the work of breathing, which affect the capacity of the diver to work and, in extreme cases, to breathe. The density of the water, the diver's body, and equipment determines the diver's apparent weight in water, and therefore their buoyancy, and influences the use of buoyant equipment. Density and the force of gravity are the factors in the generation of hydrostatic pressure.
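The pressure and gas-law relationships above can be made concrete with a minimal numerical sketch. The following Python fragment is illustrative only: the seawater density, example depth, surface gas volume, and oxygen fraction are assumed values, and it is in no sense a dive-planning tool:

```python
# Minimal sketch of the pressure and gas-law relationships described above.
# Assumes seawater density 1025 kg/m^3 and atmospheric pressure 101.325 kPa.

RHO_SEAWATER = 1025.0     # kg/m^3 (assumed)
G = 9.81                  # m/s^2
P_ATM = 101_325.0         # Pa

def absolute_pressure(depth_m):
    """Ambient pressure at depth: atmospheric plus hydrostatic component."""
    return P_ATM + RHO_SEAWATER * G * depth_m

def boyle_volume(v_surface, depth_m):
    """Boyle's law: volume of a flexible gas space at depth (isothermal)."""
    return v_surface * P_ATM / absolute_pressure(depth_m)

def partial_pressure(gas_fraction, depth_m):
    """Dalton's law: partial pressure = gas fraction x total pressure."""
    return gas_fraction * absolute_pressure(depth_m)

depth = 30.0  # metres of sea water (example value)
print(f"Ambient pressure at {depth} m: {absolute_pressure(depth)/1e5:.2f} bar")
print(f"6 L of surface gas occupies {boyle_volume(6.0, depth):.2f} L there")
print(f"ppO2 of air (21% O2): {partial_pressure(0.21, depth)/1e5:.2f} bar")
```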
Divers use high-density materials such as lead for diving weighting systems, and low-density materials such as air in buoyancy compensators and lifting bags. Viscosity effects The absolute (dynamic) viscosity of water is higher (of the order of 100 times) than that of air. This increases the drag on an object moving through water, and more effort is required for propulsion in water than in air relative to the speed of movement. Viscosity also affects the work of breathing. Heat balance The thermal conductivity of water is higher than that of air. As water conducts heat about 20 times better than air, and has a much higher thermal capacity, heat transfer from a diver's body to water is faster than to air, and to avoid excessive heat loss leading to hypothermia, thermal insulation in the form of diving suits, or active heating, is used. Gases used in diving have very different thermal conductivities; heliox and, to a lesser extent, trimix conduct heat faster than air because of the helium content, and argon conducts heat slower than air, so technical divers breathing gases containing helium may inflate their dry suits with argon. Some thermal conductivity values at 25 °C and sea level atmospheric pressure: argon: 16 mW/m/K; air: 26 mW/m/K; neoprene: 50 mW/m/K; wool felt: 70 mW/m/K; helium: 142 mW/m/K; water: 600 mW/m/K. Underwater vision Underwater vision is affected by the refractive index of water, which is similar to that of the cornea of the eye, and about 30% greater than that of air. Snell's law describes the angle of refraction relative to the angle of incidence. The similarity in refractive index between water and the cornea is the reason a diver cannot see clearly underwater without a diving mask with an internal airspace. The absorption of light depends on wavelength, which causes loss of colour underwater. The red end of the spectrum of light is absorbed over a short distance, and is lost even in shallow water. Divers use artificial light underwater to reveal these absorbed colours. In deeper water no light from the surface penetrates, and artificial lighting is necessary to see at all. Underwater vision is also affected by turbidity, which causes scattering, and by dissolved materials, which absorb light. Underwater acoustics Underwater acoustics affect the ability of the diver to hear through the hood of the diving suit or the helmet, and the ability to judge the direction of a source of sound. Environmental physical phenomena of interest to divers The physical phenomena found in large bodies of water that may have a practical influence on divers include: Effects of weather such as wind, which causes waves, and changes of temperature and atmospheric pressure on and in the water. Even moderately high winds can prevent diving because of the increased risk of becoming lost at sea or injured. Low water temperatures make it necessary for divers to wear diving suits and can cause problems such as the freezing of diving regulators. Haloclines, or strong vertical salinity gradients. For instance, where fresh water enters the sea, the fresh water floats over the denser saline water and may not mix immediately. Sometimes visual effects, such as shimmering and reflection, occur at the boundary between the layers, because the refractive indices differ. Ocean currents can transport water over thousands of kilometres, and may bring water with a different temperature and salinity into a region.
Some ocean currents have a huge effect on local climate; for instance, the warm water of the North Atlantic Drift moderates the climate of the north-west coast of Europe. The speed of water movement can affect dive planning and safety. Thermoclines, or sudden changes in temperature. Where the air temperature is higher than the water temperature, shallow water may be warmed by the air and the sunlight, but deeper water remains cold, resulting in a lowering of temperature as the diver descends. This temperature change may be concentrated over a small vertical interval, in which case it is called a thermocline. Where cold fresh water enters a warmer sea, the fresh water may float over the denser saline water, so the temperature rises as the diver descends. In lakes exposed to geothermal activity, the temperature of the deeper water may be warmer than the surface water. This will usually lead to convection currents. Water at near-freezing temperatures is less dense than slightly warmer water - the maximum density of water is at about 4 °C - so when near freezing, water may be slightly warmer at depth than at the surface. Tidal currents and changes in sea level caused by gravitational forces and the Earth's rotation. Some dive sites can only be dived safely at slack water, when the tidal cycle reverses and the current slows. Strong currents can cause problems for divers. Buoyancy control can be difficult when a strong current meets a vertical surface. Divers consume more breathing gas when swimming against currents. Divers on the surface can be separated from their boat cover by currents. On the other hand, drift diving is only possible when there is a reasonable current. See also References Underwater diving physics Sports science
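As a closing numerical illustration of the buoyancy principle described at the start of this article, here is a minimal Python sketch of Archimedes' principle applied to a diver; the diver mass, displaced volume, and water densities are assumed example values:

```python
# Minimal sketch of Archimedes' principle as applied to diver buoyancy.
# Net force = (water density * displaced volume - mass) * g;
# positive means the diver floats, negative means the diver sinks.

RHO_FRESH = 1000.0   # kg/m^3 (assumed)
RHO_SEA = 1025.0     # kg/m^3 (assumed)
G = 9.81             # m/s^2

def net_buoyant_force(mass_kg, volume_m3, rho_water):
    """Upward buoyant force minus weight, in newtons."""
    return rho_water * G * volume_m3 - mass_kg * G

# A hypothetical diver plus equipment: 90 kg displacing 0.089 m^3 of water.
for name, rho in (("fresh water", RHO_FRESH), ("seawater", RHO_SEA)):
    force = net_buoyant_force(90.0, 0.089, rho)
    print(f"Net force in {name}: {force:+.1f} N")
```

The sign flip between the two cases shows why a diver weighted for neutral buoyancy in fresh water will float in seawater.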
Diving physics
[ "Physics" ]
2,373
[ "Applied and interdisciplinary physics", "Underwater diving physics" ]
474,936
https://en.wikipedia.org/wiki/Self-oscillation
Self-oscillation is the generation and maintenance of a periodic motion by a source of power that lacks any corresponding periodicity. The oscillator itself controls the phase with which the external power acts on it. Self-oscillators are therefore distinct from forced and parametric resonators, in which the power that sustains the motion must be modulated externally. In linear systems, self-oscillation appears as an instability associated with a negative damping term, which causes small perturbations to grow exponentially in amplitude. This negative damping is due to a positive feedback between the oscillation and the modulation of the external source of power. The amplitude and waveform of steady self-oscillations are determined by the nonlinear characteristics of the system. Self-oscillations are important in physics, engineering, biology, and economics. History of the subject The study of self-oscillators dates back to the early 1830s, with the work of Robert Willis and George Biddell Airy on the mechanism by which the vocal cords produce the human voice. Another instance of self-oscillation, associated with the unstable operation of centrifugal governors, was studied mathematically by James Clerk Maxwell in 1867. In the second edition of his treatise on The Theory of Sound, published in 1896, Lord Rayleigh considered various instances of mechanical and acoustic self-oscillations (which he called "maintained vibration") and offered a simple mathematical model for them. Interest in the subject of self-oscillation was also stimulated by the work of Heinrich Hertz, starting in 1887, in which he used a spark-gap transmitter to generate radio waves that he showed correspond to electrical oscillations with frequencies of hundreds of millions of cycles per second. Hertz's work led to the development of wireless telegraphy. The first detailed theoretical work on such electrical self-oscillation was carried out by Henri Poincaré in the early 20th century. The term "self-oscillation" (also translated as "auto-oscillation") was coined by the Soviet physicist Aleksandr Andronov, who studied such oscillations in the context of the mathematical theory of the structural stability of dynamical systems. Other important work on the subject, both theoretical and experimental, was due to André Blondel, Balthasar van der Pol, Alfred-Marie Liénard, and Philippe Le Corbeiller in the 20th century. The same phenomenon is sometimes labelled as "maintained", "sustained", "self-exciting", "self-induced", "spontaneous", or "autonomous" oscillation. Unwanted self-oscillations are known in the mechanical engineering literature as hunting, and in electronics as parasitic oscillations. Mathematical basis Self-oscillation is manifested as a linear instability of a dynamical system's static equilibrium. Two mathematical tests that can be used to diagnose such an instability are the Routh–Hurwitz and Nyquist criteria. The amplitude of the oscillation of an unstable system grows exponentially with time (i.e., small oscillations are negatively damped), until nonlinearities become important and limit the amplitude. This can produce a steady and sustained oscillation. In some cases, self-oscillation can be seen as resulting from a time lag in a closed-loop system, which makes the change in the variable x(t) dependent on the variable x(t − 1) evaluated at an earlier time.
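This mathematical basis, negative damping of small oscillations with nonlinear limiting of the amplitude, can be illustrated numerically. The following Python sketch (assuming NumPy and SciPy are available) integrates the van der Pol oscillator, the standard model discussed in the next paragraph; the parameter value and initial condition are arbitrary choices:

```python
# Numerical illustration of the mathematical basis described above:
# x'' - mu*(1 - x^2)*x' + x = 0 has negative linear damping near x = 0,
# so a small perturbation grows exponentially, and the nonlinear term
# (proportional to x^2) limits the growth, producing a limit cycle.

import numpy as np
from scipy.integrate import solve_ivp

MU = 1.0  # arbitrary choice of the damping parameter

def van_der_pol(t, state):
    x, v = state
    return [v, MU * (1.0 - x**2) * v - x]

# Start from a tiny perturbation of the unstable equilibrium at x = 0.
sol = solve_ivp(van_der_pol, (0.0, 50.0), [0.01, 0.0], max_step=0.05)

# The amplitude grows from 0.01 and settles near 2, independent of the start.
print("final amplitude ~", np.max(np.abs(sol.y[0][-200:])))
```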
Simple mathematical models of self-oscillators involve negative linear damping and positive non-linear damping terms, leading to a Hopf bifurcation and the appearance of limit cycles. The van der Pol oscillator is one such model that has been used extensively in the mathematical literature. Examples in engineering Railway and automotive wheels Hunting oscillation in railway wheels and shimmy in automotive tires can cause an uncomfortable wobbling effect, which in extreme cases can derail trains and cause cars to lose grip. Central heating thermostats Early central heating thermostats were prone to self-exciting oscillation because they responded too quickly. The problem was overcome by hysteresis, i.e., making them switch state only when the temperature varied from the target by a specified minimum amount. Automatic transmissions Self-exciting oscillation occurred in early automatic transmission designs when the vehicle was traveling at a speed between the ideal speeds of two gears. In these situations the transmission system would switch almost continuously between the two gears, which was both annoying and hard on the transmission. Such behavior is now inhibited by introducing hysteresis into the system. Steering of vehicles when course corrections are delayed There are many examples of self-exciting oscillation caused by delayed course corrections, ranging from light aircraft in a strong wind to erratic steering of road vehicles by a driver who is inexperienced or drunk. SEIG (self-excited induction generator) If an induction motor is connected to a capacitor and the shaft turns above synchronous speed, it operates as a self-excited induction generator. Self-exciting transmitters Many early radio systems tuned their transmitter circuit so that the system automatically created radio waves of the desired frequency. This design has given way to designs that use a separate oscillator to provide a signal that is then amplified to the desired power. Examples in other fields Population cycles in biology For example, a reduction in the population of a herbivore species because of predation causes the populations of that species' predators to decline; the reduced level of predation then allows the herbivore population to increase, which in turn allows the predator population to increase, and so on. Closed loops of time-lagged differential equations are a sufficient explanation for such cycles - in this case the delays are caused mainly by the breeding cycles of the species involved. See also Hopf bifurcation Limit cycle Van der Pol oscillator Hidden oscillation References Oscillators Amplifiers Systems theory Dynamical systems Nonlinear systems Mechanical vibrations Physical phenomena Ordinary differential equations Feedback
Self-oscillation
[ "Physics", "Mathematics", "Technology", "Engineering" ]
1,238
[ "Structural engineering", "Physical phenomena", "Nonlinear systems", "Mechanics", "Mechanical vibrations", "Amplifiers", "Dynamical systems" ]
475,008
https://en.wikipedia.org/wiki/Stiffness
Stiffness is the extent to which an object resists deformation in response to an applied force. The complementary concept is flexibility or pliability: the more flexible an object is, the less stiff it is. Calculations The stiffness, k, of a body is a measure of the resistance offered by an elastic body to deformation. For an elastic body with a single degree of freedom (DOF) (for example, stretching or compression of a rod), the stiffness is defined as k = F/δ, where F is the force on the body and δ is the displacement produced by the force along the same degree of freedom (for instance, the change in length of a stretched spring). Stiffness is usually defined under quasi-static conditions, but sometimes under dynamic loading. In the International System of Units, stiffness is typically measured in newtons per meter (N/m). In Imperial units, stiffness is typically measured in pounds (lbs) per inch. Generally speaking, deflections (or motions) of an infinitesimal element (which is viewed as a point) in an elastic body can occur along multiple DOF (a maximum of six DOF at a point). For example, a point on a horizontal beam can undergo both a vertical displacement and a rotation relative to its undeformed axis. When there are M degrees of freedom, an M × M matrix must be used to describe the stiffness at the point. The diagonal terms in the matrix are the direct-related stiffnesses (or simply stiffnesses) along the same degree of freedom, and the off-diagonal terms are the coupling stiffnesses between two different degrees of freedom (either at the same or different points) or the same degree of freedom at two different points. In industry, the term influence coefficient is sometimes used to refer to the coupling stiffness. Note that for a body with multiple DOF, the equation above generally does not apply, since the applied force generates not only the deflection along its own direction (or degree of freedom) but also deflections along other directions. For a body with multiple DOF, to calculate a particular direct-related stiffness (a diagonal term), the corresponding DOF is left free while the remaining ones are constrained. Under such a condition, the equation above yields the direct-related stiffness for the unconstrained degree of freedom. The ratios between the reaction forces (or moments) and the produced deflection are the coupling stiffnesses. The elasticity tensor is a generalization that describes all possible stretch and shear parameters. A single spring may intentionally be designed to have variable (non-linear) stiffness throughout its displacement. Compliance The inverse of stiffness is flexibility or compliance, typically measured in units of metres per newton. In rheology, it may be defined as the ratio of strain to stress, and so takes the units of reciprocal stress, for example 1/Pa. Rotational stiffness A body may also have a rotational stiffness, k, given by k = M/θ, where M is the applied moment and θ is the rotation angle. In the SI system, rotational stiffness is typically measured in newton-metres per radian. In the SAE system, rotational stiffness is typically measured in inch-pounds per degree. Further measures of stiffness are derived on a similar basis, including: shear stiffness - the ratio of applied shear force to shear deformation torsional stiffness - the ratio of applied torsion moment to the angle of twist Relationship to elasticity The elastic modulus of a material is not the same as the stiffness of a component made from that material.
Elastic modulus is a property of the constituent material; stiffness is a property of a structure or component of a structure, and hence it is dependent upon the various physical dimensions that describe that component. That is, the modulus is an intensive property of the material; stiffness, on the other hand, is an extensive property of the solid body that is dependent on the material, its shape, and its boundary conditions. For example, for an element in tension or compression, the axial stiffness is k = AE/L, where E is the (tensile) elastic modulus (or Young's modulus), A is the cross-sectional area, and L is the length of the element. Similarly, the torsional stiffness of a straight section is k = GJ/L, where G is the rigidity modulus of the material and J is the torsion constant for the section. Note that the torsional stiffness has dimensions [force] * [length] / [angle], so that its SI units are N*m/rad. For the special case of unconstrained uniaxial tension or compression, Young's modulus can be thought of as a measure of the stiffness of a structure. Applications The stiffness of a structure is of principal importance in many engineering applications, so the modulus of elasticity is often one of the primary properties considered when selecting a material. A high modulus of elasticity is sought when deflection is undesirable, while a low modulus of elasticity is required when flexibility is needed. In biology, the stiffness of the extracellular matrix is important for guiding the migration of cells in a phenomenon called durotaxis. Another application of stiffness is found in skin biology. The skin maintains its structure due to its intrinsic tension, contributed to by collagen, an extracellular protein that accounts for approximately 75% of its dry weight. The pliability of skin is a parameter of interest that represents its firmness and extensibility, encompassing characteristics such as elasticity, stiffness, and adherence. These factors are of functional significance to patients, particularly those with traumatic injuries to the skin, in whom pliability can be reduced due to the formation and replacement of healthy skin tissue by a pathological scar. This can be evaluated either subjectively, or objectively using a device such as the Cutometer. The Cutometer applies a vacuum to the skin and measures the extent to which it can be vertically distended. These measurements are able to distinguish between healthy skin, normal scarring, and pathological scarring, and the method has been applied within clinical and industrial settings to monitor both pathophysiological sequelae and the effects of treatments on skin. See also References Physical quantities Continuum mechanics Structural analysis
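The axial and torsional stiffness formulas above can be evaluated with a short numerical sketch. The material values below are typical textbook figures for steel and, like the rod dimensions, are assumptions for illustration only:

```python
# Minimal sketch of the axial (k = AE/L) and torsional (k = GJ/L)
# stiffness formulas from the section above, for a solid circular rod.

import math

E_STEEL = 200e9    # Young's modulus, Pa (typical textbook value, assumed)
G_STEEL = 79e9     # rigidity (shear) modulus, Pa (assumed)

def axial_stiffness(E, area, length):
    """k = A*E/L, in newtons per metre."""
    return E * area / length

def torsional_stiffness(G, J, length):
    """k = G*J/L, in newton-metres per radian."""
    return G * J / length

# A 1 m steel rod, 10 mm in diameter.
d = 0.010
A = math.pi * d**2 / 4     # cross-sectional area
J = math.pi * d**4 / 32    # torsion constant for a solid circular section
print(f"axial stiffness: {axial_stiffness(E_STEEL, A, 1.0):.3e} N/m")
print(f"torsional stiffness: {torsional_stiffness(G_STEEL, J, 1.0):.3e} N*m/rad")
```

The calculation makes the extensive/intensive distinction concrete: halving the rod length doubles both stiffnesses even though the moduli are unchanged.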
Stiffness
[ "Physics", "Mathematics", "Engineering" ]
1,275
[ "Structural engineering", "Physical phenomena", "Physical quantities", "Continuum mechanics", "Quantity", "Structural analysis", "Classical mechanics", "Mechanical engineering", "Aerospace engineering", "Physical properties" ]
475,199
https://en.wikipedia.org/wiki/Low-pressure%20area
In meteorology, a low-pressure area, low area or low is a region where the atmospheric pressure is lower than that of surrounding locations. It is the opposite of a high-pressure area. Low-pressure areas are commonly associated with inclement weather (such as cloudy, windy conditions, with possible rain or storms), while high-pressure areas are associated with lighter winds and clear skies. Winds circle anti-clockwise around lows in the northern hemisphere, and clockwise in the southern hemisphere, because the Coriolis force acts in opposite senses in the two hemispheres. Low-pressure systems form under areas of wind divergence that occur in the upper levels of the atmosphere (aloft). The formation process of a low-pressure area is known as cyclogenesis. In meteorology, atmospheric divergence aloft occurs in two kinds of places: The first is in the area on the east side of upper troughs, which form half of a Rossby wave within the Westerlies (a trough with large wavelength that extends through the troposphere). A second is an area where wind divergence aloft occurs ahead of embedded shortwave troughs, which are of smaller wavelength. Diverging winds aloft, ahead of these troughs, cause atmospheric lift within the troposphere below as air flows upwards away from the surface, which lowers surface pressures, as this upward motion partially counteracts the force of gravity packing the air close to the ground. Thermal lows form due to localized heating caused by greater solar incidence over deserts and other land masses. Since localized areas of warm air are less dense than their surroundings, this warmer air rises, which lowers atmospheric pressure near that portion of the Earth's surface. Large-scale thermal lows over continents help drive monsoon circulations. Low-pressure areas can also form due to organized thunderstorm activity over warm water. When this occurs over the tropics in concert with the Intertropical Convergence Zone, it is known as a monsoon trough. Monsoon troughs reach their northerly extent in August and their southerly extent in February. When a convective low acquires a well-defined circulation in the tropics it is termed a tropical cyclone. Tropical cyclones can form during any month of the year globally, but can occur in either the northern or southern hemisphere during December. Atmospheric lift will also generally produce cloud cover through adiabatic cooling once the air temperature drops below the dew point as the air rises. The cloudy skies typical of low-pressure areas act to dampen diurnal temperature extremes. Since clouds reflect sunlight, incoming shortwave solar radiation decreases, which causes lower temperatures during the day. At night the absorptive effect of clouds on outgoing longwave radiation, such as heat energy from the surface, allows for warmer night-time minimums in all seasons. The stronger the area of low pressure, the stronger the winds experienced in its vicinity. Globally, low-pressure systems are most frequently located over the Tibetan Plateau and in the lee of the Rocky Mountains. In Europe (particularly in the British Isles and Netherlands), recurring low-pressure weather systems are typically known as depressions. Formation Cyclogenesis is the development and strengthening of cyclonic circulations, or low-pressure areas, within the atmosphere. Cyclogenesis is the opposite of cyclolysis, and has an anticyclonic (high-pressure system) equivalent which deals with the formation of high-pressure areas (anticyclogenesis).
Cyclogenesis is an umbrella term for several different processes, all of which result in the development of some sort of cyclone. Meteorologists use the term "cyclone" where circular pressure systems flow in the direction of the Earth's rotation, which normally coincides with areas of low pressure. The largest low-pressure systems are cold-core polar cyclones and extratropical cyclones, which lie on the synoptic scale. Warm-core cyclones such as tropical cyclones, mesocyclones, and polar lows lie within the smaller mesoscale. Subtropical cyclones are of intermediate size. Cyclogenesis can occur at various scales, from the microscale to the synoptic scale. Larger-scale troughs, also called Rossby waves, are synoptic in scale. Shortwave troughs embedded within the flow around larger-scale troughs are smaller in scale, or mesoscale in nature. Both Rossby waves and shortwaves embedded within the flow around Rossby waves migrate equatorward of the polar cyclones located in both the Northern and Southern hemispheres. All share one important aspect, that of upward vertical motion within the troposphere. Such upward motions decrease the mass of local atmospheric columns of air, which lowers surface pressure. Extratropical cyclones form as waves along weather fronts, due to a passing shortwave aloft or an upper-level jet streak, before occluding later in their life cycle as cold-core cyclones. Polar lows are small-scale, short-lived atmospheric low-pressure systems that occur over the ocean areas poleward of the main polar front in both the Northern and Southern Hemispheres. They are part of the larger class of mesoscale weather systems. Polar lows can be difficult to detect using conventional weather reports and are a hazard to high-latitude operations, such as shipping and offshore platforms. They are vigorous systems with near-surface winds of at least gale force. Tropical cyclones form due to latent heat driven by significant thunderstorm activity, and are warm-core with well-defined circulations. Certain criteria need to be met for their formation. In most situations, water temperatures of at least 26.5 °C are needed down to a depth of at least 50 m; waters of this temperature cause the overlying atmosphere to be unstable enough to sustain convection and thunderstorms. Another factor is rapid cooling with height, which allows the release of the heat of condensation that powers a tropical cyclone. High humidity is needed, especially in the lower-to-mid troposphere; when there is a great deal of moisture in the atmosphere, conditions are more favorable for disturbances to develop. Low amounts of wind shear are needed, as high shear is disruptive to the storm's circulation. Lastly, a formative tropical cyclone needs a pre-existing system of disturbed weather, since without a circulation no cyclonic development will take place. Mesocyclones form as warm-core cyclones over land, and can lead to tornado formation. Waterspouts can also form from mesocyclones, but more often develop from environments of high instability and low vertical wind shear. In deserts, a lack of ground and plant moisture that would normally provide evaporative cooling can lead to intense, rapid solar heating of the lower layers of air. The hot air is less dense than the surrounding cooler air. This, combined with the rising of the hot air, results in a low-pressure area called a thermal low.
Monsoon circulations are caused by thermal lows which form over large areas of land, and their strength is driven by the fact that land heats up more quickly than the nearby ocean. This generates a steady wind blowing toward the land, bringing the moist near-surface air over the oceans with it. Similar rainfall is caused by the moist ocean air being lifted upwards by mountains, surface heating, convergence at the surface, divergence aloft, or from storm-produced outflows at the surface. However the lifting occurs, the air cools due to expansion in lower pressure, which in turn produces condensation. In winter, the land cools off quickly, but the ocean keeps its heat longer due to its higher specific heat. The hot air over the ocean rises, creating a low-pressure area and a breeze from land to ocean, while a large area of drying high pressure is formed over the land, increased by wintertime cooling. Monsoons resemble sea and land breezes, terms usually referring to the localized, diurnal (daily) cycle of circulation near coastlines everywhere, but they are much larger in scale - also stronger and seasonal. Climatology Mid-latitudes and subtropics Large polar cyclones help determine the steering of systems moving through the mid-latitudes, south of the Arctic and north of the Antarctic. The Arctic oscillation provides an index used to gauge the magnitude of this effect in the Northern Hemisphere. Extratropical cyclones tend to form east of climatological trough positions aloft, near the east coasts of continents or the west sides of oceans. A study of extratropical cyclones in the Southern Hemisphere shows that between the 30th and 70th parallels there is an average of 37 cyclones in existence during any 6-hour period. A separate study in the Northern Hemisphere suggests that approximately 234 significant extratropical cyclones form each winter. In Europe, particularly in the United Kingdom and in the Netherlands, recurring extratropical low-pressure weather systems are typically known as depressions. These tend to bring wet weather throughout the year. Thermal lows also occur during the summer over continental areas across the subtropics, such as the Sonoran Desert, the Mexican Plateau, the Sahara, South America, and Southeast Asia. The lows are most commonly located over the Tibetan Plateau and in the lee of the Rocky Mountains. Monsoon trough Elongated areas of low pressure form at the monsoon trough or Intertropical Convergence Zone as part of the Hadley cell circulation. Monsoon troughing in the western Pacific reaches its zenith in latitude during the late summer, when the wintertime surface ridge in the opposite hemisphere is the strongest. It can reach as far as the 40th parallel in East Asia during August and the 20th parallel in Australia during February. Its poleward progression is accelerated by the onset of the summer monsoon, which is characterized by the development of lower air pressure over the warmest part of the various continents. The large-scale thermal lows over continents help create pressure gradients which drive monsoon circulations. In the southern hemisphere, the monsoon trough associated with the Australian monsoon reaches its most southerly latitude in February, oriented along a west-northwest/east-southeast axis. Many of the world's rainforests are associated with these climatological low-pressure systems.
Tropical cyclone Tropical cyclones generally need to form poleward of the 5th parallel north and 5th parallel south, allowing the Coriolis effect to deflect winds blowing towards the low-pressure center and creating a circulation. Worldwide, tropical cyclone activity peaks in late summer, when the difference between temperatures aloft and sea surface temperatures is the greatest. However, each particular basin has its own seasonal patterns. On a worldwide scale, May is the least active month, while September is the most active month. Nearly one-third of the world's tropical cyclones form within the western Pacific Ocean, making it the most active tropical cyclone basin on Earth. Associated weather Wind is initially accelerated from areas of high pressure to areas of low pressure. This is due to density (or temperature and moisture) differences between two air masses. Since stronger high-pressure systems contain cooler or drier air, the air mass is denser and flows towards areas that are warm or moist, which are in the vicinity of low-pressure areas in advance of their associated cold fronts. The stronger the pressure difference, or pressure gradient, between a high-pressure system and a low-pressure system, the stronger the wind. Thus, stronger areas of low pressure are associated with stronger winds. The Coriolis force caused by the Earth's rotation is what gives winds around low-pressure areas (such as in hurricanes, cyclones, and typhoons) their counter-clockwise (anticlockwise) circulation in the northern hemisphere (as the wind moves inward from higher pressure and is deflected to the right of its motion) and clockwise circulation in the southern hemisphere (as the wind moves inward and is deflected to the left of its motion). A tropical cyclone differs from a hurricane or typhoon based only on geographic location. A tropical cyclone is fundamentally different from a mid-latitude cyclone. A hurricane is a storm that occurs in the Atlantic Ocean and northeastern Pacific Ocean, a typhoon occurs in the northwestern Pacific Ocean, and a tropical cyclone occurs in the south Pacific or Indian Ocean. Friction with land slows down the wind flowing into low-pressure systems and causes the wind to flow more inward, or more ageostrophically, toward their centers. Tornadoes are often too small, and of too short a duration, to be influenced by the Coriolis force, but may be so influenced when arising from a low-pressure system. See also East Asian Monsoon High-pressure area Intertropical Convergence Zone North American Monsoon Surface weather analysis Tropical wave Trough (meteorology) Weather map References Meteorological phenomena Types of cyclone Atmospheric dynamics Vortices Atmospheric pressure
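The qualitative relationship described above between the pressure gradient, the Coriolis force, and wind strength has a standard quantitative form, the geostrophic balance, which is not stated in this article but can be sketched as supplementary illustration. In the Python fragment below, the air density, pressure gradient, and latitude are assumed example values, and the approximation is only valid away from the surface and the equator:

```python
# Minimal sketch of geostrophic balance: away from friction and the equator,
# the pressure-gradient force balances the Coriolis force, giving a wind
# speed v = |grad p| / (rho * f). Note f -> 0 at the equator, consistent
# with tropical cyclones needing to form away from it.

import math

OMEGA = 7.2921e-5   # Earth's rotation rate, rad/s
RHO_AIR = 1.2       # near-surface air density, kg/m^3 (assumed)

def coriolis_parameter(latitude_deg):
    """f = 2*Omega*sin(latitude); zero at the equator, largest at the poles."""
    return 2.0 * OMEGA * math.sin(math.radians(latitude_deg))

def geostrophic_speed(pressure_gradient_pa_per_m, latitude_deg):
    return pressure_gradient_pa_per_m / (RHO_AIR * coriolis_parameter(latitude_deg))

# Example: a 4 hPa pressure drop over 200 km at 45 degrees latitude.
grad = 400.0 / 200_000.0
print(f"f at 45 deg: {coriolis_parameter(45):.2e} s^-1")
print(f"geostrophic wind: {geostrophic_speed(grad, 45):.1f} m/s")
```

Doubling the pressure gradient doubles the computed wind speed, matching the statement that stronger lows are associated with stronger winds.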
Low-pressure area
[ "Physics", "Chemistry", "Mathematics" ]
2,574
[ "Physical phenomena", "Earth phenomena", "Physical quantities", "Atmospheric dynamics", "Vortices", "Meteorological quantities", "Atmospheric pressure", "Meteorological phenomena", "Dynamical systems", "Fluid dynamics" ]
475,393
https://en.wikipedia.org/wiki/Hartree%E2%80%93Fock%20method
In computational physics and chemistry, the Hartree–Fock (HF) method is a method of approximation for the determination of the wave function and the energy of a quantum many-body system in a stationary state. The Hartree–Fock method often assumes that the exact N-body wave function of the system can be approximated by a single Slater determinant (in the case where the particles are fermions) or by a single permanent (in the case of bosons) of N spin-orbitals. By invoking the variational method, one can derive a set of N coupled equations for the N spin orbitals. A solution of these equations yields the Hartree–Fock wave function and energy of the system. The Hartree–Fock approximation is an instance of mean-field theory, where neglecting higher-order fluctuations in the order parameter allows interaction terms to be replaced with quadratic terms, obtaining exactly solvable Hamiltonians. Especially in the older literature, the Hartree–Fock method is also called the self-consistent field method (SCF). In deriving what is now called the Hartree equation as an approximate solution of the Schrödinger equation, Hartree required the final field as computed from the charge distribution to be "self-consistent" with the assumed initial field. Thus, self-consistency was a requirement of the solution. The solutions to the non-linear Hartree–Fock equations also behave as if each particle is subjected to the mean field created by all other particles (see the Fock operator below), and hence the terminology continued. The equations are almost universally solved by means of an iterative method, although the fixed-point iteration algorithm does not always converge. This solution scheme is not the only one possible and is not an essential feature of the Hartree–Fock method. The Hartree–Fock method finds its typical application in the solution of the Schrödinger equation for atoms, molecules, nanostructures and solids, but it has also found widespread use in nuclear physics. (See Hartree–Fock–Bogoliubov method for a discussion of its application in nuclear structure theory.) In atomic structure theory, calculations may be for a spectrum with many excited energy levels, and consequently, the Hartree–Fock method for atoms assumes the wave function is a single configuration state function with well-defined quantum numbers and that the energy level is not necessarily the ground state. For both atoms and molecules, the Hartree–Fock solution is the central starting point for most methods that describe the many-electron system more accurately. The rest of this article will focus on applications in electronic structure theory suitable for molecules, with the atom as a special case. The discussion here is only for the restricted Hartree–Fock method, where the atom or molecule is a closed-shell system with all orbitals (atomic or molecular) doubly occupied. Open-shell systems, where some of the electrons are not paired, can be dealt with by either the restricted open-shell or the unrestricted Hartree–Fock methods. Brief history Early semi-empirical methods The origin of the Hartree–Fock method dates back to the end of the 1920s, soon after the discovery of the Schrödinger equation in 1926. Douglas Hartree's methods were guided by some earlier, semi-empirical methods of the early 1920s (by E. Fues, R. B. Lindsay, and himself) set in the old quantum theory of Bohr. In the Bohr model of the atom, the energy of a state with principal quantum number n is given in atomic units as E = −1/(2n²).
It was observed from atomic spectra that the energy levels of many-electron atoms are well described by applying a modified version of Bohr's formula. By introducing the quantum defect d as an empirical parameter, the energy levels of a generic atom were well approximated by the formula E = −1/(2(n − d)²), in the sense that one could reproduce fairly well the transition levels observed in the X-ray region (for example, see the empirical discussion and derivation in Moseley's law). The existence of a non-zero quantum defect was attributed to electron–electron repulsion, which clearly does not exist in the isolated hydrogen atom. This repulsion resulted in partial screening of the bare nuclear charge. These early researchers later introduced other potentials containing additional empirical parameters, with the hope of better reproducing the experimental data. Hartree method In 1927, D. R. Hartree introduced a procedure, which he called the self-consistent field method, to calculate approximate wave functions and energies for atoms and ions. Hartree sought to do away with empirical parameters and solve the many-body time-independent Schrödinger equation from fundamental physical principles, i.e., ab initio. His first proposed method of solution became known as the Hartree method, or Hartree product. However, many of Hartree's contemporaries did not understand the physical reasoning behind the Hartree method: it appeared to many people to contain empirical elements, and its connection to the solution of the many-body Schrödinger equation was unclear. However, in 1928 J. C. Slater and J. A. Gaunt independently showed that the Hartree method could be couched on a sounder theoretical basis by applying the variational principle to an ansatz (trial wave function) as a product of single-particle functions. In 1930, Slater and V. A. Fock independently pointed out that the Hartree method did not respect the principle of antisymmetry of the wave function. The Hartree method used the Pauli exclusion principle in its older formulation, forbidding the presence of two electrons in the same quantum state. However, this was shown to be fundamentally incomplete in its neglect of quantum statistics. Hartree–Fock A solution to the lack of anti-symmetry in the Hartree method came when it was shown that a Slater determinant, a determinant of one-particle orbitals first used by Heisenberg and Dirac in 1926, trivially satisfies the antisymmetric property of the exact solution and hence is a suitable ansatz for applying the variational principle. The original Hartree method can then be viewed as an approximation to the Hartree–Fock method by neglecting exchange. Fock's original method relied heavily on group theory and was too abstract for contemporary physicists to understand and implement. In 1935, Hartree reformulated the method to be more suitable for the purposes of calculation. The Hartree–Fock method, despite its physically more accurate picture, was little used until the advent of electronic computers in the 1950s, due to the much greater computational demands over the early Hartree method and empirical models. Initially, both the Hartree method and the Hartree–Fock method were applied exclusively to atoms, where the spherical symmetry of the system allowed one to greatly simplify the problem.
These approximate methods were (and are) often used together with the central field approximation to impose the condition that electrons in the same shell have the same radial part and to restrict the variational solution to be a spin eigenfunction. Even so, calculating a solution by hand using the Hartree–Fock equations for a medium-sized atom was laborious; small molecules required computational resources far beyond what was available before 1950. Hartree–Fock algorithm The Hartree–Fock method is typically used to solve the time-independent Schrödinger equation for a multi-electron atom or molecule as described in the Born–Oppenheimer approximation. Since there are no known analytic solutions for many-electron systems (there are solutions for one-electron systems such as hydrogenic atoms and the diatomic hydrogen cation), the problem is solved numerically. Due to the nonlinearities introduced by the Hartree–Fock approximation, the equations are solved using a nonlinear method such as iteration, which gives rise to the name "self-consistent field method." Approximations The Hartree–Fock method makes five major simplifications to deal with this task: The Born–Oppenheimer approximation is inherently assumed. The full molecular wave function is actually a function of the coordinates of each of the nuclei, in addition to those of the electrons. Typically, relativistic effects are completely neglected. The momentum operator is assumed to be completely non-relativistic. The variational solution is assumed to be a linear combination of a finite number of basis functions, which are usually (but not always) chosen to be orthogonal. The finite basis set is assumed to be approximately complete. Each energy eigenfunction is assumed to be describable by a single Slater determinant, an antisymmetrized product of one-electron wave functions (i.e., orbitals). The mean-field approximation is implied. Effects arising from deviations from this assumption are neglected. These effects are often collectively used as a definition of the term electron correlation. However, the label "electron correlation" strictly speaking encompasses both the Coulomb correlation and the Fermi correlation, and the latter is an effect of electron exchange, which is fully accounted for in the Hartree–Fock method. Stated in this terminology, the method only neglects the Coulomb correlation. However, this is an important flaw, accounting for (among other things) Hartree–Fock's inability to capture London dispersion. Relaxation of the last two approximations gives rise to many so-called post-Hartree–Fock methods. Variational optimization of orbitals The variational theorem states that for a time-independent Hamiltonian operator, any trial wave function will have an energy expectation value that is greater than or equal to the energy of the true ground-state wave function corresponding to the given Hamiltonian. Because of this, the Hartree–Fock energy is an upper bound to the true ground-state energy of a given molecule. In the context of the Hartree–Fock method, the best possible solution is at the Hartree–Fock limit; i.e., the limit of the Hartree–Fock energy as the basis set approaches completeness. (The other is the full-CI limit, where the last two approximations of the Hartree–Fock theory as described above are completely undone. It is only when both limits are attained that the exact solution, up to the Born–Oppenheimer approximation, is obtained.) The Hartree–Fock energy is the minimal energy for a single Slater determinant.
The starting point for the Hartree–Fock method is a set of approximate one-electron wave functions known as spin-orbitals. For an atomic orbital calculation, these are typically the orbitals for a hydrogen-like atom (an atom with only one electron, but the appropriate nuclear charge). For a molecular orbital or crystalline calculation, the initial approximate one-electron wave functions are typically a linear combination of atomic orbitals (LCAO). The orbitals above only account for the presence of other electrons in an average manner. In the Hartree–Fock method, the effect of other electrons is accounted for in a mean-field theory context. The orbitals are optimized by requiring them to minimize the energy of the respective Slater determinant. The resultant variational conditions on the orbitals lead to a new one-electron operator, the Fock operator. At the minimum, the occupied orbitals are eigensolutions of the Fock operator, up to a unitary transformation among themselves. The Fock operator is an effective one-electron Hamiltonian operator that is the sum of two terms. The first is a sum of kinetic-energy operators for each electron, the internuclear repulsion energy, and a sum of nuclear–electronic Coulombic attraction terms. The second is a sum of Coulombic repulsion terms between electrons in a mean-field theory description; a net repulsion energy for each electron in the system, which is calculated by treating all of the other electrons within the molecule as a smooth distribution of negative charge. This is the major simplification inherent in the Hartree–Fock method and is equivalent to the fifth simplification in the above list. Since the Fock operator depends on the orbitals used to construct the corresponding Fock matrix, the eigenfunctions of the Fock operator are in turn new orbitals, which can be used to construct a new Fock operator. In this way, the Hartree–Fock orbitals are optimized iteratively until the change in total electronic energy falls below a predefined threshold. In this way, a set of self-consistent one-electron orbitals is calculated. The Hartree–Fock electronic wave function is then the Slater determinant constructed from these orbitals. Following the basic postulates of quantum mechanics, the Hartree–Fock wave function can then be used to compute any desired chemical or physical property within the framework of the Hartree–Fock method and the approximations employed. Mathematical formulation Derivation According to the Slater–Condon rules, the energy expectation value of the molecular electronic Hamiltonian for a Slater determinant is E = \sum_i \langle \varphi_i | \hat{h} | \varphi_i \rangle + \tfrac{1}{2} \sum_{i,j} \left( \langle \varphi_i \varphi_j | \hat{g} | \varphi_i \varphi_j \rangle - \langle \varphi_i \varphi_j | \hat{g} | \varphi_j \varphi_i \rangle \right), where \hat{h} is the one-electron operator including the electronic kinetic energy and the electron–nucleus Coulombic interaction, and \hat{g} is the two-electron Coulomb repulsion operator. To derive the Hartree–Fock equation we minimize the energy functional for N electrons with orthonormality constraints. We choose a basis set in which the Lagrange multiplier matrix becomes diagonal, i.e. \lambda_{ij} = \varepsilon_i \delta_{ij}. Performing the variation, we obtain \hat{h}\,\varphi_i + \sum_j \left( \hat{J}_j - \hat{K}_j \right) \varphi_i = \varepsilon_i \varphi_i. The factor 1/2 before the double integrals in the molecular Hamiltonian drops out due to symmetry and the product rule. We may define the Fock operator \hat{F} = \hat{h} + \sum_j ( \hat{J}_j - \hat{K}_j ) to rewrite the equation as \hat{F}\varphi_i = \varepsilon_i \varphi_i, where the Coulomb operator \hat{J}_j and the exchange operator \hat{K}_j are defined as follows: \hat{J}_j\,\varphi_i(\mathbf{x}) = \varphi_i(\mathbf{x}) \int \frac{|\varphi_j(\mathbf{x}')|^2}{|\mathbf{r} - \mathbf{r}'|}\, d\mathbf{x}', \qquad \hat{K}_j\,\varphi_i(\mathbf{x}) = \varphi_j(\mathbf{x}) \int \frac{\varphi_j^{*}(\mathbf{x}')\,\varphi_i(\mathbf{x}')}{|\mathbf{r} - \mathbf{r}'|}\, d\mathbf{x}'. The exchange operator has no classical analogue and can only be defined as an integral operator. The solutions \varphi_i and \varepsilon_i are called the molecular orbitals and orbital energies respectively. Although the Hartree–Fock equation appears in the form of an eigenvalue problem, the Fock operator itself depends on the orbitals \varphi_i, and the equation must be solved by a different technique.
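The iterative structure just described can be shown in a schematic sketch. The following Python fragment is emphatically not a real Hartree–Fock program: the "Fock" matrix is an invented, density-dependent 2 × 2 matrix rather than one built from molecular integrals, and only the loop structure (build the Fock matrix from the current orbitals, diagonalize, rebuild, repeat until the energy change falls below a threshold) reflects the method described above:

```python
# Schematic sketch of the self-consistent field iteration, for a toy
# two-level system in an orthonormal basis. H_CORE and COUPLING are
# invented numbers; in real Hartree-Fock the density-dependent term
# would be built from two-electron integrals.

import numpy as np

H_CORE = np.array([[-2.0, -0.5],
                   [-0.5, -1.0]])   # one-electron part (toy values)
COUPLING = 0.6                      # strength of the density-dependent part

def fock(density):
    # Stand-in for the Coulomb/exchange contribution of real Hartree-Fock.
    return H_CORE + COUPLING * density

density = np.zeros((2, 2))          # initial guess: empty density matrix
energy_old = 0.0
for iteration in range(100):
    f = fock(density)
    eigvals, eigvecs = np.linalg.eigh(f)   # diagonalize the Fock matrix
    c_occ = eigvecs[:, :1]                 # doubly occupy the lowest orbital
    density = 2.0 * c_occ @ c_occ.T        # closed-shell density matrix
    energy = 0.5 * np.sum(density * (H_CORE + f))  # standard RHF energy form
    if abs(energy - energy_old) < 1e-10:   # convergence on the energy change
        break
    energy_old = energy

print(f"converged after {iteration} iterations, E = {energy:.6f}")
```

A production code would add convergence accelerators such as the damping (F-mixing) discussed in the numerical stability section below, or DIIS.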
Total energy The optimal total energy can be written in terms of molecular orbitals; here J_{ij} and K_{ij} are matrix elements of the Coulomb and exchange operators respectively, and V_{\text{nuc}} is the total electrostatic repulsion between all the nuclei in the molecule. The total energy is not equal to the sum of orbital energies. If the atom or molecule is closed shell, the total energy according to the Hartree–Fock method is E_{\text{HF}} = 2\sum_i h_{ii} + \sum_{i,j} \left( 2 J_{ij} - K_{ij} \right) + V_{\text{nuc}}, with the sums running over the occupied spatial orbitals. Linear combination of atomic orbitals Typically, in modern Hartree–Fock calculations, the one-electron wave functions are approximated by a linear combination of atomic orbitals. These atomic orbitals are called Slater-type orbitals. Furthermore, it is very common for the "atomic orbitals" in use to actually be composed of a linear combination of one or more Gaussian-type orbitals, rather than Slater-type orbitals, in the interests of saving large amounts of computation time. Various basis sets are used in practice, most of which are composed of Gaussian functions. In some applications, an orthogonalization method such as the Gram–Schmidt process is performed in order to produce a set of orthogonal basis functions. This can in principle save computational time when the computer is solving the Roothaan–Hall equations by converting the overlap matrix effectively to an identity matrix. However, in most modern computer programs for molecular Hartree–Fock calculations this procedure is not followed, due to the high numerical cost of orthogonalization and the advent of more efficient, often sparse, algorithms for solving the generalized eigenvalue problem, of which the Roothaan–Hall equations are an example. Numerical stability Numerical stability can be a problem with this procedure and there are various ways of combatting this instability. One of the most basic and generally applicable is called F-mixing or damping. With F-mixing, once a single-electron wave function is calculated, it is not used directly. Instead, some combination of that calculated wave function and the previous wave functions for that electron is used, the most common being a simple linear combination of the calculated and immediately preceding wave function. A clever dodge, employed by Hartree, for atomic calculations was to increase the nuclear charge, thus pulling all the electrons closer together. As the system stabilised, this was gradually reduced to the correct charge. In molecular calculations a similar approach is sometimes used by first calculating the wave function for a positive ion and then using these orbitals as the starting point for the neutral molecule. Modern molecular Hartree–Fock computer programs use a variety of methods to ensure convergence of the Roothaan–Hall equations.
Still others (such as variational quantum Monte Carlo) modify the Hartree–Fock wave function by multiplying it by a correlation function ("Jastrow" factor), a term which is explicitly a function of multiple electrons that cannot be decomposed into independent single-particle functions. An alternative to Hartree–Fock calculations used in some cases is density functional theory, which treats both exchange and correlation energies, albeit approximately. Indeed, it is common to use calculations that are a hybrid of the two methods—the popular B3LYP scheme is one such hybrid functional method. Another option is to use modern valence bond methods. Software packages For a list of software packages known to handle Hartree–Fock calculations, particularly for molecules and solids, see the list of quantum chemistry and solid state physics software. See also Related fields Quantum chemistry Molecular physics Quantum chemistry computer programs Concepts Roothaan equations Koopmans' theorem Post-Hartree–Fock Direct inversion of iterative subspace People Vladimir Aleksandrovich Fock Clemens Roothaan George G. Hall John Pople Reinhart Ahlrichs References Sources External links An Introduction to Hartree-Fock Molecular Orbital Theory by C. David Sherrill (June 2000) Mean-Field Theory: Hartree-Fock and BCS in E. Pavarini, E. Koch, J. van den Brink, and G. Sawatzky: Quantum materials: Experiments and Theory, Jülich 2016, Electronic structure methods Quantum chemistry Theoretical chemistry Computational chemistry Computational physics 1927 in science
Hartree–Fock method
[ "Physics", "Chemistry" ]
3,897
[ "Quantum chemistry", "Quantum mechanics", "Computational physics", "Theoretical chemistry", "Electronic structure methods", "Computational chemistry", " molecular", "nan", "Atomic", " and optical physics" ]
475,449
https://en.wikipedia.org/wiki/Oscillation%20%28mathematics%29
In mathematics, the oscillation of a function or a sequence is a number that quantifies how much that sequence or function varies between its extreme values as it approaches infinity or a point. As is the case with limits, there are several definitions that put the intuitive concept into a form suitable for a mathematical treatment: oscillation of a sequence of real numbers, oscillation of a real-valued function at a point, and oscillation of a function on an interval (or open set). Definitions Oscillation of a sequence Let $(a_n)$ be a sequence of real numbers. The oscillation of that sequence is defined as the difference (possibly infinite) between the limit superior and limit inferior of $(a_n)$: $\omega(a_n) = \limsup_{n\to\infty} a_n - \liminf_{n\to\infty} a_n$. The oscillation is zero if and only if the sequence converges. It is undefined if $\limsup_{n\to\infty} a_n$ and $\liminf_{n\to\infty} a_n$ are both equal to +∞ or both equal to −∞, that is, if the sequence tends to +∞ or −∞. Oscillation of a function on an open set Let $f$ be a real-valued function of a real variable. The oscillation of $f$ on an interval $I$ in its domain is the difference between the supremum and infimum of $f$: $\omega_f(I) = \sup_{x\in I} f(x) - \inf_{x\in I} f(x)$. More generally, if $f: X \to \mathbb{R}$ is a function on a topological space $X$ (such as a metric space), then the oscillation of $f$ on an open set $U$ is $\omega_f(U) = \sup_{x\in U} f(x) - \inf_{x\in U} f(x)$. Oscillation of a function at a point The oscillation of a function $f$ of a real variable at a point $x_0$ is defined as the limit as $\varepsilon\to 0$ of the oscillation of $f$ on an $\varepsilon$-neighborhood of $x_0$: $\omega_f(x_0) = \lim_{\varepsilon\to 0}\,\omega_f\big((x_0-\varepsilon,\,x_0+\varepsilon)\big)$. This is the same as the difference between the limit superior and limit inferior of the function at $x_0$, provided the point $x_0$ is not excluded from the limits. More generally, if $f$ is a real-valued function on a metric space, then the oscillation is $\omega_f(x_0) = \lim_{\varepsilon\to 0}\,\omega_f\big(B_\varepsilon(x_0)\big)$, where $B_\varepsilon(x_0)$ is the open ball of radius $\varepsilon$ about $x_0$. Examples $1/x$ has oscillation ∞ at $x = 0$, and oscillation 0 at other finite $x$ and at −∞ and +∞. $\sin(1/x)$ (the topologist's sine curve) has oscillation 2 at $x = 0$, and 0 elsewhere. $\sin x$ has oscillation 0 at every finite $x$, and 2 at −∞ and +∞. The sequence $(-1)^n$, that is 1, −1, 1, −1, 1, −1..., has oscillation 2. In the last example the sequence is periodic, and any sequence that is periodic without being constant will have non-zero oscillation. However, non-zero oscillation does not usually indicate periodicity. Geometrically, the graph of an oscillating function on the real numbers follows some path in the xy-plane, without settling into ever-smaller regions. In well-behaved cases the path might look like a loop coming back on itself, that is, periodic behaviour; in the worst cases quite irregular movement covering a whole region. Continuity Oscillation can be used to define continuity of a function, and is easily shown to be equivalent to the usual ε-δ definition (in the case of functions defined everywhere on the real line): a function ƒ is continuous at a point $x_0$ if and only if the oscillation is zero; in symbols, $\omega_f(x_0) = 0$. A benefit of this definition is that it quantifies discontinuity: the oscillation gives how much the function is discontinuous at a point. For example, in the classification of discontinuities: in a removable discontinuity, the distance that the value of the function is off by is the oscillation; in a jump discontinuity, the size of the jump is the oscillation (assuming that the value at the point lies between these limits from the two sides); in an essential discontinuity, oscillation measures the failure of a limit to exist. This definition is useful in descriptive set theory to study the set of discontinuities and continuous points – the continuous points are the intersection of the sets where the oscillation is less than ε (hence a Gδ set) – and gives a very quick proof of one direction of the Lebesgue integrability condition. 
The oscillation is equivalent to the ε-δ definition by a simple re-arrangement, and by using a limit (lim sup, lim inf) to define oscillation: if (at a given point) for a given $\varepsilon_0$ there is no δ that satisfies the ε-δ definition, then the oscillation is at least $\varepsilon_0$, and conversely if for every ε there is a desired δ, the oscillation is 0. The oscillation definition can be naturally generalized to maps from a topological space to a metric space. Generalizations More generally, if f : X → Y is a function from a topological space X into a metric space Y, then the oscillation of f is defined at each x ∈ X by $\omega_f(x) = \inf\left\{\operatorname{diam}\left(f(U)\right) \mid U\ \text{is a neighborhood of}\ x\right\}$. See also Wave equation Wave envelope Grandi's series Bounded mean oscillation References Further reading Real analysis Limits (mathematics) Sequences and series Functions and mappings Oscillation
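As a quick numerical illustration of the point definition above, the oscillation of the topologist's sine curve at 0 can be estimated by sampling; the dense-sampling approximation of the supremum and infimum is an assumption made purely for illustration:

```python
import numpy as np

def oscillation(f, x0, eps, samples=200001):
    """Estimate the oscillation of f on the punctured eps-neighbourhood of x0
    by dense sampling: (sup f) - (inf f) over the sampled points."""
    xs = np.linspace(x0 - eps, x0 + eps, samples)
    xs = xs[xs != x0]               # f may be undefined at x0 itself
    vals = f(xs)
    return vals.max() - vals.min()

f = lambda x: np.sin(1.0 / x)       # the topologist's sine curve
for eps in (1.0, 0.1, 0.001):
    print(eps, oscillation(f, 0.0, eps))   # stays near 2: the oscillation at 0 is 2
```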
Oscillation (mathematics)
[ "Physics", "Mathematics" ]
1,012
[ "Sequences and series", "Mathematical analysis", "Functions and mappings", "Mathematical structures", "Mathematical objects", "Mechanics", "Mathematical relations", "Oscillation" ]
475,955
https://en.wikipedia.org/wiki/Fluid%20parcel
In fluid dynamics, a fluid parcel, also known as a fluid element or material element, is an infinitesimal volume of fluid, identifiable throughout its dynamic history while moving with the fluid flow. As it moves, the mass of a fluid parcel remains constant, while—in a compressible flow—its volume may change, and its shape changes due to distortion by the flow. In an incompressible flow, the volume of the fluid parcel is also a constant (isochoric flow). Material surfaces and material lines are the corresponding notions for surfaces and lines, respectively. The mathematical concept of a fluid parcel is closely related to the description of fluid motion—its kinematics and dynamics—in a Lagrangian frame of reference. In this reference frame, fluid parcels are labelled and followed through space and time. But also in the Eulerian frame of reference the notion of fluid parcels can be advantageous, for instance in defining the material derivative, streamlines, streaklines, and pathlines; or for determining the Stokes drift. The fluid parcels, as used in continuum mechanics, are to be distinguished from microscopic particles (molecules and atoms) in physics. Fluid parcels describe the average velocity and other properties of fluid particles, averaged over a length scale which is large compared to the mean free path, but small compared to the typical length scales of the specific flow under consideration. This requires the Knudsen number to be small, as is also a prerequisite for the continuum hypothesis to be valid. Note further that, unlike the mathematical concept of a fluid parcel, which can be uniquely identified—as well as exclusively distinguished from its direct neighbouring parcels—in a real fluid such a parcel would not always consist of the same particles. Molecular diffusion will slowly evolve the parcel properties. References Bibliography Fluid dynamics Continuum mechanics
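As a rough illustration of the smallness criterion mentioned above, the Knudsen number can be checked directly; the numbers below (a mean free path for air at sea level of roughly 68 nm, a 1 cm flow scale) are assumed typical values, not figures from this article:

```python
# Continuum check via the Knudsen number Kn = lambda / L.
mean_free_path = 6.8e-8   # m, assumed mean free path of air at sea level
L = 1.0e-2                # m, assumed characteristic length scale of the flow

Kn = mean_free_path / L
# Kn ~ 7e-06, far below ~0.01, so the fluid-parcel/continuum description is valid here.
print(f"Kn = {Kn:.1e}")
```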
Fluid parcel
[ "Physics", "Chemistry", "Engineering" ]
372
[ "Continuum mechanics", "Chemical engineering", "Classical mechanics", "Piping", "Fluid dynamics" ]
476,351
https://en.wikipedia.org/wiki/Ekman%20number
The Ekman number (Ek) is a dimensionless number used in fluid dynamics to describe the ratio of viscous forces to Coriolis forces. It is frequently used in describing geophysical phenomena in the oceans and atmosphere in order to characterise the ratio of viscous forces to the Coriolis forces arising from planetary rotation. It is named after the Swedish oceanographer Vagn Walfrid Ekman. When the Ekman number is small, disturbances are able to propagate before decaying owing to low frictional effects. The Ekman number also describes the order of magnitude for the thickness of an Ekman layer, a boundary layer in which viscous diffusion is balanced by Coriolis effects, rather than the usual convective inertia. Definitions It is defined as $\mathrm{Ek} = \frac{\nu}{2 D^2 \Omega \sin\varphi}$, where D is a characteristic (usually vertical) length scale of a phenomenon; ν, the kinematic eddy viscosity; Ω, the angular velocity of planetary rotation; and φ, the latitude. The term $2\Omega\sin\varphi$ is the Coriolis frequency. Alternatively, the Ekman number may be given in terms of the kinematic viscosity, ν; the angular velocity, Ω; and a characteristic length scale, L. There do appear to be some differing conventions in the literature. Tritton gives $\mathrm{Ek} = \frac{\nu}{\Omega L^2}$. In contrast, the NRL Plasma Formulary gives $\mathrm{Ek} = \left(\frac{\nu}{2\Omega L^2}\right)^{1/2} = \sqrt{\frac{\mathrm{Ro}}{\mathrm{Re}}}$, where Ro is the Rossby number and Re is the Reynolds number. These equations cannot generally be applied as-is in oceanography; there, the viscous terms of the Navier–Stokes equations (with the eddy viscosity where appropriate) and the Coriolis terms must be estimated for the flow at hand. References Dimensionless numbers of fluid mechanics Fluid dynamics
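For a sense of scale, the first form of the Ekman number above can be evaluated for a mid-latitude ocean flow; the viscosity and length-scale values below are illustrative assumptions:

```python
import math

nu    = 1.0e-2            # kinematic eddy viscosity, m^2/s (assumed)
D     = 50.0              # characteristic vertical length scale, m (assumed)
Omega = 7.2921e-5         # Earth's angular velocity, rad/s
phi   = math.radians(45)  # latitude

f_cor = 2.0 * Omega * math.sin(phi)   # Coriolis frequency, 1/s
Ek = nu / (D**2 * f_cor)              # Ek = nu / (2 D^2 Omega sin(phi))
print(f"Coriolis frequency = {f_cor:.2e} 1/s, Ekman number = {Ek:.3f}")
```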
Ekman number
[ "Chemistry", "Engineering" ]
339
[ "Piping", "Chemical engineering", "Fluid dynamics stubs", "Fluid dynamics" ]
476,416
https://en.wikipedia.org/wiki/Lichtenberg%20figure
A Lichtenberg figure (German Lichtenberg-Figur), or Lichtenberg dust figure, is a branching electric discharge that sometimes appears on the surface or in the interior of insulating materials. Lichtenberg figures are often associated with the progressive deterioration of high-voltage components and equipment. The study of planar Lichtenberg figures along insulating surfaces and 3D electrical trees within insulating materials often provides engineers with valuable insights for improving the long-term reliability of high-voltage equipment. Lichtenberg figures are now known to occur on or within solids, liquids, and gases during electrical breakdown. Lichtenberg figures are natural phenomena that exhibit fractal properties. History Lichtenberg figures are named after the German physicist Georg Christoph Lichtenberg, who originally discovered and studied them. When they were first discovered, it was thought that their characteristic shapes might help to reveal the nature of positive and negative electric "fluids". In 1777, Lichtenberg built a large electrophorus to generate high-voltage static electricity through induction. After discharging a high-voltage point to the surface of an insulator, he recorded the resulting radial patterns by sprinkling various powdered materials onto the surface. By then pressing blank sheets of paper onto these patterns, Lichtenberg was able to transfer and record these images, thereby discovering the basic principle of modern xerography. This discovery was also the forerunner of the modern day science of plasma physics. Although Lichtenberg only studied two-dimensional (2D) figures, modern high-voltage researchers study 2D and 3D figures (electrical trees) on, and within, insulating materials. Formation Two-dimensional (2D) Lichtenberg figures can be produced by placing a sharp-pointed needle perpendicular to the surface of a non-conducting plate, such as of resin, ebonite, or glass. The point is positioned very near or contacting the plate. A source of high voltage such as a Leyden jar (a type of capacitor) or a static electricity generator is applied to the needle, typically through a spark gap. This creates a sudden, small electrical discharge along the surface of the plate. This deposits stranded areas of charge onto the surface of the plate. These electrified areas are then tested by sprinkling a mixture of powdered flowers of sulfur and red lead (Pb3O4 or lead tetroxide) onto the plate. During handling, powdered sulfur tends to acquire a slight negative charge, while red lead tends to acquire a slight positive charge. The negatively electrified sulfur is attracted to the positively electrified areas of the plate, while the positively electrified red lead is attracted to the negatively electrified areas. In addition to the distribution of colors thereby produced, there is also a marked difference in the form of the figure, according to the polarity of the electrical charge that was applied to the plate. If the charge areas were positive, a widely extending patch is seen on the plate, consisting of a dense nucleus from which branches radiate in all directions. Negatively charged areas are considerably smaller and have a sharp circular or fan-like boundary entirely devoid of branches. Heinrich Rudolf Hertz employed Lichtenberg dust figures in his seminal work proving Maxwell's electromagnetic wave theories. 
If the plate receives a mixture of positive and negative charges as, for example, from an induction coil, a mixed figure results, consisting of a large red central nucleus, corresponding to the negative charge, surrounded by yellow rays, corresponding to the positive charge. The difference between positive and negative figures seems to depend on the presence of air, for the difference tends to disappear when the experiment is conducted in a vacuum. Peter T. Riess (a 19th-century researcher) theorized that the negative electrification of the plate was caused by the friction of the water vapour, etc., driven along the surface by the explosion that accompanies the disruptive discharge at the point. This electrification would favor the spread of a positive, but hinder that of a negative discharge. It is now known that electrical charges are transferred to the insulator's surface through small spark discharges that occur along the boundary between the gas and insulator surface. Once transferred to the insulator, these excess charges become temporarily stranded. The shapes of the resulting charge distributions reflect the shape of the spark discharges which, in turn, depend on the high voltage polarity and pressure of the gas. Using a higher applied voltage will generate larger-diameter and more branched figures. It is now known that positive Lichtenberg figures have longer, branching structures because long sparks within air can more easily form and propagate from positively charged high-voltage terminals. This property has been used to measure the transient voltage polarity and magnitude of lightning surges on electrical power lines. Another type of 2D Lichtenberg figure can be created when an insulating surface becomes contaminated with semiconducting material. When a high voltage is applied across the surface, leakage currents may cause localized heating and progressive degradation and charring of the underlying material. Over time, branching, tree-like carbonized patterns are formed upon the surface of the insulator, called electrical trees. This degradation process is called tracking. If the conductive paths ultimately bridge the insulating space, the result is catastrophic failure of the insulating material. Some artists purposely apply salt water to the surface of wood or cardboard and then apply a high voltage across the surface to generate complex carbonized 2D Lichtenberg figures on the surface. Fractal similarities The branching, self-similar patterns observed in Lichtenberg figures exhibit fractal properties. Lichtenberg figures often develop during the dielectric breakdown of solids, liquids, and even gases. Their appearance and growth appear to be related to a process called diffusion-limited aggregation (DLA). A useful macroscopic model that combines an electric field with DLA was developed by Niemeyer, Pietronero, and Wiesmann in 1984, and is known as the dielectric breakdown model (DBM). Although the electrical breakdown mechanisms of air and PMMA plastic are considerably different, the branching discharges turn out to be related. The branching forms taken by natural lightning also have fractal characteristics. Constructal law Lichtenberg figures are examples of natural phenomena that exhibit fractal properties. The emergence and evolution of these and the other tree-like structures that abound in nature are summarized by the constructal law. 
First published by Duke professor Adrian Bejan in 1996, the constructal law is a first principle of physics that summarizes the tendency in nature to generate configurations (patterns, designs) that facilitate the free movement of the imposed currents that flow through it. The constructal law predicts that the tree-like designs described in this article should emerge and evolve to facilitate the movement (point-to-area) of the electrical currents flowing through them. Natural occurrences Lichtenberg figures are fern-like patterns that may appear on the skin of lightning strike victims and typically disappear in 24 hours. They are also known as Keraunographic markings. A lightning strike can also create a large Lichtenberg figure in grass surrounding the point struck. These are sometimes found on golf courses or in grassy meadows. Branching root-shaped "fulgurite" mineral deposits may also be created as sand and soil are fused into glassy tubes by the intense heat of the current. Electrical treeing often occurs in high-voltage equipment prior to causing complete breakdown. Following these Lichtenberg figures within the insulation during post-accident investigation of an insulation failure can be useful in finding the cause of breakdown. From the direction and shape of the trees and their branches, an experienced high-voltage engineer can see exactly the point where the insulation began to break down, and using that knowledge, possibly find the initial cause as well. Broken-down transformers, high-voltage cables, bushings, and other equipment can usefully be investigated in this manner. The insulation is unrolled (in the case of paper insulation) or sliced in thin slices (in the case of solid insulating materials). The results are then sketched or photographed to create a record of the breakdown process. In insulating materials Modern Lichtenberg figures can also be created within solid insulating materials, such as acrylic (polymethyl methacrylate, or PMMA) or glass by injecting them with a beam of high energy electrons from a linear electron beam accelerator (or Linac, a type of particle accelerator). Inside the Linac, electrons are focused and accelerated to form a beam of high-speed particles. Electrons emerging from the accelerator have energies up to 25 MeV and are moving at an appreciable fraction (95 – 99+ percent) of the speed of light (relativistic velocities). If the electron beam is aimed towards a thick acrylic specimen, the electrons easily penetrate the surface of the acrylic, rapidly decelerating as they collide with molecules inside the plastic, finally coming to rest deep inside the specimen. Since acrylic is an excellent electrical insulator, these electrons become temporarily trapped within the specimen, forming a plane of excess negative charge. Under continued irradiation, the amount of trapped charge builds until the effective voltage inside the specimen reaches millions of volts. Once the electrical stress exceeds the dielectric strength of the plastic, some portions suddenly become conductive in a process called dielectric breakdown. During breakdown, branching tree or fern-like conductive channels rapidly form and propagate through the plastic, allowing the trapped charge to suddenly rush out in a miniature lightning-like flash and bang. Breakdown of a charged specimen may also be manually triggered by poking the plastic with a pointed conductive object to create a point of excessive voltage stress. 
During the discharge, the powerful electric sparks leave thousands of branching chains of fractures behind, creating a permanent Lichtenberg figure inside the specimen. Although the internal charge within the specimen is negative, the discharge is initiated from the positively charged exterior surfaces of the specimen, so that the resulting discharge creates a positive Lichtenberg figure. These objects are sometimes called electron trees, beam trees, or lightning trees. As the electrons rapidly decelerate inside the acrylic, they also generate powerful X-rays. Residual electrons and X-rays darken the acrylic by introducing defects (color centers) in a process called solarization. Solarization initially turns acrylic specimens a lime green color, which then changes to an amber color after the specimen has been discharged. The color usually fades over time, and gentle heating, combined with oxygen, accelerates the fading process. On wood Lichtenberg figures can also be produced on wood. The types of wood and grain patterns affect the shape of the Lichtenberg figure produced. By applying a coat of electrolytic solution to the surface of the wood, the resistance of the surface drops considerably. Two electrodes are then placed on the wood and a high voltage is passed across them. Current from the electrodes will cause the surface of the wood to heat up until the electrolyte boils and the wooden surface burns. Because the charred surface of the wood is mildly conductive, the surface of the wood will burn in a pattern outwards from the electrodes. The process can be dangerous, resulting in deaths every year from electrocution. See also Crown shyness Dielectric breakdown model Fractal curve Kirlian photography Lightning burn Patterns in nature Diffusion-limited aggregation References External links What are Lichtenberg Figures and how are they created? Lichtenberg Figures, Glass and Gemstones 1927 General Electric Review Article about Lichtenberg Figures Dielectric Breakdown Model (DBM) Trap Lightning in a Block. (DIY Lichtenberg Figure at Popular Science) Lichtenbergs in acrylic in 3d. 1 2 3 (Requires QuickTime VR to view.) Bibliography of Fulgurites Lichtenberg wood burning With a Welder Electricity Electrical breakdown Lightning Dielectrics Fractals Georg Christoph Lichtenberg
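The diffusion-limited aggregation process mentioned under "Fractal similarities" is easy to sketch. The following toy on-lattice version (grid size, particle count, and launch rule are arbitrary choices, and it is not the full dielectric breakdown model) grows the characteristic branched clusters:

```python
import math, random

def dla(n_particles=300, size=101):
    """Toy on-lattice diffusion-limited aggregation: random walkers launched
    just outside the existing cluster stick to it on first contact."""
    cx = cy = size // 2
    cluster = {(cx, cy)}                      # central seed
    radius = 2
    for _ in range(n_particles):
        theta = random.uniform(0.0, 2.0 * math.pi)
        x = cx + int((radius + 2) * math.cos(theta))
        y = cy + int((radius + 2) * math.sin(theta))
        while 0 <= x < size and 0 <= y < size:
            if any((x + dx, y + dy) in cluster
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1)):
                cluster.add((x, y))           # touched the cluster: stick here
                radius = max(radius, abs(x - cx), abs(y - cy))
                break
            dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            x, y = x + dx, y + dy             # otherwise keep random-walking
        # walkers that wander off the grid are simply discarded
    return cluster

print(len(dla()), "sites in the aggregate")
```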
Lichtenberg figure
[ "Physics", "Mathematics" ]
2,459
[ "Physical phenomena", "Functions and mappings", "Mathematical analysis", "Mathematical objects", "Fractals", "Materials", "Electrical phenomena", "Mathematical relations", "Electrical breakdown", "Lightning", "Dielectrics", "Matter" ]
477,175
https://en.wikipedia.org/wiki/Radiant%20energy
In physics, and in particular as measured by radiometry, radiant energy is the energy of electromagnetic and gravitational radiation. As energy, its SI unit is the joule (J). The quantity of radiant energy may be calculated by integrating radiant flux (or power) with respect to time. The symbol $Q_e$ is often used throughout the literature to denote radiant energy ("e" for "energetic", to avoid confusion with photometric quantities). In branches of physics other than radiometry, electromagnetic energy is referred to using E or W. The term is used particularly when electromagnetic radiation is emitted by a source into the surrounding environment. This radiation may be visible or invisible to the human eye. Terminology use and history The term "radiant energy" is most commonly used in the fields of radiometry, solar energy, heating and lighting, but is also sometimes used in other fields (such as telecommunications). In modern applications involving transmission of power from one location to another, "radiant energy" is sometimes used to refer to the electromagnetic waves themselves, rather than their energy (a property of the waves). In the past, the term "electro-radiant energy" has also been used. The term "radiant energy" also applies to gravitational radiation. For example, the first gravitational waves ever observed were produced by a black hole collision that emitted about 5.3 × 10⁴⁷ joules of gravitational-wave energy. Analysis Because electromagnetic (EM) radiation can be conceptualized as a stream of photons, radiant energy can be viewed as photon energy – the energy carried by these photons. Alternatively, EM radiation can be viewed as an electromagnetic wave, which carries energy in its oscillating electric and magnetic fields. These two views are completely equivalent and are reconciled to one another in quantum field theory (see wave-particle duality). EM radiation can have various frequencies. The bands of frequency present in a given EM signal may be sharply defined, as is seen in atomic spectra, or may be broad, as in blackbody radiation. In the particle picture, the energy carried by each photon is proportional to its frequency. In the wave picture, the energy of a monochromatic wave is proportional to its intensity. This implies that if two EM waves have the same intensity, but different frequencies, the one with the higher frequency "contains" fewer photons, since each photon is more energetic. When EM waves are absorbed by an object, the energy of the waves is converted to heat (or converted to electricity in case of a photoelectric material). This is a very familiar effect, since sunlight warms surfaces that it irradiates. Often this phenomenon is associated particularly with infrared radiation, but any kind of electromagnetic radiation will warm an object that absorbs it. EM waves can also be reflected or scattered, in which case their energy is redirected or redistributed as well. Open systems Radiant energy is one of the mechanisms by which energy can enter or leave an open system. Such a system can be man-made, such as a solar energy collector, or natural, such as the Earth's atmosphere. In geophysics, most atmospheric gases, including the greenhouse gases, allow the Sun's short-wavelength radiant energy to pass through to the Earth's surface, heating the ground and oceans. The absorbed solar energy is partly re-emitted as longer wavelength radiation (chiefly infrared radiation), some of which is absorbed by the atmospheric greenhouse gases. 
Radiant energy is produced in the Sun as a result of nuclear fusion. Applications Radiant energy is used for radiant heating. It can be generated electrically by infrared lamps, or can be absorbed from sunlight and used to heat water. The heat energy is emitted from a warm element (floor, wall, overhead panel) and warms people and other objects in rooms rather than directly heating the air. Because of this, the air temperature may be lower than in a conventionally heated building, even though the room appears just as comfortable. Various other applications of radiant energy have been devised. These include treatment and inspection, separating and sorting, medium of control, and medium of communication. Many of these applications involve a source of radiant energy and a detector that responds to that radiation and provides a signal representing some characteristic of the radiation. Radiant energy detectors produce responses to incident radiant energy either as an increase or decrease in electric potential or current flow or some other perceivable change, such as exposure of photographic film. SI radiometry units See also Luminous energy Luminescence Power Radiometry Federal Standard 1037C Transmission Open system Photoelectric effect Photodetector Photocell Photoelectric cell Notes and references Further reading Caverly, Donald Philip, Primer of Electronics and Radiant Energy. New York, McGraw-Hill, 1952. Electromagnetic radiation Radiometry Forms of energy
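To make the photon-counting point under "Analysis" concrete: for two monochromatic beams of equal radiant flux, the higher-frequency beam delivers fewer, more energetic photons per second, since each photon carries $E = hf$. A small sketch (the beam power and the two frequencies are assumed values):

```python
h = 6.62607015e-34        # Planck constant, J·s

def photon_rate(radiant_flux, frequency):
    """Photons per second in a monochromatic beam: power / (photon energy h*f)."""
    return radiant_flux / (h * frequency)

flux = 1.0e-3                               # a 1 mW beam (assumed value)
for f in (5.0e14, 1.0e15):                  # roughly green light vs. near-ultraviolet
    print(f"{f:.1e} Hz -> {photon_rate(flux, f):.2e} photons/s")
```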
Radiant energy
[ "Physics", "Engineering" ]
971
[ "Physical phenomena", "Telecommunications engineering", "Physical quantities", "Electromagnetic radiation", "Forms of energy", "Energy (physics)", "Radiation", "Radiometry" ]
477,513
https://en.wikipedia.org/wiki/Helmholtz%27s%20theorems
In fluid mechanics, Helmholtz's theorems, named after Hermann von Helmholtz, describe the three-dimensional motion of fluid in the vicinity of vortex lines. These theorems apply to inviscid flows and flows where the influence of viscous forces is small and can be ignored. Helmholtz's three theorems are as follows: Helmholtz's first theorem The strength of a vortex line is constant along its length. Helmholtz's second theorem A vortex line cannot end in a fluid; it must extend to the boundaries of the fluid or form a closed path. Helmholtz's third theorem A fluid element that is initially irrotational remains irrotational. Helmholtz's theorems apply to inviscid flows. In observations of vortices in real fluids the strength of the vortices always decays gradually due to the dissipative effect of viscous forces. Alternative expressions of the three theorems are as follows: The strength of a vortex tube does not vary with time. Fluid elements lying on a vortex line at some instant continue to lie on that vortex line. More simply, vortex lines move with the fluid. Also vortex lines and tubes must appear as a closed loop, extend to infinity or start/end at solid boundaries. Fluid elements initially free of vorticity remain free of vorticity. Helmholtz's theorems have application in understanding: Generation of lift on an airfoil Starting vortex Horseshoe vortex Wingtip vortices. Helmholtz's theorems are now generally proven with reference to Kelvin's circulation theorem. However, Helmholtz's theorems were published in 1858, nine years before the 1867 publication of Kelvin's theorem. Notes References M. J. Lighthill, An Informal Introduction to Theoretical Fluid Mechanics, Oxford University Press, 1986, P. G. Saffman, Vortex Dynamics, Cambridge University Press, 1995, G. K. Batchelor, An Introduction to Fluid Dynamics, Cambridge University Press (1967, reprinted in 2000). Kundu, P and Cohen, I, Fluid Mechanics, 2nd edition, Academic Press 2002. George B. Arfken and Hans J. Weber, Mathematical Methods for Physicists, 4th edition, Academic Press: San Diego (1995) pp. 92–93 A.M. Kuethe and J.D. Schetzer (1959), Foundations of Aerodynamics, 2nd edition. John Wiley & Sons, Inc. New York Aerodynamics Vortices Theorems in mathematical physics Hermann von Helmholtz
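In standard vector-calculus notation (one conventional way of making the statements above precise, not spelled out in the sources listed here), the strength of a vortex tube with cross-section $A$ bounded by a material curve $C$ is the circulation $\Gamma = \oint_C \mathbf{u}\cdot \mathrm{d}\boldsymbol{\ell} = \int_A \boldsymbol{\omega}\cdot \mathrm{d}\mathbf{A}$, where $\boldsymbol{\omega} = \nabla\times\mathbf{u}$ is the vorticity. Kelvin's circulation theorem then states that, for inviscid, barotropic flow with conservative body forces, $\mathrm{D}\Gamma/\mathrm{D}t = 0$, from which the constancy and material transport of vortex-tube strength follow.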
Helmholtz's theorems
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
528
[ "Mathematical theorems", "Equations of physics", "Vortices", "Aerodynamics", "Theorems in mathematical physics", "Aerospace engineering", "Dynamical systems", "Physics theorems", "Mathematical problems", "Fluid dynamics" ]
477,554
https://en.wikipedia.org/wiki/Gene%20duplication
Gene duplication (or chromosomal duplication or gene amplification) is a major mechanism through which new genetic material is generated during molecular evolution. It can be defined as any duplication of a region of DNA that contains a gene. Gene duplications can arise as products of several types of errors in DNA replication and repair machinery as well as through fortuitous capture by selfish genetic elements. Common sources of gene duplications include ectopic recombination, retrotransposition events, aneuploidy, polyploidy, and replication slippage. Mechanisms of duplication Ectopic recombination Duplications arise from an event termed unequal crossing-over that occurs during meiosis between misaligned homologous chromosomes. The chance of this happening is a function of the degree of sharing of repetitive elements between two chromosomes. The products of this recombination are a duplication at the site of the exchange and a reciprocal deletion. Ectopic recombination is typically mediated by sequence similarity at the duplicate breakpoints, which form direct repeats. Repetitive genetic elements such as transposable elements offer one source of repetitive DNA that can facilitate recombination, and they are often found at duplication breakpoints in plants and mammals. Replication slippage Replication slippage is an error in DNA replication that can produce duplications of short genetic sequences. During replication, DNA polymerase begins to copy the DNA. At some point during the replication process, the polymerase dissociates from the DNA and replication stalls. When the polymerase reattaches to the DNA strand, it aligns the replicating strand to an incorrect position and incidentally copies the same section more than once. Replication slippage is also often facilitated by repetitive sequences, but requires only a few bases of similarity. Retrotransposition Retrotransposons, mainly L1, can occasionally act on cellular mRNA. Transcripts are reverse transcribed to DNA and inserted at a random place in the genome, creating retrogenes. The resulting sequences usually lack introns and often contain poly(A) sequences that are also integrated into the genome. Many retrogenes display changes in gene regulation in comparison to their parental gene sequences, which sometimes results in novel functions. Retrogenes can move between different chromosomes to shape chromosomal evolution. Aneuploidy Aneuploidy occurs when nondisjunction at a single chromosome results in an abnormal number of chromosomes. Aneuploidy is often harmful and in mammals regularly leads to spontaneous abortions (miscarriages). Some aneuploid individuals are viable, for example trisomy 21 in humans, which leads to Down syndrome. Aneuploidy often alters gene dosage in ways that are detrimental to the organism; therefore, it is unlikely to spread through populations. Polyploidy Polyploidy, or whole genome duplication, is a product of nondisjunction during meiosis which results in additional copies of the entire genome. Polyploidy is common in plants, but it has also occurred in animals, with two rounds of whole genome duplication (2R event) in the vertebrate lineage leading to humans. It has also occurred in the hemiascomycete yeasts ~100 mya. After a whole genome duplication, there is a relatively short period of genome instability, extensive gene loss, elevated levels of nucleotide substitution and regulatory network rewiring. In addition, gene dosage effects play a significant role. 
Thus, most duplicates are lost within a short period; however, a considerable fraction of duplicates survive. Interestingly, genes involved in regulation are preferentially retained. Furthermore, retention of regulatory genes, most notably the Hox genes, has led to adaptive innovation. Rapid evolution and functional divergence have been observed at the level of the transcription of duplicated genes, usually by point mutations in short transcription factor binding motifs. Furthermore, rapid evolution of protein phosphorylation motifs, usually embedded within rapidly evolving intrinsically disordered regions, is another contributing factor for the survival and rapid adaptation/neofunctionalization of duplicate genes. Thus, a link seems to exist between gene regulation (at least at the post-translational level) and genome evolution. Polyploidy is also a well-known source of speciation, as offspring, which have different numbers of chromosomes compared to parent species, are often unable to interbreed with non-polyploid organisms. Whole genome duplications are thought to be less detrimental than aneuploidy as the relative dosage of individual genes should be the same. As an evolutionary event Rate of gene duplication Comparisons of genomes demonstrate that gene duplications are common in most species investigated. This is indicated by variable copy numbers (copy number variation) in the genome of humans or fruit flies. However, it has been difficult to measure the rate at which such duplications occur. Recent studies yielded a first direct estimate of the genome-wide rate of gene duplication in C. elegans, the first multicellular eukaryote for which such an estimate became available. The gene duplication rate in C. elegans is on the order of 10⁻⁷ duplications/gene/generation; that is, in a population of 10 million worms, on average one individual per generation will carry a new duplication of a given gene (10⁷ × 10⁻⁷ = 1). This rate is two orders of magnitude greater than the spontaneous rate of point mutation per nucleotide site in this species. Older (indirect) studies reported locus-specific duplication rates in bacteria, Drosophila, and humans ranging from 10⁻³ to 10⁻⁷/gene/generation. Neofunctionalization Gene duplications are an essential source of genetic novelty that can lead to evolutionary innovation. Duplication creates genetic redundancy, where the second copy of the gene is often free from selective pressure—that is, mutations of it have no deleterious effects on its host organism. If one copy of a gene experiences a mutation that affects its original function, the second copy can serve as a 'spare part' and continue to function correctly. Thus, duplicate genes accumulate mutations faster than a functional single-copy gene, over generations of organisms, and it is possible for one of the two copies to develop a new and different function. Some examples of such neofunctionalization are the apparent mutation of a duplicated digestive gene in a family of ice fish into an antifreeze gene, a duplication leading to a novel snake venom gene, and the synthesis of 1 beta-hydroxytestosterone in pigs. Gene duplication is believed to play a major role in evolution; this stance has been held by members of the scientific community for over 100 years. Susumu Ohno was one of the most famous developers of this theory in his classic book Evolution by gene duplication (1970). Ohno argued that gene duplication is the most important evolutionary force since the emergence of the universal common ancestor. Major genome duplication events can be quite common. 
It is believed that the entire yeast genome underwent duplication about 100 million years ago. Plants are the most prolific genome duplicators. For example, wheat is hexaploid (a kind of polyploid), meaning that it has six copies of its genome. Subfunctionalization Another possible fate for duplicate genes is that both copies are equally free to accumulate degenerative mutations, so long as any defects are complemented by the other copy. This leads to a neutral "subfunctionalization" (a process of constructive neutral evolution) or DDC (duplication-degeneration-complementation) model, in which the functionality of the original gene is distributed among the two copies. Neither gene can be lost, as both now perform important non-redundant functions, but ultimately neither is able to achieve novel functionality. Subfunctionalization can occur through neutral processes in which mutations accumulate with no detrimental or beneficial effects. However, in some cases subfunctionalization can occur with clear adaptive benefits. If an ancestral gene is pleiotropic and performs two functions, often neither one of these two functions can be changed without affecting the other function. In this way, partitioning the ancestral functions into two separate genes can allow for adaptive specialization of subfunctions, thereby providing an adaptive benefit. Loss Often the resulting genomic variation leads to gene dosage-dependent neurological disorders such as Rett-like syndrome and Pelizaeus–Merzbacher disease. Such detrimental mutations are likely to be lost from the population and will not be preserved or develop novel functions. However, many duplications are, in fact, not detrimental or beneficial, and these neutral sequences may be lost or may spread through the population through random fluctuations via genetic drift. Identifying duplications in sequenced genomes Criteria and single genome scans The two genes that exist after a gene duplication event are called paralogs and usually code for proteins with a similar function and/or structure. By contrast, orthologous genes are present in different species and are each originally derived from the same ancestral sequence. (See Homology of sequences in genetics). It is important (but often difficult) to differentiate between paralogs and orthologs in biological research. Experiments on human gene function can often be carried out on other species if a homolog to a human gene can be found in the genome of that species, but only if the homolog is orthologous. If they are paralogs and resulted from a gene duplication event, their functions are likely to be too different. One or more copies of duplicated genes that constitute a gene family may be affected by insertion of transposable elements, which causes significant variation between them in their sequence and may finally become responsible for divergent evolution. This may also reduce the chances and the rate of gene conversion between the homologs of gene duplicates, owing to little or no similarity in their sequences. Paralogs can be identified in single genomes through a sequence comparison of all annotated gene models to one another. Such a comparison can be performed on translated amino acid sequences (e.g. BLASTp, tBLASTx) to identify ancient duplications or on DNA nucleotide sequences (e.g. BLASTn, megablast) to identify more recent duplications. 
Most studies to identify gene duplications require reciprocal-best-hits or fuzzy reciprocal-best-hits, where each paralog must be the other's single best match in a sequence comparison. Most gene duplications exist as low copy repeats (LCRs), rather than highly repetitive sequences like transposable elements. They are mostly found in pericentromeric, subtelomeric and interstitial regions of a chromosome. Many LCRs, due to their size (>1 kb), similarity, and orientation, are highly susceptible to duplications and deletions. Genomic microarrays detect duplications Technologies such as genomic microarrays, also called array comparative genomic hybridization (array CGH), are used to detect chromosomal abnormalities, such as microduplications, in a high throughput fashion from genomic DNA samples. In particular, DNA microarray technology can simultaneously monitor the expression levels of thousands of genes across many treatments or experimental conditions, greatly facilitating the evolutionary studies of gene regulation after gene duplication or speciation. Next generation sequencing Gene duplications can also be identified through the use of next-generation sequencing platforms. The simplest means to identify duplications in genomic resequencing data is through the use of paired-end sequencing reads. Tandem duplications are indicated by sequencing read pairs which map in abnormal orientations. Through a combination of increased sequence coverage and abnormal mapping orientation, it is possible to identify duplications in genomic sequencing data. Nomenclature The International System for Human Cytogenomic Nomenclature (ISCN) is an international standard for human chromosome nomenclature, which includes band names, symbols and abbreviated terms used in the description of human chromosome and chromosome abnormalities. Abbreviations include dup for duplications of parts of a chromosome. For example, dup(17p12) causes Charcot–Marie–Tooth disease type 1A. As amplification Gene duplication does not necessarily constitute a lasting change in a species' genome. In fact, such changes often do not last past the initial host organism. From the perspective of molecular genetics, gene amplification is one of many ways in which a gene can be overexpressed. Genetic amplification can occur artificially, as with the use of the polymerase chain reaction technique to amplify short strands of DNA in vitro using enzymes, or it can occur naturally, as described above. If it is a natural duplication, it can still take place in a somatic cell, rather than a germline cell (which would be necessary for a lasting evolutionary change). Role in cancer Duplications of oncogenes are a common cause of many types of cancer. In such cases the genetic duplication occurs in a somatic cell and affects only the genome of the cancer cells themselves, not the entire organism, much less any subsequent offspring. Recent comprehensive patient-level classification and quantification of driver events in TCGA cohorts revealed that there are on average 12 driver events per tumor, of which 1.5 are amplifications of oncogenes. Whole-genome duplications are also frequent in cancers, detected in 30% to 36% of tumors from the most common cancer types. Their exact role in carcinogenesis is unclear, but in some cases they lead to loss of chromatin segregation, leading to chromatin conformation changes that in turn lead to oncogenic epigenetic and transcriptional modifications. 
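The reciprocal-best-hit criterion described above is straightforward to express in code. Below is a minimal sketch; the data layout (dictionaries mapping each query gene to a list of (subject gene, bit score) pairs) is an assumed stand-in for parsed all-vs-all BLAST output, and the gene names are hypothetical:

```python
def reciprocal_best_hits(hits_ab, hits_ba):
    """Putative ortholog/paralog pairs by reciprocal best hits: (a, b) qualifies
    when b is a's single best match and a is b's single best match."""
    def best(hits):
        # for each query, keep only the subject with the highest bit score
        return {q: max(s, key=lambda t: t[1])[0] for q, s in hits.items() if s}

    best_ab, best_ba = best(hits_ab), best(hits_ba)
    return {(a, b) for a, b in best_ab.items() if best_ba.get(b) == a}

# Hypothetical bit scores for three genes:
ab = {"geneA": [("geneB", 350.0), ("geneC", 90.0)]}
ba = {"geneB": [("geneA", 340.0)], "geneC": [("geneA", 85.0)]}
print(reciprocal_best_hits(ab, ba))   # {('geneA', 'geneB')}
```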
See also References External links A bibliography on gene and genome duplication A brief overview of mutation, gene duplication and translocation Evolutionary biology concepts Genetics concepts Modification of genetic information Molecular evolution Mutation
Gene duplication
[ "Chemistry", "Biology" ]
2,884
[ "Evolutionary processes", "Genetics concepts", "Molecular evolution", "Modification of genetic information", "Evolutionary biology concepts", "Molecular genetics", "Molecular biology" ]
2,606,851
https://en.wikipedia.org/wiki/Glass%20knife
A glass knife is a knife with a blade made of glass, with a fracture line forming an extremely sharp cutting edge. Glass knives were used in antiquity due to their natural sharpness and the ease with which they could be manufactured. In modern electron microscopy glass knives are used to make the ultrathin sections needed for imaging. History In the Stone Age, bladed tools were made by chipping suitable stones which broke with a conchoidal fracture, a process known as knapping or lithic reduction. The same technique was used to make tools, including knives, out of obsidian, natural volcanic glass. From the 1920s until the 1940s, Dur-X glass fruit and cake knives were sold for use in kitchens under a 1938 US Patent. Before the wide availability of inexpensive stainless steel cutlery, they were used for cutting citrus fruit, tomatoes and other acidic foods, the flavor of which would be tainted by steel knives and which would stain ordinary steel knives. They were molded in tempered glass, with a cutting edge ground sharp. Modern use While glass knives as such are no longer in general use, knives with ceramic blades made from zirconium dioxide have been available since the mid-1980s, with a very sharp and long-lasting edge produced by grinding rather than fracturing. Modern glass knives were once the blade of choice for the ultra-thin sectioning required in transmission electron microscopy because they can be manufactured by hand and are sharper than softer metal blades because the crystalline structure of metals makes it impossible to obtain a continuous sharp edge. The advent of diamond knives, which keep their edge much longer and are more suitable for cutting hard materials, quickly relegated glass knives to a second-rate status. However, some labs still use glass knives because they are significantly less expensive than diamond knives. A common practice is to use a glass knife to cut the block which contains the sample to near the location of the specimen to be examined. Then the glass knife is replaced by a diamond blade for the actual ultrathin sectioning. This extends the life of the diamond blade which is used only when its superior performance is critical. Obsidian, a naturally occurring volcanic glass, is used to make extremely sharp surgical scalpels, significantly sharper than is possible with steel. The blades are brittle and very easily broken. Manufacture Glass knives can be produced by hand using pliers with two raised bumps on one jaw and a single bump between the two bumps on the opposing jaw, but special machines called "knife-makers" are used in most electron microscopy laboratories to ensure repeatable results. The glass used typically starts out as strips of plate glass, which is cut into squares. The glass square is then scored across the diagonal with a steel or tungsten carbide glass-cutting wheel to determine where the square will break, and pressure is then applied gradually across the opposite diagonal until the square breaks. This technique provides two usable knife edges, one on each of the two resulting triangles. The better the break is aligned with the diagonal, the better the cutting edge. In popular culture Glass knives are the weapon of choice of the antagonist Dmitri "Raven" Ravinoff in the 1992 novel Snow Crash because they are undetectable by security systems and are said in the book to be molecule-thin at the edges, sharp enough to penetrate bulletproof vests. 
Glass knives are the weapon of choice for most Mistborn in Brandon Sanderson's fantasy novel series of the same name (Mistborn), due to the lack of metal that other Allomancers would be able to Push or Pull on. Glass weapons are usable by either the protagonist or enemies in The Elder Scrolls series of video games. The weapons are depicted as having a green, semi-transparent color and are generally end-game equipment, having higher statistics than most weapons. There is also glass armor, end-game light armor. Game of Thrones: Obsidian, colloquially known as dragon glass, is a recurring element in the setting. Blades made from it are among the few things that can kill a White Walker. The titular weapon in Larry Niven's short story What Good is a Glass Dagger? A hardened glass knife is the tool of choice for the protagonist Joshua Valiente in the Long Earth series by Terry Pratchett and Stephen Baxter, as it can travel through the various parallel worlds without hindrance due to its lack of ferrous components. A Murano glass dagger features in Ruth Rendell's mystery "The Bridesmaid". References Microscopy Knives Laboratory glassware
Glass knife
[ "Chemistry" ]
913
[ "Microscopy" ]
2,606,916
https://en.wikipedia.org/wiki/Cross-conjugation
Cross-conjugation is a special type of conjugation in a molecule, in which, in a set of three pi bonds, only two pi bonds interact with each other by conjugation, while the third one is excluded from interaction. Whereas a normal conjugated system such as a polyene typically has alternating single and double bonds along consecutive atoms, a cross-conjugated system has an alkene unit bonded to one of the middle atoms of another conjugated chain through a single bond. In classical terms, one of the double-bonds branches off rather than continuing consecutively: the main chain is conjugated, and part of that same main chain is conjugated with the side group, but all parts are not conjugated together as strongly. Examples of cross-conjugation can be found in molecules such as benzophenone, p-quinones, dendralenes, radialenes, fullerene, and indigo dye. The type of conjugation affects reactivity and molecular electronic transitions. References Chemical bonding
Cross-conjugation
[ "Physics", "Chemistry", "Materials_science" ]
215
[ "Chemical bonding", "Condensed matter physics", "nan" ]
2,610,033
https://en.wikipedia.org/wiki/Cascade%20%28chemical%20engineering%29
In chemical engineering, a cascade is a plant consisting of several similar stages, with each processing the output from the previous stage. Cascades are most commonly used in isotope separation, distillation, flotation and other separation or purification processes. Cascade process A cascade process is any process that takes place in a number of steps, usually because a single step is too inefficient to produce the desired result. For example, in some uranium-enrichment processes the separation of the desired isotope is only poorly achieved in a single stage; to achieve better separation the process has to be repeated a number of times, in a series, with the enriched fraction of one stage being fed to the succeeding stage for further enrichment. Another example of a cascade process is that operating in a cascade liquefier. Examples If a still removes 99% of impurities from water (leaving 0.01 of the original amount of impurities), a cascade of three stills will leave (1 − 0.99)³ = 0.000001 = 0.0001% of the original amount of impurities (99.9999% removed). References Chemical process engineering
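The worked example above generalizes directly to any number of identical stages; a one-function sketch:

```python
def impurity_remaining(removal_per_stage, stages):
    """Fraction of the original impurities left after a cascade of identical
    stages, each removing the same fraction of what reaches it."""
    return (1.0 - removal_per_stage) ** stages

# Three stills, each removing 99% of impurities:
print(impurity_remaining(0.99, 3))   # ~1e-06, i.e. 99.9999% removed overall
```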
Cascade (chemical engineering)
[ "Chemistry", "Engineering" ]
229
[ "Chemical process engineering", "Chemical engineering" ]
2,611,971
https://en.wikipedia.org/wiki/Bioinformatic%20Harvester
The Bioinformatic Harvester was a bioinformatic meta search engine created by the European Molecular Biology Laboratory and subsequently hosted and further developed by KIT Karlsruhe Institute of Technology for genes and protein-associated information. Harvester works for human, mouse, rat, zebrafish, Drosophila and Arabidopsis thaliana-based information. Harvester cross-links >50 popular bioinformatic resources and allows cross searches. Harvester served tens of thousands of pages every day to scientists and physicians. The service has been down since 2014. How Harvester works Harvester collects information from protein and gene databases along with information from so-called "prediction servers." Prediction servers, for example, provide online sequence analysis for a single protein. Harvester's search index is based on the IPI and UniProt protein information collection. The collection consists of: ~72,000 human, ~57,000 mouse, ~41,000 rat, ~51,000 zebrafish, and ~35,000 Arabidopsis protein pages, which cross-link ~50 major bioinformatic resources. Harvester crosslinks several types of information Text based information From the following databases: UniProt, one of the largest protein databases SOURCE, convenient gene information overview Simple Modular Architecture Research Tool (SMART) SOSUI, predicts transmembrane domains PSORT, predicts protein localisation HomoloGene, compares proteins from different species gfp-cdna, protein localisation with fluorescence microscopy International Protein Index (IPI) Databases rich in graphical elements These databases are not collected, but are crosslinked, being displayed via iframes. An iframe is a window within an HTML page for an embedded view of and interactive access to the linked database. Several such iframes are combined on a single Harvester protein page. This allows simultaneous, convenient comparison of information from several databases. NCBI-BLAST, an algorithm for comparing biological sequences from the NCBI Ensembl, automatic gene annotation by the EMBL-EBI and Sanger Institute FlyBase is a database of model organism Drosophila melanogaster GoPubMed is a knowledge-based search engine for biomedical texts iHOP, information hyperlinked over proteins via gene/protein synonyms Mendelian Inheritance in Man project catalogues all the known diseases RZPD, German resources Center for genome research in Berlin/Heidelberg STRING, Search Tool for the Retrieval of Interacting Genes/Proteins, developed by EMBL, SIB and UZH Zebrafish Information Network LOCATE subcellular localisation database (mouse) Access from external application Genome browser, working draft assemblies for genomes UCSC Google Scholar Mitocheck PolyMeta, meta search engine for Google, Yahoo, MSN, Ask, Exalead, AllTheWeb, GigaBlast What one can find Harvester allows a combination of different search terms and single words. 
Search Examples: Gene-name: "golga3" Gene-alias: "ADAP-S ADAS ADHAPS ADPS" (one gene name is sufficient) Gene-Ontologies: "Enzyme linked receptor protein signaling pathway" Unigene-Cluster: "Hs.449360" Go-annotation: "intra-Golgi transport" Molecular function: "protein kinase binding" Protein: "Q9NPD3" Protein domain: "SH2 sar" Protein Localisation: "endoplasmic reticulum" Chromosome: "2q31" Disease relevant: use the word "diseaselink" Combinations: "golgi diseaselink" (finds all golgi proteins associated with a disease) mRNA: "AL136897" Word: "Cancer" Comment: "highly expressed in heart" Author: "Merkel, Schmidt" Publication or project: "cDNA sequencing project" See also List of academic databases and search engines Biological databases Entrez European Bioinformatics Institute Human Protein Reference Database Metadata Sequence profiling tool Literature Notes and references External links Bioinformatic Harvester V at KIT Karlsruhe Institute of Technology Bioinformatics software Biological databases Biology websites Internet search engines Science and technology in Cambridgeshire South Cambridgeshire District
Bioinformatic Harvester
[ "Biology" ]
858
[ "Bioinformatics", "Biological databases", "Bioinformatics software" ]
2,612,253
https://en.wikipedia.org/wiki/Alloy%2020
Alloy 20 is an austenitic stainless steel containing less than 50% iron, developed for applications involving sulfuric acid. Its corrosion resistance also finds other uses in the chemical, petrochemical, power generation, and plastics industries. Alloy 20 resists pitting and chloride ion corrosion better than 304 stainless steel and on par with 316L stainless steel. Its copper content protects it from sulfuric acid. Alloy 20 is often chosen to solve stress corrosion cracking problems, which may occur with 316L stainless. The same alloy with the designation "Cb-3" is stabilized with niobium (also known as columbium). Composition Nickel, 32–38% Chromium, 19–21% Carbon, 0.06% maximum Copper, 3–4% Molybdenum, 2–3% Manganese, 2% maximum Silicon, 1.0% maximum Niobium, 8.0 × carbon content, 1% maximum Iron, 31–44% (balance) Other names UNS N08020 DIN 2.4660 CN7M Carpenter 20 CB-3 AL 20 Carlson Alloy C20 Nickelvac 23 Nicrofer 3620 Nb, also known as VDM Alloy 20 Specifications ASTM B729, B464, B366, B463, B473, B462, A182, A351, ASTM A743 ASME SB729, SB464, SB366, SB473, SB462, SA182, SA351 ANSI / ASTM A555-79 EN 2.4660 UNS N08020 Werkstoff 2.4660 Castings are designated CN7M References Steels Chromium alloys Nickel alloys
Alloy 20
[ "Chemistry" ]
361
[ "Nickel alloys", "Alloy stubs", "Steels", "Alloys", "Chromium alloys" ]
6,259,941
https://en.wikipedia.org/wiki/Biomaterial
A biomaterial is a substance that has been engineered to interact with biological systems for a medical purpose – either a therapeutic one (to treat, augment, repair, or replace a tissue function of the body) or a diagnostic one. The corresponding field of study, called biomaterials science or biomaterials engineering, is about fifty years old. It has experienced steady growth over its history, with many companies investing large amounts of money into the development of new products. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering and materials science. A biomaterial is different from a biological material, such as bone, that is produced by a biological system. However, "biomaterial" and "biological material" are often used interchangeably. Further, the word "bioterial" has been proposed as a potential alternative word for biologically produced materials such as bone or fungal biocomposites. Additionally, care should be exercised in defining a biomaterial as biocompatible, since biocompatibility is application-specific: a biomaterial that is biocompatible or suitable for one application may not be biocompatible in another.

Introduction

Biomaterials can be derived either from nature or synthesized in the laboratory using a variety of chemical approaches utilizing metallic components, polymers, ceramics or composite materials. They are often used and/or adapted for a medical application, and thus comprise the whole or part of a living structure or biomedical device which performs, augments, or replaces a natural function. Such functions may be relatively passive, as in a heart valve, or may be bioactive with more interactive functionality, as in hydroxyapatite-coated hip implants. Biomaterials are also commonly used in dental applications, surgery, and drug delivery. For example, a construct with impregnated pharmaceutical products can be placed into the body, permitting the prolonged release of a drug over an extended period of time. A biomaterial may also be an autograft, allograft or xenograft used as a transplant material.

Bioactivity

The ability of an engineered biomaterial to induce a physiological response that is supportive of the biomaterial's function and performance is known as bioactivity. Most commonly, in bioactive glasses and bioactive ceramics this term refers to the ability of implanted materials to bond well with surrounding tissue in either osteoconductive or osseoproductive roles. Bone implant materials are often designed to promote bone growth while dissolving into surrounding body fluid. Thus, for many biomaterials, good biocompatibility along with good strength and suitable dissolution rates are desirable. Commonly, the bioactivity of biomaterials is gauged by surface biomineralization, in which a native layer of hydroxyapatite forms at the surface. These days, the development of clinically useful biomaterials is greatly enhanced by the advent of computational routines that can predict the molecular effects of biomaterials in a therapeutic setting based on limited in vitro experimentation.

Self-assembly

Self-assembly is the most common term in use in the modern scientific community to describe the spontaneous aggregation of particles (atoms, molecules, colloids, micelles, etc.) without the influence of any external forces.
Large groups of such particles are known to assemble themselves into thermodynamically stable, structurally well-defined arrays, quite reminiscent of one of the seven crystal systems found in metallurgy and mineralogy (e.g., face-centered cubic, body-centered cubic, etc.). The fundamental difference in equilibrium structure is in the spatial scale of the unit cell (lattice parameter) in each particular case. Molecular self-assembly is found widely in biological systems and provides the basis of a wide variety of complex biological structures. This includes an emerging class of mechanically superior biomaterials based on microstructural features and designs found in nature. Thus, self-assembly is also emerging as a new strategy in chemical synthesis and nanotechnology. Molecular crystals, liquid crystals, colloids, micelles, emulsions, phase-separated polymers, thin films and self-assembled monolayers all represent examples of the types of highly ordered structures obtained using these techniques. The distinguishing feature of these methods is self-organization.

Structural hierarchy

Nearly all materials can be seen as hierarchically structured, since changes in spatial scale bring about different mechanisms of deformation and damage. However, in biological materials, this hierarchical organization is inherent to the microstructure. One of the first examples of this, in the history of structural biology, is the early X-ray scattering work on the hierarchical structure of hair and wool by Astbury and Woods. In bone, for example, collagen is the building block of the organic matrix, a triple helix with a diameter of 1.5 nm. These tropocollagen molecules are intercalated with the mineral phase (hydroxyapatite, a calcium phosphate), forming fibrils that curl into helicoids of alternating directions. These "osteons" are the basic building blocks of bones, with the volume fraction distribution between the organic and mineral phases being about 60/40. At another level of complexity, the hydroxyapatite crystals are mineral platelets with a diameter of approximately 70 to 100 nm and a thickness of 1 nm. They originally nucleate at the gaps between collagen fibrils. Similarly, the hierarchy of abalone shell begins at the nanolevel, with an organic layer having a thickness of 20 to 30 nm. This layer proceeds with single crystals of aragonite (a polymorph of CaCO3) consisting of "bricks" with dimensions of approximately 0.5 μm, finishing with layers of approximately 0.3 mm (mesostructure). Crabs are arthropods whose carapace is made of a mineralized hard component (which exhibits brittle fracture) and a softer organic component composed primarily of chitin. The brittle component is arranged in a helical pattern. Each of these mineral "rods" (1 μm in diameter) contains chitin–protein fibrils of approximately 60 nm diameter. These fibrils are made of canals, approximately 3 nm in diameter, that link the interior and exterior of the shell.
Applications

Biomaterials are used in:
Joint replacements
Bone plates
Intraocular lenses (IOLs) for eye surgery
Bone cement
Artificial ligaments and tendons
Dental implants for tooth fixation
Blood vessel prostheses
Heart valves
Skin repair devices (artificial tissue)
Cochlear replacements
Contact lenses
Breast implants
Drug delivery mechanisms
Sustainable materials
Vascular grafts
Stents
Nerve conduits
Surgical sutures, clips, and staples for wound closure
Pins and screws for fracture stabilisation
Surgical mesh

Biomaterials must be compatible with the body, and there are often issues of biocompatibility that must be resolved before a product can be placed on the market and used in a clinical setting. Because of this, biomaterials are usually subjected to the same requirements as new drug therapies. All manufacturing companies are also required to ensure traceability of all of their products, so that if a defective product is discovered, others in the same batch may be traced.

Bone grafts

Calcium sulfate (its α- and β-hemihydrates) is a well-known biocompatible material that is widely used as a bone graft substitute in dentistry or as its binder.

Heart valves

In the United States, 49% of the 250,000 valve replacement procedures performed annually involve a mechanical valve implant. The most widely used valve is a bileaflet disc heart valve, or St. Jude valve. The mechanics involve two semicircular discs moving back and forth, which both allow the flow of blood and form a seal against backflow. The valve is coated with pyrolytic carbon and secured to the surrounding tissue with a mesh of woven fabric called Dacron (du Pont's trade name for polyethylene terephthalate). The mesh allows the body's tissue to grow into and incorporate the valve.

Skin repair

Most of the time, artificial tissue is grown from the patient's own cells. However, when the damage is so extreme that it is impossible to use the patient's own cells, artificial tissue cells are grown. The difficulty is in finding a scaffold that the cells can grow and organize on. The scaffold must be biocompatible, allow cells to adhere to it, be mechanically strong, and be biodegradable. One successful scaffold is a copolymer of lactic acid and glycolic acid.

Properties

As discussed previously, biomaterials are used in medical devices to treat, assist, or replace a function within the human body. The application of a specific biomaterial must combine the necessary composition, material properties, structure, and desired in vivo reaction in order to perform the desired function. Categorizations of different desired properties are defined in order to maximize functional results.

Host response

Host response is defined as the "response of the host organism (local and systemic) to the implanted material or device". Most materials will provoke a reaction when in contact with the human body. The success of a biomaterial relies on the host tissue's reaction with the foreign material. Specific reactions between the host tissue and the biomaterial can be generated through the biocompatibility of the material.

Biomaterial and tissue interactions

The in vivo functionality and longevity of any implantable medical device is affected by the body's response to the foreign material. The body undergoes a cascade of processes, defined under the foreign body response (FBR), in order to protect the host from the foreign material.
The interactions of the device with the host tissue/blood, and of the host tissue/blood with the device, must be understood in order to prevent complications and device failure. Tissue injury caused by device implantation provokes inflammatory and healing responses during the FBR. The inflammatory response occurs over two time periods: the acute phase and the chronic phase. The acute phase occurs during the initial hours to days of implantation and is identified by fluid and protein exudation along with a neutrophilic reaction. During the acute phase, the body attempts to clean and heal the wound by delivering excess blood and proteins, and monocytes are recruited to the site. Continued inflammation leads to the chronic phase, which can be characterized by the presence of monocytes, macrophages, and lymphocytes. In addition, blood vessels and connective tissue form in order to heal the wounded area.

Compatibility

Biocompatibility is related to the behavior of biomaterials in various environments under various chemical and physical conditions. The term may refer to specific properties of a material without specifying where or how the material is to be used. For example, a material may elicit little or no immune response in a given organism, and may or may not be able to integrate with a particular cell type or tissue. Immuno-informed biomaterials that direct the immune response, rather than attempting to circumvent it, are one approach that shows promise. The ambiguity of the term reflects the ongoing development of insights into "how biomaterials interact with the human body" and eventually "how those interactions determine the clinical success of a medical device (such as a pacemaker or hip replacement)". Modern medical devices and prostheses are often made of more than one material, so it might not always be sufficient to talk about the biocompatibility of a specific material. Surgical implantation of a biomaterial into the body triggers an inflammatory reaction in the organism, with the associated healing of the damaged tissue. Depending upon the composition of the implanted material, the surface of the implant, the mechanism of fatigue, and chemical decomposition, several other reactions are possible. These can be local as well as systemic. They include the immune response, foreign body reaction with isolation of the implant by vascular connective tissue, possible infection, and impact on the lifespan of the implant. Graft-versus-host disease is an auto- and alloimmune disorder, exhibiting a variable clinical course. It can manifest in either acute or chronic form, affecting multiple organs and tissues and causing serious complications in clinical practice, both during transplantation and during the implementation of biocompatible materials.

Toxicity

A biomaterial should perform its intended function within the living body without negatively affecting other bodily tissues and organs. In order to prevent unwanted organ and tissue interactions, biomaterials should be non-toxic. The toxicity of a biomaterial refers to the substances that are emitted from the biomaterial while in vivo. A biomaterial should not give off anything to its environment unless it is intended to do so. Non-toxicity means that a biomaterial is noncarcinogenic, nonpyrogenic, nonallergenic, blood compatible, and noninflammatory. However, a biomaterial can be designed to include toxicity for an intended purpose. For example, the application of toxic biomaterials is studied during in vivo and in vitro cancer immunotherapy testing.
Toxic biomaterials offer an opportunity to manipulate and control cancer cells. One recent study states: "Advanced nanobiomaterials, including liposomes, polymers, and silica, play a vital role in the codelivery of drugs and immunomodulators. These nanobiomaterial-based delivery systems could effectively promote antitumor immune responses and simultaneously reduce toxic adverse effects." This is a prime example of how the biocompatibility of a biomaterial can be altered to produce a desired function.

Biodegradable biomaterials

Biodegradable biomaterials are materials that degrade through natural enzymatic reactions. The application of biodegradable synthetic polymers began in the late 1960s. Biodegradable materials have an advantage over other materials in that they carry a lower risk of harmful long-term effects. In addition to these ethical advantages, biodegradable materials also offer improved biocompatibility for implantation. Several properties, including biocompatibility, are important when considering different biodegradable biomaterials. Biodegradable biomaterials can be synthetic or natural, depending on their source and the type of extracellular matrix (ECM).

Biocompatible plastics

Some of the most commonly used biocompatible materials (or biomaterials) are polymers, due to their inherent flexibility and tunable mechanical properties. Medical devices made of plastics are often made of a select few, including: cyclic olefin copolymer (COC), polycarbonate (PC), polyetherimide (PEI), medical-grade polyvinyl chloride (PVC), polyethersulfone (PES), polyethylene (PE), polyetheretherketone (PEEK) and polypropylene (PP). To ensure biocompatibility, there is a series of regulated tests that a material must pass to be certified for use. These include the United States Pharmacopeia Class VI (USP Class VI) Biological Reactivity Test and the International Organization for Standardization 10993 (ISO 10993) Biological Evaluation of Medical Devices. The main objective of biocompatibility tests is to quantify the acute and chronic toxicity of a material and determine any potential adverse effects during use conditions; thus the tests required for a given material depend on its end use (e.g., blood contact, central nervous system, etc.).

Surface and bulk properties

Two properties that have a large effect on the functionality of a biomaterial are its surface and bulk properties. Bulk properties are the physical and chemical properties that characterize the biomaterial for its entire lifetime. They can be specifically engineered to mimic the physiochemical properties of the tissue that the material is replacing. They are mechanical properties that arise from a material's atomic and molecular construction. Important bulk properties:
Chemical composition
Microstructure
Elasticity
Tensile strength
Density
Hardness
Electrical conductivity
Thermal conductivity

Surface properties are the chemical and topographical features on the surface of the biomaterial that interact directly with the host blood/tissue. Surface engineering and modification allow clinicians to better control the interactions of a biomaterial with the host living system. Important surface properties:
Wettability (surface energy)
Surface chemistry
Surface texture (smooth/rough); topographical factors, including size, shape, alignment, and structure, determine the roughness of a material
Surface tension
Surface charge
Mechanical properties

In addition to being certified as biocompatible, biomaterials must be engineered specifically for their target application within a medical device. This is especially important in terms of the mechanical properties, which govern the way a given biomaterial behaves. One of the most relevant material parameters is the Young's modulus, E, which describes a material's elastic response to stress. The Young's moduli of the tissue and of the device coupled to it must closely match for optimal compatibility between device and body, whether the device is implanted or mounted externally. Matching the elastic modulus makes it possible to limit movement and delamination at the biointerface between implant and tissue, as well as to avoid stress concentrations that can lead to mechanical failure. Other important properties are the tensile and compressive strengths, which quantify the maximum stresses a material can withstand before breaking, and which may be used to set stress limits that a device may be subject to within or external to the body. Depending on the application, it may be desirable for a biomaterial to have high strength so that it is resistant to failure when subjected to a load; in other applications it may be beneficial for the material to be of low strength. There is a careful balance between strength and stiffness that determines how robust to failure the biomaterial device is. Typically, as the elasticity of the biomaterial increases, the ultimate tensile strength decreases, and vice versa. One application where a high-strength material is undesirable is in neural probes; if a high-strength material is used in these applications, the tissue will always fail before the device does (under applied load), because the Young's modulus of the dura mater and cerebral tissue is on the order of 500 Pa. When this happens, irreversible damage to the brain can occur; thus the biomaterial must have an elastic modulus less than or equal to that of brain tissue and a low tensile strength if an applied load is expected. For implanted biomaterials that may experience temperature fluctuations, e.g., dental implants, ductility is important. The material must be ductile for a reason similar to why the tensile strength cannot be too high: ductility allows the material to bend without fracture and also prevents the concentration of stresses in the tissue when the temperature changes. The material property of toughness is also important for dental implants, as well as for any other rigid, load-bearing implant such as a replacement hip joint. Toughness describes a material's ability to deform under applied stress without fracturing; high toughness allows biomaterial implants to last longer within the body, especially when subjected to large or cyclic stresses, such as the stresses applied to a hip joint during running. For medical devices that are implanted or attached to the skin, another important property requiring consideration is the flexural rigidity, D. Flexural rigidity determines how well the device surface can maintain conformal contact with the tissue surface, which is especially important for devices that measure tissue motion (strain) or electrical signals (impedance), or that are designed to stick to the skin without delaminating, as in epidermal electronics. Since flexural rigidity depends on the thickness of the material, h, to the third power (h³), it is very important that a biomaterial can be formed into thin layers in the applications mentioned above, where conformality is paramount.
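To make the h³ scaling concrete, the following sketch computes the flexural rigidity of a thin plate, D = E·h³ / (12(1 − ν²)). The modulus and thickness values are illustrative assumptions, not measured device data.

```python
# Sketch: flexural rigidity of a thin plate, D = E*h^3 / (12*(1 - nu^2)).
# All numeric values below are assumed for illustration.
def flexural_rigidity(E: float, h: float, nu: float = 0.3) -> float:
    """E: Young's modulus (Pa), h: thickness (m), nu: Poisson's ratio."""
    return E * h**3 / (12.0 * (1.0 - nu**2))

E_film = 2.5e9  # ~2.5 GPa, a typical stiff polymer film (assumed)
for h in (100e-6, 10e-6, 1e-6):  # 100 um, 10 um, 1 um
    D = flexural_rigidity(E_film, h)
    print(f"h = {h*1e6:6.1f} um -> D = {D:.3e} N*m")
# Each 10x reduction in thickness cuts D by 1000x, which is why
# conformal (e.g., epidermal) devices use micrometre-scale layers.
```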
Structure

The molecular composition of a biomaterial determines its physical and chemical properties. These compositions create complex structures that allow the biomaterial to function, and are therefore necessary to define and understand in order to develop a biomaterial. Biomaterials can be designed to replicate natural organisms, a process known as biomimetics. The structure of a biomaterial can be observed at different levels to better understand a material's properties and function.

Atomic structure

The arrangement of atoms and ions within a material is one of the most important structural properties of a biomaterial. The atomic structure of a material can be viewed at several levels: the subatomic level, the atomic or molecular level, and the ultrastructure created by the atoms and molecules. Intermolecular forces between the atoms and molecules that compose the material determine its material and chemical properties. At the subatomic level, the electronic structure of an individual atom is observed to define its interactions with other atoms and molecules. The molecular structure describes the arrangement of atoms within the material. Finally, the ultrastructure is the three-dimensional structure created from the atomic and molecular structures of the material. The solid state of a material is characterized by the intramolecular bonds between the atoms and molecules that comprise it. Types of intramolecular bonds include ionic bonds, covalent bonds, and metallic bonds. These bonds dictate the physical and chemical properties of the material, as well as determining the type of material (ceramic, metal, or polymer).

Microstructure

The microstructure of a material is the structure of an object, organism, or material as viewed at magnifications exceeding 25 times. It is composed of the different phases of form, size, and distribution of grains, pores, precipitates, and so on. Most solid microstructures are crystalline; however, some materials, such as certain polymers, do not crystallize in the solid state.

Crystalline structure

A crystalline structure is a composition of ions, atoms, and molecules that are held together and ordered in a 3D shape. The main difference between a crystalline and an amorphous structure is the degree of order: a crystalline structure has the highest level of order possible in the material, whereas an amorphous structure contains irregularities in the ordering pattern. One way to describe crystalline structures is through the crystal lattice, a three-dimensional representation of the locations of a repeating unit (the unit cell) in the structure. There are 14 distinct configurations of atomic arrangement in a crystalline structure, all of which are represented by the Bravais lattices.

Defects of crystalline structure

During the formation of a crystalline structure, various impurities, irregularities, and other defects can form. These imperfections can arise through deformation of the solid, rapid cooling, or high-energy radiation. Types of defects include point defects, line defects, and edge dislocations.
Macrostructure

Macrostructure refers to the overall geometric properties that influence the force at failure, stiffness, bending, stress distribution, and weight of the material. Little to no magnification is required to reveal the macrostructure of a material. Observing the macrostructure reveals properties such as cavities, porosity, gas bubbles, stratification, and fissures. The material's strength and elastic modulus are both independent of the macrostructure.

Natural biomaterials

Biomaterials can be constructed using only materials sourced from plants and animals in order to alter, replace, or repair human tissues and organs. Natural biomaterials were used as early as ancient Egypt, where animal skin was used as sutures. A more modern example is a hip replacement using ivory, first recorded in Germany in 1891.

Valuable criteria for viable natural biomaterials:
Biodegradable
Biocompatible
Able to promote cell attachment and growth
Non-toxic

Examples of natural biomaterials:
Alginate
Matrigel
Fibrin
Collagen
Myocardial tissue engineering

Biopolymers

Biopolymers are polymers produced by living organisms. Cellulose and starch, proteins and peptides, and DNA and RNA are all examples of biopolymers, in which the monomeric units are, respectively, sugars, amino acids, and nucleotides. Cellulose is both the most common biopolymer and the most common organic compound on Earth: about 33% of all plant matter is cellulose. In a similar manner, silk (a proteinaceous biopolymer) has garnered tremendous research interest in a myriad of domains, including tissue engineering and regenerative medicine, microfluidics, and drug delivery.

See also
Bionics
Hydrogel
Polymeric surface
Surface modification of biomaterials with proteins
Synthetic biodegradable polymer
List of biomaterials

Footnotes

References

External links
Journal of Biomaterials Applications
CREB – Biomedical Engineering Research Centre
Department of Biomaterials at the Max Planck Institute of Colloids and Interfaces in Potsdam-Golm, Germany
Open Innovation Campus for Biomaterials

Biomolecules
Biomaterial
[ "Physics", "Chemistry", "Biology" ]
5,203
[ "Biomaterials", "Natural products", "Biochemistry", "Organic compounds", "Materials", "Biomolecules", "Molecular biology", "Structural biology", "Matter", "Medical technology" ]
6,261,834
https://en.wikipedia.org/wiki/Half-value%20layer
A material's half-value layer (HVL), or half-value thickness, is the thickness of the material at which the intensity of radiation entering it is reduced by one half. HVL can also be expressed in terms of air kerma rate (AKR) rather than intensity: the half-value layer is the thickness of specified material that "attenuates the beam of radiation to an extent such that the AKR is reduced to one-half of its original value. In this definition the contribution of all scattered radiation, other than any [...] present initially in the beam concerned, is deemed to be excluded." Rather than AKR, measurements of air kerma, exposure, or exposure rate can be used to determine the half-value layer, as long as this is stated in the description. Half-value layer refers to the first half-value layer; a subsequent (e.g., second) half-value layer is the amount of specified material that reduces the air kerma rate by one-half after material equal to the sum of all previous half-value layers has already been inserted into the beam. The quarter-value layer is the amount of specified material that reduces the air kerma rate (or exposure rate, exposure, air kerma, etc.) to one fourth of the value obtained without any test filters. The quarter-value layer is equal to the sum of the first and second half-value layers.

The homogeneity factor (HF) describes the polychromatic nature of the beam and is given by the ratio of the first half-value layer to the second: HF = HVL1 / HVL2. The HF for a narrow beam will always be less than or equal to one (it is equal to one only in the case of a monoenergetic beam). In the case of a narrow polychromatic beam, the HF is less than one because of beam hardening.

HVL is related to the mean free path; however, the mean free path is the average distance a unit of radiation travels in the material before being absorbed, whereas HVL is the average amount of material needed to absorb 50% of all radiation (i.e., to reduce the intensity of the incident radiation by half). In the case of sound waves, HVL is the distance that it takes for the intensity of a sound wave to be reduced to one-half of its original value. The HVL of sound waves is determined both by the medium through which the wave travels and by the frequency of the beam. A "thin" half-value layer (a drop of −3 dB over a short distance) results from a high-frequency sound wave and a medium with a high rate of attenuation, such as bone. HVL is measured in units of length.

A similar concept is the tenth-value layer, or TVL. The TVL is the average amount of material needed to absorb 90% of all radiation, i.e., to reduce it to a tenth of the original intensity. One TVL is greater than or equal to log2(10), or approximately 3.32, HVLs, with equality achieved for a monoenergetic beam. A worked numeric example is sketched below.

Here are example approximate half-value layers for a variety of materials against gamma rays from an iridium-192 source (decay energies in MeV: beta 0.67 (49%); gamma 0.308 (28%); gamma 0.468 (46%)):
Concrete: 44.5 mm
Steel: 12.7 mm
Lead: 4.8 mm
Tungsten: 3.3 mm
Uranium: 2.8 mm

See also
Attenuation coefficient
Radiation protection

References

Radiation health effects Radiometry
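The attenuation relationships above can be made concrete with a short sketch. It assumes the narrow-beam, monoenergetic approximation I/I0 = 0.5^(x/HVL); the 4.8 mm lead HVL is taken from the list above, and for a real polychromatic beam, beam hardening would make successive half-value layers thicker.

```python
import math

# Sketch: narrow-beam, monoenergetic attenuation through a shield.
# I/I0 = 0.5 ** (x / HVL); illustrative only, ignores beam hardening.
def transmitted_fraction(x_mm: float, hvl_mm: float) -> float:
    return 0.5 ** (x_mm / hvl_mm)

HVL_LEAD_MM = 4.8  # for Ir-192 gamma rays, from the list above
for x in (4.8, 9.6, 16.0):
    f = transmitted_fraction(x, HVL_LEAD_MM)
    print(f"{x:5.1f} mm Pb -> {f:.3f} of incident intensity")
# 4.8 mm -> 0.500, 9.6 mm -> 0.250, 16.0 mm -> 0.099

# Tenth-value layer in the monoenergetic case: TVL = log2(10) * HVL
print(f"TVL(lead) = {math.log2(10) * HVL_LEAD_MM:.1f} mm")  # ~15.9 mm
```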
Half-value layer
[ "Chemistry", "Materials_science", "Engineering" ]
754
[ "Radiation health effects", "Telecommunications engineering", "Radiation effects", "Radioactivity", "Radiometry" ]
6,262,236
https://en.wikipedia.org/wiki/Secure%20two-party%20computation
Secure two-party computation (2PC), also known as secure function evaluation, is a sub-problem of secure multi-party computation (MPC) that has received special attention from researchers because of its close relation to many cryptographic tasks. The goal of 2PC is to create a generic protocol that allows two parties to jointly compute an arbitrary function on their inputs without sharing the value of their inputs with the opposing party. One of the best-known examples of 2PC is Yao's millionaires' problem, in which two parties, Alice and Bob, are millionaires who wish to determine who is wealthier without revealing their wealth. Formally, Alice has wealth a, Bob has wealth b, and they wish to compute the predicate a ≥ b without revealing the values a or b.

Yao's garbled circuit protocol for two-party computation provided security only against passive adversaries. One of the first general solutions for achieving security against an active adversary was introduced by Goldreich, Micali and Wigderson, by applying zero-knowledge proofs to enforce semi-honest behavior. This approach was known to be impractical for years due to its high complexity overhead. However, significant improvements have been made toward applying this method in 2PC, and Abascal, Faghihi Sereshgi, Hazay, Yuval Ishai and Venkitasubramaniam gave the first efficient protocol based on this approach. Other types of 2PC protocols that are secure against active adversaries were proposed by Yehuda Lindell and Benny Pinkas; by Ishai, Manoj Prabhakaran and Amit Sahai; and by Jesper Buus Nielsen and Claudio Orlandi. Another solution for this problem, which explicitly works with committed inputs, was proposed by Stanisław Jarecki and Vitaly Shmatikov.

Security

The security of a two-party computation protocol is usually defined through comparison with an idealised scenario that is secure by definition. The idealised scenario involves a trusted party that collects the inputs of the two parties (commonly a client and a server) over secure channels and returns the result if neither party chooses to abort; a toy model of this ideal functionality is sketched below. The cryptographic two-party computation protocol is secure if it behaves no worse than this ideal protocol, but without the additional trust assumptions. This is usually modeled using a simulator. The task of the simulator is to act as a wrapper around the idealised protocol to make it appear like the cryptographic protocol. The simulation succeeds, with respect to an information-theoretic or a computationally bounded adversary, if the output of the simulator is statistically close to, or computationally indistinguishable from, the output of the cryptographic protocol, respectively. A two-party computation protocol is secure if for every adversary there exists a successful simulator.

See also
Oblivious transfer, an important primitive in 2PC
Universal composability

References

Cryptography
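As a toy illustration of the idealised scenario described above, the following sketch models the trusted party for Yao's millionaires' problem. This is only the ideal functionality that the security definition compares against, not a cryptographic protocol; a real 2PC protocol would replace the trusted party with techniques such as garbled circuits and oblivious transfer. All class and function names are our own.

```python
# Toy model of the IDEAL functionality for Yao's millionaires' problem.
# NOT a secure protocol: it is the trusted-party baseline that a real
# 2PC protocol must emulate without any trusted party.
class Party:
    def __init__(self, wealth: int):
        self._wealth = wealth  # private input, never sent to the other party

    def send_to_trusted_party(self) -> int:
        return self._wealth    # assumed to travel over a secure channel

def trusted_party_millionaires(a: int, b: int) -> bool:
    # The trusted party sees both inputs but reveals only the output bit.
    return a >= b

alice, bob = Party(3_000_000), Party(5_000_000)
result = trusted_party_millionaires(alice.send_to_trusted_party(),
                                    bob.send_to_trusted_party())
print("Alice is at least as wealthy as Bob:", result)  # -> False
# A secure 2PC protocol may leak no more than this single output bit.
```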
Secure two-party computation
[ "Mathematics", "Engineering" ]
583
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]