Columns: id (int64, 39 to 79M) · url (string, 32 to 168 chars) · text (string, 7 to 145k chars) · source (string, 2 to 105 chars) · categories (list, 1 to 6 items) · token_count (int64, 3 to 32.2k) · subcategories (list, 0 to 27 items)
22,586,949
https://en.wikipedia.org/wiki/Vortex%20stretching
In fluid dynamics, vortex stretching is the lengthening of vortices in three-dimensional fluid flow, associated with a corresponding increase of the component of vorticity in the stretching direction, due to the conservation of angular momentum. Vortex stretching is associated with a particular term in the vorticity equation. For example, vorticity transport in an incompressible inviscid flow is governed by Dω/Dt = (ω · ∇)v, where D/Dt is the material derivative, ω is the vorticity and v is the flow velocity. The source term on the right-hand side is the vortex stretching term. It amplifies the vorticity when the velocity is diverging in the direction parallel to ω. A simple example of vortex stretching in a viscous flow is provided by the Burgers vortex. Vortex stretching is at the core of the description of the turbulence energy cascade from the large scales to the small scales in turbulence. On average, fluid elements in turbulence are lengthened more than they are squeezed, which results in more vortex stretching than vortex squeezing. For incompressible flow, volume conservation of fluid elements implies that the lengthening is accompanied by thinning of the fluid elements in the directions perpendicular to the stretching direction, which reduces the radial length scale of the associated vorticity. Finally, at the small scales of the order of the Kolmogorov microscales, the turbulence kinetic energy is dissipated into heat through the action of molecular viscosity. Notes References Fluid dynamics Turbulence
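As a minimal illustration of the stretching term (our own notation, not part of the article): if the vorticity is momentarily aligned with the x-axis, its aligned component obeys a growth equation driven by the velocity gradient along the vortex axis.

```latex
% Vorticity transport in incompressible inviscid flow, and the stretching term
% for vorticity aligned with the x-axis:
\[
\frac{D\boldsymbol{\omega}}{Dt} = (\boldsymbol{\omega}\cdot\nabla)\mathbf{v},
\qquad
\boldsymbol{\omega} = \omega\,\hat{\mathbf{e}}_x
\;\Longrightarrow\;
\frac{D\omega_x}{Dt} = \omega\,\frac{\partial v_x}{\partial x}.
\]
```

So the aligned vorticity component grows whenever ∂v_x/∂x > 0, i.e. while the flow stretches the fluid element along the vortex axis.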
Vortex stretching
[ "Chemistry", "Engineering" ]
290
[ "Turbulence", "Chemical engineering", "Piping", "Fluid dynamics stubs", "Fluid dynamics" ]
25,472,715
https://en.wikipedia.org/wiki/Gliese%20649%20b
Gliese 649 b, or Gl 649 b, is an extrasolar planet orbiting the 10th-magnitude M-type star Gliese 649, 10 parsecs from Earth. This planet is a sub-Jupiter, with a mass of 0.328 Jupiter masses, and it orbits at 1.135 AU. References Exoplanets discovered in 2009 Exoplanets detected by radial velocity Giant planets Hercules (constellation)
Gliese 649 b
[ "Astronomy" ]
88
[ "Hercules (constellation)", "Constellations" ]
25,474,577
https://en.wikipedia.org/wiki/N-slit%20interferometric%20equation
Quantum mechanics was first applied to optics, and to interference in particular, by Paul Dirac. Richard Feynman, in his Lectures on Physics, uses Dirac's notation to describe thought experiments on double-slit interference of electrons. Feynman's approach was extended to N-slit interferometers, for either single-photon illumination or narrow-linewidth laser illumination (that is, illumination by indistinguishable photons), by Frank Duarte. The N-slit interferometer was first applied in the generation and measurement of complex interference patterns. In this article the generalized N-slit interferometric equation, derived via Dirac's notation, is described. Although originally derived to reproduce and predict N-slit interferograms, this equation also has applications to other areas of optics. Probability amplitudes and the N-slit interferometric equation In this approach the probability amplitude for the propagation of a photon from a source s to an interference plane x, via an array of slits j, is given using Dirac's bra–ket notation as ⟨x|s⟩ = Σ_j ⟨x|j⟩⟨j|s⟩. This equation represents the probability amplitude of a photon propagating from s to x via the array of slits j. Using a wavefunction representation for the probability amplitudes, with each amplitude written as a real wavefunction Ψ(r_j) multiplied by a phase factor whose angles are the incidence and diffraction phase angles, respectively, the overall probability amplitude can be rewritten as a sum of such terms, and after some algebra the corresponding probability becomes |⟨x|s⟩|² = Σ_{j=1}^{N} Ψ(r_j)² + 2 Σ_{j=1}^{N} Σ_{m=j+1}^{N} Ψ(r_j) Ψ(r_m) cos(Ω_m − Ω_j), where N is the total number of slits in the array, or transmission grating, and the term in parentheses represents the phase that is directly related to the exact path differences derived from the geometry of the N-slit array, the intra-interferometric distance, and the interferometric plane x. In its simplest version, the phase term can be related to the geometry using Ω_m − Ω_j = k (l_m − l_j), where k is the wavenumber and l_m and l_j represent the exact path differences. Here the Dirac–Duarte (DD) interferometric equation is a probability distribution that is related to the intensity distribution measured experimentally. The calculations are performed numerically. The DD interferometric equation applies to the propagation of a single photon, or the propagation of an ensemble of indistinguishable photons, and enables the accurate prediction of measured N-slit interferometric patterns continuously from the near to the far field. Interferograms generated with this equation have been shown to compare well with measured interferograms for both even and odd values of N from 2 to 1600. Applications At a practical level, the N-slit interferometric equation was introduced for imaging applications and is routinely applied to predict N-slit laser interferograms, both in the near and far field. Thus, it has become a valuable tool in the alignment of large, and very large, N-slit laser interferometers used in the study of clear-air turbulence and the propagation of interferometric characters for secure laser communications in space. Other analytical applications are described below. Generalized diffraction and refraction The N-slit interferometric equation has been applied to describe classical phenomena such as interference, diffraction, refraction (Snell's law), and reflection, in a rational and unified approach, using quantum mechanics principles. In particular, this interferometric approach has been used to derive generalized refraction equations for both positive and negative refraction, thus providing a clear link between diffraction theory and generalized refraction.
From the phase term of the interferometric equation, an expression relating the angles of incidence and diffraction to the wavelength can be obtained; in the appropriate limit this can be written as the generalized diffraction grating equation, d(sin Θ ± sin Φ) = mλ, where Θ is the angle of incidence, Φ is the angle of diffraction, λ is the wavelength, and m is the order of diffraction. Under certain conditions, which can be readily obtained experimentally, the phase term becomes the generalized refraction equation, where Θ is the angle of incidence and Φ now becomes the angle of refraction. Cavity linewidth equation Furthermore, the N-slit interferometric equation has been applied to derive the cavity linewidth equation applicable to dispersive oscillators, such as the multiple-prism grating laser oscillators: Δλ ≈ Δθ (∂θ/∂λ)⁻¹. In this equation, Δθ is the beam divergence and the overall intracavity angular dispersion (∂θ/∂λ) is the quantity in parentheses. Fourier transform imaging Researchers working on Fourier-transform ghost imaging consider the N-slit interferometric equation as an avenue to investigate the quantum nature of ghost imaging. Also, the N-slit interferometric approach is one of several approaches applied to describe basic optical phenomena in a cohesive and unified manner. Note: given the various terminologies in use for N-slit interferometry, it should be made explicit that the N-slit interferometric equation applies to two-slit interference, three-slit interference, four-slit interference, etc. Quantum entanglement The Dirac principles and probabilistic methodology used to derive the N-slit interferometric equation have also been used to derive the polarization quantum entanglement probability amplitude and the corresponding probability amplitudes depicting the propagation of multiple pairs of quanta. Comparison with classical methods A comparison of the Dirac approach with classical methods, in the performance of interferometric calculations, has been done by Travis S. Taylor et al. These authors concluded that the interferometric equation, derived via the Dirac formalism, was advantageous in the very near field. Some differences between the DD interferometric equation and classical formalisms can be summarized as follows: The classical Fresnel approach is used for near-field applications and the classical Fraunhofer approach is used for far-field applications. That division is not necessary when using the DD interferometric approach, as this formalism applies to both the near- and far-field cases. The Fraunhofer approach works for plane-wave illumination. The DD approach works for both plane-wave illumination and highly diffractive illumination patterns. The DD interferometric equation is statistical in character. This is not the case for the classical formulations. So far there has been no published comparison with more general classical approaches based on the Huygens–Fresnel principle or Kirchhoff's diffraction formula. See also Beam expander Dirac's notation Fraunhofer diffraction (mathematics) Free-space optical communications Grating equation Laser communication in space Laser linewidth Multiple-prism dispersion theory N-slit interferometer References Equations Interference Interferometers Interferometry Quantum mechanics Wave mechanics
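A numerical sketch of the kind of calculation described above (our own illustration, not code from the article; the wavelength, slit spacing, and distances are made-up example values) sums the slit amplitudes with their geometric phases and recovers the familiar N-slit pattern:

```python
# Evaluate an N-slit interference pattern by summing slit amplitudes with
# their exact geometric phases, in the spirit of the interferometric equation above.
import numpy as np

wavelength = 632.8e-9          # He-Ne laser line, metres (example value)
k = 2 * np.pi / wavelength     # wavenumber
N = 5                          # number of slits
pitch = 50e-6                  # slit spacing, metres (example value)
D = 0.5                        # slit-plane to screen distance, metres (example value)

slits = (np.arange(N) - (N - 1) / 2) * pitch        # slit positions
screen = np.linspace(-5e-3, 5e-3, 2001)             # screen coordinates

# amplitude at each screen point: sum over slits of exp(i * k * exact path length)
paths = np.sqrt(D**2 + (screen[:, None] - slits[None, :])**2)
amplitude = np.exp(1j * k * paths).sum(axis=1)
intensity = np.abs(amplitude)**2                     # |<x|s>|^2, up to normalisation

print(intensity.max() / N**2)   # ≈ 1: principal maxima scale as N**2
```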
N-slit interferometric equation
[ "Physics", "Mathematics", "Technology", "Engineering" ]
1,337
[ "Physical phenomena", "Theoretical physics", "Mathematical objects", "Classical mechanics", "Quantum mechanics", "Equations", "Measuring instruments", "Waves", "Wave mechanics", "Interferometers" ]
25,480,269
https://en.wikipedia.org/wiki/Dredge-up
A dredge-up is any one of several stages in the evolution of some stars. By definition, during a dredge-up, a convection zone extends all the way from the star's surface down to the layers of material that have undergone fusion. Consequently, the fusion products are mixed into the outer layers of the star's atmosphere, where they can be seen in stellar spectra. Multiple stages The first dredge-up The first dredge-up occurs when a main-sequence star enters the red-giant branch. As a result of the convective mixing, the outer atmosphere will display the spectral signature of hydrogen fusion: The ¹²C/¹³C and C/N ratios are lowered, and the surface abundances of lithium and beryllium may be reduced. The counter-intuitive existence of lithium-rich red giant stars that have gone through first dredge-up may be explained by scenarios such as mass transfer. The second dredge-up The second dredge-up occurs in stars with 4–8 solar masses. When helium fusion comes to an end at the core, convection mixes the products of the CNO cycle. This second dredge-up causes an increase in the surface abundance of ⁴He and ¹⁴N, whereas the amount of ¹²C and ¹⁶O decreases. The third dredge-up The third dredge-up occurs after a star enters the asymptotic giant branch, after a flash occurs in a helium-burning shell. The third dredge-up brings helium, carbon, and the s-process products to the surface, increasing the abundance of carbon relative to oxygen; in some larger stars this is the process that turns the star into a carbon star. Note: The names of the dredge-ups are set by the evolutionary and structural state of the star in which each occurs, not by the sequence in which they occur in any one star. Some lower-mass stars experience the first and third dredge-ups in their evolution without ever having gone through the second. References Stellar evolution
Dredge-up
[ "Physics" ]
409
[ "Astrophysics", "Stellar evolution" ]
18,826,785
https://en.wikipedia.org/wiki/Oxipurinol
Oxipurinol (INN, or oxypurinol USAN) is an inhibitor of xanthine oxidase. It is an active metabolite of allopurinol and it is cleared renally. In cases of renal disease, this metabolite will accumulate to toxic levels. By inhibiting xanthine oxidase, it reduces uric acid production. High serum uric acid levels may result in gout, kidney stones, and other medical conditions. References Pyrazolopyrimidines Xanthine oxidase inhibitors Human drug metabolites
Oxipurinol
[ "Chemistry" ]
124
[ "Chemicals in medicine", "Human drug metabolites" ]
18,827,079
https://en.wikipedia.org/wiki/Aluminium%20glycinate
Aluminium glycinate (or dihydroxyaluminium aminoacetate) is an antacid. See also Aceglutamide References Aluminium compounds Antacids Glycinates Metal-amino acid complexes
Aluminium glycinate
[ "Chemistry" ]
47
[ "Coordination chemistry", "Metal-amino acid complexes" ]
18,827,698
https://en.wikipedia.org/wiki/Torque%20effect
The torque effect, experienced in helicopters and single-propeller-powered aircraft, is an example of Isaac Newton's third law of motion: "for every action, there is an equal and opposite reaction." In helicopters, the torque effect causes the main rotor to turn the fuselage in the opposite direction from the rotor's spin. A small tail rotor is the most common configuration used to counter this phenomenon. In a single-propeller plane, the torque effect tends to roll the aircraft to the left, in reaction to the propeller's clockwise spin (as seen from the cockpit). External links Aerospaceweb.org page on Torque effect Aerospace engineering
Torque effect
[ "Engineering" ]
134
[ "Aerospace engineering" ]
18,829,483
https://en.wikipedia.org/wiki/Formal%20scheme
In mathematics, specifically in algebraic geometry, a formal scheme is a type of space which includes data about its surroundings. Unlike an ordinary scheme, a formal scheme includes infinitesimal data that, in effect, points in a direction off of the scheme. For this reason, formal schemes frequently appear in topics such as deformation theory. But the concept is also used to prove a theorem such as the theorem on formal functions, which is used to deduce theorems of interest for usual schemes. A locally Noetherian scheme is a locally Noetherian formal scheme in a canonical way: the formal completion along itself. In other words, the category of locally Noetherian formal schemes contains all locally Noetherian schemes. Formal schemes were motivated by and generalize Zariski's theory of formal holomorphic functions. Algebraic geometry based on formal schemes is called formal algebraic geometry. Definition Formal schemes are usually defined only in the Noetherian case. While there have been several definitions of non-Noetherian formal schemes, these encounter technical problems. Consequently, we will only define locally Noetherian formal schemes. All rings will be assumed to be commutative and with unit. Let A be a (Noetherian) topological ring, that is, a ring A which is a topological space such that the operations of addition and multiplication are continuous. A is linearly topologized if zero has a base consisting of ideals. An ideal of definition I for a linearly topologized ring is an open ideal such that for every open neighborhood V of 0, there exists a positive integer n such that Iⁿ ⊆ V. A linearly topologized ring is preadmissible if it admits an ideal of definition, and it is admissible if it is also complete. (In the terminology of Bourbaki, this is "complete and separated".) Assume that A is admissible, and let I be an ideal of definition. A prime ideal is open if and only if it contains I. The set of open prime ideals of A, or equivalently the set of prime ideals of A/I, is the underlying topological space of the formal spectrum of A, denoted Spf A. Spf A has a structure sheaf which is defined using the structure sheaf of the spectrum of a ring. Let {I_λ} be a neighborhood basis for zero consisting of ideals of definition. All the spectra of the A/I_λ have the same underlying topological space but different structure sheaves. The structure sheaf of Spf A is the projective limit of the structure sheaves of the Spec A/I_λ. It can be shown that if f ∈ A and D_f is the set of all open prime ideals of A not containing f, then the structure sheaf assigns to D_f the completion of the localization A_f. Finally, a locally Noetherian formal scheme is a topologically ringed space (that is, a ringed space whose sheaf of rings is a sheaf of topological rings) such that each point admits an open neighborhood isomorphic (as a topologically ringed space) to the formal spectrum of a Noetherian ring. Morphisms between formal schemes A morphism f: X → Y of locally Noetherian formal schemes is a morphism of them as locally ringed spaces such that the induced map on sections is a continuous homomorphism of topological rings for any affine open subset U. f is said to be adic, or X is said to be a Y-adic formal scheme, if there exists an ideal of definition J of Y such that J·O_X is an ideal of definition for X. If f is adic, then this property holds for any ideal of definition. Examples For any ideal I and ring A we can define the I-adic topology on A, defined by its basis consisting of sets of the form a + Iⁿ. This is preadmissible, and admissible if A is I-adically complete.
In this case Spf A is the topological space Spec A/I with a sheaf of rings built from the completions A/Iⁿ instead of A/I. For example, let A = k[[t]] and I = (t). Then A/I = k, so the space Spf A is a single point (t), on which its structure sheaf takes the value k[[t]]. Compare this to Spec A/I, whose structure sheaf takes the value k at this point: this is an example of the idea that Spf A is a 'formal thickening' of Spec A/I inside Spec A. The formal completion of a closed subscheme. Consider the closed subscheme X of the affine plane over k defined by the ideal I = (y² − x³). Note that A₀ = k[x, y] is not I-adically complete; write A for its I-adic completion. In this case, Spf A = X as spaces, and its structure sheaf is the I-adic completion of the structure sheaf of X. Its global sections are A, as opposed to X, whose global sections are A/I. See also formal holomorphic function Deformation theory Schlessinger's theorem References External links formal completion Algebraic geometry Scheme theory
Formal scheme
[ "Mathematics" ]
1,005
[ "Fields of abstract algebra", "Algebraic geometry" ]
18,829,759
https://en.wikipedia.org/wiki/Valuative%20criterion
In mathematics, specifically algebraic geometry, the valuative criteria are a collection of results that make it possible to decide whether a morphism of algebraic varieties, or more generally schemes, is universally closed, separated, or proper. Statement of the valuative criteria Recall that a valuation ring A is a domain, so if K is the field of fractions of A, then Spec K is the generic point of Spec A. Let X and Y be schemes, and let f : X → Y be a morphism of schemes. Then the following are equivalent: f is separated (resp. universally closed, resp. proper) f is quasi-separated (resp. quasi-compact, resp. of finite type and quasi-separated) and for every valuation ring A, if Y' = Spec A and X' denotes the generic point of Y', then for every morphism Y' → Y and every morphism X' → X which lifts the generic point, there exists at most one (resp. at least one, resp. exactly one) lift Y' → X. The lifting condition is equivalent to specifying that the natural map Hom_Y(Y', X) → Hom_Y(X', X) is injective (resp. surjective, resp. bijective). Furthermore, in the special case when Y is (locally) Noetherian, it suffices to check the case that A is a discrete valuation ring. References Algebraic geometry Scheme theory
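The lifting condition is often drawn as a commutative square; the following is a minimal sketch of that diagram (our own rendering, using the standard notation), with the sought lift as the diagonal arrow:

```latex
% Valuative-criterion lifting square: the diagonal arrow Spec A -> X is the lift.
\[
\begin{array}{ccc}
\operatorname{Spec} K & \longrightarrow & X \\
\downarrow & \nearrow & \downarrow f \\
\operatorname{Spec} A & \longrightarrow & Y
\end{array}
\]
```

Here f is separated when at most one such diagonal arrow exists, universally closed when at least one exists, and proper when exactly one exists.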
Valuative criterion
[ "Mathematics" ]
301
[ "Fields of abstract algebra", "Algebraic geometry" ]
18,831,832
https://en.wikipedia.org/wiki/Institut%20des%20mol%C3%A9cules%20et%20de%20la%20mati%C3%A8re%20condens%C3%A9e%20de%20Lille
The Institut des molécules et de la matière condensée de Lille - IMMCL Chevreul (Institute for molecules and condensed matter in Lille) is a physics and chemistry research institute. It is a member of the University of Lille. Background history Academic research in chemistry in Lille started in the early days of the 19th century, with Charles Frédéric Kuhlmann's innovations in sulfuric acid production and his research on using platinum catalysis for the industrial production of nitric acid from ammonia (from 1823 to 1833). However, the Faculty of Sciences of Lille was only formally established in 1854, with a chemist, Louis Pasteur, as its first dean. Hence, academic and applied research in chemistry, catalysis, and later molecular physics was boosted from the 19th century onwards and further developed in the 20th and 21st centuries, in both fundamental and applied research, thanks to industry applications. (Source: History of chemistry education and research in Lille university) Locations Cité Scientifique: the IMMCL main site is located on the University of Lille science campus. The institute's researchers and the different experimentation labs are hosted in several buildings on the campus, including IMMCL's own buildings, the École nationale supérieure de chimie de Lille, and the École centrale de Lille (catalysis labs and chemical engineering labs). Other campuses: experimental facilities for researchers are also available at the following remote sites: Catalysis labs at campus Artois Condensed matter thermo-physics labs (LTPMC) at campus Littoral INSERM unit 761 – Biostructures and Drug Discovery. Research labs IMMCL research laboratories are accredited as French National Centre for Scientific Research (CNRS) laboratories. The different laboratories of the institute include: Laboratory for structure and properties of solid state (LSPES), UMR CNRS 8008 Laboratory for organic chemistry and macro-molecular chemistry (LCOM), UMR CNRS 8009 Laboratory of catalysis of Lille (LCL), UMR CNRS 8010, and Unit for catalysis and solid chemistry (UCCS), UMR CNRS 8181 Laboratory of crystal chemistry and physico-chemistry of solids (LCPS), UMR CNRS 8012 Laboratory for dynamics and structural properties of molecular matters (LDSMM), UMR CNRS 8024 Laboratory for metallurgy and material science, UMR CNRS 8517. Research area and doctoral college The IMMCL research roadmap includes the following areas: Polymers and organic functional materials Material oxides and catalysis Complex molecular liquids Organic and bio-organic synthesis Metallurgy and materials for energy Coerced and forced materials. They are integrated into the European Doctoral College Lille Nord de France and especially as part of its doctoral school science of materials, radiations and environment (SMRE), supported along with other research laboratories from the COMUE Lille Nord de France. References and links Chemical industry in France Chemical research institutes Materials science institutes Organizations established in 1854 University of Lille Nord de France 1854 establishments in France
Institut des molécules et de la matière condensée de Lille
[ "Chemistry", "Materials_science" ]
614
[ "Materials science organizations", "Chemical research institutes", "Materials science institutes" ]
18,832,276
https://en.wikipedia.org/wiki/Septimal%20tritone
A septimal tritone is a tritone (about one half of an octave) that involves the factor seven. There are two such intervals, which are inverses of each other. The lesser septimal tritone (also Huygens' tritone) is the musical interval with ratio 7:5 (582.51 cents). The greater septimal tritone (also Euler's tritone) is an interval with ratio 10:7 (617.49 cents). They are also known as the sub-fifth and super-fourth, or subminor fifth and supermajor fourth, respectively. The 7:5 interval (diminished fifth) is equal to a 6:5 minor third plus a 7:6 subminor third. The 10:7 interval (augmented fourth) is equal to a 5:4 major third plus an 8:7 supermajor second, or a 9:7 supermajor third plus a 10:9 major second. The difference between these two is the septimal sixth tone (50:49, 34.98 cents). 12 equal temperament and 22 equal temperament do not distinguish between these tritones; 19 equal temperament does distinguish them but doesn't match them closely. 31 equal temperament and 41 equal temperament both distinguish between them and match them closely. The lesser septimal tritone is the most consonant tritone when measured by combination tones, harmonic entropy, and period length. Depending on the temperament used, "the" tritone, defined as three whole tones, may be identified as either a lesser septimal tritone (in septimal meantone systems), a greater septimal tritone (when the tempered fifth is around 703 cents), neither (as in 72 equal temperament), or both (in 12 equal temperament only). References Augmented fourths Diminished fifths Tritones 7-limit tuning and intervals
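As a quick illustration of where the cent values quoted above come from (this snippet is our own, not part of the article), an interval with frequency ratio r spans 1200·log2(r) cents:

```python
# Convert frequency ratios to cents: cents = 1200 * log2(ratio).
from math import log2

def cents(ratio: float) -> float:
    return 1200 * log2(ratio)

print(round(cents(7 / 5), 2))     # 582.51  lesser septimal tritone (Huygens' tritone)
print(round(cents(10 / 7), 2))    # 617.49  greater septimal tritone (Euler's tritone)
print(round(cents(50 / 49), 2))   # 34.98   septimal sixth tone, i.e. (10/7) divided by (7/5)
```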
Septimal tritone
[ "Physics" ]
389
[ "Tritones", "Symmetry", "Musical symmetry" ]
18,832,302
https://en.wikipedia.org/wiki/Hall%E2%80%93Littlewood%20polynomials
In mathematics, the Hall–Littlewood polynomials are symmetric functions depending on a parameter t and a partition λ. They are Schur functions when t is 0 and monomial symmetric functions when t is 1, and are special cases of Macdonald polynomials. They were first defined indirectly by Philip Hall using the Hall algebra, and later defined directly by Dudley E. Littlewood (1961). Definition The Hall–Littlewood polynomial P is defined by
P_λ(x_1, …, x_n; t) = (∏_{i≥0} ∏_{j=1}^{m(i)} (1 − t)/(1 − t^j)) · Σ_{w ∈ S_n} w(x_1^{λ_1} ⋯ x_n^{λ_n} ∏_{i<j} (x_i − t·x_j)/(x_i − x_j)),
where λ is a partition with at most n parts λ_i, m(i) denotes the number of parts of λ equal to i, and S_n is the symmetric group of order n!. As an example, P_{(1)}(x_1, …, x_n; t) = x_1 + ⋯ + x_n for every t. Specializations We have that P_λ(x; 0) is the Schur polynomial s_λ(x), that P_λ(x; 1) is the monomial symmetric polynomial m_λ(x), and that P_λ(x; −1) gives the Schur P polynomials. Properties Expanding the Schur polynomials in terms of the Hall–Littlewood polynomials, one has
s_λ(x) = Σ_μ K_{λμ}(t) P_μ(x; t),
where the K_{λμ}(t) are the Kostka–Foulkes polynomials. Note that as t → 1, these reduce to the ordinary Kostka coefficients. A combinatorial description for the Kostka–Foulkes polynomials was given by Lascoux and Schützenberger,
K_{λμ}(t) = Σ_T t^{charge(T)},
where "charge" is a certain combinatorial statistic on semistandard Young tableaux, and the sum is taken over the set of all semi-standard Young tableaux T with shape λ and type μ. See also Hall polynomial References External links Orthogonal polynomials Algebraic combinatorics Symmetric functions
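The symmetrization formula above can be evaluated directly for a small number of variables. The following snippet (our own check written with sympy; the helper name is not standard) computes P_λ for n = 2 variables and verifies the t = 0 (Schur) and t = 1 (monomial) specializations for λ = (2):

```python
# Evaluate the Hall–Littlewood symmetrization formula for n = 2 variables.
from itertools import permutations
from sympy import symbols, simplify

x1, x2, t = symbols('x1 x2 t')

def hall_littlewood_2vars(lam):
    """P_lambda(x1, x2; t) for a partition lam with at most 2 parts."""
    xs = [x1, x2]
    lam = list(lam) + [0] * (2 - len(lam))           # pad with zero parts
    # normalization: prod over i >= 0 of prod_{j=1..m(i)} (1 - t)/(1 - t**j)
    norm = 1
    for part in set(lam):
        m = lam.count(part)
        for j in range(1, m + 1):
            norm *= (1 - t) / (1 - t**j)
    total = 0
    for w in permutations(range(2)):                  # sum over S_2
        xw = [xs[i] for i in w]
        term = xw[0]**lam[0] * xw[1]**lam[1]
        term *= (xw[0] - t * xw[1]) / (xw[0] - xw[1])
        total += term
    return simplify(norm * total)

P2 = hall_littlewood_2vars([2])
print(P2)                        # -t*x1*x2 + x1**2 + x1*x2 + x2**2 (up to ordering)
print(simplify(P2.subs(t, 0)))   # Schur s_(2)    = x1**2 + x1*x2 + x2**2
print(simplify(P2.subs(t, 1)))   # monomial m_(2) = x1**2 + x2**2
```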
Hall–Littlewood polynomials
[ "Physics", "Mathematics" ]
274
[ "Algebra", "Combinatorics", "Fields of abstract algebra", "Symmetric functions", "Algebraic combinatorics", "Symmetry" ]
18,836,858
https://en.wikipedia.org/wiki/Antiferroelectricity
In electromagnetics and materials science, antiferroelectricity is a physical property of certain materials. It is closely related to ferroelectricity; the relation between antiferroelectricity and ferroelectricity is analogous to the relation between antiferromagnetism and ferromagnetism. An antiferroelectric material consists of an ordered (crystalline) array of electric dipoles (from the ions and electrons in the material), but with adjacent dipoles oriented in opposite (antiparallel) directions (the dipoles of each orientation form interpenetrating sublattices, loosely analogous to a checkerboard pattern). This can be contrasted with a ferroelectric, in which the dipoles all point in the same direction. In an antiferroelectric, unlike a ferroelectric, the total, macroscopic spontaneous polarization is zero, since the adjacent dipoles cancel each other out. Antiferroelectricity is a property of a material, and it can appear or disappear (more generally, strengthen or weaken) depending on temperature, pressure, external electric field, growth method, and other parameters. In particular, at a high enough temperature, antiferroelectricity disappears; this temperature is known as the Néel point or Curie point. References Electrical phenomena Phases of matter
Antiferroelectricity
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
279
[ "Physical phenomena", "Materials science stubs", "Phases of matter", "Electric and magnetic fields in matter", "Materials science", "Electrical phenomena", "Condensed matter physics", "Condensed matter stubs", "Electromagnetism stubs", "Matter" ]
18,837,003
https://en.wikipedia.org/wiki/Titanium%20sublimation%20pump
A titanium sublimation pump (TSP) is a type of vacuum pump used to remove residual gas in ultra-high vacuum systems, maintaining the vacuum. Principle of operation Its construction and principle of operation are simple. It consists of a titanium filament through which a high current (typically around 40 A) is passed periodically. This current causes the filament to reach the sublimation temperature of titanium, and hence the surrounding chamber walls become coated with a thin film of clean titanium. Since clean titanium is very reactive, components of the residual gas in the chamber which collide with the chamber wall are likely to react and to form a stable, solid product. Thus the gas pressure in the chamber is reduced. After some time, the titanium film will no longer be clean and hence the effectiveness of the pump is reduced. Therefore, after a certain time, the titanium filament should be heated again, and a new film of titanium re-deposited on the chamber wall. Since the time taken for the titanium film to react depends on a number of factors (such as the composition of the residual gas, the temperature of the chamber and the total pressure), the period between successive sublimations requires some consideration. Typically, the operator does not know all of these factors, so the sublimation period is estimated according to the total pressure and by observing the effectiveness of the outcome. Some TSP controllers use a signal from the pressure gauge to estimate the appropriate period. Since the TSP filament has a finite lifetime, TSPs commonly have multiple filaments to allow the operator to switch to a new one without needing to open the chamber. Replacing used filaments can then be combined with other maintenance jobs. The effectiveness of the TSP depends on a number of factors. Amongst the most critical are: the area of the titanium film, the temperature of the chamber walls and the composition of the residual gas. The area is typically maximised when considering where to mount the TSP. The reactivity of the new titanium film is increased at lower temperatures, so it is desirable to cool the relevant part of the chamber, typically using liquid nitrogen. However, due to the cost of the nitrogen and the need to ensure a continuous supply, TSPs are commonly operated at room temperature. Finally, the residual gas composition is important – typically the pump works well with the more reactive components (such as CO and O2), but is very ineffective at pumping inert components such as the noble gases and methane (CH4). Therefore, a TSP must be used in conjunction with other pumps. Other pumps which use exactly the same working principle, but with something other than titanium as the source, are also relatively common. This family of pumps is usually called getter pumps or getters and typically consists of metals which are reactive with the components of the residual gas which are not pumped by the TSP. By choosing a number of such sources, most constituents of the residual gas, except for the noble gases, can be targeted. Practical considerations When mounting the TSP in the chamber, a number of important considerations must be made. First, it is desirable that the filament can deposit titanium over a large area. However, one must take care that the titanium is not deposited onto anything it can damage. For example, electrical feed-throughs containing ceramic insulators will fail if the titanium forms a conducting film which bridges the ceramic insulator.
Samples may become contaminated by titanium if they have line-of-sight to the pump. Also, titanium is a very hard material, so titanium film which builds up on the inside of the chamber may form flakes which fall into mechanical components (typically turbomolecular pumps and valves) and damage them. Many chambers containing TSPs also have an ion pump. Often the ion pump provides a good location for the TSP, and some manufacturers promote the use of both types together. Furthermore, TSPs have been shown to be effective against the regurgitation effects of ion pumps. References Vacuum pumps
Titanium sublimation pump
[ "Physics", "Engineering" ]
815
[ "Vacuum pumps", "Vacuum systems", "Vacuum", "Matter" ]
3,658,649
https://en.wikipedia.org/wiki/Neighbouring%20group%20participation
In organic chemistry, neighbouring group participation (NGP, also known as anchimeric assistance) has been defined by the International Union of Pure and Applied Chemistry (IUPAC) as the interaction of a reaction centre with a lone pair of electrons in an atom or the electrons present in a sigma or pi bond contained within the parent molecule but not conjugated with the reaction centre. When NGP is in operation it is normal for the reaction rate to be increased. It is also possible for the stereochemistry of the reaction to be abnormal (or unexpected) when compared with a normal reaction. While it is possible for neighbouring groups to influence many reactions in organic chemistry (e.g. the reaction of a diene such as 1,3-cyclohexadiene with maleic anhydride normally gives the endo isomer because of a secondary effect {overlap of the carbonyl group π orbitals with the transition state in the Diels-Alder reaction}) this page is limited to neighbouring group effects seen with carbocations and SN2 reactions. NGP by heteroatom lone pairs In this type of substitution reaction, one group of the substrate participates initially in the reaction and thereby affects the reaction. A classic example of NGP is the reaction of a sulfur or nitrogen mustard with a nucleophile: the rate of reaction is much higher for the sulfur mustard than it would be for a primary or secondary alkyl chloride without a heteroatom; for example, the mustard reacts with water about 600 times faster than the corresponding chloride lacking the neighbouring heteroatom. NGP by an alkene The π orbitals of an alkene can stabilize a transition state by helping to delocalize the positive charge of the carbocation. For instance the unsaturated tosylate will react more quickly (10¹¹ times faster for aqueous solvolysis) with a nucleophile than the saturated tosylate. The carbocationic intermediate will be stabilized by resonance, where the positive charge is spread over several atoms. In the diagram below this is shown. Here is a different view of the same intermediates. Even if the alkene is more remote from the reacting center the alkene can still act in this way. For instance in the following alkyl benzenesulfonate the alkene is able to delocalise the carbocation. NGP by a cyclopropane, cyclobutane or a homoallyl group The reaction of cyclopropylmethylamine with sodium nitrite in dilute aqueous perchloric acid solution yielded a mixture of 48% cyclopropylmethyl alcohol, 47% cyclobutanol, and 5% homoallylic alcohol (but-3-en-1-ol). In the non-classical perspective, the positive charge is delocalized throughout the carbocation intermediate structure via resonance, resulting in partial (electron-deficient) bonds. Evidently, the relatively low yield of the homoallylic alcohol implies that the homoallylic structure is the weakest resonance contributor. NGP by an aromatic ring An aromatic ring can assist in the formation of a carbocationic intermediate called a phenonium ion by delocalising the positive charge. When the following tosylate reacts with acetic acid in solvolysis, then rather than a simple SN2 reaction forming B, a 48:48:4 mixture of A, B (which are enantiomers) and C+D is obtained. The mechanism which forms A and B is shown below. NGP by aliphatic C-C or C-H bonds Aliphatic C-C or C-H bonds can lead to charge delocalization if these bonds are close and antiperiplanar to the leaving group. Corresponding intermediates are referred to as nonclassical ions, with the 2-norbornyl system as the most well known case.
External links IUPAC definition References Advanced organic chemistry, page 314, Jerry March (4th Ed), Wiley-Interscience. Studies in Stereochemistry. I. The Stereospecific Wagner-Meerwein rearrangement of the Isomers of 3-Phenyl-2-butanol Donald J. Cram J. Am. Chem. Soc.; 1949; 71(12); 3863-3870. Abstract Studies in Stereochemistry. V. Phenonium Sulfonate Ion-pairs as Intermediates in the Intramolecular Rearrangements and Solvolysis Reactions that Occur in the 3-Phenyl-2-butanol System Donald J. Cram J. Am. Chem. Soc.; 1952; 74(9); 2129-2137 Abstract. Physical organic chemistry Chemical kinetics
Neighbouring group participation
[ "Chemistry" ]
1,001
[ "Chemical kinetics", "Chemical reaction engineering", "Physical organic chemistry" ]
3,659,337
https://en.wikipedia.org/wiki/Centre%20for%20High%20Energy%20Physics
The Centre for High Energy Physics (CHEP) is a federally funded national research laboratory managed by the University of the Punjab. CHEP is dedicated to the scientific advancement and understanding of high-energy physics (or particle physics), a branch of fundamental physics concerned with unraveling the ultimate constituents of matter and with elucidating the forces between them. The centre was established in 1982 through the efforts of Punjab University, with federal funding, to support research activities in quantum sciences that had started in 1968; it later engaged in supercomputing, beginning in 2004. Overview The Centre for High Energy Physics (CHEP) was established by the eminent researcher Dr. Mohammad Saleem with federal funding in November 1982. The University of the Punjab in Lahore had been producing physics research since 1968, but the scope was limited to its physics department. CHEP's initial focus was directed towards the advancement of particle physics, but it began conducting research on supercomputing when it started its teaching program in computational physics in 2004. CHEP participates in the Beijing Spectrometer III (BES-III) experiment in China and currently hosts a 2.5 GeV linear particle accelerator. Logo, building and research output The CHEP's official logo shows a book as a sign of knowledge, and an Arabic verse from the Holy Qur'an which translates to: "Why don't you think?". At the top of the logo is the CHEP's spelled-out name and at the bottom is the name of the Punjab University. The CHEP is located within the campus of the University of the Punjab and has a two-story building with its own library (in addition to the university's main library) and seven computer labs, including a programming, modeling, and simulation lab and a supercomputer lab. CHEP certifies Punjab University's degree criteria for bachelor's, master's, and doctoral programs in computational sciences, and for master's and doctoral programs in high-energy physics. In 2015, CHEP supported the publication of a textbook on high-energy physics authored by Mohammad Saleem and Dr. Muhammad Rafique. The CHEP also has international collaborations with Michoacan University in Mexico, the University of Pittsburgh and Texas Tech University in the United States, Hamburg University in Germany, and Teikyo University in Japan. See also University of the Punjab Ministry of Energy References External links CHEP Punjab University Facebook page Educational institutions established in 1982 Academic institutions in Pakistan Universities and colleges in Lahore Physics research institutes Particle physics facilities Supercomputing in Pakistan Research institutes in Pakistan Physics laboratories Constituent institutions of Pakistan Atomic Energy Commission University of the Punjab Laboratories in Pakistan Computational particle physics
Centre for High Energy Physics
[ "Physics" ]
543
[ "Particle physics", "Computational particle physics", "Computational physics" ]
3,659,503
https://en.wikipedia.org/wiki/Coefficient%20of%20restitution
In physics, the coefficient of restitution (COR, also denoted by e) can be thought of as a measure of the elasticity of a collision between two bodies. It is a dimensionless parameter defined as the ratio of the relative velocity of separation after a two-body collision to the relative velocity of approach before collision. In most real-world collisions, the value of e lies somewhere between 0 and 1, where 1 represents a perfectly elastic collision (in which the objects rebound with no loss of speed but in the opposite directions) and 0 a perfectly inelastic collision (in which the objects do not rebound at all, and end up touching). The basic equation, sometimes known as Newton's restitution equation, was developed by Sir Isaac Newton in 1687. Introduction As a property of paired objects The COR is a property of a pair of objects in a collision, not a single object. If a given object collides with two different objects, each collision has its own COR. When a single object is described as having a given coefficient of restitution, as if it were an intrinsic property without reference to a second object, some assumptions have been made – for example that the collision is with another identical object, or with a perfectly rigid wall. Treated as a constant In a basic analysis of collisions, e is generally treated as a dimensionless constant, independent of the mass and relative velocities of the two objects, with the collision being treated as effectively instantaneous. An example often used for teaching is the collision of two idealised billiard balls. Real-world interactions may be more complicated, for example where the internal structure of the objects needs to be taken into account, or where there are more complex effects happening during the time between initial contact and final separation. Range of values for e e is usually a positive, real number between 0 and 1: e = 0: This is a perfectly inelastic collision in which the objects do not rebound at all and end up touching. 0 < e < 1: This is a real-world inelastic collision, in which some kinetic energy is dissipated. The objects rebound with a lower separation speed than the speed of approach. e = 1: This is a perfectly elastic collision, in which no kinetic energy is dissipated. The objects rebound with the same relative speed with which they approached. Values outside that range are in principle possible, though in practice they would not normally be analysed with a basic analysis that takes e to be a constant: e < 0: A COR less than zero implies a collision in which the objects pass through one another, for example a bullet passing through a target. e > 1: This implies a superelastic collision in which the objects rebound with a greater relative speed than the speed of approach, due to some additional stored energy being released during the collision. Equations In the case of a one-dimensional collision involving two idealised objects, A and B, the coefficient of restitution is given by e = (v_B − v_A) / (u_A − u_B), where: v_A is the final velocity of object A after impact, v_B is the final velocity of object B after impact, u_A is the initial velocity of object A before impact, and u_B is the initial velocity of object B before impact. This is sometimes known as the restitution equation. For a perfectly elastic collision, e = 1 and the objects rebound with the same relative speed with which they approached. For a perfectly inelastic collision, e = 0 and the objects do not rebound at all.
For an object bouncing off a stationary target, e is defined as the ratio of the object's rebound speed after the impact to its speed prior to impact: e = v / u, where u is the speed of the object before impact and v is the speed of the rebounding object (in the opposite direction) after impact. In a case where frictional forces can be neglected and the object is dropped from rest onto a horizontal surface, this is equivalent to e = √(h / H), where H is the drop height and h is the bounce height. The coefficient of restitution can be thought of as a measure of the extent to which energy is conserved when an object bounces off a surface. In the case of an object bouncing off a stationary target, the change in gravitational potential energy, Ep, during the course of the impact is essentially zero; thus, e is a comparison between the kinetic energy, Ek, of the object immediately before impact with that immediately after impact: e = √(Ek after impact / Ek before impact). In cases where frictional forces can be neglected (nearly every student laboratory on this subject), and the object is dropped from rest onto a horizontal surface, the above is equivalent to a comparison between the Ep of the object at the drop height with that at the bounce height. In this case, the change in Ek is zero (the object is essentially at rest during the course of the impact and is also at rest at the apex of the bounce); thus e = √(Ep at bounce height / Ep at drop height) = √(h / H). Speeds after impact Although e does not vary with the masses of the colliding objects, their final velocities are mass-dependent due to conservation of momentum: v_A = (m_A·u_A + m_B·u_B + m_B·e·(u_B − u_A)) / (m_A + m_B) and v_B = (m_A·u_A + m_B·u_B + m_A·e·(u_A − u_B)) / (m_A + m_B), where v_A is the velocity of A after impact, v_B is the velocity of B after impact, u_A is the velocity of A before impact, u_B is the velocity of B before impact, m_A is the mass of A, and m_B is the mass of B. Practical issues Measurement In practical situations, the coefficient of restitution between two bodies may have to be determined experimentally, for example using the Leeb rebound hardness test. This uses a tip of tungsten carbide, one of the hardest substances available, dropped onto test samples from a specific height. A comprehensive study of coefficients of restitution in dependence on material properties (elastic moduli, rheology), direction of impact, coefficient of friction and adhesive properties of impacting bodies can be found in Willert (2020). Application in sports Thin-faced golf club drivers utilize a "trampoline effect" that creates drives of a greater distance as a result of the flexing and subsequent release of stored energy which imparts greater impulse to the ball. The USGA (America's governing golfing body) tests drivers for COR and has placed the upper limit at 0.83. COR is a function of clubhead speed and diminishes as clubhead speed increases. In the USGA report, COR ranges from 0.845 at 90 mph to as low as 0.797 at 130 mph. The above-mentioned "trampoline effect" shows this, since it reduces the rate of stress of the collision by increasing the time of the collision. According to one article (addressing COR in tennis racquets), "[f]or the Benchmark Conditions, the coefficient of restitution used is 0.85 for all racquets, eliminating the variables of string tension and frame stiffness which could add or subtract from the coefficient of restitution." The International Table Tennis Federation specifies that the ball shall bounce up 24–26 cm when dropped from a height of 30.5 cm on to a standard steel block, implying a COR of 0.887 to 0.923.
The International Basketball Federation (FIBA) rules require that the ball rebound to a height of between 1035 and 1085 mm when dropped from a height of 1800 mm, implying a COR between 0.758 and 0.776. See also Bouncing ball Collision Damping capacity Resilience References Works cited External links Wolfram Article on COR Chris Hecker's physics introduction "Getting an extra bounce" by Chelsea Wald FIFA Quality Concepts for Footballs – Uniform Rebound Mechanics Classical mechanics Ratios
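A small numerical illustration of the defining relations above (our own snippet; the collision velocities are made-up example values):

```python
# Two ways to obtain the coefficient of restitution described above:
# from relative velocities in a one-dimensional collision, and from drop/bounce heights.
from math import sqrt

def cor_from_velocities(u_a, u_b, v_a, v_b):
    """e = (v_B - v_A) / (u_A - u_B) for a one-dimensional two-body collision."""
    return (v_b - v_a) / (u_a - u_b)

def cor_from_heights(drop_height, bounce_height):
    """e = sqrt(h / H) for an object dropped from rest onto a rigid horizontal surface."""
    return sqrt(bounce_height / drop_height)

# Table-tennis figures from the text: a 24-26 cm bounce from a 30.5 cm drop.
print(round(cor_from_heights(30.5, 24.0), 3))   # 0.887
print(round(cor_from_heights(30.5, 26.0), 3))   # 0.923

# Made-up equal-mass collision (momentum is conserved: 3 + 0 = 1 + 2):
# A at +3 m/s hits B at rest; they separate at 1 m/s and 2 m/s.
print(round(cor_from_velocities(3.0, 0.0, 1.0, 2.0), 2))   # 0.33
```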
Coefficient of restitution
[ "Physics", "Mathematics", "Engineering" ]
1,549
[ "Classical mechanics", "Arithmetic", "Mechanics", "Mechanical engineering", "Ratios" ]
3,662,314
https://en.wikipedia.org/wiki/Principle%20of%20minimum%20energy
The principle of minimum energy is essentially a restatement of the second law of thermodynamics. It states that for a closed system, with constant external parameters and entropy, the internal energy will decrease and approach a minimum value at equilibrium. External parameters generally means the volume, but may include other parameters which are specified externally, such as a constant magnetic field. In contrast, for isolated systems (and fixed external parameters), the second law states that the entropy will increase to a maximum value at equilibrium. An isolated system has a fixed total energy and mass. A closed system, on the other hand, is a system which is connected to another, and cannot exchange matter (i.e. particles), but can transfer other forms of energy (e.g. heat), to or from the other system. If, rather than an isolated system, we have a closed system, in which the entropy rather than the energy remains constant, then it follows from the first and second laws of thermodynamics that the energy of that system will drop to a minimum value at equilibrium, transferring its energy to the other system. To restate: The maximum entropy principle: For a closed system with fixed internal energy (i.e. an isolated system), the entropy is maximized at equilibrium. The minimum energy principle: For a closed system with fixed entropy, the total energy is minimized at equilibrium. Mathematical explanation The total energy of the system is U(S, X_1, X_2, …), where S is entropy and the X_i are the other extensive parameters of the system (e.g. volume, particle number, etc.). The entropy of the system may likewise be written as a function of the other extensive parameters as S(U, X_1, X_2, …). Suppose that X is one of the X_i which varies as a system approaches equilibrium, and that it is the only such parameter which is varying. The principle of maximum entropy may then be stated as: (∂S/∂X)_U = 0 and (∂²S/∂X²)_U < 0 at equilibrium. The first condition states that entropy is at an extremum, and the second condition states that entropy is at a maximum. Note that for the partial derivatives, all extensive parameters are assumed constant except for the variables contained in the partial derivative, but only U, S, or X are shown. It follows from the properties of an exact differential (see equation 8 in the exact differential article) and from the energy/entropy equation of state that, for a closed system, (∂U/∂X)_S = −T (∂S/∂X)_U = 0. It is seen that the energy is at an extremum at equilibrium. By a similar but somewhat more lengthy argument it can be shown that (∂²U/∂X²)_S = −T (∂²S/∂X²)_U, which is greater than zero, showing that the energy is, in fact, at a minimum. Examples Consider, for one, the familiar example of a marble on the edge of a bowl. If we consider the marble and bowl to be an isolated system, then when the marble drops, the potential energy will be converted to the kinetic energy of motion of the marble. Frictional forces will convert this kinetic energy to heat, and at equilibrium, the marble will be at rest at the bottom of the bowl, and the marble and the bowl will be at a slightly higher temperature. The total energy of the marble-bowl system will be unchanged. What was previously the potential energy of the marble will now reside in the increased heat energy of the marble-bowl system. This is an application of the maximum entropy principle, since, due to the heating effects, the entropy has increased to the maximum value possible given the fixed energy of the system.
If, on the other hand, the marble is lowered very slowly to the bottom of the bowl, so slowly that no heating effects occur (i.e. reversibly), then the entropy of the marble and bowl will remain constant, and the potential energy of the marble will be transferred as energy to the surroundings. The surroundings will maximize their entropy given their newly acquired energy, which is equivalent to the energy having been transferred as heat. Since the potential energy of the system is now at a minimum with no increase in the energy due to heat of either the marble or the bowl, the total energy of the system is at a minimum. This is an application of the minimum energy principle. Alternatively, suppose we have a cylinder containing an ideal gas, with cross-sectional area A and a variable height x. Suppose that a weight of mass m has been placed on top of the cylinder. It presses down on the top of the cylinder with a force of mg, where g is the acceleration due to gravity. Suppose that x is smaller than its equilibrium value. The upward force of the gas is greater than the downward force of the weight, and if allowed to move freely, the gas in the cylinder would push the weight upward rapidly, and there would be frictional forces that would convert the energy to heat. If we specify instead that an external agent presses down on the weight so as to very slowly (reversibly) allow the weight to move upward to its equilibrium position, then there will be no heat generated and the entropy of the system will remain constant while energy is transferred as work to the external agent. The total energy of the system at any value of x is given by the internal energy of the gas plus the potential energy of the weight: E = U + mgx = TS − PV + μN + mgx, where T is temperature, S is entropy, P is pressure, μ is the chemical potential, N is the number of particles in the gas, and the volume has been written as V = Ax. Since the system is closed, the particle number N is constant and a small change in the energy of the system would be given by: dE = T dS − P dV + mg dx = T dS + (mg − PA) dx. Since the entropy is constant, we may say that dS = 0 at equilibrium, and by the principle of minimum energy we may say that dE = 0 at equilibrium, yielding the equilibrium condition PA = mg, which simply states that the upward gas pressure force (PA) on the upper face of the cylinder is equal to the downward force of the mass due to gravitation (mg). Thermodynamic potentials The principle of minimum energy can be generalized to apply to constraints other than fixed entropy. For other constraints, other state functions with dimensions of energy will be minimized. These state functions are known as thermodynamic potentials. Thermodynamic potentials are at first glance just simple algebraic combinations of the energy terms in the expression for the internal energy. For a simple, multicomponent system, the internal energy may be written: U = TS − PV + Σ_j μ_j N_j, where the intensive parameters (T, P, μ_j) are functions of the internal energy's natural variables via the equations of state. As an example of another thermodynamic potential, the Helmholtz free energy is written: A = U − TS, where temperature has replaced entropy as a natural variable. In order to understand the value of the thermodynamic potentials, it is necessary to view them in a different light. They may in fact be seen as (negative) Legendre transforms of the internal energy, in which certain of the extensive parameters are replaced by the derivative of internal energy with respect to that variable (i.e. the conjugate to that variable).
For example, the Helmholtz free energy may be written: A(T, V, N) = min_S [U(S, V, N) − TS], and the minimum will occur when the variable T becomes equal to the temperature, since setting the derivative of U − TS with respect to S to zero gives T = (∂U/∂S)_V,N. The Helmholtz free energy is a useful quantity when studying thermodynamic transformations in which the temperature is held constant. Although the reduction in the number of variables is a useful simplification, the main advantage comes from the fact that the Helmholtz free energy is minimized at equilibrium with respect to any unconstrained internal variables for a closed system at constant temperature and volume. This follows directly from the principle of minimum energy, which states that at constant entropy the internal energy is minimized. This can be stated as: U_eq = min_x U(S_eq, x), where U_eq and S_eq are the values of the internal energy and the (fixed) entropy at equilibrium. The volume and particle number variables have been replaced by x, which stands for any internal unconstrained variables. As a concrete example of unconstrained internal variables, we might have a chemical reaction in which there are two types of particle, an A atom and an A₂ molecule. If N₁ and N₂ are the respective particle numbers for these particles, then the internal constraint is that the total number of A atoms, N₁ + 2N₂, is conserved; we may then replace the N₁ and N₂ variables with a single variable (the extent of reaction) and minimize with respect to this unconstrained variable. There may be any number of unconstrained variables depending on the number of atoms in the mixture. For systems with multiple sub-volumes, there may be additional volume constraints as well. The minimization is with respect to the unconstrained variables. In the case of chemical reactions this is usually the number of particles or mole fractions, subject to the conservation of elements. At equilibrium, these will take on their equilibrium values, and the internal energy U_eq will be a function only of the chosen value of entropy S_eq. By the definition of the Legendre transform, the Helmholtz free energy will be: A(T, x) = U(S, x) − TS. The Helmholtz free energy at equilibrium will be: A_eq = U_eq − T_eq S_eq, where T_eq is the (unknown) temperature at equilibrium. Substituting the expression for U_eq: A_eq = min_x U(S_eq, x) − T_eq S_eq. By exchanging the order of the extrema: A_eq = min_x [U(S_eq, x) − T_eq S_eq] = min_x A(T_eq, x), showing that the Helmholtz free energy is minimized at equilibrium. The enthalpy and the Gibbs free energy are similarly derived. References Thermodynamics
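The cylinder-and-weight example can be checked symbolically. The following sketch is our own (it assumes a monatomic ideal gas, for which U ∝ V^(−2/3) at constant entropy, as the only added physics) and verifies that minimizing the total energy over the piston height reproduces the force balance PA = mg derived above:

```python
# Minimize E = U + m*g*x over piston height x at fixed entropy and recover P*A = m*g.
from sympy import symbols, diff, solve, simplify, Rational

x, A, m, g, c, V = symbols('x A m g c V', positive=True)

U_of_V = c * V**Rational(-2, 3)      # U(V) at constant entropy (monatomic ideal gas)
P_of_V = -diff(U_of_V, V)            # P = -(dU/dV)_S

U = U_of_V.subs(V, A * x)            # piston geometry: V = A*x
P = P_of_V.subs(V, A * x)

E = U + m * g * x                    # total energy: gas plus raised weight
x_eq = solve(diff(E, x), x)[0]       # height that makes E stationary (its minimum)

print(simplify((P * A - m * g).subs(x, x_eq)))   # 0: force balance P*A = m*g
```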
Principle of minimum energy
[ "Physics", "Chemistry", "Mathematics" ]
1,887
[ "Thermodynamics", "Dynamical systems" ]
3,663,289
https://en.wikipedia.org/wiki/Capacitor-spring%20analogy
There are several formal analogies that can be made between electricity, which is invisible to the eye, and more familiar physical behaviors, such as the flowing of water or the motion of mechanical devices. In the case of capacitance, one analogy to a capacitor in mechanical rectilineal terms is a spring, where the compliance of the spring is analogous to the capacitance. Thus in electrical engineering, a capacitor may be defined as an ideal electrical component which satisfies the equation V = (1/C) ∫ i dt, where V is the voltage measured at the terminals of the capacitor, C is the capacitance of the capacitor, i is the current flowing between the terminals of the capacitor, and t is time. The equation quoted above has the same form as that describing an ideal massless spring: F = k ∫ v dt, where F is the force applied between the two ends of the spring, k is the stiffness, or spring constant (the inverse of compliance), defined as force/displacement, and v is the speed (or velocity) of one end of the spring, the other end being fixed. Note that in the electrical case, current (I) is defined as the rate of change of charge (Q) with respect to time: I = dQ/dt. In the mechanical case, velocity (v) is defined as the rate of change of displacement (x) with respect to time: v = dx/dt. Thus, in this analogy: charge is represented by linear displacement, current by linear velocity, voltage by force, and time by time. Also, these analogous relationships apply: Energy. Energy stored in a spring is F²/(2k), while energy stored in a capacitor is CV²/2. Electric power. Here there is an analogy between the mechanical concept of power as the product of force and velocity, and the electrical concept that in an AC circuit with sinusoidal excitation, power is the product V·I·cos(φ), where φ is the phase angle between V and I, measured in RMS terms. Electrical resistance (R) is analogous to the mechanical viscous drag coefficient (force being proportional to velocity is analogous to Ohm's law - voltage being proportional to current). Mass (m) is analogous to inductance (L), since F = m·(dv/dt) while V = L·(dI/dt). Thus an ideal inductor with inductance L is analogous to a rigid body with mass m. This analogy of the capacitor forms part of the more comprehensive impedance analogy of mechanical to electrical systems. See also Hydraulic analogy Elastance References H. F. Olson, Dynamical Analogies, Van Nostrand, 2nd ed., 1958 Classical mechanics Electrical analogies
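To make the analogy concrete, here is a small numerical sketch (our own, not from the article; the component values and helper function are made up): the capacitor voltage in a series RC circuit driven by a voltage step, and the spring force for a spring and damper moving together under a force step, obey the same first-order relaxation equation, with R ↔ drag coefficient b and C ↔ compliance 1/k.

```python
# Both systems reduce to y' = (input - y) / tau with y(0) = 0:
#   capacitor voltage in a series RC circuit:                         tau = R*C
#   spring force when spring and damper share one velocity (force step): tau = b/k
import numpy as np

def relax(step_input, tau, dt=1e-5, n=2000):
    """Forward-Euler integration of y' = (step_input - y) / tau, y(0) = 0."""
    y = np.zeros(n)
    for i in range(1, n):
        y[i] = y[i-1] + dt * (step_input - y[i-1]) / tau
    return y

R, C = 1.0e3, 1.0e-6      # ohms, farads   -> tau = 1 ms
b, k = 2.0e2, 2.0e5       # N*s/m, N/m     -> tau = 1 ms

v_capacitor = relax(step_input=5.0, tau=R * C)   # response to a 5 V step
f_spring    = relax(step_input=5.0, tau=b / k)   # response to a 5 N step

print(np.allclose(v_capacitor, f_spring))        # True: identical dynamics
```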
Capacitor-spring analogy
[ "Physics" ]
512
[ "Classical mechanics stubs", "Mechanics", "Classical mechanics" ]
6,519,036
https://en.wikipedia.org/wiki/Thromboxane%20receptor
The thromboxane receptor (TP), also known as the prostanoid TP receptor, is a protein that in humans is encoded by the TBXA2R gene. The thromboxane receptor is one among the five classes of prostanoid receptors and was the first eicosanoid receptor cloned. The TP receptor derives its name from its preferred endogenous ligand thromboxane A2. Gene The gene responsible for directing the synthesis of the thromboxane receptor, TBXA2R, is located on human chromosome 19 at position p13.3, spans 15 kilobases, and contains 5 exons. TBXA2R codes for a member of the G protein-coupled super family of seven-transmembrane receptors. Heterogeneity Molecular biology findings have provided definitive evidence for two human TP receptor subtypes. The originally cloned TP subtype from human placenta is known as the α isoform and the splice variant cloned from endothelium (with 407 amino acids) is termed the β isoform. The first 328 amino acids are the same for both isoforms, but the β isoform exhibits an extended C-terminal cytoplasmic domain. Both isoforms stimulate cells in part by activating the Gq family of G proteins. In at least certain cell types, however, TPα also stimulates cells by activating the Gs family of G proteins while TPβ also stimulates cells by activating the Gi class of G proteins. This leads to the stimulation or inhibition, respectively, of adenylate cyclase activity and thereby very different cellular responses. Differences in their C-terminal tail sequence also allow for significant differences in the two receptors' internalization and thereby desensitization (i.e. loss of G protein- and therefore cell-stimulating ability) after activation by an agonist; TPβ but not TPα undergoes agonist-induced internalization. The expression of α and β isoforms is not equal within or across different cell types. For example, platelets express high concentrations of the α isoform (and possess residual RNA for the β isoform), while expression of the β isoform has not been documented in these cells. The β isoform is expressed in human endothelium. Furthermore, each TP isoform can physically combine with: a) another of its isoforms to make TPα-TPα or TPβ-TPβ homodimers that promote stronger cell signaling than achieved by their monomer counterparts; b) their opposite isoform to make TPα-TPβ heterodimers that activate more cell signaling pathways than either isoform or homodimer; and c) with the prostacyclin receptor (i.e. IP receptor) to form TP-IP heterodimers that, with respect to TPα-IP heterodimers, trigger particularly intense activation of adenylate cyclase. The latter effect on adenylate cyclase may serve to suppress TPα's cell stimulating actions and thereby some of its potentially deleterious actions. Mice and rats express only the TPα isoform. Since these rodents are used as animal models to define the functions of genes and their products, their failure to have two TP isoforms has limited understanding of the individual and different functions of each TP receptor isoform. Tissue distribution Historically, TP receptor involvement in blood platelet function has received the greatest attention. However, it is now clear that TP receptors exhibit a wide distribution in different cell types and among different organ systems. For example, TP receptors have been localized in cardiovascular, reproductive, immune, pulmonary and neurological tissues, among others.
TP receptor ligands Activating ligands Standard prostanoids have the following relative efficacies as receptor ligands in binding to and activating TP: TXA2=PGH2>>PGD2=PGE2=PGF2alpha=PGI2. Since TXA2 is highly unstable, receptor binding and biological studies on TP are conducted with stable TXA2 analogs such as I-BOP and U46619. These two analogs have one-half of their maximal binding capacity and cell-stimulating potency at ~1 and 10-20 nanomolar, respectively; it is assumed that TXA2 and PGH2 (which also is unstable) have binding and cell-stimulating potencies within this range. PGD2, PGE2, PGF2alpha, and PGI2 have binding and stimulating potencies that are >1,000-fold weaker than I-BOP and therefore are assumed not to have appreciable ability to stimulate TP in vivo. 20-Hydroxyeicosatetraenoic acid (20-HETE) is a full agonist and certain isoprostanes, e.g. 8-iso-PGF2 alpha and 8-iso-PGE2, are partial agonists of the TP receptor. In animal models and human tissues, they act through TP to promote platelet responses and stimulate blood vessel contraction. Synthetic analogs of TXA2 that activate TP but are relatively resistant to spontaneous and metabolic degradation include SQ 26655, AGN192093, and EP 171, all of which have binding and activating potencies for TP similar to I-BOP. Inhibiting ligands Several synthetic compounds bind to, but do not activate, TP and thereby inhibit its activation by activating ligands. These receptor antagonists include I-SAP, SQ-29548, S-145, domitroban, and vapiprost, all of which have affinities for binding TP similar to that of I-BOP. Other notable TP receptor antagonists are Seratrodast (AA-2414), Terutroban (S18886), PTA2, 13-APA, GR-32191, Sulotroban (BM-13177), SQ-29,548, SQ-28,668, ONO-3708, Bay U3405, EP-045, BMS-180,291, and S-145. Many of these TP receptor antagonists have been evaluated as potential therapeutic agents for asthma, thrombosis and hypertension. These evaluations indicate that TP receptor antagonists can be more effective than drugs which selectively block the production of TXA2 (i.e. thromboxane synthase inhibitors). This seemingly paradoxical result may reflect the ability of PGH2, whose production is not blocked by the inhibitors, to substitute for TXA2 in activating TP. Novel TP receptor antagonists that also have activity in reducing TXA2 production by inhibiting cyclooxygenases have been discovered and are in development for testing in animal models. Mechanism of cell stimulation TP is classified as a contractile type of prostanoid receptor based on its ability to contract diverse types of smooth muscle-containing tissues such as those of the lung, intestines, and uterus. TP contracts smooth muscle and stimulates various responses in a wide range of other cell types by coupling with and mobilizing one or more families of the G protein class of receptor-regulated cell signaling molecules. When bound to TXA2, PGH2, or other of its agonists, TP mobilizes members of the: a) Gq alpha subunit family (i.e.
G11, G15, and G16 types of Gq proteins) which activates phospholipase C, IP3, cell Ca2+ mobilization, protein kinase Cs, calmodulin-modulated myosin light chain kinase, Mitogen-activated protein kinases, and Calcineurin; b) G12/G13 family which activates Rho GTPases that control cell migration and intracellular organelle movements; c) Gs alpha subunit family which stimulates adenylate cyclase to raise intracellular levels of cAMP and thereby activate cAMP-regulated protein kinases A and thereby protein kinases A-dependent cell signaling pathways (see PKA); and d) atypical G protein complex Gh/transglutaminase-2-calreticulin which activates phospholipase C, IP3, cell Ca2+ mobilization, protein kinase C, and Mitogen-activated protein kinase but inhibits adenylate cyclase. Following its activation of these pathways, the TP receptor's cell-stimulating ability rapidly reverses by a process termed homologous desensitization, i.e. TP is no longer able to mobilize its G protein targets or further stimulate cell function. Subsequently, the β but not α isoform of TP undergoes receptor internalization. These receptor down regulating events are triggered by the G protein-coupled receptor kinases mobilized during TP receptor activation. TP receptor-independent agents that stimulate cells to activate protein kinases C or protein kinases A can also down-regulate TP in a process termed heterologous desensitization. For example, prostacyclin I2 (PGI2)-induced activation of its prostacyclin receptor (IP) and prostaglandin D2-induced activation of its prostaglandin DP1 receptor cause TP receptor desensitization by activating protein kinases A, while prostaglandin F2alpha-induced activation of its prostaglandin F receptor and prostaglandin E2-induced activation of its prostaglandin EP1 receptor desensitize TP by activating protein kinases C. These desensitization responses serve to limit the action of receptor agonists as well as the overall extent of cell excitation. In addition to its ability to down-regulate TPα, the IP receptor activates cell signaling pathways that counteract those activated by TP. Furthermore, the IP receptor can physically unite with the TPα receptor to form an IP-TPα heterodimer complex which, when bound by TXA2, activates predominantly IP-coupled cell signal pathways. The nature and extent of many cellular responses to TP receptor activation are thereby modulated by the IP receptor and this modulation may serve to limit the potentially deleterious effects of TP receptor activation (see following section on Functions). Functions Studies using animals genetically engineered to lack the TP receptor and examining the actions of this receptor's agonists and antagonists in animals and on animal and human tissues indicate that TP has various functions in animals and that these functions also occur, or serve as a paradigm for further study, in humans. Platelets Human and animal platelets stimulated by various agents such as thrombin produce TXA2. Inhibition of this production greatly reduces the platelets' final adhesion, aggregation, and degranulation (i.e. secretion of their granule contents) responses to the original stimulus. In addition, the platelets of mice lacking TP receptors have similarly defective adhesion, aggregation, and degranulation responses and these TP deficient mice cannot form stable blood clots and in consequence exhibit bleeding tendencies.
TP, as studies show, is part of a positive feedback loop that functions to promote platelet adhesion, aggregation, degranulation, and platelet-induced blood clotting responses in vitro and in vivo. The platelet-directed functions of TP are in many respects opposite to those of the IP receptor. This further indicates (see previous section) that the balance between the TXA2-TP and PGI2-IP axes contributes to regulating platelet function, blood clotting, and bleeding. Cardiovascular system Animal model studies indicate that TP receptor activation contracts vascular smooth muscle cells and acts on cardiac tissues to increase heart rate, trigger Cardiac arrhythmias, and produce myocardial ischemia. These effects may underlie, at least in part, the protective effects of TP gene knockout in mice. TP(-/-) mice are: a) resistant to the cardiogenic shock caused by infusion of the TP agonist, U46619, or the prostaglandin and thromboxane A2 precursor, arachidonic acid; b) partially protected from the cardiac damage caused by hypertension in IP-receptor deficient mice fed a high salt diet; c) prevented from developing angiotensin II-induced and N-Nitroarginine methyl ester-induced hypertension along with associated cardiac hypertrophy; d) resistant to the vascular damage caused by balloon catheter-induced injury of the external carotid artery; e) less likely to develop severe hepatic microcirculation dysfunction caused by TNFα as well as kidney damage caused by TNFα or bacteria-derived endotoxin; and f) slow in developing vascular atherosclerosis in ApoE gene knockout mice. In addition, TP receptor antagonists lessen myocardial infarct size in various animal models of this disease and block the cardiac dysfunction caused by extensive tissue ischemia in animal models of remote ischemic preconditioning. TP thereby has wide-ranging functions that tend to be detrimental to the cardiovascular network in animals and, most likely, humans. However, TP functions are not uniformly injurious to the cardiovascular system: TP receptor-depleted mice show an increase in cardiac damage as well as mortality due to Trypanosoma cruzi infection. The mechanisms behind this putative protective effect and its applicability to humans are not yet known. 20-Hydroxyeicosatetraenoic acid (20-HETE), a product of arachidonic acid formed by Cytochrome P450 omega hydroxylases, and certain isoprostanes, which form by non-enzymatic free radical attack on arachidonic acid, constrict rodent and human artery preparations by directly activating TP. While significantly less potent than thromboxane A2 in activating this receptor, studies on rat and human cerebral artery preparations indicate that increased blood flow through these arteries triggers production of 20-HETE which in turn binds TP receptors to constrict these vessels and thereby reduce their blood flow. Acting in the latter capacity, 20-HETE, it is proposed, functions as a TXA2 analog to regulate blood flow to the brain and possibly other organs. Isoprostanes form in tissues undergoing acute or chronic oxidative stress such as occurs at sites of inflammation and the arteries of diabetic patients. High levels of isoprostanes form in ischemic or otherwise injured blood vessels and, acting through TP, can stimulate arterial inflammation and smooth muscle proliferation; this isoprostane-TP axis is proposed to contribute to the development of atherosclerosis and thereby heart attacks and strokes in humans.
Lung allergic reactivity TP receptor activation contracts bronchial smooth muscle preparations obtained from animal models as well as humans and contracts airways in animal models. In a mouse model of asthma (i.e. hypersensitivity to ovalbumin), a TP receptor antagonist decreased the number of eosinophils infiltrating the lung as judged by their content in bronchoalveolar lavage fluid, and in a mouse model of dust mite-induced asthma, deletion of TBXA2R prevented the development of airways contraction and pulmonary eosinophilia responses to allergen. A TP receptor antagonist likewise reduced airway bronchial reactivity to allergen as well as symptoms in volunteers with asthma. The TP receptor appears to play an essential role in the pro-asthmatic actions of leukotriene C4 (LTC4): in ovalbumin-sensitized mice, leukotriene C4 increased the number of eosinophils in bronchoalveolar lavage fluid and simultaneously decreased the percentages of eosinophils in blood, but these responses did not occur in TBXA2R-deficient mice. LTC4 also stimulated lung expression of the pro-inflammatory intercellular adhesion molecules, ICAM-1 and VCAM-1, by a TP receptor-dependent mechanism. These findings suggest that TP contributes to asthma in animal models at least in part by mediating the actions of LTC4. Further studies are required to determine if TP receptor antagonists might be useful for treating asthma and other airway constriction syndromes such as chronic obstructive lung diseases in humans. Uterus Along with PGF2α acting through its FP receptor, TXA2 acting through TP contracts uterine smooth muscle preparations from rodents and humans. Since the human uterus loses its sensitivity to PGF2α but not to TXA2 during the early stages of labor in vaginal childbirth, TP agonists, it is suggested, might be useful for treating preterm labor failures. Immune system Activation of TP receptors stimulates vascular endothelial cell pro-inflammatory responses such as increased expression of cell surface adhesion proteins (i.e. ICAM-1, VCAM-1, and E-selectin); stimulates apoptosis (i.e. cell death) of CD4+ and CD8+ lymphocytes; causes the chemokinesis (i.e. cell movement) of naive T cells; and impairs the adhesion of dendritic cells to T cells thereby inhibiting dendritic cell-dependent proliferation of T cells. TP deficient mice exhibit an enhanced contact hypersensitivity response to DNFB, and thymocytes in the thymus of these deficient mice are resistant to lipopolysaccharide-induced apoptosis. TP receptor-depleted mice also gradually develop with age extensive lymphadenopathy and, associated with this, increased immune responses to foreign antigens. These studies indicate that TXA2-TP signaling functions as a negative regulator of DC-T cell interactions and possibly thereby the acquisition of acquired immunity in mice. Further studies are needed to translate these mouse studies to humans. Cancer Increased expression of cyclooxygenases and their potential involvement in the progression of various human cancers have been described. Some studies suggest that the TXA2 downstream metabolite of these cyclooxygenases along with its TP receptor contributes to mediating this progression. TP activation stimulates tumor cell proliferation, migration, neovascularization, invasiveness, and metastasis in animal models, animal and human cell models, and/or human tissue samples in cancers of the prostate, breast, lung, colon, brain, and bladder.
These findings, while suggestive, need translational studies to determine their relevancy to the cited human cancers. Clinical significance Isolated cases of humans with mild to moderate bleeding tendencies have been found to have mutations in TP that are associated with defects in this receptor's binding of TXA2 analogs, activating cell signal pathways, and/or platelet functional responses not only to TP agonists but also to agents that stimulate platelets by TP-independent mechanisms (see Genomics section below). Drugs in use targeting TP TP receptor antagonist seratrodast is marketed in Japan and China for the treatment of asthma. Picotamide, a dual inhibitor of TP and TXA2 synthesis, is licensed in Italy for the treatment of clinical arterial thrombosis and peripheral artery disease. These drugs are not yet licensed for use in other countries. Clinical trials While functional roles for TP receptor signaling in diverse homeostatic and pathological processes have been demonstrated in animal models, in humans these roles have been demonstrated mainly with respect to platelet function, blood clotting, and hemostasis. TP has also been proposed to be involved in human: blood pressure and organ blood flow regulation; essential and pregnancy-induced hypertension; vascular complications due to sickle cell anemia; other cardiovascular diseases including heart attack, stroke, and peripheral artery diseases; uterine contraction in childbirth; and modulation of innate and adaptive immune responses including those contributing to various allergic and inflammatory diseases of the intestine, lung, and kidney. However, many of the animal model and tissue studies supporting these suggested functions have yet to be proven directly applicable to human diseases. Studies to supply these proofs rest primarily on determining if TP receptor antagonists are clinically useful. However, these studies face the issue that drugs which indirectly target TP (e.g. Nonsteroidal anti-inflammatory drugs that block TXA2 production) or which circumvent TP (e.g. P2Y12 antagonists that inhibit platelet activation and corticosteroids and cysteinyl leukotriene receptor 1 antagonists that suppress allergic and/or inflammatory reactions) are effective treatments for many putatively TP-dependent diseases. These drugs are likely to be cheaper and may prove to have less severe side effects than TP-targeting drugs. These considerations may help to explain why relatively few studies have examined the clinical usefulness of TP-targeting drugs. The following translational studies on TP antagonists have been conducted or are underway: In a non-randomized, uncontrolled examination, 4 weeks of treatment with TP receptor antagonist AA-2414 significantly reduced bronchial reactivity in asthmatic patients. A follow-up double-blind placebo controlled study of asthmatic patients found that TP receptor antagonist Seratrodast significantly reduced airway flow (i.e. FEV1), diurnal variation in FEV1, airway responsiveness to contractive stimulation, airway inflammation, and airway content of pro-allergic mediators (i.e. RANTES, CCL3, CCL7, and eotaxin). In a phase 3 study, TP antagonist Terutroban was tested against aspirin as a preventative of recurrent as well as new ischemia events in patients with recent strokes or transient ischemic attacks. The study did not meet its primary end points compared to aspirin-treated controls and was stopped; patients on the drug experienced significant increases in minor bleeding episodes.
A study comparing the safety and efficacy of TP antagonist ridogrel to aspirin as adjunctive therapy in the emergent treatment of heart attack with the clot dissolving agent streptokinase found that ridogrel gave no significant enhancement of clot resolution but was associated with a lower incidence of recurrent heart attack, recurrent angina, and new strokes without causing excess bleeding complications. TP antagonist Ifetroban is in phase 2 clinical development for the treatment of kidney failure. In addition to the above TP antagonists, drugs that have dual inhibitory actions in that they block not only TP but also block the enzyme responsible for making TXA2, thromboxane-A synthase, are in clinical development. These dual inhibitor studies include: A long-term study in diabetic patients comparing dual inhibitor picotamide to aspirin for improving ischemia symptoms caused by peripheral artery disease found no difference in primary end points but did find that picotamide therapy significantly reduced cardiovascular mortality over a 2-year trial. A phase 2 clinical trial of dual inhibitor Terbogrel to treat vasoconstriction was discontinued due to its induction of leg pain. Dual inhibitor EV-077 is in clinical phase II development. Genomics Several isolated and/or inherited cases of patients suffering a mild to moderately severe bleeding diathesis have been found to be associated with mutations in the TBXA2R gene that lead to abnormalities in the expression, subcellular location, or function of its TP product. These cases include: A missense mutation causing tryptophan (Trp) to be replaced by cysteine (Cys) as its 29th amino acid (i.e. Trp29Cys) yields a TP which is less responsive to stimulation by a TP agonist, less able to activate its Gq G protein target, and poorly expressed at the cell's surface. Some or perhaps all of these faults may reflect the failure of this mutated TP to form TP-TP dimers. An Asn42Ser mutation yields a TP that remains in the cell's Golgi apparatus and fails to be expressed at the cell surface. An Asp304Asn mutation yields a TP that exhibits decreased binding and responsiveness to a TP agonist. An Arg60Leu mutation yields a TP that is normally expressed and normally binds a TP agonist but fails to activate its Gq G protein target. A missense mutation that replaces cytosine (C) with thymine (T) as the 175th nucleotide (c.175C>T) in the TBXA2R gene, as well as c.87G>C and c.125A>G mutations, yield TPs that are poorly expressed. A c.190G>A mutation yields a TP that binds a TP agonist poorly. A guanine (G) duplication at the 167th nucleotide causes a frameshift mutation (c.165dupG) at amino acid #58 to yield a poorly expressed TP mutant. Single nucleotide polymorphism (SNP) variations in the TBXA2R gene have been associated with allergic and cardiovascular diseases; these include: Meta-analysis of several studies done on different population test groups has confirmed an association of TBXA2R single nucleotide polymorphism (SNP) variant 924C>T with an increased risk of developing asthma. The frequency of the SNP 795T>C variant in TBXA2R, in separate studies of South Korean and Japanese test groups, and the frequency of the SNP variant -6484C>T preceding the TBXA2R gene, in a study of a South Korean test group, were found to be elevated in patients suffering a type of severe asthma termed aspirin-induced asthma. Both 795T>C and 924C>T SNP variants encode a TP receptor that exhibits increased binding and responsiveness to TXA2 analogs.
SNP variant -4684T was associated with reduced gene promoter activity in the TBXA2R gene and an increased incidence of developing aspirin-induced urticaria in a Korean test group. SNP variant rs768963 in TBXA2R was associated with increased frequency of large artery atherosclerosis, small artery occlusion, and stroke in two separate studies of Chinese test groups. In one of the latter groups, the T-T-G-T haplotype of C795T-T924C-G1686A-rs768963 was significantly less frequent in patients suffering stroke. SNP variant rs13306046 exhibited a reduction in microRNA-induced repression of TBXA2R gene expression and was associated with decreased blood pressure in a Scandinavian Caucasian test group. See also Eicosanoid receptor References Further reading External links G protein-coupled receptors
Thromboxane receptor
[ "Chemistry" ]
5,690
[ "G protein-coupled receptors", "Signal transduction" ]
19,997,989
https://en.wikipedia.org/wiki/BREEAM
The Building Research Establishment Environmental Assessment Method (BREEAM), first published by the Building Research Establishment in 1990, is touted as the world's longest established method of identifying the sustainability of buildings. Around 550,000 buildings have been "BREEAM-certified". Additionally, two million homes have registered for certification globally. BREEAM also has a tool which focuses on neighbourhood development. Purpose BREEAM is an assessment undertaken by independent licensed assessors using scientifically-based sustainability metrics and indices which cover a range of environmental issues. Its categories evaluate energy and water use, health and wellbeing, pollution, transport, materials, waste, ecology and management processes. Buildings are rated and certified on a scale of "Pass", "Good", "Very Good", "Excellent" and "Outstanding". It was created to educate home owners and designers about the benefits involved in taking its approach, which has a long-term focus, and to let these parties make further decisions along the same lines. A major focus of the method is on sustainability: it aims to reduce the negative effects of construction and development on the environment. History Work on creating BREEAM began at the Building Research Establishment (based in Watford, England) in 1988. The first version for assessing new office buildings was launched in 1990. This was followed by versions for other buildings including superstores, industrial units and existing offices. In 1998, there was a major revamp of the BREEAM Offices standard, and the scheme's layout, with features such as weighting for different sustainability issues, was established. The development of BREEAM then accelerated with annual updates and variations for other building types such as retail premises being introduced. A version of BREEAM for new homes called EcoHomes was launched in 2000. This scheme was later used as the basis of the Code for Sustainable Homes, which was developed by the Building Research Establishment for the British Government in 2006/7 and replaced EcoHomes in England and Wales. In 2014, the Government in England signalled the winding down of the Code for Sustainable Homes. Since then the Building Research Establishment has developed the Home Quality Mark, which is part of the BREEAM family of schemes. An extensive update of all BREEAM schemes in 2008 resulted in the introduction of mandatory post-construction reviews, minimum standards and innovation credits. International versions of BREEAM were also launched that year. Another major update in 2011 resulted in the launch of BREEAM New Construction, which is now used to assess and certify all new UK buildings. This revision included the reclassification and consolidation of issues and criteria to further streamline the BREEAM process. In 2012, a scheme for domestic refurbishment was introduced in the UK, followed by a non-domestic version in 2014 that was expanded to an international scope the following year. In 2015, the Building Research Establishment announced the acquisition of CEEQUAL following a recommendation from their board, with the aim of creating a single sustainability rating scheme for civil engineering and infrastructure projects. The 2018 update of BREEAM UK New Construction was launched in March 2018 at Ecobuild.
The BREEAM UK New Construction V6 was released on 24 August 2022 following the updates to building regulations in England that came into force on 15 June 2022 and V6.1 (to incorporate changes to the building regulations for energy performance in Scotland, Wales, and Northern Ireland) on 14 June 2023. Scope BREEAM has expanded from its original focus on individual new buildings at the construction stage to encompass the whole life cycle of buildings from planning to in-use and refurbishment. Its regular revisions and updates are driven by the ongoing need to improve sustainability, respond to feedback from industry and support the UK's sustainability strategies and commitments. Highly flexible, the BREEAM standard can be applied to virtually any building and location, with versions for new buildings, existing buildings, refurbishment projects and large developments: BREEAM New Construction is the BREEAM standard against which the sustainability of new, non-residential buildings in the UK is assessed. Developers and their project teams use the scheme at key stages in the design and procurement process to measure, evaluate, improve and reflect the performance of their buildings. BREEAM International New Construction is the BREEAM standard for assessing the sustainability of new residential and non-residential buildings in countries around the world, except for the UK and other countries with a national BREEAM scheme (see below). This scheme makes use of assessment criteria that take account of the circumstances, priorities, codes and standards of the country or region in which the development is located. BREEAM In-Use is a scheme to help building managers reduce the running costs and improve the environmental performance of existing buildings. It has two parts: building asset and building management. Both parts are relevant to all non-domestic, commercial, industrial, retail and institutional buildings. BREEAM In-Use is widely used by members of the International Sustainability Alliance, which provides a platform for certification against the scheme. The newest version, v6, available from 2020, also includes residential programs. BREEAM Refurbishment provides a design and assessment method for sustainable housing refurbishment projects, helping to cost-effectively improve the sustainability and environmental performance of existing dwellings in a robust way. A scheme for non-housing refurbishment and fit out was launched as "RFO 2014". BREEAM Communities focuses on the masterplanning of whole communities. It is aimed at helping construction industry professionals to design places that people want to live and work in, that are good for the environment and that are economically successful. BREEAM includes several general sustainability categories for the assessment: Management Energy Health and wellbeing Transport Water Materials Waste Land use and ecology Pollution Home Quality Mark was launched in 2015 as part of the BREEAM family of schemes. It rates new homes on their overall quality and sustainability, then provides further indicators on the home's impact upon the occupants' "running costs", "health and wellbeing" and "environmental footprint". National operators BREEAM is used in more than 70 countries, with several in Europe having gone a stage further to develop country-specific BREEAM schemes operated by national scheme operators. There are currently operators affiliated to BREEAM in: Germany: the German Institute for Sustainable Real Estate operates BREEAM DE.
Netherlands: the Dutch Green Building Council operates BREEAM NL Norway: the Norwegian Green Building Council operates BREEAM NOR Spain: the Instituto Tecnológico de Galicia operates BREEAM ES Sweden: the Swedish Green Building Council operates BREEAM SE Schemes developed by national scheme operators can take any format as long as they comply with a set of overarching requirements laid down in the Code for a Sustainable Built Environment. They can be produced from scratch, by adapting current BREEAM schemes to the local context, or by developing existing local schemes. The cost and value of sustainability A growing body of research evidence is challenging the perception that sustainable buildings are significantly more costly to design and build than those that simply adhere to regulatory requirements. Research by the Sweett Group into projects using BREEAM, for example, demonstrates that sustainable options often add little or no capital cost to a development project. Where such measures do incur additional costs, these can frequently be paid back through lower running expenses, ultimately leading to savings over the life of the building. Research studies have also highlighted the enhanced value and quality of sustainable buildings. Achieving the standards required by BREEAM requires careful planning, design, specification and detailing, and a good working relationship between the client and project team—the very qualities that can produce better buildings and better conditions for building users. A survey commissioned by Schneider Electric and undertaken by BSRIA examined the experiences of a wide range of companies that had used BREEAM. The findings included, for example, that 88% think it is a good thing, 96% would use the scheme again and 88% would recommend BREEAM to others. The greater efficiency and quality associated with sustainability are also helping to make such buildings more commercially successful. There is growing evidence, for example, that BREEAM-rated buildings provide increased rates of return for investors, and increased rental rates and sales premiums for developers and owners. A Maastricht University document, published by RICS Research, reported on a study of the effect of BREEAM certification on office buildings in London from 2000 to 2009. It found, for example, that these buildings achieved a 21% premium on transaction prices and an 18% premium on rents. See also LEED (Leadership in Energy and Environmental Design) Sustainable refurbishment References External links BREEAM website Website of the Building Research Establishment Building energy rating Building engineering Construction Environmental design Environmental engineering Low-energy building in the United Kingdom Science and technology in Hertfordshire Sustainability Sustainable building in the United Kingdom Sustainable building rating systems Sustainable design Sustainable development
BREEAM
[ "Chemistry", "Engineering" ]
1,750
[ "Environmental design", "Building engineering", "Chemical engineering", "Construction", "Civil engineering", "Environmental engineering", "Design", "Architecture" ]
20,000,172
https://en.wikipedia.org/wiki/Diazenylium
Diazenylium is the chemical N2H+, an inorganic cation that was one of the first ions to be observed in interstellar clouds. Since then, it has been observed in several different types of interstellar environments, observations that have several different scientific uses. It gives astronomers information about the fractional ionization of gas clouds, the chemistry that happens within those clouds, and it is often used as a tracer for molecules that are not as easily detected (such as N2). Its 1–0 rotational transition occurs at 93.174 GHz, a region of the spectrum where Earth's atmosphere is transparent, and it has a significant optical depth in both cold and warm clouds, so it is relatively easy to observe with ground-based observatories. The results of N2H+ observations can be used not only for determining the chemistry of interstellar clouds, but also for mapping the density and velocity profiles of these clouds. Astronomical detections N2H+ was first observed in 1974 by B.E. Turner. He observed a previously unidentified triplet at 93.174 GHz using the NRAO 11 m telescope. Immediately after this initial observation, Green et al. identified the triplet as the 1–0 rotational transition of N2H+. This was done using a combination of ab initio molecular calculations and comparison of similar molecules, such as N2, CO, HCN, HNC, and HCO+, which are all isoelectronic to N2H+. Based on these calculations, the observed rotational transition would be expected to have seven hyperfine components, but only three of these were observed, since the telescope's resolution was insufficient to distinguish the peaks caused by the hyperfine splitting of the inner nitrogen atom. Just a year later, Thaddeus and Turner observed the same transition in the Orion molecular cloud 2 (OMC-2) using the same telescope, but this time they integrated for 26 hours, which resulted in a resolution that was good enough to distinguish the smaller hyperfine components. Over the past three decades, N2H+ has been observed quite frequently, and the 1–0 rotational band is almost exclusively the one that astronomers look for. In 1995, the hyperfine structure of this septuplet was observed with an absolute precision of ~7 kHz, which was good enough to determine its molecular constants with an order of magnitude better precision than was possible in the laboratory. This observation was done toward L1512 using the 37 m NEROC Haystack Telescope. In the same year, Sage et al. observed the 1–0 transition of N2H+ in seven out of the nine nearby galaxies that they observed with the NRAO 12 m telescope at Kitt Peak. N2H+ was one of the first few molecular ions to be observed in other galaxies, and its observation helped to show that the chemistry in other galaxies is quite similar to that which we see in our own galaxy. N2H+ is most often observed in dense molecular clouds, where it has proven useful as one of the last molecules to freeze out onto dust grains as the density of the cloud increases toward the center. In 2002, Bergin et al. did a spatial survey of dense cores to see just how far toward the center N2H+ could be observed and found that its abundance drops by at least two orders of magnitude when one moves from the outer edge of the core to the center. This showed that even N2H+ is not an ideal tracer for the chemistry of dense pre-stellar cores, and concluded that H2D+ may be the only good molecular probe of the innermost regions of pre-stellar cores.
Laboratory detections Although N2H+ is most often observed by astronomers because of its ease of detection, there have been some laboratory experiments that have observed it in a more controlled environment. The first laboratory spectrum of N2H+ was of the 1–0 rotational band in the ground vibrational level, the same microwave transition that astronomers had recently discovered in space. Ten years later, Owrutsky et al. performed vibrational spectroscopy of N2H+ by observing the plasma created by a discharge of a mixture of nitrogen, hydrogen, and argon gas using a color center laser. During the pulsed discharge, the poles were reversed on alternating pulses, so the ions were pulled back and forth through the discharge cell. This caused the absorption features of the ions, but not the neutral molecules, to be shifted back and forth in frequency space, so a lock-in amplifier could be used to observe the spectra of just the ions in the discharge. The lock-in combined with the velocity modulation gave >99.9% discrimination between ions and neutrals. The feed gas was optimized for N2H+ production, and transitions up to J = 41 were observed for both the fundamental N–H stretching band and the bending hot band. Later, Kabbadj et al. observed even more hot bands associated with the fundamental vibrational band using a difference frequency laser to observe a discharge of a mixture of nitrogen, hydrogen, and helium gases. They used velocity modulation in the same way that Owrutsky et al. had, in order to discriminate ions from neutrals. They combined this with a counterpropagating beam technique to aid in noise subtraction, and this greatly increased their sensitivity. They had enough sensitivity to observe OH+, H2O+, and H3O+ that were formed from the minute O2 and H2O impurities in their helium tank. By fitting all observed bands, the rotational constants for N2H+ were determined to be Be = 1.561928 cm−1 and De = , which are the only constants needed to determine the rotational spectrum of this linear molecule in the ground vibrational state, with the exception of determining hyperfine splitting. Given the selection rule ΔJ = ±1, the calculated rotational energy levels, along with their percent population at 30 kelvins, can be plotted. The frequencies of the peaks predicted by this method differ from those observed in the laboratory by at most 700 kHz.
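As a rough sketch of how such a spectrum is predicted (not code from the papers cited), the linear-rotor term values F(J) = B·J(J+1) − D·[J(J+1)]² give the line frequencies ν(J+1←J) = F(J+1) − F(J), and Boltzmann factors give the level populations at 30 K mentioned above. The B value below is simply half the observed 1–0 frequency and the centrifugal distortion D is neglected, so both numbers are stand-in assumptions rather than the fitted constants of the article.

<syntaxhighlight lang="python">
import numpy as np

h  = 6.62607015e-34      # Planck constant, J s
kB = 1.380649e-23        # Boltzmann constant, J/K
B  = 93.1737e9 / 2.0     # Hz, assumed ground-state rotational constant (half the 1-0 line)
D  = 0.0                 # Hz, centrifugal distortion neglected in this sketch
T  = 30.0                # K

def term(J):
    """Rotational term value F(J) of a linear rotor, in Hz."""
    return B * J * (J + 1) - D * (J * (J + 1)) ** 2

J = np.arange(0, 15)
E = h * term(J)                              # level energies, J
pop = (2 * J + 1) * np.exp(-E / (kB * T))    # degeneracy times Boltzmann factor
pop /= pop.sum()

print("predicted 1-0 frequency: %.4f GHz" % ((term(1) - term(0)) / 1e9))
for j in range(6):
    print(f"J = {j}: {100.0 * pop[j]:5.1f} % of the population at {T} K")
</syntaxhighlight>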
{|class=wikitable |+Production of diazenylium |- !Reaction !Rate constant !Rate/[H2]2 !Relative rate |- |H2 + N2+ → N2H+ + H | || || 1.0 |- | H3+ + N2 → N2H+ + H2 | || || 9.1 |} {|class=wikitable |+Destruction of diazenylium |- !Reaction !Rate constant !Rate/[H2]2 !Relative rate |- | N2H+ + O → N2 + OH+ | || || 1.0 |- | N2H+ + CO → N2 + HCO+ | || || 3.2 |- | N2H+ + e– → N2 + H | || || 2.8 |- | N2H+ + e– → NH + N | || || 3.7 |} There are dozens more reactions possible, but these are the only ones that are fast enough to affect the abundance of N2H+ in dense molecular clouds. Diazenylium thus plays a critical role in the chemistry of many nitrogen-containing molecules. Although the actual electron density in so-called "dense clouds" is quite low, the destruction of N2H+ is governed mostly by dissociative recombination. References Cations
Diazenylium
[ "Physics", "Chemistry" ]
1,691
[ "Cations", "Ions", "Matter" ]
20,000,238
https://en.wikipedia.org/wiki/Cardboard%20modeling
Cardboard modeling or cardboard engineering is a form of modelling with paper, card stock, paperboard, and corrugated fiberboard. The term cardboard engineering is sometimes used to differentiate from the craft of making decorative cards. It is often referred to as paper modelling although in practice card is generally used. History Originally this was a form of modelling undertaken because of the low cost involved. Card, a means of cutting and glue are all that is needed. Some models are 100% card, while others use items of other materials to reinforce the model. After World War II cardboard models were promoted by a number of model companies. One company, ERG (Bournemouth) Ltd. produced a book "Cardboard Rolling Stock and How to Build It" and Superquick are still well known for their range of printed and pre-cut kits. Books of printed models to cut out and make have been around a long time. Also, specially printed cards were available from which models could be made. In the UK Micromodels were well known for very small card models. Models to cut out were also a feature of paperboard folding cartons. For many years, breakfast cereal makers had models to cut out on their packets. The hobby has been revived through the use of ink-jet and laser colour printers, with the availability of inexpensive cutting plotters and laser engravers also reducing the time, effort, and tedium associated with cutting out the many parts. Using a vector graphics package, it is even possible for anyone to create their own models from scratch, though most use special software. Models to cut out can also be downloaded from the internet. See also Net (polyhedron) Paper model Architectural model References External links Scale modeling Paper toys
Cardboard modeling
[ "Physics", "Engineering" ]
345
[ "Design stubs", "Scale modeling", "Design" ]
860,861
https://en.wikipedia.org/wiki/Doping%20%28semiconductor%29
In semiconductor production, doping is the intentional introduction of impurities into an intrinsic (undoped) semiconductor for the purpose of modulating its electrical, optical and structural properties. The doped material is referred to as an extrinsic semiconductor. Small numbers of dopant atoms can change the ability of a semiconductor to conduct electricity. When on the order of one dopant atom is added per 100 million atoms, the doping is said to be low or light. When many more dopant atoms are added, on the order of one per ten thousand atoms, the doping is referred to as high or heavy. This is often shown as n+ for n-type doping or p+ for p-type doping. (See the article on semiconductors for a more detailed description of the doping mechanism.) A semiconductor doped to such high levels that it acts more like a conductor than a semiconductor is referred to as a degenerate semiconductor. A semiconductor can be considered i-type semiconductor if it has been doped in equal quantities of p and n. In the context of phosphors and scintillators, doping is better known as activation; this is not to be confused with dopant activation in semiconductors. Doping is also used to control the color in some pigments. History The effects of impurities in semiconductors (doping) were long known empirically in such devices as crystal radio detectors and selenium rectifiers. For instance, in 1885 Shelford Bidwell, and in 1930 the German scientist Bernhard Gudden, each independently reported that the properties of semiconductors were due to the impurities they contained. A doping process was formally developed by John Robert Woodyard working at Sperry Gyroscope Company during World War II. Though the word doping is not used in it, his US Patent issued in 1950 describes methods for adding tiny amounts of solid elements from the nitrogen column of the periodic table to germanium to produce rectifying devices. The demands of his work on radar prevented Woodyard from pursuing further research on semiconductor doping. Similar work was performed at Bell Labs by Gordon K. Teal and Morgan Sparks, with a US Patent issued in 1953. Woodyard's prior patent proved to be the grounds of extensive litigation by Sperry Rand. Carrier concentration The concentration of the dopant used affects many electrical properties. Most important is the material's charge carrier concentration. In an intrinsic semiconductor under thermal equilibrium, the concentrations of electrons and holes are equivalent. That is, n = p = ni. In a non-intrinsic semiconductor under thermal equilibrium, the relation becomes (for low doping): n0 · p0 = ni², where n0 is the concentration of conducting electrons, p0 is the conducting hole concentration, and ni is the material's intrinsic carrier concentration. The intrinsic carrier concentration varies between materials and is dependent on temperature. Silicon's ni, for example, is roughly 1.08×10^10 cm−3 at 300 kelvins, about room temperature. In general, increased doping leads to increased conductivity due to the higher concentration of carriers. Degenerate (very highly doped) semiconductors have conductivity levels comparable to metals and are often used in integrated circuits as a replacement for metal. Often superscript plus and minus symbols are used to denote relative doping concentration in semiconductors. For example, n+ denotes an n-type semiconductor with a high, often degenerate, doping concentration. Similarly, p− would indicate a very lightly doped p-type material.
Even degenerate levels of doping imply low concentrations of impurities with respect to the base semiconductor. In intrinsic crystalline silicon, there are approximately 5×10^22 atoms/cm^3. Doping concentration for silicon semiconductors may range anywhere from 10^13 cm−3 to 10^18 cm−3. Doping concentration above about 10^18 cm−3 is considered degenerate at room temperature. Degenerately doped silicon contains a proportion of impurity to silicon on the order of parts per thousand. This proportion may be reduced to parts per billion in very lightly doped silicon. Typical concentration values fall somewhere in this range and are tailored to produce the desired properties in the device that the semiconductor is intended for. Effect on band structure Doping a semiconductor in a good crystal introduces allowed energy states within the band gap, but very close to the energy band that corresponds to the dopant type. In other words, electron donor impurities create states near the conduction band while electron acceptor impurities create states near the valence band. The gap between these energy states and the nearest energy band is usually referred to as dopant-site bonding energy or EB and is relatively small. For example, the EB for boron in silicon bulk is 0.045 eV, compared with silicon's band gap of about 1.12 eV. Because EB is so small, room temperature is hot enough to thermally ionize practically all of the dopant atoms and create free charge carriers in the conduction or valence bands. Dopants also have the important effect of shifting the energy bands relative to the Fermi level. The energy band that corresponds with the dopant with the greatest concentration ends up closer to the Fermi level. Since the Fermi level must remain constant in a system in thermodynamic equilibrium, stacking layers of materials with different properties leads to many useful electrical properties induced by band bending, if the interfaces can be made cleanly enough. For example, the p-n junction's properties are due to the band bending that happens as a result of the necessity to line up the bands in contacting regions of p-type and n-type material. This effect is shown in a band diagram. The band diagram typically indicates the variation in the valence band and conduction band edges versus some spatial dimension, often denoted x. The Fermi level is also usually indicated in the diagram. Sometimes the intrinsic Fermi level, Ei, which is the Fermi level in the absence of doping, is shown. These diagrams are useful in explaining the operation of many kinds of semiconductor devices. Relationship to carrier concentration (low doping) For low levels of doping, the relevant energy states are populated sparsely by electrons (conduction band) or holes (valence band). It is possible to write simple expressions for the electron and hole carrier concentrations, by ignoring Pauli exclusion (via Maxwell–Boltzmann statistics): n = N_C exp(−(E_C − E_F)/kT) and p = N_V exp(−(E_F − E_V)/kT), where E_F is the Fermi level, E_C is the minimum energy of the conduction band, and E_V is the maximum energy of the valence band. These are related to the value of the intrinsic concentration via n·p = N_C N_V exp(−E_g/kT) = ni², an expression which is independent of the doping level, since E_g = E_C − E_V (the band gap) does not change with doping. The concentration factors N_C(T) and N_V(T) are given by N_C = 2(2π m_e* kT / h²)^(3/2) and N_V = 2(2π m_h* kT / h²)^(3/2), where m_e* and m_h* are the density of states effective masses of electrons and holes, respectively, quantities that are roughly constant over temperature.
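A minimal numerical sketch of these relations (standard textbook expressions, not code taken from any particular reference) combines the mass-action law n·p = ni² with charge neutrality n + N_A = p + N_D, assuming full dopant ionization and non-degenerate statistics; the doping levels used below are purely illustrative.

<syntaxhighlight lang="python">
import math

ni = 1.08e10   # cm^-3, intrinsic carrier concentration of silicon near 300 K (from the text)

def carriers(N_D=0.0, N_A=0.0):
    """Equilibrium electron and hole concentrations (cm^-3) for given dopant densities."""
    N_net = N_D - N_A                                            # net doping; > 0 means n-type
    n = 0.5 * (N_net + math.sqrt(N_net ** 2 + 4.0 * ni ** 2))    # root of n - ni^2/n = N_net
    p = ni ** 2 / n                                              # mass-action law
    return n, p

n, p = carriers(N_D=1e15)            # lightly phosphorus-doped, n-type
print(f"n-type 1e15 cm^-3:  n = {n:.3e}, p = {p:.3e} cm^-3")

n, p = carriers(N_D=1e16, N_A=4e15)  # partially compensated material
print(f"compensated:        n = {n:.3e}, p = {p:.3e} cm^-3")
</syntaxhighlight>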
Techniques of doping and synthesis Doping during crystal growth Some dopants are added as the (usually silicon) boule is grown by the Czochralski method, giving each wafer an almost uniform initial doping. Alternately, synthesis of semiconductor devices may involve the use of vapor-phase epitaxy. In vapor-phase epitaxy, a gas containing the dopant precursor can be introduced into the reactor. For example, in the case of n-type gas doping of gallium arsenide, hydrogen sulfide is added, and sulfur is incorporated into the structure. This process is characterized by a constant concentration of sulfur on the surface. In the case of semiconductors in general, only a very thin layer of the wafer needs to be doped in order to obtain the desired electronic properties. Post-growth doping To define circuit elements, selected areas — typically controlled by photolithography — are further doped by such processes as diffusion and ion implantation, the latter method being more popular in large production runs because of increased controllability. Spin-on glass Spin-on glass or spin-on dopant doping is a two-step process. First, a mixture of SiO2 and dopants (in a solvent) is applied to a wafer surface by spin-coating. Then it is stripped and baked at a certain temperature in a furnace with constant nitrogen+oxygen flow. Neutron transmutation doping Neutron transmutation doping (NTD) is an unusual doping method for special applications. Most commonly, it is used to dope silicon n-type in high-power electronics and semiconductor detectors. It is based on the conversion of the Si-30 isotope into a phosphorus atom by neutron absorption as follows: 30Si + n → 31Si → 31P + β− (the short-lived 31Si beta-decays to stable 31P). In practice, the silicon is typically placed near a nuclear reactor to receive the neutrons. As neutrons continue to pass through the silicon, more and more phosphorus atoms are produced by transmutation, and therefore the doping becomes more and more strongly n-type. NTD is a far less common doping method than diffusion or ion implantation, but it has the advantage of creating an extremely uniform dopant distribution. Dopant elements Group IV semiconductors (Note: When discussing periodic table groups, semiconductor physicists always use an older notation, not the current IUPAC group notation. For example, the carbon group is called "Group IV", not "Group 14".) For the Group IV semiconductors such as diamond, silicon, germanium, silicon carbide, and silicon–germanium, the most common dopants are acceptors from Group III or donors from Group V elements. Boron, arsenic, phosphorus, and occasionally gallium are used to dope silicon. Boron is the p-type dopant of choice for silicon integrated circuit production because it diffuses at a rate that makes junction depths easily controllable. Phosphorus is typically used for bulk-doping of silicon wafers, while arsenic is used to diffuse junctions, because it diffuses more slowly than phosphorus and is thus more controllable. By doping pure silicon with Group V elements such as phosphorus, extra valence electrons are added that become unbound from individual atoms and allow the compound to be an electrically conductive n-type semiconductor. Doping with Group III elements, which are missing the fourth valence electron, creates "broken bonds" (holes) in the silicon lattice that are free to move. The result is an electrically conductive p-type semiconductor. In this context, a Group V element is said to behave as an electron donor, and a Group III element as an acceptor. This is a key concept in the physics of a diode.
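The amount of phosphorus produced by neutron transmutation can be estimated with a simple activation calculation. The sketch below is illustrative only: the Si-30 abundance, capture cross-section, and neutron fluence are representative assumed values, not figures taken from this article.

<syntaxhighlight lang="python">
import math

n_Si    = 5.0e22      # silicon atoms per cm^3 (from the text)
f_Si30  = 0.031       # assumed natural abundance of Si-30 (~3.1 %)
sigma   = 0.107e-24   # cm^2, assumed thermal (n,gamma) cross-section of Si-30 (~0.107 barn)
fluence = 1.0e18      # neutrons per cm^2, an assumed irradiation dose

# Each captured neutron ultimately yields one P-31 donor (after the beta decay of Si-31).
n_P = n_Si * f_Si30 * (1.0 - math.exp(-sigma * fluence))
print(f"estimated phosphorus concentration: {n_P:.2e} cm^-3")
</syntaxhighlight>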
A very heavily doped semiconductor behaves more like a good conductor (metal) and thus exhibits a more linear positive thermal coefficient. Such effect is used for instance in sensistors. Lower doping levels are used in other types of thermistors (NTC or PTC). Silicon dopants Acceptors, p-type Boron is a p-type dopant. Its diffusion rate allows easy control of junction depths. Common in CMOS technology. Can be added by diffusion of diborane gas. The only acceptor with sufficient solubility for efficient emitters in transistors and other applications requiring extremely high dopant concentrations. Boron diffuses about as fast as phosphorus. Aluminum, used for deep p-diffusions. Not popular in VLSI and ULSI. Also a common unintentional impurity. Gallium is a dopant used for long-wavelength infrared photoconduction silicon detectors in the 8–14 μm atmospheric window. Gallium-doped silicon is also promising for solar cells, due to its long minority carrier lifetime with no lifetime degradation; as such it is gaining importance as a replacement for boron-doped substrates for solar cell applications. Indium is a dopant used for long-wavelength infrared photoconduction silicon detectors in the 3–5 μm atmospheric window. Donors, n-type Phosphorus is an n-type dopant. It diffuses fast, so is usually used for bulk doping, or for well formation. Used in solar cells. Can be added by diffusion of phosphine gas. Bulk doping can be achieved by nuclear transmutation, by irradiation of pure silicon with neutrons in a nuclear reactor. Phosphorus also traps gold atoms, which otherwise quickly diffuse through silicon and act as recombination centers. Arsenic is an n-type dopant. Its slower diffusion allows using it for diffused junctions. Used for buried layers. Has similar atomic radius to silicon; high concentrations can be achieved. Its diffusivity is about a tenth of phosphorus or boron, so it is used where the dopant should stay in place during subsequent thermal processing. Useful for shallow diffusions where a well-controlled abrupt boundary is desired. Preferred dopant in VLSI circuits. Preferred dopant in low resistivity ranges. Antimony is an n-type dopant. It has a small diffusion coefficient. Used for buried layers. Has diffusivity similar to arsenic, is used as its alternative. Its diffusion is virtually purely substitutional, with no interstitials, so it is free of anomalous effects. For this superior property, it is sometimes used in VLSI instead of arsenic. Heavy doping with antimony is important for power devices. Heavily antimony-doped silicon has lower concentration of oxygen impurities; minimal autodoping effects make it suitable for epitaxial substrates. Bismuth is a promising dopant for long-wavelength infrared photoconduction silicon detectors, a viable n-type alternative to the p-type gallium-doped material. Lithium is used for doping silicon for radiation hardened solar cells. The lithium presence anneals defects in the lattice produced by protons and neutrons. Lithium can be introduced to boron-doped p+ silicon, in amounts low enough to maintain the p character of the material, or in large enough amount to counterdope it to low-resistivity n type. Other Germanium can be used for band gap engineering. A germanium layer also inhibits diffusion of boron during the annealing steps, allowing ultrashallow p-MOSFET junctions. Germanium bulk doping suppresses large void defects, increases internal gettering, and improves wafer mechanical strength.
Silicon, germanium and xenon can be used as ion beams for pre-amorphization of silicon wafer surfaces. Formation of an amorphous layer beneath the surface allows forming ultrashallow junctions for p-MOSFETs. Nitrogen is important for growing defect-free silicon crystals. It improves the mechanical strength of the lattice, increases bulk microdefect generation, and suppresses vacancy agglomeration. Gold and platinum are used for minority carrier lifetime control. They are used in some infrared detection applications. Gold introduces a donor level 0.35 eV above the valence band and an acceptor level 0.54 eV below the conduction band. Platinum introduces a donor level also at 0.35 eV above the valence band, but its acceptor level is only 0.26 eV below the conduction band; as the acceptor level in n-type silicon is shallower, the space charge generation rate is lower and therefore the leakage current is also lower than for gold doping. At high injection levels platinum performs better for lifetime reduction. Reverse recovery of bipolar devices is more dependent on the low-level lifetime, and its reduction is better performed by gold. Gold provides a good tradeoff between forward voltage drop and reverse recovery time for fast-switching bipolar devices, where charge stored in base and collector regions must be minimized. Conversely, in many power transistors a long minority carrier lifetime is required to achieve good gain, and the gold/platinum impurities must be kept low. Other semiconductors In the following list, "(substituting X)" refers to all of the materials preceding said parenthesis. Gallium arsenide n-type: tellurium, sulfur (substituting As); tin, silicon, germanium (substituting Ga) p-type: beryllium, zinc, chromium (substituting Ga); silicon, germanium, carbon (substituting As) Gallium phosphide n-type: tellurium, selenium, sulfur (substituting phosphorus) p-type: zinc, magnesium (substituting Ga); tin (substituting P) isoelectronic: nitrogen (substituting P) is added to enable luminescence in older green LEDs (GaP has an indirect band gap) Gallium nitride, Indium gallium nitride, Aluminium gallium nitride n-type: silicon (substituting Ga), germanium (substituting Ga, better lattice match), carbon (substituting Ga, naturally embedding into MOVPE-grown layers in low concentration) p-type: magnesium (substituting Ga) - challenging due to the relatively high ionisation energy above the valence band edge, strong diffusion of interstitial Mg, hydrogen complexes passivating the Mg acceptors, and Mg self-compensation at higher concentrations Cadmium telluride n-type: indium, aluminium (substituting Cd); chlorine (substituting Te) p-type: phosphorus (substituting Te); lithium, sodium (substituting Cd) Cadmium sulfide n-type: gallium (substituting Cd); iodine, fluorine (substituting S) p-type: lithium, sodium (substituting Cd) Compensation In most cases many types of impurities will be present in the resultant doped semiconductor. If an equal number of donors and acceptors are present in the semiconductor, the extra electrons provided by the former will be used to satisfy the broken bonds due to the latter, so that doping produces no free carriers of either type. This phenomenon is known as compensation, and occurs at the p-n junction in the vast majority of semiconductor devices.
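As a companion sketch for the compensation effect just described, the net free-carrier density in a sample containing both donors and acceptors can be estimated from their difference. This is an illustration only, using the same approximate ni value and full-ionization assumption as in the earlier sketch:

# Minimal sketch of dopant compensation: when both donors (N_D) and acceptors (N_A)
# are present and fully ionized, only the excess dopant species supplies free carriers.
# ni ~ 1e10 cm^-3 (silicon, ~300 K) is an assumed approximate value.

NI = 1.0e10  # cm^-3

def compensated_carriers(n_donors_cm3, n_acceptors_cm3):
    """Return (type, majority, minority) carrier densities for a compensated sample."""
    net = n_donors_cm3 - n_acceptors_cm3
    if abs(net) < NI:                      # dopants cancel out: effectively intrinsic
        return "intrinsic", NI, NI
    if net > 0:                            # donors win -> n-type
        return "n-type", net, NI**2 / net
    return "p-type", -net, NI**2 / (-net)  # acceptors win -> p-type

print(compensated_carriers(1e17, 4e16))   # -> n-type, ~6e16 electrons, ~1.7e3 holes per cm^3
print(compensated_carriers(1e16, 1e16))   # -> intrinsic, 1e10, 1e10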
Partial compensation, where donors outnumber acceptors or vice versa, allows device makers to repeatedly reverse (invert) the type of a certain layer under the surface of a bulk semiconductor by diffusing or implanting successively higher doses of dopants, so-called counterdoping. Most modern semiconductor devices are made by successive selective counterdoping steps to create the necessary P and N type areas under the surface of bulk silicon. This is an alternative to successively growing such layers by epitaxy. Although compensation can be used to increase or decrease the number of donors or acceptors, the electron and hole mobility is always decreased by compensation because mobility is affected by the sum of the donor and acceptor ions. Doping in conductive polymers Conductive polymers can be doped by adding chemical reactants to oxidize, or sometimes reduce, the system so that electrons are pushed into the conducting orbitals within the already potentially conducting system. There are two primary methods of doping a conductive polymer, both of which use an oxidation-reduction (i.e., redox) process. Chemical doping involves exposing a polymer such as melanin, typically a thin film, to an oxidant such as iodine or bromine. Alternatively, the polymer can be exposed to a reductant; this method is far less common, and typically involves alkali metals. Electrochemical doping involves suspending a polymer-coated, working electrode in an electrolyte solution in which the polymer is insoluble along with separate counter and reference electrodes. An electric potential difference is created between the electrodes that causes a charge and the appropriate counter ion from the electrolyte to enter the polymer in the form of electron addition (i.e., n-doping) or removal (i.e., p-doping). N-doping is much less common because the Earth's atmosphere is oxygen-rich, thus creating an oxidizing environment. An electron-rich, n-doped polymer will react immediately with elemental oxygen to de-dope (i.e., reoxidize to the neutral state) the polymer. Thus, chemical n-doping must be performed in an environment of inert gas (e.g., argon). Electrochemical n-doping is far more common in research, because it is easier to exclude oxygen from a solvent in a sealed flask. However, it is unlikely that n-doped conductive polymers are available commercially. Doping in organic molecular semiconductors Molecular dopants are preferred in doping molecular semiconductors due to their compatibilities of processing with the host, that is, similar evaporation temperatures or controllable solubility. Additionally, the relatively large sizes of molecular dopants compared with those of metal ion dopants (such as Li+ and Mo6+) are generally beneficial, yielding excellent spatial confinement for use in multilayer structures, such as OLEDs and Organic solar cells. Typical p-type dopants include F4-TCNQ and Mo(tfd)3. However, similar to the problem encountered in doping conductive polymers, air-stable n-dopants suitable for materials with low electron affinity (EA) are still elusive. Recently, photoactivation with a combination of cleavable dimeric dopants, such as [RuCp∗Mes]2, suggests a new path to realize effective n-doping in low-EA materials. 
Magnetic doping Research on magnetic doping has shown that certain properties, such as specific heat, may be considerably altered by small concentrations of an impurity; for example, dopant impurities in semiconducting ferromagnetic alloys can generate different properties as first predicted by White, Hogan, Suhl and Nakamura. The inclusion of dopant elements to impart dilute magnetism is of growing significance in the field of magnetic semiconductors. The presence of disperse ferromagnetic species is key to the functionality of emerging spintronics, a class of systems that utilise electron spin in addition to charge. Using density functional theory (DFT), the temperature-dependent magnetic behaviour of dopants within a given lattice can be modeled to identify candidate semiconductor systems. Single dopants in semiconductors The sensitive dependence of a semiconductor's properties on dopants has provided an extensive range of tunable phenomena to explore and apply to devices. It is possible to identify the effects of a solitary dopant on commercial device performance as well as on the fundamental properties of a semiconductor material. New applications have become available that require the discrete character of a single dopant, such as single-spin devices in the area of quantum information or single-dopant transistors. Dramatic advances in the past decade towards observing, controllably creating and manipulating single dopants, as well as their application in novel devices, have opened the new field of solotronics (solitary dopant optoelectronics). Modulation doping Electrons or holes introduced by doping are mobile, and can be spatially separated from dopant atoms they have dissociated from. Ionized donors and acceptors, however, attract electrons and holes, respectively, so this spatial separation requires abrupt changes of dopant levels, of band gap (e.g. a quantum well), or built-in electric fields (e.g. in the case of noncentrosymmetric crystals). This technique is called modulation doping and is advantageous owing to suppressed carrier-donor scattering, allowing very high mobility to be attained. See also Extrinsic semiconductor Intrinsic semiconductor List of semiconductor materials Monolayer doping p-n junction References External links Semiconductor properties Semiconductor device fabrication
Doping (semiconductor)
[ "Physics", "Materials_science" ]
4,819
[ "Semiconductor device fabrication", "Semiconductor properties", "Condensed matter physics", "Microtechnology" ]
861,530
https://en.wikipedia.org/wiki/Barometric%20formula
The barometric formula is a formula used to model how the pressure (or density) of the air changes with altitude. Pressure equations There are two equations for computing pressure as a function of height. The first equation is applicable to the atmospheric layers in which the temperature is assumed to vary with altitude at a non-null lapse rate Lb: P = Pb·[(Tb − Lb·(h − hb))/Tb]^(g0·M/(R*·Lb)). The second equation is applicable to the atmospheric layers in which the temperature is assumed not to vary with altitude (lapse rate is null): P = Pb·exp[−g0·M·(h − hb)/(R*·Tb)], where: Pb = reference pressure Tb = reference temperature (K) Lb = temperature lapse rate (K/m) in ISA (taken as positive where temperature decreases with altitude, per the table below) h = geopotential height at which pressure is calculated (m) hb = geopotential height of reference level b (meters; e.g., hb = 11 000 m) R* = universal gas constant: 8.3144598 J/(mol·K) g0 = gravitational acceleration: 9.80665 m/s2 M = molar mass of Earth's air: 0.0289644 kg/mol Or converted to imperial units: Pb = reference pressure Tb = reference temperature (K) Lb = temperature lapse rate (K/ft) in ISA h = height at which pressure is calculated (ft) hb = height of reference level b (feet; e.g., hb = 36,089 ft) R* = universal gas constant, expressed using feet, kelvins, and (SI) moles g0 = gravitational acceleration: 32.17405 ft/s2 M = molar mass of Earth's air: 28.9644 lb/lb-mol The value of subscript b ranges from 0 to 6 in accordance with each of seven successive layers of the atmosphere shown in the table below. In these equations, g0, M and R* are each single-valued constants, while P, L, T, and h are multivalued constants in accordance with the table below. The values used for M, g0, and R* are in accordance with the U.S. Standard Atmosphere, 1976, and the value for R* in particular does not agree with standard values for this constant. The reference value for Pb for b = 0 is the defined sea level value, P0 = 101 325 Pa or 29.92126 inHg. Values of Pb for b = 1 through b = 6 are obtained from the application of the appropriate member of the pair of equations 1 and 2 for the case when h = hb+1. Density equations The expressions for calculating density are nearly identical to those for calculating pressure. The only difference is the exponent in Equation 1. There are two equations for computing density as a function of height. The first equation is applicable to the standard model of the troposphere, in which the temperature is assumed to vary with altitude at a lapse rate Lb; the second equation is applicable to the standard model of the stratosphere, in which the temperature is assumed not to vary with altitude. Equation 1: ρ = ρb·[(Tb − Lb·(h − hb))/Tb]^((g0·M/(R*·Lb)) − 1), which is equivalent to the ratio of the relative pressure and temperature changes, ρ/ρb = (P/Pb)·(Tb/T). Equation 2: ρ = ρb·exp[−g0·M·(h − hb)/(R*·Tb)], where ρ = mass density (kg/m3) Tb = standard temperature (K) Lb = standard temperature lapse rate (see table below) (K/m) in ISA h = height above sea level (geopotential meters) R* = universal gas constant 8.3144598 N·m/(mol·K) g0 = gravitational acceleration: 9.80665 m/s2 M = molar mass of Earth's air: 0.0289644 kg/mol or, converted to U.S. gravitational foot-pound-second units (no longer used in the U.K.): ρ = mass density (slug/ft3) Tb = standard temperature (K) Lb = standard temperature lapse rate (K/ft) h = height above sea level (geopotential feet) R* = universal gas constant: 8.9494596×104 ft2/(s·K) g0 = gravitational acceleration: 32.17405 ft/s2 M = molar mass of Earth's air: 0.0289644 kg/mol The value of subscript b ranges from 0 to 6 in accordance with each of seven successive layers of the atmosphere shown in the table below. The reference value for ρb for b = 0 is the defined sea level value, ρ0 = 1.2250 kg/m3 or 0.0023768908 slug/ft3.
Values of ρb of b = 1 through b = 6 are obtained from the application of the appropriate member of the pair equations 1 and 2 for the case when h = hb+1. In these equations, g0, M and R* are each single-valued constants, while ρ, L, T and h are multi-valued constants in accordance with the table below. The values used for M, g0 and R* are in accordance with the U.S. Standard Atmosphere, 1976, and that the value for R* in particular does not agree with standard values for this constant. {| class="wikitable" |- ! rowspan="2"|Subscript b ! colspan="2"|Geopotential height above MSL (h) ! colspan="2"|Mass Density () ! rowspan="2"|Standard Temperature (T''') (K) ! colspan="2"|Temperature Lapse Rate (L) |- ! (m) !! (ft)!! (kg/m3) !! (slug/ft3) !! (K/m) !! (K/ft) |- | align="center" |0 | align="center" |0 | align="center" |0 | align="center" |1.2250 | align="center" | | align="center" |288.15 | align="center" |0.0065 | align="center" |0.0019812 |- | align="center" |1 | align="center" |11 000 | align="center" |36,089.24 | align="center" |0.36391 | align="center" | | align="center" |216.65 | align="center" |0.0 | align="center" |0.0 |- | align="center" |2 | align="center" |20 000 | align="center" |65,616.79 | align="center" |0.08803 | align="center" | | align="center" |216.65 | align="center" |-0.001 | align="center" |-0.0003048 |- | align="center" |3 | align="center" |32 000 | align="center" |104,986.87 | align="center" |0.01322 | align="center" | | align="center" |228.65 | align="center" |-0.0028 | align="center" |-0.00085344 |- | align="center" |4 | align="center" |47 000 | align="center" |154,199.48 | align="center" |0.00143 | align="center" | | align="center" |270.65 | align="center" |0.0 | align="center" |0.0 |- | align="center" |5 | align="center" |51 000 | align="center" |167,322.83 | align="center" |0.00086 | align="center" | | align="center" |270.65 | align="center" |0.0028 | align="center" |0.00085344 |- | align="center" |6 | align="center" |71 000 | align="center" |232,939.63 | align="center" |0.000064 | align="center" | | align="center" |214.65 | align="center" |0.002 | align="center" |0.0006096 |} Derivation The barometric formula can be derived using the ideal gas law: Assuming that all pressure is hydrostatic: and dividing this equation by we get: Integrating this expression from the surface to the altitude z we get: Assuming linear temperature change and constant molar mass and gravitational acceleration, we get the first barometric formula: Instead, assuming constant temperature, integrating gives the second barometric formula: In this formulation, R* is the gas constant, and the term R*T/Mg gives the scale height (approximately equal to 8.4 km for the troposphere). (For exact results, it should be remembered that atmospheres containing water do not behave as an ideal gas''. See real gas or perfect gas or gas for further understanding.) See also Hypsometric equation NRLMSISE-00 Vertical pressure variation References Vertical position Atmospheric pressure
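As an illustration only (not part of the article), the two pressure equations can be applied layer by layer, exactly as described for obtaining the Pb values of the higher reference levels. The sketch below covers just the first three layers of the table above, uses the constants quoted in the article, and follows the table's sign convention for Lb (positive where temperature falls with altitude):

import math

G0, M, RSTAR = 9.80665, 0.0289644, 8.3144598     # constants quoted in the article (SI units)
P0 = 101325.0                                     # sea-level reference pressure, Pa

# First three reference levels from the table: (base height m, base temperature K, lapse rate K/m).
# Lapse rate follows the table's sign convention: T = Tb - Lb*(h - hb).
LAYERS = [(0.0, 288.15, 0.0065), (11000.0, 216.65, 0.0), (20000.0, 216.65, -0.001)]

def _step(pb, tb, lb, dh):
    """Pressure change across a height interval dh within one layer (Equations 1 and 2)."""
    if lb == 0.0:
        return pb * math.exp(-G0 * M * dh / (RSTAR * tb))
    return pb * ((tb - lb * dh) / tb) ** (G0 * M / (RSTAR * lb))

def pressure(h):
    """Pressure in Pa at geopotential height h (valid up to 32 km with the layers above)."""
    pb = P0
    for i, (hb, tb, lb) in enumerate(LAYERS):
        top = LAYERS[i + 1][0] if i + 1 < len(LAYERS) else float("inf")
        if h <= top:
            return _step(pb, tb, lb, h - hb)
        pb = _step(pb, tb, lb, top - hb)         # carry the pressure to the next layer base
    return pb

print(round(pressure(11000)))   # ~22632 Pa at the tropopause
print(round(pressure(25000)))   # ~2511 Pa, in the layer where temperature rises again

Adding the remaining table rows to LAYERS extends the same loop to the full 0–71 km range covered by the seven reference levels.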
Barometric formula
[ "Physics", "Mathematics" ]
1,911
[ "Functions and mappings", "Physical quantities", "Mathematical objects", "Meteorological quantities", "Atmospheric pressure", "Vertical distributions", "Mathematical relations" ]
861,928
https://en.wikipedia.org/wiki/Refractory
In materials science, a refractory (or refractory material) is a material that is resistant to decomposition by heat or chemical attack and that retains its strength and rigidity at high temperatures. They are inorganic, non-metallic compounds that may be porous or non-porous, and their crystallinity varies widely: they may be crystalline, polycrystalline, amorphous, or composite. They are typically composed of oxides, carbides or nitrides of the following elements: silicon, aluminium, magnesium, calcium, boron, chromium and zirconium. Many refractories are ceramics, but some such as graphite are not, and some ceramics such as clay pottery are not considered refractory. Refractories are distinguished from the refractory metals, which are elemental metals and their alloys that have high melting temperatures. Refractories are defined by ASTM C71 as "non-metallic materials having those chemical and physical properties that make them applicable for structures, or as components of systems, that are exposed to environments above ". Refractory materials are used in furnaces, kilns, incinerators, and reactors. Refractories are also used to make crucibles and molds for casting glass and metals. The iron and steel industry and metal casting sectors use approximately 70% of all refractories produced. Refractory materials Refractory materials must be chemically and physically stable at high temperatures. Depending on the operating environment, they must be resistant to thermal shock, be chemically inert, and/or have specific ranges of thermal conductivity and of the coefficient of thermal expansion. The oxides of aluminium (alumina), silicon (silica) and magnesium (magnesia) are the most important materials used in the manufacturing of refractories. Another oxide usually found in refractories is the oxide of calcium (lime). Fire clays are also widely used in the manufacture of refractories. Refractories must be chosen according to the conditions they face. Some applications require special refractory materials. Zirconia is used when the material must withstand extremely high temperatures. Silicon carbide and carbon (graphite) are two other refractory materials used in some very severe temperature conditions, but they cannot be used in contact with oxygen, as they would oxidize and burn. Binary compounds such as tungsten carbide or boron nitride can be very refractory. Hafnium carbide is the most refractory binary compound known, with a melting point of 3890 °C. The ternary compound tantalum hafnium carbide has one of the highest melting points of all known compounds (4215 °C). Molybdenum disilicide has a high melting point of 2030 °C and is often used as a heating element. Uses Refractory materials are useful for the following functions: Serving as a thermal barrier between a hot medium and the wall of a containing vessel Withstanding physical stresses and preventing erosion of vessel walls due to the hot medium Protecting against corrosion Providing thermal insulation Refractories have multiple useful applications. In the metallurgy industry, refractories are used for lining furnaces, kilns, reactors, and other vessels which hold and transport hot media such as metal and slag. Refractories have other high temperature applications such as fired heaters, hydrogen reformers, ammonia primary and secondary reformers, cracking furnaces, utility boilers, catalytic cracking units, air heaters, and sulfur furnaces. They are used for surfacing flame deflectors in rocket launch structures. 
Classification of refractory materials Refractories are classified in multiple ways, based on: Chemical composition Method of manufacture Size and shape Fusion temperature Refractoriness Thermal conductivity Chemical composition Acidic refractories Acidic refractories are generally impervious to acidic materials but easily attacked by basic materials, and are thus used with acidic slag in acidic environments. They include substances such as silica, alumina, and fire clay brick refractories. Notable reagents that can attack both alumina and silica are hydrofluoric acid, phosphoric acid, and fluorinated gases (e.g. HF, F2). At high temperatures, acidic refractories may also react with limes and basic oxides. Silica refractories are refractories containing more than 93% silicon oxide (SiO2). They are acidic, have high resistance to thermal shock, flux and slag resistance, and high spalling resistance. Silica bricks are often used in the iron and steel industry as furnace materials. An important property of silica brick is its ability to maintain hardness under high loads until its fusion point. Silica refractories are usually cheaper hence easily disposable. New technologies that provide higher strength and more casting duration with less silicon oxide (90%) when mixed with organic resins have been developed. Zirconia refractories are refractories primarily composed of zirconium oxide (ZrO2). They are often used for glass furnaces because they have low thermal conductivity, are not easily wetted by molten glass and have low reactivity with molten glass. These refractories are also useful for applications in high temperature construction materials. Aluminosilicate refractories mainly consist of alumina (Al2O3) and silica (SiO2). Aluminosilicate refractories can be semiacidic, fireclay composite, or high alumina content composite. Basic refractories Basic refractories are used in areas where slags and atmosphere are basic. They are stable to alkaline materials but can react to acids, which is important e. g. when removing phosphorus from pig iron (see Gilchrist–Thomas process). The main raw materials belong to the RO group, of which magnesia (MgO) is a common example. Other examples include dolomite and chrome-magnesia. For the first half of the twentieth century, the steel making process used artificial periclase (roasted magnesite) as a furnace lining material. Magnesite refractories are composed of ≥ 85% magnesium oxide (MgO). They have high slag resistance to lime and iron-rich slags, strong abrasion and corrosion resistance, and high refractoriness under load, and are typically used in metallurgical furnaces. Dolomite refractories mainly consist of calcium magnesium carbonate. Typically, dolomite refractories are used in converter and refining furnaces. Magnesia-chrome refractories mainly consist of magnesium oxide (MgO) and chromium oxide (Cr2O3). These refractories have high refractoriness and have a high tolerance for corrosive environments. Neutral refractories These are used in areas where slags and atmosphere are either acidic or basic and are chemically stable to both acids and bases. The main raw materials belong to, but are not confined to, the R2O3 group. Common examples of these materials are alumina (Al2O3), chromia (Cr2O3) and carbon. Carbon graphite refractories mainly consist of carbon. These refractories are often used in highly reducing environments, and their properties of high refractoriness allow them excellent thermal stability and resistance to slags. 
Chromite refractories are composed of sintered magnesia and chromia. They have constant volume at high temperatures, high refractoriness, and high resistance to slags. Alumina refractories are composed of ≥ 50% alumina (Al2O3). Method of manufacture Dry press process Fused cast Hand molded Formed (normal, fired or chemically bonded) Un-formed (monolithic-plastic, ramming and gunning mass, castables, mortars, dry vibrating cements) Un-formed dry refractories. Size and shape Refractory objects are manufactured in standard shapes and special shapes. Standard shapes have dimensions that conform to conventions used by refractory manufacturers and are generally applicable to kilns or furnaces of the same types. Standard shapes are usually bricks that have a standard dimension of and this dimension is called a "one brick equivalent". "Brick equivalents" are used in estimating how many refractory bricks it takes to make an installation into an industrial furnace. There are ranges of standard shapes of different sizes manufactured to produce walls, roofs, arches, tubes and circular apertures etc. Special shapes are specifically made for specific locations within furnaces and for particular kilns or furnaces. Special shapes are usually less dense and therefore less hard wearing than standard shapes. Unshaped (monolithic) These are without prescribed form and are only given shape upon application. These types are known as monolithic refractories. Common examples include plastic masses, ramming masses, castables, gunning masses, fettling mix, and mortars. Dry vibration linings often used in induction furnace linings are also monolithic, and sold and transported as a dry powder, usually with a magnesia/alumina composition with additions of other chemicals for altering specific properties. They are also finding more applications in blast furnace linings, although this use is still rare. Fusion temperature Refractory materials are classified into three types based on fusion temperature (melting point). Normal refractories have a fusion temperature of 1580–1780 °C (e.g. fire clay) High refractories have a fusion temperature of 1780–2000 °C (e.g. chromite) Super refractories have a fusion temperature of > 2000 °C (e.g. zirconia) Refractoriness Refractoriness is the property of a refractory's multiphase to reach a specific softening degree at high temperature without load, and is measured with a pyrometric cone equivalent (PCE) test. Refractories are classified as: Super duty: PCE value of 33–38 High duty: PCE value of 30–33 Intermediate duty: PCE value of 28–30 Low duty: PCE value of 19–28 Thermal conductivity Refractories may be classified by thermal conductivity as either conducting, nonconducting, or insulating. Examples of conducting refractories are silicon carbide (SiC) and zirconium carbide (ZrC), whereas examples of nonconducting refractories are silica and alumina. Insulating refractories include calcium silicate materials, kaolin, and zirconia. Insulating refractories are used to reduce the rate of heat loss through furnace walls. These refractories have low thermal conductivity due to a high degree of porosity, with a desired porous structure of small, uniform pores evenly distributed throughout the refractory brick in order to minimize thermal conductivity.
Insulating refractories can be further classified into four types: Heat-resistant insulating materials with application temperatures ≤ 1100 °C Refractory insulating materials with application temperatures ≤ 1400 °C High refractory insulating materials with application temperatures ≤ 1700 °C Ultra-high refractory insulating materials with application temperatures ≤ 2000 °C See also Fire brick Masonry oven References External links Materials Chemical properties Ceramic materials
Refractory
[ "Physics", "Chemistry", "Engineering" ]
2,411
[ "Refractory materials", "Materials", "Ceramic materials", "nan", "Ceramic engineering", "Matter" ]
862,061
https://en.wikipedia.org/wiki/Wide%20Area%20Augmentation%20System
The Wide Area Augmentation System (WAAS) is an air navigation aid developed by the Federal Aviation Administration to augment the Global Positioning System (GPS), with the goal of improving its accuracy, integrity, and availability. Essentially, WAAS is intended to enable aircraft to rely on GPS for all phases of flight, including approaches with vertical guidance to any airport within its coverage area. It may be further enhanced with the Local Area Augmentation System (LAAS) also known by the preferred ICAO term Ground-Based Augmentation System (GBAS) in critical areas. WAAS uses a network of ground-based reference stations, in North America and Hawaii, to measure small variations in the GPS satellites' signals in the western hemisphere. Measurements from the reference stations are routed to master stations, which queue the received Deviation Correction (DC) and send the correction messages to geostationary WAAS satellites in a timely manner (every 5 seconds or better). Those satellites broadcast the correction messages back to Earth, where WAAS-enabled GPS receivers use the corrections while computing their positions to improve accuracy. The International Civil Aviation Organization (ICAO) calls this type of system a satellite-based augmentation system (SBAS). Europe and Asia are developing their own SBASs: the Indian GPS Aided Geo Augmented Navigation (GAGAN), the European Geostationary Navigation Overlay Service (EGNOS), the Japanese Multi-functional Satellite Augmentation System (MSAS) and the Russian System for Differential Corrections and Monitoring (SDCM), respectively. Commercial systems include StarFire, OmniSTAR, and Atlas. WAAS objectives Accuracy A primary goal of WAAS was to allow aircraft to make a Category I approach without any equipment being installed at the airport. This would allow new GPS-based instrument landing approaches to be developed for any airport, even ones without any ground equipment. A Category I approach requires an accuracy of laterally and vertically. To meet this goal, the WAAS specification requires it to provide a position accuracy of or less (for both lateral and vertical measurements), at least 95% of the time. Actual performance measurements of the system at specific locations have shown it typically provides better than laterally and vertically throughout most of the contiguous United States and large parts of Canada and Alaska. Integrity Integrity of a navigation system includes the ability to provide timely warnings when its signal is providing misleading data that could potentially create hazards. The WAAS specification requires the system detect errors in the GPS or WAAS network and notify users within 6.2 seconds. Certifying that WAAS is safe for instrument flight rules (IFR) (i.e. flying in the clouds) requires proving there is only an extremely small probability that an error exceeding the requirements for accuracy will go undetected. Specifically, the probability is stated as 1×10−7, and is equivalent to no more than 3 seconds of bad data per year. This provides integrity information equivalent to or better than Receiver Autonomous Integrity Monitoring (RAIM). Availability Availability is the probability that a navigation system meets the accuracy and integrity requirements. Before the advent of WAAS, GPS specifications allowed for system unavailability for as much as a total time of four days per year (99% availability). 
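As a quick, illustrative arithmetic check of the reliability figures quoted here and in the next paragraph (it adds no new information beyond the stated percentages):

# Back-of-envelope check of the integrity and availability figures quoted in this section.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# Integrity: an undetected-error probability of 1e-7 corresponds to ~3 seconds per year.
print(1e-7 * SECONDS_PER_YEAR)          # ~3.16 s of potentially misleading data per year

# Availability: 99% availability allows ~3.65 days of outage per year ("four days"),
# while a 99.999% requirement allows only ~5.3 minutes per year.
print(0.01 * SECONDS_PER_YEAR / 86400)  # ~3.65 days
print(1e-5 * SECONDS_PER_YEAR / 60)     # ~5.26 minutes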
The WAAS specification mandates availability as 99.999% (five nines) throughout the service area, equivalent to a downtime of just over 5 minutes per year. Operation WAAS is composed of three main segments: the ground segment, space segment, and user segment. Ground segment The ground segment is composed of multiple Wide-area Reference Stations (WRS). These precisely surveyed ground stations monitor and collect information on the GPS signals, then send their data to three Wide-area Master Stations (WMS) using a terrestrial communications network. The reference stations also monitor signals from WAAS geostationary satellites, providing integrity information regarding them as well. As of October 2007 there were 38 WRSs: twenty in the contiguous United States (CONUS), seven in Alaska, one in Hawaii, one in Puerto Rico, five in Mexico, and four in Canada. Using the data from the WRS sites, the WMSs generate two different sets of corrections: fast and slow. The fast corrections are for errors which are changing rapidly and primarily concern the GPS satellites' instantaneous positions and clock errors. These corrections are considered user position-independent, which means they can be applied instantly by any receiver inside the WAAS broadcast footprint. The slow corrections include long-term ephemeric and clock error estimates, as well as ionospheric delay information. WAAS supplies delay corrections for a number of points (organized in a grid pattern) across the WAAS service area (see User Segment, below, to understand how these corrections are used). Once these correction messages are generated, the WMSs send them to two pairs of Ground Uplink Stations (GUS), which then transmit to satellites in the Space segment for rebroadcast to the User segment. Reference stations Each FAA Air Route Traffic Control Center in the 50 states has a WAAS reference station, except for Indianapolis. There are also stations positioned in Canada, Mexico and Puerto Rico. See List of WAAS reference stations for the coordinates of the individual receiving antennas. Space segment The space segment consists of multiple communication satellites which broadcast the correction messages generated by the WAAS Master Stations for reception by the user segment. The satellites also broadcast the same type of range information as normal GPS satellites, effectively increasing the number of satellites available for a position fix. The space segment currently consists of three commercial satellites: Eutelsat 117 West B, SES-15, and Galaxy 30. Satellite History The original two WAAS satellites, named Pacific Ocean Region (POR) and Atlantic Ocean Region-West (AOR-W), were leased space on Inmarsat III satellites. These satellites ceased WAAS transmissions on July 31, 2007. With the end of the Inmarsat lease approaching, two new satellites (Galaxy 15 and Anik F1R) were launched in late 2005. Galaxy 15 is a PanAmSat and Anik F1R is a Telesat. As with the previous satellites, these are leased services under the FAA's Geostationary Satellite Communications Control Segment contract with Lockheed Martin for WAAS geostationary satellite leased services, who were contracted to provide up to three satellites through the year 2016. A third satellite was later added to the system. From March to November 2010, the FAA broadcast a WAAS test signal on a leased transponder on the Inmarsat-4 F3 satellite. The test signal was not usable for navigation, but could be received and was reported with the identification numbers PRN 133 (NMEA #46). 
In November 2010, the signal was certified as operational and made available for navigation. Following in orbit testing, Eutelsat 117 West B, broadcasting signal on PRN 131 (NMEA #44), was certified as operational and made available for navigation on March 27, 2018. The SES 15 satellite was launched on May 18, 2017, and following an in-orbit test of several months, was set operational on July 15, 2019. In 2018, a contract was awarded to place a WAAS L-band payload on the Galaxy 30 satellite. The satellite was successfully launched on August 15, 2020, and the WAAS transmissions were set operational on April 26, 2022, re-using PRN 135 (NMEA #48). After approximately three weeks with four active WAAS satellites, operational WAAS transmissions on Anik F1-R were ended on May 17, 2022. In the table above, PRN is the satellite's actual Pseudo-Random Number code. NMEA is the satellite number sent by some receivers when outputting satellite information (NMEA = PRN - 87). User segment The user segment is the GPS and WAAS receiver, which uses the information broadcast from each GPS satellite to determine its location and the current time, and receives the WAAS corrections from the Space segment. The two types of correction messages received (fast and slow) are used in different ways. The GPS receiver can immediately apply the fast type of correction data, which includes the corrected satellite position and clock data, and determines its current location using normal GPS calculations. Once an approximate position fix is obtained the receiver begins to use the slow corrections to improve its accuracy. Among the slow correction data is the ionospheric delay. As the GPS signal travels from the satellite to the receiver, it passes through the ionosphere. The receiver calculates the location where the signal pierced the ionosphere and, if it has received an ionospheric delay value for that location, corrects for the error the ionosphere created. While the slow data can be updated every minute if necessary, ephemeris errors and ionosphere errors do not change this frequently, so they are only updated every two minutes and are considered valid for up to six minutes. History and development The WAAS was jointly developed by the United States Department of Transportation (DOT) and the Federal Aviation Administration (FAA) as part of the Federal Radionavigation Program (DOT-VNTSC-RSPA-95-1/DOD-4650.5), beginning in 1994, to provide performance comparable to category 1 instrument landing system (ILS) for all aircraft possessing the appropriately certified equipment. Without WAAS, ionospheric disturbances, clock drift, and satellite orbit errors create too much error and uncertainty in the GPS signal to meet the requirements for a precision approach (see GPS sources of error). A precision approach includes altitude information and provides course guidance, distance from the runway, and elevation information at all points along the approach, usually down to lower altitudes and weather minimums than non-precision approaches. Prior to the WAAS, the U.S. National Airspace System (NAS) did not have the ability to provide lateral and vertical navigation for precision approaches for all users at all locations. The traditional system for precision approaches is the instrument landing system (ILS), which used a series of radio transmitters each broadcasting a single signal to the aircraft. 
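The way a receiver turns the broadcast grid of ionospheric delays into a correction at its own pierce point can be pictured as an interpolation between the surrounding grid points. The sketch below is a simplified illustration, not the actual WAAS message processing; the 5-degree grid spacing, the delay values and the pierce-point coordinates are made-up example numbers, and a real receiver would also apply an obliquity (slant) factor to the interpolated vertical delay:

# Rough sketch (not the WAAS message format): estimating the ionospheric delay at a
# receiver's pierce point by bilinearly interpolating the four surrounding grid points.
# Grid spacing, point values and the pierce-point location are hypothetical example numbers.

def interpolate_delay(lat, lon, grid):
    """grid maps (lat, lon) of the four surrounding 5x5-degree nodes to vertical delay in metres."""
    (lat0, lon0) = min(grid)                  # south-west corner of the enclosing cell
    x = (lon - lon0) / 5.0                    # fractional position inside the cell
    y = (lat - lat0) / 5.0
    return ((1 - x) * (1 - y) * grid[(lat0,     lon0)]
            +      x * (1 - y) * grid[(lat0,     lon0 + 5)]
            + (1 - x) *      y * grid[(lat0 + 5, lon0)]
            +      x *      y * grid[(lat0 + 5, lon0 + 5)])

example_grid = {(35, -120): 2.1, (35, -115): 2.4,   # vertical delays in metres at
                (40, -120): 1.8, (40, -115): 2.0}   # four hypothetical grid points
print(round(interpolate_delay(37.0, -117.5, example_grid), 2))   # ~2.11 m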
This complex series of radios needs to be installed at every runway end, some offsite, along a line extended from the runway centerline, making the implementation of a precision approach both difficult and very expensive. The ILS system is composed of 180 different transmitting antennas at each point built. For some time the FAA and NASA developed a much improved system, the microwave landing system (MLS). The entire MLS system for a particular approach was isolated in one or two boxes located beside the runway, dramatically reducing the cost of implementation. MLS also offered a number of practical advantages that eased traffic considerations, both for aircraft and radio channels. Unfortunately, MLS would also require every airport and aircraft to upgrade their equipment. During the development of MLS, consumer GPS receivers of various quality started appearing. GPS offered a huge number of advantages to the pilot, combining all of an aircraft's long-distance navigation systems into a single easy-to-use system, often small enough to be hand held. Deploying an aircraft navigation system based on GPS was largely a problem of developing new techniques and standards, as opposed to new equipment. The FAA started planning to shut down their existing long-distance systems (VOR and NDBs) in favor of GPS. This left the problem of approaches, however. GPS is simply not accurate enough to replace ILS systems. Typical accuracy is about , whereas even a "CAT I" approach, the least demanding, requires a vertical accuracy of . This inaccuracy in GPS is mostly due to large "billows" in the ionosphere, which slow the radio signal from the satellites by a random amount. Since GPS relies on timing the signals to measure distances, this slowing of the signal makes the satellite appear farther away. The billows move slowly, and can be characterized using a variety of methods from the ground, or by examining the GPS signals themselves. By broadcasting this information to GPS receivers every minute or so, this source of error can be significantly reduced. This led to the concept of Differential GPS, which used separate radio systems to broadcast the correction signal to receivers. Aircraft could then install a receiver which would be plugged into the GPS unit, the signal being broadcast on a variety of frequencies for different users (FM radio for cars, longwave for ships, etc.). Broadcasters of the required power generally cluster around larger cities, making such DGPS systems less useful for wide-area navigation. Additionally, most radio signals are either line-of-sight, or can be distorted by the ground, which made DGPS difficult to use as a precision approach system or when flying low for other reasons. The FAA considered systems that could allow the same correction signals to be broadcast over a much wider area, such as from a satellite, leading directly to WAAS. Since a GPS unit already consists of a satellite receiver, it made much more sense to send out the correction signals on the same frequencies used by GPS units, than to use an entirely separate system and thereby double the probability of failure. In addition to lowering implementation costs by "piggybacking" on a planned satellite launch, this also allowed the signal to be broadcast from geostationary orbit, which meant a small number of satellites could cover all of North America. On July 10, 2003, the WAAS signal was activated for general aviation, covering 95% of the United States, and portions of Alaska offering minimums. 
On January 17, 2008, Alabama-based Hickok & Associates became the first designer of helicopter WAAS with Localizer Performance (LP) and Localizer Performance with Vertical guidance (LPV) approaches, and the only entity with FAA-approved criteria (which even FAA has yet to develop). This helicopter WAAS criteria offers as low as 250 foot minimums and decreased visibility requirements to enable missions previously not possible. On April 1, 2009, FAA AFS-400 approved the first three helicopter WAAS GPS approach procedures for Hickok & Associates' customer California Shock/Trauma Air Rescue (CALSTAR). Since then they have designed many approved WAAS helicopter approaches for various EMS hospitals and air providers, within the United States as well as in other countries and continents. On December 30, 2009, Seattle-based Horizon Air flew the first scheduled-passenger service flight using WAAS with LPV on flight 2014, a Portland to Seattle flight operated by a Bombardier Q400 with a WAAS FMS from Universal Avionics. The airline, in partnership with the FAA, will outfit seven Q400-aircraft with WAAS and share flight data to better determine the suitability of WAAS in scheduled air service applications. Timeline Wide-Area Augmentation System (WAAS) Timeline Comparison of accuracy Benefits WAAS addresses all of the "navigation problem", providing highly accurate positioning that is extremely easy to use, for the cost of a single receiver installed on the aircraft. Ground- and space-based infrastructure is relatively limited, and no on-airport system is needed. WAAS allows a precision approach to be published for any airport, for the cost of developing the procedures and publishing the new approach plates. This means that almost any airport can have a precision approach and the cost of implementation is drastically reduced. Additionally WAAS works just as well between airports. This allows the aircraft to fly directly from one airport to another, as opposed to following routes based on ground-based signals. This can cut route distances considerably in some cases, saving both time and fuel. In addition, because of its ability to provide information on the accuracy of each GPS satellite's information, aircraft equipped with WAAS are permitted to fly at lower en-route altitudes than was possible with ground-based systems, which were often blocked by terrain of varying elevation. This enables pilots to safely fly at lower altitudes, not having to rely on ground-based systems. For unpressurized aircraft, this conserves oxygen and enhances safety. The above benefits create not only convenience, but also have the potential to generate significant cost savings. The cost to provide the WAAS signal, serving all 5,400 public use airports, is just under US$50 million per year. In comparison, the current ground based systems such as the Instrument Landing System (ILS), installed at only 600 airports, cost US$82 million in annual maintenance. Without ground navigation hardware to purchase, the total cost of publishing a runway's WAAS approach is approximately US$50,000; compared to the $1,000,000 to $1,500,000 cost to install an ILS radio system. Drawbacks and limitations For all its benefits, WAAS is not without drawbacks and critical limitations: Space weather. All man-made satellite systems are subject to space weather and space debris threats. 
For example, a solar super-storm event composed of an extremely large and fast earthbound Coronal Mass Ejection (CME) could disable the geosynchronous or GPS satellite elements of WAAS. The broadcasting satellites are geostationary, which causes them to be less than 10° above the horizon for locations north of 71.4° latitude. This means aircraft in areas of Alaska or northern Canada may have difficulty maintaining a lock on the WAAS signal. To calculate an ionospheric grid point's delay, that point must be located between a satellite and a reference station. The low number of satellites and ground stations limit the number of points which can be calculated. Aircraft conducting WAAS approaches use certified GPS receivers, which are much more expensive than non-certified units. In 2024, Garmin's least expensive certified receiver, the GPS 175, had a suggested retail price of US$5,895. WAAS is not capable of the accuracies required for Category II or III ILS approaches. Thus, WAAS is not a sole-solution and either existing ILS equipment must be maintained or it must be replaced by new systems, such as the Local Area Augmentation System (LAAS). WAAS Localizer Performance with Vertical guidance (LPV) approaches with 200-foot minimums (LPV-200) will not be published for airports without medium intensity lighting, precision runway markings and a parallel taxiway. Smaller airports, which currently may not have these features, would have to upgrade their facilities or require pilots to use higher minimums. As precision increases and error approaches zero, the navigation paradox states that there is an increased collision risk, as the likelihood of two craft occupying the same space on the shortest distance line between two navigational points has increased. Future of WAAS Improvement to aviation operations In 2007, WAAS vertical guidance was projected to be available nearly all the time (greater than 99%), and its coverage encompasses the full continental U.S., most of Alaska, northern Mexico, and southern Canada. At that time, the accuracy of WAAS would meet or exceed the requirements for Category 1 ILS approaches, namely, three-dimensional position information down to 200 feet (60 m) above touchdown zone elevation. Software improvements Software improvements, to be implemented by September 2008, significantly improve signal availability of vertical guidance throughout the CONUS and Alaska. Area covered by the 95% available LPV solution in Alaska improves from 62% to 86%. And in the CONUS, the 100% availability LPV-200 coverage rises from 48% to 84%, with 100% coverage of the LPV solution. Space segment upgrades Both Galaxy XV (PRN #135) and Anik F1R (PRN #138) contain an L1 & L5 GPS payload. This means they will potentially be usable with the L5 modernized GPS signals when the new signals and receivers become available. With L5, avionics will be able to use a combination of signals to provide the most accurate service possible, thereby increasing availability of the service. These avionics systems will use ionospheric corrections broadcast by WAAS, or self-generated onboard dual frequency corrections, depending on which one is more accurate. 
See also Satellite-based augmentation system (SBAS) EGNOS—the European operational SBAS MSAS—the Japanese operational SBAS CDGPS Canadian Differential GPS Local Area Augmentation System (LAAS) Joint Precision Approach and Landing System (JPALS) Distance measuring equipment (DME) Instrument flight rules (IFR) Instrument landing system (ILS) Localizer performance with vertical guidance (LPV) Long-range radio navigation (LORAN) Microwave landing system (MLS) Non-directional beacon (NDB) Tactical air navigation system (TACAN) Transponder landing system (TLS) VHF omnidirectional range (VOR) References U.S. Department Of Transportation & Federal Aviation Administration, Specification for the Wide Area Augmentation System (WAAS) External links FAA WJHTC's Real-Time Interactive WAAS Performance Display FAA's WAAS program Garmin's What is WAAS? US Government's 2005 Federal Radionavigation Plan (FRP) WAAS coverage in Canada Global Positioning System 2003 in aviation Navigational aids Satellite-based augmentation systems
Wide Area Augmentation System
[ "Technology", "Engineering" ]
4,330
[ "Global Positioning System", "Aerospace engineering", "Wireless locating", "Aircraft instruments" ]
862,361
https://en.wikipedia.org/wiki/H%C3%BCckel%27s%20rule
In organic chemistry, Hückel's rule predicts that a planar ring molecule will have aromatic properties if it has 4n + 2 π-electrons, where n is a non-negative integer. The quantum mechanical basis for its formulation was first worked out by physical chemist Erich Hückel in 1931. The succinct expression as the 4n + 2 rule has been attributed to W. v. E. Doering (1951), although several authors were using this form at around the same time. In agreement with the Möbius–Hückel concept, a cyclic ring molecule follows Hückel's rule when the number of its π-electrons equals 4n + 2, although clearcut examples are really only established for values of n = 0 up to about n = 6. Hückel's rule was originally based on calculations using the Hückel method, although it can also be justified by considering a particle in a ring system, by the LCAO method and by the Pariser–Parr–Pople method. Aromatic compounds are more stable than theoretically predicted using hydrogenation data of simple alkenes; the additional stability is due to the delocalized cloud of electrons, called resonance energy. Criteria for simple aromatics are: the molecule must have 4n + 2 (a so-called "Hückel number") π electrons (2, 6, 10, ...) in a conjugated system of p orbitals (usually on sp2-hybridized atoms, but sometimes sp-hybridized); the molecule must be (close to) planar (p orbitals must be roughly parallel and able to interact, implicit in the requirement for conjugation); the molecule must be cyclic (as opposed to linear); the molecule must have a continuous ring of p atomic orbitals (there cannot be any sp3 atoms in the ring, nor do exocyclic p orbitals count). Monocyclic hydrocarbons The rule can be used to understand the stability of completely conjugated monocyclic hydrocarbons (known as annulenes) as well as their cations and anions. The best-known example is benzene (C6H6) with a conjugated system of six π electrons, which equals 4n + 2 for n = 1. The molecule undergoes substitution reactions which preserve the six π electron system rather than addition reactions which would destroy it. The stability of this π electron system is referred to as aromaticity. Still, in most cases, catalysts are necessary for substitution reactions to occur. The cyclopentadienyl anion () with six π electrons is planar and readily generated from the unusually acidic cyclopentadiene (pKa 16), while the corresponding cation with four π electrons is destabilized, being harder to generate than a typical acyclic pentadienyl cation, and is thought to be antiaromatic. Similarly, the tropylium cation (), also with six π electrons, is so stable compared to a typical carbocation that its salts can be crystallized from ethanol. On the other hand, in contrast to cyclopentadiene, cycloheptatriene is not particularly acidic (pKa 37) and the anion is considered nonaromatic. The cyclopropenyl cation () and the triboracyclopropenyl dianion () are considered examples of a two π electron system, which are stabilized relative to the open system, despite the angle strain imposed by the 60° bond angles. Planar ring molecules with 4n π electrons do not obey Hückel's rule, and theory predicts that they are less stable and have triplet ground states with two unpaired electrons. In practice, such molecules distort from planar regular polygons. Cyclobutadiene (C4H4) with four π electrons is stable only at temperatures below 35 K and is rectangular rather than square. Cyclooctatetraene (C8H8) with eight π electrons has a nonplanar "tub" structure.
However, the dianion (the cyclooctatetraenide anion), with ten π electrons, obeys the 4n + 2 rule for n = 2 and is planar, while the 1,4-dimethyl derivative of the dication, with six π electrons, is also believed to be planar and aromatic. The cyclononatetraenide anion () is the largest all-cis monocyclic annulene/annulenyl system that is planar and aromatic. Its bond angles (140°) differ significantly from the ideal angles of 120°. Larger rings possess trans bonds to avoid the increased angle strain. However, 10- to 14-membered systems all experience considerable transannular strain. Thus, these systems are either nonaromatic or experience modest aromaticity. This changes when we get to [18]annulene, with (4×4) + 2 = 18 π electrons, which is large enough to accommodate six interior hydrogen atoms in a planar configuration (3 cis double bonds and 6 trans double bonds). Thermodynamic stabilization, NMR chemical shifts, and nearly equal bond lengths all point to considerable aromaticity for [18]annulene. The (4n+2) rule is a consequence of the degeneracy of the π orbitals in cyclic conjugated hydrocarbon molecules. As predicted by Hückel molecular orbital theory, the lowest π orbital in such molecules is non-degenerate and the higher orbitals form degenerate pairs. Benzene's lowest π orbital is non-degenerate and can hold 2 electrons, and its next 2 π orbitals form a degenerate pair which can hold 4 electrons. Its 6 π electrons therefore form a stable closed shell in a regular hexagonal molecule. However, for cyclobutadiene or cyclooctatetraene with regular geometries, the highest occupied molecular orbital pair is occupied by only 2 π electrons, forming a less stable open shell. The molecules therefore stabilize by geometrical distortions which separate the degenerate orbital energies so that the last two electrons occupy the same orbital, but the molecule as a whole is less stable in the presence of such a distortion. Heteroatoms Hückel's rule can also be applied to molecules containing other atoms such as nitrogen or oxygen. For example, pyridine (C5H5N) has a ring structure similar to benzene, except that one -CH- group is replaced by a nitrogen atom with no hydrogen. There are still six π electrons and the pyridine molecule is also aromatic and known for its stability. Polycyclic hydrocarbons Hückel's rule is not valid for many compounds containing more than one ring. For example, pyrene and trans-bicalicene contain 16 conjugated electrons (8 bonds), and coronene contains 24 conjugated electrons (12 bonds). Both of these polycyclic molecules are aromatic, even though they fail the 4n + 2 rule. Indeed, Hückel's rule can only be theoretically justified for monocyclic systems. Three-dimensional rule In 2000, Andreas Hirsch and coworkers in Erlangen, Germany, formulated a rule to determine when a spherical compound will be aromatic. They found that closed-shell compounds were aromatic when they had 2(n + 1)2 π-electrons, for instance the buckminsterfullerene species C6010+ (the C60 cation with 50 π electrons, corresponding to n = 4). In 2011, Jordi Poater and Miquel Solà expanded the rule to open-shell spherical compounds, finding they were aromatic when they had 2n2 + 2n + 1 π-electrons, with spin S = n + 1/2, corresponding to a half-filled last energy level with the same spin. For instance, C601– (the C60 monoanion, with 61 π electrons) is also observed to be aromatic with a spin of 11/2. See also Baird's rule (for triplet states) References Physical organic chemistry Rules of thumb
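A small numerical sketch (not from the article) of the frontier-orbital argument above: the Hückel energies of a regular N-membered carbon ring are E_k = α + 2β·cos(2πk/N) for k = 0, …, N−1, where α and β are the usual Coulomb and resonance integrals (β < 0). Listing the levels with their degeneracies for benzene and for square cyclobutadiene reproduces the closed-shell versus open-shell picture described above:

# Hückel energies of a regular N-membered ring: E_k = alpha + 2*beta*cos(2*pi*k/N).
# alpha and beta are left symbolic by reporting only the coefficient of beta.
import math
from collections import Counter

def huckel_levels(n_atoms):
    """Return the 2*cos(2*pi*k/N) coefficients of beta, rounded, with their degeneracies."""
    coeffs = [round(2 * math.cos(2 * math.pi * k / n_atoms), 3) for k in range(n_atoms)]
    return sorted(Counter(coeffs).items(), reverse=True)   # most bonding (largest coefficient) first

print(huckel_levels(6))  # benzene:        [(2.0, 1), (1.0, 2), (-1.0, 2), (-2.0, 1)]
print(huckel_levels(4))  # cyclobutadiene: [(2.0, 1), (0.0, 2), (-2.0, 1)]
# Benzene's 6 pi electrons exactly fill the non-degenerate level plus one degenerate pair
# (a closed shell); square cyclobutadiene's 4 electrons leave a half-filled degenerate pair.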
Hückel's rule
[ "Chemistry" ]
1,692
[ "Physical organic chemistry" ]
862,494
https://en.wikipedia.org/wiki/Gamma%20camera
A gamma camera (γ-camera), also called a scintillation camera or Anger camera, is a device used to image gamma radiation emitting radioisotopes, a technique known as scintigraphy. The applications of scintigraphy include early drug development and nuclear medical imaging to view and analyse images of the human body or the distribution of medically injected, inhaled, or ingested radionuclides emitting gamma rays. Imaging techniques Scintigraphy ("scint") is the use of gamma cameras to capture emitted radiation from internal radioisotopes to create two-dimensional images. SPECT (single photon emission computed tomography) imaging, as used in nuclear cardiac stress testing, is performed using gamma cameras. Usually one, two or three detectors or heads, are slowly rotated around the patient. Construction A gamma camera consists of one or more flat crystal planes (or detectors) optically coupled to an array of photomultiplier tubes in an assembly known as a "head", mounted on a gantry. The gantry is connected to a computer system that both controls the operation of the camera and acquires and stores images. The construction of a gamma camera is sometimes known as a compartmental radiation construction. The system accumulates events, or counts, of gamma photons that are absorbed by the crystal in the camera. Usually a large flat crystal of sodium iodide with thallium doping NaI(Tl) in a light-sealed housing is used. The highly efficient capture method of this combination for detecting gamma rays was discovered in 1944 by Sir Samuel Curran whilst he was working on the Manhattan Project at the University of California at Berkeley. Nobel prize-winning physicist Robert Hofstadter also worked on the technique in 1948. The crystal scintillates in response to incident gamma radiation. When a gamma photon leaves the patient (who has been injected with a radioactive pharmaceutical), it knocks an electron loose from an iodine atom in the crystal, and a faint flash of light is produced when the dislocated electron again finds a minimal energy state. The initial phenomenon of the excited electron is similar to the photoelectric effect and (particularly with gamma rays) the Compton effect. After the flash of light is produced, it is detected. Photomultiplier tubes (PMTs) behind the crystal detect the fluorescent flashes (events) and a computer sums the counts. The computer reconstructs and displays a two dimensional image of the relative spatial count density on a monitor. This reconstructed image reflects the distribution and relative concentration of radioactive tracer elements present in the organs and tissues imaged. Signal processing Hal Anger developed the first gamma camera in 1957. His original design, frequently called the Anger camera, is still widely used today. The Anger camera uses sets of vacuum tube photomultipliers (PMT). Generally each tube has an exposed face of about in diameter and the tubes are arranged in hexagon configurations, behind the absorbing crystal. The electronic circuit connecting the photodetectors is wired so as to reflect the relative coincidence of light fluorescence as sensed by the members of the hexagon detector array. All the PMTs simultaneously detect the (presumed) same flash of light to varying degrees, depending on their position from the actual individual event. Thus the spatial location of each single flash of fluorescence is reflected as a pattern of voltages within the interconnecting circuit array. 
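A minimal sketch (not from the article) of the weighted-centroid position estimate, often called Anger logic, that the next paragraph describes: the event position is taken as the mean of the photomultiplier-tube positions weighted by each tube's signal, and the summed signal tracks the deposited energy. The tube coordinates and signal values are made-up example numbers:

# Illustration of Anger-logic position estimation from a set of PMT signals.
# Tube layout and signal values are hypothetical; real cameras use many more tubes.

def anger_position(pmts):
    """pmts: list of ((x_cm, y_cm), signal) pairs. Returns (x, y, total_signal)."""
    total = sum(signal for _, signal in pmts)
    x = sum(xy[0] * s for xy, s in pmts) / total
    y = sum(xy[1] * s for xy, s in pmts) / total
    return x, y, total    # total is proportional to the deposited gamma-ray energy

event = [((0.0, 0.0), 120.0), ((5.0, 0.0), 60.0), ((0.0, 5.0), 40.0), ((5.0, 5.0), 20.0)]
print(anger_position(event))   # roughly (1.67, 1.25, 240.0): the flash was nearest the first tube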
The location of the interaction between the gamma ray and the crystal can be determined by processing the voltage signals from the photomultipliers; in simple terms, the location can be found by weighting the position of each photomultiplier tube by the strength of its signal, and then calculating a mean position from the weighted positions. The total sum of the voltages from each photomultiplier, measured by a pulse height analyzer is proportional to the energy of the gamma ray interaction, thus allowing discrimination between different isotopes or between scattered and direct photons. Spatial resolution In order to obtain spatial information about the gamma-ray emissions from an imaging subject (e.g. a person's heart muscle cells which have absorbed an intravenous injected radioactive, usually thallium-201 or technetium-99m, medicinal imaging agent) a method of correlating the detected photons with their point of origin is required. The conventional method is to place a collimator over the detection crystal/PMT array. The collimator consists of a thick sheet of lead, typically thick, with thousands of adjacent holes through it. There are three types of collimators: low energy, medium energy, and high energy collimators. As the collimators transitioned from low energy to high energy, the hole sizes, thickness, and septations between the holes also increased. Given a fixed septal thickness, the collimator resolution decreases with increased efficiency and also increasing distance of the source from the collimator. Pulse-height analyser determines the Full width at half maximum that selects certain photons to contribute to the final image, thus determining the collimator resolution. The individual holes limit photons which can be detected by the crystal to a cone shape; the point of the cone is at the midline center of any given hole and extends from the collimator surface outward. However, the collimator is also one of the sources of blurring within the image; lead does not totally attenuate incident gamma photons, there can be some crosstalk between holes. Unlike a lens, as used in visible light cameras, the collimator attenuates most (>99%) of incident photons and thus greatly limits the sensitivity of the camera system. Large amounts of radiation must be present so as to provide enough exposure for the camera system to detect sufficient scintillation dots to form a picture. Other methods of image localization (pinhole, rotating slat collimator with CZT) have been proposed and tested; however, none have entered widespread routine clinical use. The best current camera system designs can differentiate two separate point sources of gamma photons located at 6 to 12 mm depending on distance from the collimator, the type of collimator and radio-nucleide. Spatial resolution decreases rapidly at increasing distances from the camera face. This limits the spatial accuracy of the computer image: it is a fuzzy image made up of many dots of detected but not precisely located scintillation. This is a major limitation for heart muscle imaging systems; the thickest normal heart muscle in the left ventricle is about 1.2 cm and most of the left ventricle muscle is about 0.8 cm, always moving and much of it beyond 5 cm from the collimator face. To help compensate, better imaging systems limit scintillation counting to a portion of the heart contraction cycle, called gating, however this further limits system sensitivity. See also Nuclear medicine Scintigraphy References Further reading H. Anger. 
A new instrument for mapping gamma-ray emitters. Biology and Medicine Quarterly Report UCRL, 1957, 3653: 38. (University of California Radiation Laboratory, Berkeley) External links Nuclear medicine Image sensors Medical physics American inventions Gamma rays Articles containing video clips
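The Anger position logic described in the signal-processing section above — weighting each photomultiplier tube's position by its signal strength and summing all signals for pulse-height (energy) discrimination — can be illustrated with a minimal sketch. This is an illustration added here, not the circuitry of any actual camera; the tube coordinates and signal values are invented example data.

```python
# Illustrative sketch of Anger-style position and energy estimation.
# The PMT coordinates and per-event signals below are invented example values.

def anger_position(pmt_xy, signals):
    """Estimate the (x, y) of a scintillation event as the signal-weighted
    mean of the photomultiplier-tube positions; the summed signal is
    proportional to the energy deposited by the gamma ray."""
    total = sum(signals)
    x = sum(px * s for (px, _), s in zip(pmt_xy, signals)) / total
    y = sum(py * s for (_, py), s in zip(pmt_xy, signals)) / total
    return (x, y), total

# A hexagonal arrangement of 7 tubes (positions in cm) and the light each
# tube saw for one flash (arbitrary units).
tubes = [(0, 0), (5, 0), (-5, 0), (2.5, 4.3), (-2.5, 4.3), (2.5, -4.3), (-2.5, -4.3)]
signal = [40, 22, 6, 18, 5, 15, 4]

pos, energy = anger_position(tubes, signal)
print(pos, energy)  # the event is localized toward the tubes that saw the most light

# A pulse-height window (energy discrimination) would then accept the event
# only if `energy` falls inside the photopeak window of the isotope in use.
```

A real camera performs this weighting in analog or digital electronics for millions of events, but the centroid-plus-sum idea is the same.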
Gamma camera
[ "Physics" ]
1,481
[ "Applied and interdisciplinary physics", "Spectrum (physical sciences)", "Electromagnetic spectrum", "Gamma rays", "Medical physics" ]
862,694
https://en.wikipedia.org/wiki/Photometer
A photometer is an instrument that measures the strength of electromagnetic radiation in the range from ultraviolet to infrared and including the visible spectrum. Most photometers convert light into an electric current using a photoresistor, photodiode, or photomultiplier. Photometers measure: Illuminance Irradiance Light absorption Scattering of light Reflection of light Fluorescence Phosphorescence Luminescence Historically, photometry was done by estimation, comparing the luminous flux of a source with a standard source. By the 19th century, common photometers included Rumford's photometer, which compared the depths of shadows cast by different light sources, and Ritchie's photometer, which relied on equal illumination of surfaces. Another type was based on the extinction of shadows. Modern photometers utilize photoresistors, photodiodes or photomultipliers to detect light. Some models employ photon counting, measuring light by counting individual photons. They are especially useful in areas where the irradiance is low. Photometers have wide-ranging applications including photography, where they determine the correct exposure, and science, where they are used in absorption spectroscopy to calculate the concentration of substances in a solution, infrared spectroscopy to study the structure of substances, and atomic absorption spectroscopy to determine the concentration of metals in a solution. History Before electronic light sensitive elements were developed, photometry was done by estimation by the eye. The relative luminous flux of a source was compared with a standard source. The photometer is placed such that the illuminance from the source being investigated is equal to the standard source, as the human eye can judge equal illuminance. The relative luminous fluxes can then be calculated as the illuminance decreases proportionally to the inverse square of distance. A standard example of such a photometer consists of a piece of paper with an oil spot on it that makes the paper slightly more transparent. When the spot is not visible from either side, the illuminance from the two sides is equal. By 1861, three types were in common use. These were Rumford's photometer, Ritchie's photometer, and photometers that used the extinction of shadows, which was considered to be the most precise. Rumford's photometer Rumford's photometer (also called a shadow photometer) depended on the principle that a brighter light would cast a deeper shadow. The two lights to be compared were used to cast a shadow onto paper. If the shadows were of the same depth, the difference in distance of the lights would indicate the difference in intensity (e.g. a light twice as far would be four times the intensity). Ritchie's photometer Ritchie's photometer depends upon equal illumination of surfaces. It consists of a box (a,b) six or eight inches long, and one in width and depth. In the middle, a wedge of wood (f,e,g) was angled upwards and covered with white paper. The user's eye looked through a tube (d) at the top of a box. The height of the apparatus was also adjustable via the stand (c). The lights to compare were placed at the side of the box (m, n)—which illuminated the paper surfaces so that the eye saw both surfaces at once. By changing the position of the lights, they were made to illuminate both surfaces equally, with the difference in intensity corresponding to the square of the difference in distance. 
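The inverse-square reasoning behind these visual photometers can be made explicit with a small sketch (an illustration added here; the distances are made-up example values): when two sources are positioned so that they produce equal illuminance, their luminous fluxes are in the ratio of the squares of their distances.

```python
# Sketch of the inverse-square comparison used by visual photometers such as
# Rumford's and Ritchie's. The distances are invented example values.

def relative_flux(d_test, d_reference):
    """Flux of the test source relative to the reference, given the distances
    at which the two sources produce equal illuminance."""
    return (d_test / d_reference) ** 2

print(relative_flux(2.0, 1.0))  # 4.0: a source balanced at twice the distance is four times as bright
print(relative_flux(1.5, 1.0))  # 2.25
```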
Method of extinction of shadows This type of photometer depended on the fact that if a light throws the shadow of an opaque object onto a white screen, there is a certain distance that, if a second light is brought there, obliterates all traces of the shadow. Principle of photometers Most photometers detect the light with photoresistors, photodiodes or photomultipliers. To analyze the light, the photometer may measure the light after it has passed through a filter or through a monochromator for determination at defined wavelengths or for analysis of the spectral distribution of the light. Photon counting Some photometers measure light by counting individual photons rather than incoming flux. The operating principles are the same but the results are given in units such as photons/cm2 or photons·cm−2·sr−1 rather than W/cm2 or W·cm−2·sr−1. Due to their individual photon counting nature, these instruments are limited to observations where the irradiance is low. The irradiance is limited by the time resolution of its associated detector readout electronics. With current technology this is in the megahertz range. The maximum irradiance is also limited by the throughput and gain parameters of the detector itself. The light sensing element in photon counting devices in NIR, visible and ultraviolet wavelengths is a photomultiplier to achieve sufficient sensitivity. In airborne and space-based remote sensing such photon counters are used at the upper reaches of the electromagnetic spectrum such as the X-ray to far ultraviolet. This is usually due to the lower radiant intensity of the objects being measured as well as the difficulty of measuring light at higher energies using its particle-like nature as compared to the wavelike nature of light at lower frequencies. Conversely, radiometers are typically used for remote sensing from the visible, infrared though radio frequency range. Photography Photometers are used to determine the correct exposure in photography. In modern cameras, the photometer is usually built in. As the illumination of different parts of the picture varies, advanced photometers measure the light intensity in different parts of the potential picture and use an algorithm to determine the most suitable exposure for the final picture, adapting the algorithm to the type of picture intended (see Metering mode). Historically, a photometer was separate from the camera and known as an exposure meter. The advanced photometers then could be used either to measure the light from the potential picture as a whole, to measure from elements of the picture to ascertain that the most important parts of the picture are optimally exposed, or to measure the incident light to the scene with an integrating adapter. Visible light reflectance photometry A reflectance photometer measures the reflectance of a surface as a function of wavelength. The surface is illuminated with white light, and the reflected light is measured after passing through a monochromator. This type of measurement has mainly practical applications, for instance in the paint industry to characterize the colour of a surface objectively. UV and visible light transmission photometry These are optical instruments for measurement of the absorption of light of a given wavelength (or a given range of wavelengths) of coloured substances in solution. From the light absorption, Beer's law makes it possible to calculate the concentration of the coloured substance in the solution. 
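The Beer's law calculation mentioned above can be sketched as follows. This is a minimal illustration added here, assuming a hypothetical coloured substance; the molar absorptivity and the 1 cm cell path length are example values, not data from the article.

```python
import math

# Sketch of the Beer-Lambert calculation a transmission photometer performs.
# epsilon (molar absorptivity, L mol^-1 cm^-1) and the 1 cm path length are
# assumed example values for a hypothetical coloured substance.

def concentration(i_sample, i_blank, epsilon, path_cm=1.0):
    """Concentration (mol/L) from the intensity transmitted through the sample
    cell and through an identical blank cell, via A = -log10(I/I0) = epsilon*l*c."""
    absorbance = -math.log10(i_sample / i_blank)
    return absorbance / (epsilon * path_cm)

# Example: the sample transmits 25% of the light the blank transmits,
# so A is about 0.60; with epsilon = 6000 the concentration is about 1.0e-4 mol/L.
print(concentration(i_sample=0.25, i_blank=1.0, epsilon=6000.0))
```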
Due to its wide range of application and its reliability and robustness, the photometer has become one of the principal instruments in biochemistry and analytical chemistry. Absorption photometers for work in aqueous solution work in the ultraviolet and visible ranges, from wavelength around 240 nm up to 750 nm. The principle of spectrophotometers and filter photometers is that (as far as possible) monochromatic light is allowed to pass through a container (cell) with optically flat windows containing the solution. It then reaches a light detector, that measures the intensity of the light compared to the intensity after passing through an identical cell with the same solvent but without the coloured substance. From the ratio between the light intensities, knowing the capacity of the coloured substance to absorb light (the absorbency of the coloured substance, or the photon cross section area of the molecules of the coloured substance at a given wavelength), it is possible to calculate the concentration of the substance using Beer's law. Two types of photometers are used: spectrophotometer and filter photometer. In spectrophotometers a monochromator (with prism or with grating) is used to obtain monochromatic light of one defined wavelength. In filter photometers, optical filters are used to give the monochromatic light. Spectrophotometers can thus easily be set to measure the absorbance at different wavelengths, and they can also be used to scan the spectrum of the absorbing substance. They are in this way more flexible than filter photometers, also give a higher optical purity of the analyzing light, and therefore they are preferably used for research purposes. Filter photometers are cheaper, robuster and easier to use and therefore they are used for routine analysis. Photometers for microtiter plates are filter photometers. Infrared light transmission photometry Spectrophotometry in infrared light is mainly used to study structure of substances, as given groups give absorption at defined wavelengths. Measurement in aqueous solution is generally not possible, as water absorbs infrared light strongly in some wavelength ranges. Therefore, infrared spectroscopy is either performed in the gaseous phase (for volatile substances) or with the substances pressed into tablets together with salts that are transparent in the infrared range. Potassium bromide (KBr) is commonly used for this purpose. The substance being tested is thoroughly mixed with specially purified KBr and pressed into a transparent tablet, that is placed in the beam of light. The analysis of the wavelength dependence is generally not done using a monochromator as it is in UV-Vis, but with the use of an interferometer. The interference pattern can be analyzed using a Fourier transform algorithm. In this way, the whole wavelength range can be analyzed simultaneously, saving time, and an interferometer is also less expensive than a monochromator. The light absorbed in the infrared region does not correspond to electronic excitation of the substance studied, but rather to different kinds of vibrational excitation. The vibrational excitations are characteristic of different groups in a molecule, that can in this way be identified. The infrared spectrum typically has very narrow absorption lines, which makes them unsuited for quantitative analysis but gives very detailed information about the molecules. The frequencies of the different modes of vibration varies with isotope, and therefore different isotopes give different peaks. 
This makes it possible also to study the isotopic composition of a sample with infrared spectrophotometry. Atomic absorption photometry Atomic absorption photometers are photometers that measure the light from a very hot flame. The solution to be analyzed is injected into the flame at a constant, known rate. Metals in the solution are present in atomic form in the flame. The monochromatic light in this type of photometer is generated by a discharge lamp where the discharge takes place in a gas with the metal to be determined. The discharge then emits light with wavelengths corresponding to the spectral lines of the metal. A filter may be used to isolate one of the main spectral lines of the metal to be analyzed. The light is absorbed by the metal in the flame, and the absorption is used to determine the concentration of the metal in the original solution. See also Radiometry Raman spectroscopy Photodetector – A transducer capable of accepting an optical signal and producing an electrical signal containing the same information as in the optical signal. References Article partly based on the corresponding article in Swedish Wikipedia Electromagnetic radiation meters Optical instruments Photometry
Photometer
[ "Physics", "Technology", "Engineering" ]
2,316
[ "Measuring instruments", "Spectrum (physical sciences)", "Electromagnetic spectrum", "Electromagnetic radiation meters" ]
862,717
https://en.wikipedia.org/wiki/Projectile%20motion
Projectile motion is a form of motion experienced by an object or particle (a projectile) that is projected in a gravitational field, such as from Earth's surface, and moves along a curved path (a trajectory) under the action of gravity only. In the particular case of projectile motion on Earth, most calculations assume the effects of air resistance are passive. Galileo Galilei showed that the trajectory of a given projectile is parabolic, but the path may also be straight in the special case when the object is thrown directly upward or downward. The study of such motions is called ballistics, and such a trajectory is described as ballistic. The only force of mathematical significance that is actively exerted on the object is gravity, which acts downward, thus imparting to the object a downward acceleration towards Earth's center of mass. Due to the object's inertia, no external force is needed to maintain the horizontal velocity component of the object's motion. Taking other forces into account, such as aerodynamic drag or internal propulsion (such as in a rocket), requires additional analysis. A ballistic missile is a missile only guided during the relatively brief initial powered phase of flight, and whose remaining course is governed by the laws of classical mechanics. Ballistics () is the science of dynamics that deals with the flight, behavior and effects of projectiles, especially bullets, unguided bombs, rockets, or the like; the science or art of designing and accelerating projectiles so as to achieve a desired performance. The elementary equations of ballistics neglect nearly every factor except for initial velocity, the launch angle and a gravitational acceleration assumed constant. Practical solutions of a ballistics problem often require considerations of air resistance, cross winds, target motion, acceleration due to gravity varying with height, and in such problems as launching a rocket from one point on the Earth to another, the horizon's distance vs curvature R of the Earth (its local speed of rotation ). Detailed mathematical solutions of practical problems typically do not have closed-form solutions, and therefore require numerical methods to address. Kinematic quantities In projectile motion, the horizontal motion and the vertical motion are independent of each other; that is, neither motion affects the other. This is the principle of compound motion established by Galileo in 1638, and used by him to prove the parabolic form of projectile motion. A ballistic trajectory is a parabola with homogeneous acceleration, such as in a space ship with constant acceleration in absence of other forces. On Earth the acceleration changes magnitude with altitude as and direction (faraway targets) with latitude/longitude along the trajectory. This causes an elliptic trajectory, which is very close to a parabola on a small scale. However, if an object was thrown and the Earth was suddenly replaced with a black hole of equal mass, it would become obvious that the ballistic trajectory is part of an elliptic orbit around that "black hole", and not a parabola that extends to infinity. At higher speeds the trajectory can also be circular (cosmonautics at LEO?, geostationary satellites at 5 R), parabolic or hyperbolic (unless distorted by other objects like the Moon or the Sun). In this article a homogeneous gravitational acceleration is assumed. Acceleration Since there is acceleration only in the vertical direction, the velocity in the horizontal direction is constant, being equal to . 
The vertical motion of the projectile is the motion of a particle during its free fall. Here the acceleration is constant, being equal to g. The components of the acceleration are: , .* *The y acceleration can also be referred to as the force of the earth on the object(s) of interest. Velocity Let the projectile be launched with an initial velocity , which can be expressed as the sum of horizontal and vertical components as follows: . The components and can be found if the initial launch angle θ is known: , The horizontal component of the velocity of the object remains unchanged throughout the motion. The vertical component of the velocity changes linearly, because the acceleration due to gravity is constant. The accelerations in the x and y directions can be integrated to solve for the components of velocity at any time t, as follows: , . The magnitude of the velocity (under the Pythagorean theorem, also known as the triangle law): . Displacement At any time , the projectile's horizontal and vertical displacement are: , . The magnitude of the displacement is: . Consider the equations, and . If t is eliminated between these two equations the following equation is obtained: Here R is the range of a projectile. Since g, θ, and v0 are constants, the above equation is of the form , in which a and b are constants. This is the equation of a parabola, so the path is parabolic. The axis of the parabola is vertical. If the projectile's position (x,y) and launch angle (θ or α) are known, the initial velocity can be found solving for v0 in the afore-mentioned parabolic equation: . Displacement in polar coordinates The parabolic trajectory of a projectile can also be expressed in polar coordinates instead of Cartesian coordinates. In this case, the position has the general formula . In this equation, the origin is the midpoint of the horizontal range of the projectile, and if the ground is flat, the parabolic arc is plotted in the range . This expression can be obtained by transforming the Cartesian equation as stated above by and . Properties of the trajectory Time of flight or total time of the whole journey The total time t for which the projectile remains in the air is called the time-of-flight. After the flight, the projectile returns to the horizontal axis (x-axis), so . Note that we have neglected air resistance on the projectile. If the starting point is at height y0 with respect to the point of impact, the time of flight is: As above, this expression can be reduced (y0 is 0) to = if θ equals 45°. Time of flight to the target's position As shown above in the Displacement section, the horizontal and vertical velocity of a projectile are independent of each other. Because of this, we can find the time to reach a target using the displacement formula for the horizontal velocity: This equation will give the total time t the projectile must travel for to reach the target's horizontal displacement, neglecting air resistance. Maximum height of projectile The greatest height that the object will reach is known as the peak of the object's motion. The increase in height will last until , that is, . Time to reach the maximum height(h): . 
For the vertical displacement of the maximum height of the projectile: The maximum reachable height is obtained for θ=90°: If the projectile's position (x,y) and launch angle (θ) are known, the maximum height can be found by solving for h in the following equation: Angle of elevation (φ) at the maximum height is given by: Relation between horizontal range and maximum height The relation between the range d on the horizontal plane and the maximum height h reached at is: × . If Maximum distance of projectile The range and the maximum height of the projectile do not depend upon its mass. Hence range and maximum height are equal for all bodies that are thrown with the same velocity and direction. The horizontal range d of the projectile is the horizontal distance it has traveled when it returns to its initial height (). . Time to reach ground: . From the horizontal displacement the maximum distance of the projectile: , so Note that d has its maximum value when which necessarily corresponds to , or . The total horizontal distance (d) traveled. When the surface is flat (initial height of the object is zero), the distance traveled: Thus the maximum distance is obtained if θ is 45 degrees. This distance is: Application of the work energy theorem According to the work-energy theorem the vertical component of velocity is: . These formulae ignore aerodynamic drag and also assume that the landing area is at uniform height 0. Angle of reach The "angle of reach" is the angle (θ) at which a projectile must be launched in order to go a distance d, given the initial velocity v. There are two solutions: (shallow trajectory) and because , (steep trajectory) Angle θ required to hit coordinate (x, y) To hit a target at range x and altitude y when fired from (0,0) and with initial speed v, the required angle(s) of launch θ are: The two roots of the equation correspond to the two possible launch angles, so long as they aren't imaginary, in which case the initial speed is not great enough to reach the point (x,y) selected. This formula allows one to find the angle of launch needed without the restriction of . One can also ask what launch angle allows the lowest possible launch velocity. This occurs when the two solutions above are equal, implying that the quantity under the square root sign is zero. This, tan θ = v2/gx, requires solving a quadratic equation for , and we find This gives If we denote the angle whose tangent is by , then its reciprocal: This implies In other words, the launch should be at the angle halfway between the target and zenith (vector opposite to gravity). Total Path Length of the Trajectory The length of the parabolic arc traced by a projectile, L, given that the height of launch and landing is the same (there is no air resistance), is given by the formula: where is the initial velocity, is the launch angle and is the acceleration due to gravity as a positive value. The expression can be obtained by evaluating the arc length integral for the height-distance parabola between the bounds initial and final displacement (i.e. between 0 and the horizontal range of the projectile) such that: If the time-of-flight is t, Trajectory of a projectile with air resistance Air resistance creates a force that (for symmetric projectiles) is always directed against the direction of motion in the surrounding medium and has a magnitude that depends on the absolute speed: . The speed-dependence of the friction force is linear () at very low speeds (Stokes drag) and quadratic () at large speeds (Newton drag). 
The transition between these behaviours is determined by the Reynolds number, which depends on object speed and size, density and dynamic viscosity of the medium. For Reynolds numbers below about 1 the dependence is linear, above 1000 (turbulent flow) it becomes quadratic. In air, which has a kinematic viscosity around 0.15 cm2/s, this means that the drag force becomes quadratic in v when the product of object speed and diameter is more than about 0.015 m2/s, which is typically the case for projectiles. Stokes drag: (for ) Newton drag: (for ) The free body diagram on the right is for a projectile that experiences air resistance and the effects of gravity. Here, air resistance is assumed to be in the direction opposite of the projectile's velocity: Trajectory of a projectile with Stokes drag Stokes drag, where , only applies at very low speed in air, and is thus not the typical case for projectiles. However, the linear dependence of on causes a very simple differential equation of motion in which the 2 cartesian components become completely independent, and it is thus easier to solve. Here, , and will be used to denote the initial velocity, the velocity along the direction of x and the velocity along the direction of y, respectively. The mass of the projectile will be denoted by m, and . For the derivation only the case where is considered. Again, the projectile is fired from the origin (0,0). The relationships that represent the motion of the particle are derived by Newton's Second Law, both in the x and y directions. In the x direction and in the y direction . This implies that: (1), and (2) Solving (1) is an elementary differential equation, thus the steps leading to a unique solution for vx and, subsequently, x will not be enumerated. Given the initial conditions (where vx0 is understood to be the x component of the initial velocity) and for : (1a) (1b) While (1) is solved much in the same way, (2) is of distinct interest because of its non-homogeneous nature. Hence, we will be extensively solving (2). Note that in this case the initial conditions are used and when . (2) (2a) This first order, linear, non-homogeneous differential equation may be solved a number of ways; however, in this instance, it will be quicker to approach the solution via an integrating factor . (2c) (2d) (2e) (2f) (2g) And by integration we find: (3) Solving for our initial conditions: (2h) (3a) With a bit of algebra to simplify (3a): (3b) The total time of the journey in the presence of air resistance (more specifically, when ) can be calculated by the same strategy as above, namely, we solve the equation . While in the case of zero air resistance this equation can be solved elementarily, here we shall need the Lambert W function. The equation is of the form , and such an equation can be transformed into an equation solvable by the function (see an example of such a transformation here). Some algebra shows that the total time of flight, in closed form, is given as . Trajectory of a projectile with Newton drag The most typical case of air resistance, in case of Reynolds numbers above about 1000, is Newton drag with a drag force proportional to the speed squared, . In air, which has a kinematic viscosity around 0.15 cm2/s, this means that the product of object speed and diameter must be more than about 0.015 m2/s. Unfortunately, the equations of motion can not be easily solved analytically for this case. Therefore, a numerical solution will be examined. 
The following assumptions are made: Constant gravitational acceleration Air resistance is given by the following drag formula, Where: FD is the drag force, c is the drag coefficient, ρ is the air density, A is the cross sectional area of the projectile. Again . Compare this with theory/practice of the ballistic coefficient. Special cases Even though the general case of a projectile with Newton drag cannot be solved analytically, some special cases can. Here we denote the terminal velocity in free-fall as and the characteristic settling time constant . (Dimension of [m/s2], [1/m]) Near-horizontal motion: In case the motion is almost horizontal, , such as a flying bullet. The vertical velocity component has very little influence on the horizontal motion. In this case: The same pattern applies for motion with friction along a line in any direction, when gravity is negligible (relatively small ). It also applies when vertical motion is prevented, such as for a moving car with its engine off. Vertical motion upward: Here and and where is the initial upward velocity at and the initial position is . A projectile cannot rise longer than in the vertical direction, when it reaches the peak (0 m, ypeak) at 0 m/s. Vertical motion downward: With hyperbolic functions After a time at y=0, the projectile reaches almost terminal velocity . Numerical solution A projectile motion with drag can be computed generically by numerical integration of the ordinary differential equation, for instance by applying a reduction to a first-order system. The equation to be solved is . This approach also allows to add the effects of speed-dependent drag coefficient, altitude-dependent air density (in product ) and position-dependent gravity field (when , is linear decrease). Lofted trajectory A special case of a ballistic trajectory for a rocket is a lofted trajectory, a trajectory with an apogee greater than the minimum-energy trajectory to the same range. In other words, the rocket travels higher and by doing so it uses more energy to get to the same landing point. This may be done for various reasons such as increasing distance to the horizon to give greater viewing/communication range or for changing the angle with which a missile will impact on landing. Lofted trajectories are sometimes used in both missile rocketry and in spaceflight. Projectile motion on a planetary scale When a projectile travels a range that is significant compared to the Earth's radius (above ≈100 km), the curvature of the Earth and the non-uniform Earth's gravity have to be considered. This is, for example, the case with spacecrafts and intercontinental missiles. The trajectory then generalizes (without air resistance) from a parabola to a Kepler-ellipse with one focus at the center of the Earth (shown in fig. 3). The projectile motion then follows Kepler's laws of planetary motion. The trajectory's parameters have to be adapted from the values of a uniform gravity field stated above. The Earth radius is taken as R, and g as the standard surface gravity. Let be the launch velocity relative to the first cosmic or escape velocity. Total range d between launch and impact: (where launch angle ) Maximum range of a projectile for optimum launch angle θ=45o:       with , the first cosmic velocity Maximum height of a projectile above the planetary surface: Maximum height of a projectile for vertical launch ():       with , the second cosmic velocity, Time of flight: See also Equations of motion Phugoid Notes References Mechanics
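The numerical approach outlined in the air-resistance sections above can be sketched as follows. This is a minimal illustration added here, not the article's own code: it assumes quadratic (Newton) drag, a simple fixed-step integration rather than any particular solver, and invented values for the projectile's mass, drag coefficient, cross-sectional area and the air density. The standard vacuum range formula is printed alongside as a consistency check.

```python
import math

# Sketch of projectile motion with Newton (quadratic) drag, reduced to a
# first-order system and stepped with a small fixed time step.
# All parameter values below are invented for the example.

g = 9.81          # m/s^2, gravitational acceleration
m = 0.145         # kg, example projectile mass
c_d = 0.47        # drag coefficient (example: a sphere)
rho = 1.225       # kg/m^3, air density at sea level
area = 4.2e-3     # m^2, example cross-sectional area
mu = 0.5 * c_d * rho * area   # so that the drag force magnitude is mu * |v|^2

def simulate(v0, angle_deg, dt=1e-3):
    """Integrate until the projectile returns to launch height; return its range."""
    theta = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    while True:
        speed = math.hypot(vx, vy)
        ax = -(mu / m) * speed * vx          # drag opposes the velocity direction
        ay = -g - (mu / m) * speed * vy
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        if y <= 0.0 and vy < 0.0:
            return x

v0, angle = 40.0, 45.0
vacuum_range = v0**2 * math.sin(math.radians(2 * angle)) / g  # analytic no-drag range
print(vacuum_range, simulate(v0, angle))  # drag shortens the range noticeably
```

The same loop extends naturally to the refinements mentioned above (speed-dependent drag coefficient, altitude-dependent air density, position-dependent gravity) by making `mu` and `g` functions of the current state.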
Projectile motion
[ "Physics", "Technology" ]
3,580
[ "Machines", "Kinematics", "Physical phenomena", "Classical mechanics", "Physical systems", "Motion (physics)", "Mechanics" ]
863,161
https://en.wikipedia.org/wiki/Da%20Vinci%20Project
The da Vinci Project was a privately funded, volunteer-staffed attempt to launch a reusable crewed sub-orbital spacecraft. It was formed in 1996 specifically to be a contender for the Ansari X Prize for the first non-governmental reusable crewed spacecraft. The project was based in Toronto, Ontario, Canada and led by Brian Feeney. The original da Vinci Project is no longer operating. A documentary was filmed throughout much of the project's life from 2000 through post-X Prize roundup footage in 2008. The documentary accumulated some 1000 hours or so of footage. It was a private undertaking by Michel Jones of Riverstone Productions, Toronto, and as of early 2009 was still in a preliminary stage of editing and completion. The project last participated in the X Prize Cup 2005, displaying a mock-up of its Wild Fire MK Vl spacecraft. Spacecraft design The project's design was a rocket-powered spacecraft to be air-launched from a helium balloon at an altitude of about 21 km (65,000 ft). The project scope included design and construction of both the spacecraft and the launching balloon. The chosen design can be described as a crewed rockoon. History and status The project was established in 1996. It is named after Leonardo da Vinci, who, among innumerable other inventions, was the first recorded person to design an aircraft. The project was staffed entirely by volunteers. The project unveiled a mockup of their spacecraft, Wild Fire, on August 5, 2004 at a hangar at Downsview Airport in Toronto. At this point, it was considered a contender for the Ansari X Prize, and Tier One had just given notice of their planned competitive flights. When announcing the unveiling, the da Vinci Project also appealed for funds to fly Wild Fire. An agreement was reached with GoldenPalace.com, and the project subsequently gave the required 60-day notice that they would make the Ansari X Prize competitive flights. GoldenPalace.com, known for its marketing gimmicks, was to place a soccer ball kicked out of the stadium by David Beckham during the 2004 Euro World Cup inside the space craft. The da Vinci Project initially announced that it would fly first on October 2, 2004, launching from Kindersley, Saskatchewan. This was only three days after the first expected X Prize flight, by Scaled Composites, on September 29, 2004. However, on September 23, 2004 the da Vinci project announced that they would not be ready. Scaled Composites won the X Prize on October 4, 2004. Hardware The rocket and support equipment was mostly COTS components with a hybrid propulsion system using nitrous oxide and a spin cast paraffin fuel engine in a re-loadable and expendable cardboard cartridge. The most notable development problem was finding a practical low cost solution to the thermal contraction of the liquid paraffin fuel when it cooled and solidified inside the cartridge inner casing. The capsule used two automotive racing seats and aviation BRS parachute systems and was designed and modeled with finite element software. The nozzle was carbon fiber exterior with a tough, thermally insulating inner coating. The combustion chamber was metallic, although a wound carbon fiber exterior was planned but never completed. The planned tracking system used a four car team with networked laptop computers using a hybrid cellular and shortwave radio with the capability of automatically predicting the landing spot so a support team could converge on the landing spot hand prior to landing. 
The highest forces were predicted to occur at re-entry, peaking at up to approximately 6 g. Development, construction and testing continued in earnest until the second flight of the X Prize on October 4, 2004. Structure The project had a small group of core area leaders and relied heavily on volunteer efforts. It followed a variety of business models including share ownership partners, technology partnerships, employee-style volunteering, and integrator as well as technology/IP aggregator roles. Many of the expensive components were donated by businesses in exchange for recognition on the website homepage, since removed. References External links Daily Planet video: Interview with Brian Feeney about launch delay See also List of private spaceflight companies - A compiled list of private spaceflight companies Human spaceflight programs Private spaceflight companies Ansari X Prize Defunct spaceflight companies
Da Vinci Project
[ "Engineering" ]
863
[ "Space programs", "Human spaceflight programs" ]
21,097,089
https://en.wikipedia.org/wiki/National%20Oceanographic%20Partnership%20Program
The National Oceanographic Partnership Program (NOPP) facilitates interagency and multi-sectoral partnerships to address federal ocean science and technology research priorities. Through this collaboration, federal agencies can leverage resources to invest in priorities that fall between agency missions or are too large for any single agency to support. In its first 20 years, NOPP invested more than $468 million to support over 200 research and education projects with over 600 partners. A comparable amount of in-kind support has been committed by the research and education community. Purpose and function NOPP was established in 1997 through the National Oceanographic Partnership Act (PL 104-201, 10 USC 7901-7903) to improve the nation’s knowledge of the ocean, with the goals of promoting national security, advancing economic development, protecting quality of life, and strengthening science education and communication. NOPP policies are determined by the NOPP Committee, which is composed of Federal agency representatives committed to advancing ocean science and technology initiatives through partnerships. The NOPP Committee establishes NOPP implementation procedures and selects NOPP projects through agency-issued calls for proposals. The Biodiversity Ad-Hoc Working Group and Federal Renewable Ocean Energy Working Group (FROEWG) are subcommittees of the NOPP Committee focused on facilitating interagency communications and collaborations around their respective focus areas. NOPP also supports the Interagency Working Group on Facilities and Infrastructure (IWG-FI) and the Ocean Research Advisory Panel (ORAP). IWG-FI is a subgroup of the National Science and Technology Council’s Subcommittee on Ocean and Science Technology. IWG-FI reviews and evaluates Federal infrastructure regarding facilities (e.g., ships) necessary for conducting ocean research and observation, and is involved in evaluating future needs and planning future investments in ocean-related facilities. ORAP provides independent recommendations to federal agencies that relate to the ocean and is composed of representatives from the National Academies, state governments, academic institutions, and ocean industries. Accomplishments NOPP has significantly impacted the realm of ocean science and technology and results from NOPP research projects have informed both federal ocean policy and federal and regional natural resource management. Through its outreach efforts and support of the National Ocean Sciences Bowl, NOPP has inspired careers in STEM fields. NOPP contributions have increased the volume and efficiency of ocean research and stimulated the development of applied ocean technology. Perhaps the most important role of NOPP has been to increase multi-disciplinary, cross-sector research partnering and strengthen communication about the most pressing research needs within the national ocean science community. In general, NOPP projects fall within the categories of ocean observation systems, marine infrastructure and technology, earth systems modelling, coastal and marine resources, ocean education, and marine life. Projects that exemplify the highest level of success in achieving NOPP goals and working in diverse sector partnerships are awarded the yearly NOPP Excellence in Partnering Award. Some examples of NOPP-funded projects include: U.S. Integrated Ocean Observing System (IOOS): Over 50 NOPP projects have supported IOOS, including developing independent observation systems in every marine region of the U.S. 
and integrating and maintaining the necessary data infrastructure. Argo: The Argo program created a global array of over 3,000 autonomous profiling CTD floats that deliver real-time climate and oceanography data. Marine Biodiversity Observation Network (MBON): MBON has used novel eDNA techniques and ongoing ocean observation systems to evaluate habitat and trophic level diversity and changing ecological states, with the goal of better understanding the relationships between human dimensions, climate and environmental variability, and ecosystem structure. JASON: The JASON project began as a way to connect students and teachers to researchers participating in the 1998 East Pacific Rise expedition. In continuation, the project provides an educational platform to help educators gain access to marine researchers and interactive marine science-related curriculum. National Ocean Sciences Bowl (NOSB): NOSB is an academic competition that engages high school students in ocean science, prepares them for STEM careers, and helps them become environmental stewards. Marine Arctic Ecosystem Study (MARES): MARES aims to better understand the relationship between the physical, biological, chemical, and human systems of the Beaufort Sea, with the goal of advancing prediction capabilities relating to marine life, human use, sea ice, and atmospheric and oceanic processes. Deep Sea Exploration to Advance Research on Coral/Canyon/Cold Seep Habitats (Deep SEARCH): Deep SEARCH is exploring and characterizing the biological communities of deep-sea habitats to improve prediction capabilities of seafloor communities in the Atlantic that are potentially sensitive to natural and anthropogenic disturbances. Atlantic Deepwater Ecosystem Observatory Network (ADEON): ADEOM combines passive and active acoustic information with data from space-based remote sensing, hydrographic sensors, and mobile platforms to better understand how human, biotic, and abiotic components influence the soundscape and ecosystem of the Outer Continental Shelf. Bridge: The Bridge is a web-based research center that connects marine educators, academia, private sector, and government by allowing researchers to disseminate accurate and useful marine science information directly to educators. Partner agencies Marine Mammal Commission National Aeronautics and Space Administration National Science Foundation United States Department of Commerce National Oceanic and Atmospheric Administration United States Department of Defense United States Army Corps of Engineers United States Coast Guard Office of Naval Research United States Department of Energy United States Department of Interior United States Geological Survey Bureau of Ocean Energy Management United States Fish and Wildlife Service Bureau of Safety and Environmental Enforcement United States Department of State United States Environmental Protection Agency See also U.S. Global Change Research Program Joint Ocean Commission Initiative References External links Oceanography
National Oceanographic Partnership Program
[ "Physics", "Environmental_science" ]
1,135
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
21,098,058
https://en.wikipedia.org/wiki/List%20of%20software%20under%20the%20GNU%20AGPL
This is an incomplete list of software that are licensed under the GNU Affero General Public License, in alphabetical order. Akvo platform - data platform for Sustainable Development Goals and international development tracking Alaveteli Ampache - web-based audio/video streaming application Anki - the desktop version is under GNU AGPL, the Android version is under GPLv3.0 Bacula BEdita 3 Open BerkeleyDB - a B-tree NoSQL database developed by Oracle, the open source license is under GNU AGPL Bitwarden password management service server code Booktype - online book production platform CiviCRM the open-source CRM for non-profits with its mobile application CiviMobile. CKAN - data management system Co-Ment - online text annotation and collaborative writing Diaspora Element - Decentralized chat and collaboration software Evercam - Camera management software Feng Office Community Edition FreeJ FreePBX Frei0r Friendica Genenetwork Genode - Microkernel-based operating system framework Ghostscript Gitorious GlobaLeaks GNUnet - Internet-like anonymous peer-to-peer network stack Grafana HumHub - Social network software Instructure Canvas iText Joplin - note-taking and to-do list application Kune - collaborative social network Launchpad Lemmy - social network lichess Loomio Mastodon Mattermost server code MediaGoblin Minds Minio MongoDB - until late 2018, when they switched to SSPL MuPDF - a lightweight and high-quality PDF reader developed by Artifex Software Inc Nextcloud - private cloud software Nightscout OnlyOffice - MS Office compatible free software office suite Opa - a web application programming language OpenBroadcaster OpenBTS OpenCog Open edX Open Library OpenRemote - IoT middleware OvenMediaEngine - low latency streaming server ownCloud PeerTube POV-Ray Proxmox Virtual Environment - a server virtualization management platform Public Whip RapidMiner - data mining suite, old versions are released as AGPL RStudio ScyllaDB - Cassandra-like NoSQL DB Seafile Searx SecureDrop Seeks SequoiaDB Signal backend server code Snap! Sones GraphDB StatusNet stet SugarCRM (community edition) Wiki.js - A wiki application built on Node.js WURFL Zarafa References Free and open-source software licenses Lists of software
List of software under the GNU AGPL
[ "Technology" ]
508
[ "Computing-related lists", "Lists of software" ]
21,099,324
https://en.wikipedia.org/wiki/Krull%E2%80%93Schmidt%20category
In category theory, a branch of mathematics, a Krull–Schmidt category is a generalization of categories in which the Krull–Schmidt theorem holds. They arise, for example, in the study of finite-dimensional modules over an algebra. Definition Let C be an additive category, or more generally an additive $k$-linear category for a commutative ring $k$. We call C a Krull–Schmidt category provided that every object decomposes into a finite direct sum of objects having local endomorphism rings. Equivalently, C has split idempotents and the endomorphism ring of every object is semiperfect. Properties One has the analogue of the Krull–Schmidt theorem in Krull–Schmidt categories. An object is called indecomposable if it is not isomorphic to a direct sum of two nonzero objects. In a Krull–Schmidt category: an object is indecomposable if and only if its endomorphism ring is local; every object is isomorphic to a finite direct sum of indecomposable objects; and if $X_1 \oplus \cdots \oplus X_m \cong Y_1 \oplus \cdots \oplus Y_n$ where the $X_i$ and $Y_j$ are all indecomposable, then $m = n$, and there exists a permutation $\pi$ such that $X_i \cong Y_{\pi(i)}$ for all $i$. One can define the Auslander–Reiten quiver of a Krull–Schmidt category. Examples An abelian category in which every object has finite length. This includes as a special case the category of finite-dimensional modules over an algebra. The category of finitely-generated modules over a finite $R$-algebra, where $R$ is a commutative Noetherian complete local ring. The category of coherent sheaves on a complete variety over an algebraically-closed field. A non-example The category of finitely-generated projective modules over the integers has split idempotents, and every module is isomorphic to a finite direct sum of copies of the regular module, the number of copies being given by the rank. Thus the category has unique decomposition into indecomposables, but it is not Krull–Schmidt since the regular module does not have a local endomorphism ring. See also Quiver Karoubi envelope Notes References Michael Atiyah (1956) On the Krull-Schmidt theorem with application to sheaves, Bull. Soc. Math. France 84, 307–317. Henning Krause, Krull-Remak-Schmidt categories and projective covers, May 2012. Irving Reiner (2003) Maximal orders. Corrected reprint of the 1975 original. With a foreword by M. J. Taylor. London Mathematical Society Monographs. New Series, 28. The Clarendon Press, Oxford University Press, Oxford. Claus Michael Ringel (1984) Tame Algebras and Integral Quadratic Forms, Lecture Notes in Mathematics 1099, Springer-Verlag, 1984. Category theory Representation theory
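To make the non-example above concrete (an illustration added here, not part of the original article), one can check the failing hypothesis directly: the only indecomposable finitely generated projective $\mathbb{Z}$-module is $\mathbb{Z}$ itself, and its endomorphism ring is not local.

```latex
% Why finitely generated projective Z-modules do not form a Krull-Schmidt
% category, even though every such module decomposes uniquely by rank:
\operatorname{End}_{\mathbb{Z}}(\mathbb{Z}) \;\cong\; \mathbb{Z},
\qquad \text{and } \mathbb{Z} \text{ is not a local ring, having infinitely many maximal ideals } (2),\,(3),\,(5),\,\dots
```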
Krull–Schmidt category
[ "Mathematics" ]
582
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Fields of abstract algebra", "Category theory", "Mathematical relations", "Representation theory" ]
21,100,715
https://en.wikipedia.org/wiki/Krull%E2%80%93Schmidt%20theorem
In mathematics, the Krull–Schmidt theorem states that a group subjected to certain finiteness conditions on chains of subgroups can be uniquely written as a finite direct product of indecomposable subgroups. Definitions We say that a group G satisfies the ascending chain condition (ACC) on subgroups if every sequence of subgroups of G, $G_1 \subseteq G_2 \subseteq G_3 \subseteq \cdots$, is eventually constant, i.e., there exists N such that $G_N = G_{N+1} = G_{N+2} = \cdots$. We say that G satisfies the ACC on normal subgroups if every such sequence of normal subgroups of G eventually becomes constant. Likewise, one can define the descending chain condition on (normal) subgroups, by looking at all decreasing sequences of (normal) subgroups $G_1 \supseteq G_2 \supseteq G_3 \supseteq \cdots$. Clearly, all finite groups satisfy both ACC and DCC on subgroups. The infinite cyclic group $\mathbb{Z}$ satisfies ACC but not DCC, since $(2) \supset (2)^2 \supset (2)^3 \supset \cdots$ is an infinite decreasing sequence of subgroups. On the other hand, the $p$-torsion part of $\mathbb{Q}/\mathbb{Z}$ (the quasicyclic p-group) satisfies DCC but not ACC. We say a group G is indecomposable if it cannot be written as a direct product of non-trivial subgroups G = H × K. Statement If $G$ is a group that satisfies both ACC and DCC on normal subgroups, then there is exactly one way of writing $G$ as a direct product $G = G_1 \times G_2 \times \cdots \times G_k$ of finitely many indecomposable subgroups of $G$. Here, uniqueness means direct decompositions into indecomposable subgroups have the exchange property. That is: suppose $G = H_1 \times H_2 \times \cdots \times H_l$ is another expression of $G$ as a product of indecomposable subgroups. Then $k = l$ and there is a reindexing of the $H_i$'s satisfying: $G_i$ and $H_i$ are isomorphic for each $i$, and $G = G_1 \times \cdots \times G_r \times H_{r+1} \times \cdots \times H_l$ for each $r \le l$. Proof Proving existence is relatively straightforward: let $S$ be the set of all normal subgroups that can not be written as a product of indecomposable subgroups. Moreover, any indecomposable subgroup is (trivially) the one-term direct product of itself, hence trivially admits such a decomposition. If Krull–Schmidt fails, then $S$ contains $G$; so we may iteratively construct a descending series of direct factors; this contradicts the DCC. One can then invert the construction to show that all direct factors of $G$ appear in this way. The proof of uniqueness, on the other hand, is quite long and requires a sequence of technical lemmas. For a complete exposition, see the references below. Remark The theorem does not assert the existence of a non-trivial decomposition, but merely that any such two decompositions (if they exist) are the same. Remak decomposition A Remak decomposition, introduced by Robert Remak, is a decomposition of an abelian group or similar object into a finite direct sum of indecomposable objects. The Krull–Schmidt theorem gives conditions for a Remak decomposition to exist and for its factors to be unique. Krull–Schmidt theorem for modules If $M$ is a module that satisfies the ACC and DCC on submodules (that is, it is both Noetherian and Artinian or – equivalently – of finite length), then $M$ is a direct sum of indecomposable modules. Up to a permutation, the indecomposable components in such a direct sum are uniquely determined up to isomorphism. In general, the theorem fails if one only assumes that the module is Noetherian or Artinian. History The present-day Krull–Schmidt theorem was first proved by Joseph Wedderburn (Ann. of Math (1909)), for finite groups, though he mentions some credit is due to an earlier study of G.A. Miller where direct products of abelian groups were considered. Wedderburn's theorem is stated as an exchange property between direct decompositions of maximum length. However, Wedderburn's proof makes no use of automorphisms.
The thesis of Robert Remak (1911) derived the same uniqueness result as Wedderburn but also proved (in modern terminology) that the group of central automorphisms acts transitively on the set of direct decompositions of maximum length of a finite group. From that stronger theorem Remak also proved various corollaries including that groups with a trivial center and perfect groups have a unique Remak decomposition. Otto Schmidt (Sur les produits directs, S. M. F. Bull. 41 (1913), 161–164), simplified the main theorems of Remak to the 3 page predecessor to today's textbook proofs. His method improves Remak's use of idempotents to create the appropriate central automorphisms. Both Remak and Schmidt published subsequent proofs and corollaries to their theorems. Wolfgang Krull (Über verallgemeinerte endliche Abelsche Gruppen, M. Z. 23 (1925) 161–196), returned to G.A. Miller's original problem of direct products of abelian groups by extending to abelian operator groups with ascending and descending chain conditions. This is most often stated in the language of modules. His proof observes that the idempotents used in the proofs of Remak and Schmidt can be restricted to module homomorphisms; the remaining details of the proof are largely unchanged. O. Ore unified the proofs from various categories include finite groups, abelian operator groups, rings and algebras by proving the exchange theorem of Wedderburn holds for modular lattices with descending and ascending chain conditions. This proof makes no use of idempotents and does not reprove the transitivity of Remak's theorems. Kurosh's The Theory of Groups and Zassenhaus' The Theory of Groups include the proofs of Schmidt and Ore under the name of Remak–Schmidt but acknowledge Wedderburn and Ore. Later texts use the title Krull–Schmidt (Hungerford's Algebra) and Krull–Schmidt–Azumaya (Curtis–Reiner). The name Krull–Schmidt is now popularly substituted for any theorem concerning uniqueness of direct products of maximum size. Some authors choose to call direct decompositions of maximum-size Remak decompositions to honor his contributions. See also Krull–Schmidt category References Further reading A. Facchini: Module theory. Endomorphism rings and direct sum decompositions in some classes of modules. Progress in Mathematics, 167. Birkhäuser Verlag, Basel, 1998. C.M. Ringel: Krull–Remak–Schmidt fails for Artinian modules over local rings. Algebr. Represent. Theory 4 (2001), no. 1, 77–86. External links Page at PlanetMath Module theory Theorems in group theory
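As a concrete instance of the theorem (an illustration added here, not part of the original article): for a finite abelian group the indecomposable direct factors are the cyclic groups of prime-power order, and the theorem says the multiset of such factors is unique up to isomorphism and reordering.

```latex
% Unique decomposition of Z/360Z into indecomposable (prime-power cyclic) factors:
\mathbb{Z}/360\mathbb{Z} \;\cong\; \mathbb{Z}/8\mathbb{Z} \times \mathbb{Z}/9\mathbb{Z} \times \mathbb{Z}/5\mathbb{Z}.
% The coarser decomposition Z/360Z = Z/40Z x Z/9Z does not contradict uniqueness,
% because Z/40Z is itself decomposable: Z/40Z = Z/8Z x Z/5Z.
```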
Krull–Schmidt theorem
[ "Mathematics" ]
1,433
[ "Fields of abstract algebra", "Module theory" ]
21,103,187
https://en.wikipedia.org/wiki/Glidcop
Glidcop is a family of copper-based metal matrix composite (MMC) alloys mixed primarily with small amounts of aluminum oxide ceramic particles. It is a trademark of North American Höganäs. The name is sometimes written GlidCop or GLIDCOP. The aluminum oxide particles block dislocation creep, which retards recrystallization and prevents grain growth; thus preserving the metal's strength at high temperatures. They also protect the metal against radiation damage. On the other hand, they exclude the possibly of heat treatment or hot working of the worked parts. Properties Composition and physical properties Glidcop is available in several grades which have varying amounts of aluminum oxide content. Additional materials and elements can be added if lower thermal expansion is required, or higher room temperature and elevated temperature strengths. The hardness can also be increased. A composite material of Glidcop AL-60 and 10% Niobium provides high strength and high conductivity. The hardness is comparable to many copper-beryllium and copper-tungsten alloys, while the electrical conductivity is comparable to RWMA Class 2 alloy. Other additives for specialized applications include molybdenum, tungsten, Kovar, and Alloy 42. At , Glidcop AL-15 has a yield strength of over 29 ksi (200 MPa). Post-neutron-irradiation properties Glidcop is resistant to degradation by neutron irradiation. Samples irradiated by neutrons at and cooled to room temperature were found to have greater tensile strength and electrical conductivity and less swelling than samples of pure copper under the same treatment. For radiation levels of 0 to 150 dpa (displacements per atom), the tensile strength was nearly constant and swelling not noticeable, while pure copper experienced a linear decrease in tensile strength and 30% swelling between 0 and 50 dpa. While both pure copper and Glidcop experienced linear drops of electrical conductivity, the drop for Gildcop was smaller. Workability The machinability and cold working properties of Glidcop are similar to those of pure copper. Brazing with silver-based brazing alloys may require first electroplating the Glidcop part with either copper or nickel. The copper plating can be done with a copper cyanide solution; other solutions may not work. Gold-based brazing alloys like 3565 AuCu and 5050 AuCu, can be used in a dry hydrogen atmosphere. Cold working Gildcop by drawing, cold heading etc. increases its strength through work hardening while reducing ductility. Applications Glidcop uses include resistance welding electrodes to prevent them from sticking to galvanized and other coated steels. It has also been used in applications where its resistance to softening at high temperatures is necessary, including incandescent light bulb, leads relay blades, contactor supports, x-ray tube components, heat exchanger sections for fusion power and synchrotron units, high field magnetic coils, sliding electrical contacts, arc welder electrodes, electronic leadframes, MIG contact tips, commutators, high speed motor and generator components, and microwave power tube components. Glidcop has also been used in hybrid circuit packages due to its compatibility with high temperature brazing, and in particle accelerator components, such as radio frequency quadrupoles and compact X-ray absorbers for undulator beam lines, where the alloy may be subjected to high temperatures and high radiation simultaneously. 
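The quoted yield-strength figures can be cross-checked with a quick unit conversion (a small illustration added here; only the 29 ksi value comes from the text above, the conversion factor is the standard one).

```python
# Quick check of the quoted yield-strength figures: ksi to MPa conversion
# (1 ksi = 6.8948 MPa).

KSI_TO_MPA = 6.8948

def ksi_to_mpa(ksi):
    return ksi * KSI_TO_MPA

print(ksi_to_mpa(29))  # ~200 MPa, consistent with the quoted "29 ksi (200 MPa)"
```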
See also Precipitation hardening References External links Höganäs's Glidcop homepage UNS Number Lookup, MatWeb Entering the UNS number shows a data sheet on the alloy. MatWeb GlidCop Technical Data Sheets Metal matrix composites Copper alloys Composite materials
Glidcop
[ "Physics", "Chemistry" ]
777
[ "Copper alloys", "Composite materials", "Materials", "Alloys", "Matter" ]
21,103,531
https://en.wikipedia.org/wiki/University%20Nanosatellite%20Program
The University Nanosat Program is a satellite design and fabrication competition for universities. It is jointly administered by the Air Force Office of Scientific Research (AFOSR), the Air Force Research Laboratory (AFRL), the American Institute of Aeronautics and Astronautics (AIAA), the Space Development and Test Wing and the AFRL Space Vehicles Directorate's Spacecraft Technology division. NASA's Goddard Space Flight Center was involved from the program inception through Nanosat-3. The UNP is a recurring competition that involves two phases. The first phase (Phase A) occurs as university teams initially respond to a solicitation posted by the UNP program or one of its partner organizations. The solicitation results in a competition for selection for that program cycle. Typically 10-11 awards are made during this initial phase. Grants are offered to the awardees to participate in a rigorous two-year process to design and develop their satellite concept. At the end of the two years, a Flight Competition Review is held where judges evaluate each program's progress and readiness to move to the next phase. Winners from each cycle are offered launch by AFRL when the systems are ready for flight. Other U.S. Government agencies, such as NASA through the Educational Launch of Nanosatellites (ELaNa) initiative, also step in to offer launch opportunities when available. Since 1999, there have been 11 cycles of the program. The program's objective is to train tomorrow's space professionals by providing a rigorous two year concept to flight-ready spacecraft competition for U. S. higher education institutions and to enable small satellite research and development (R&D), integration and flight test. Approximately 5,000 college students and 40 institutions of higher learning have been involved in this unique experience since its inception in 1999. Program Cycles Nanosat-1/Nanosat-2 1st-group. Arizona State University: Sparkie (3CornerSat) 1st-group. New Mexico State University: Petey (3CornerSat) 1st-group. University of Colorado at Boulder: Ralphie (3CornerSat) Boston University: Constellation Pathfinder Carnegie Mellon University: Solar Blade Nanosat Santa Clara University: Emerald and Orion Stanford University: Emerald and Orion Utah State University: USUSat Virginia Polytechnic Institute and State University: HokieSat University of Washington: DAWGSTAR Events and Milestones: December 2004. Sparkie and Ralphie launch on the inaugural Delta-IV Heavy Nanosat-3 The Nanosat-3 cycle started in 2003 when 13 universities were chosen to compete. The panel selected the University of Texas at Austin’s Formation Autonomous Spacecraft with Thruster, Relative-Navigation, Attitude and Crosslink or FASTRAC satellite(s) as the winner. 1st. The University of Texas at Austin: FASTRAC 2nd. Taylor University: TEST 3rd. Michigan Technological University: HuskySat Arizona State University University of Colorado at Boulder: DINO University of Hawaii at Manoa: Twin Stars University of Michigan: FENIX Montana State University: MAIA New Mexico State University: NMSUSat Penn State University: LionSat Utah State University: USUSat II Washington University in St. Louis: Akoya and Bandit Worcester Polytechnic Institute: PANSAT Events and Milestones: November 19, 2010. University of Texas FASTRAC spacecraft launches on a Minotaur IV Nanosat-4 In March 2005, eleven universities were chosen from the submitted proposals to compete in the Nanosat-4 Phase B effort. CUSat was selected the winner of the cycle in March 2007. 1st. 
Cornell University: CUSat 2nd. Washington University in St. Louis: Akoya and Bandit 3rd. University of Missouri-Rolla: UMR SAT University of Central Florida: KNIGHTSAT University of Cincinnati: BEARSat University of Minnesota: MinneSAT New Mexico State University: NMSUSat 2 Santa Clara University: ONYX Texas A&M University: AggieSat1 University of Texas at Austin: ARTEMIS Utah State University: TOROID Events and Milestones: March 2007. Nanosat-4 Flight Competition Review where CUSat named winner September 29, 2013. Cornell University's CUSat launched successfully. Nanosat-5 The Nanosat-5 competition began in January 2007 with 11 universities being selected from 26 proposal submissions. The University of Colorado at Boulder’s Drag and Atmospheric Neutral Density Experiment or DANDE was selected to continue on toward launch. 1st. University of Colorado at Boulder: DANDE 2nd. Washington University in St. Louis: Akoya-B & Bandit-C 3rd. Michigan Technological University: Oculus Boston University: BUSat University of Minnesota: Goldeneye Montana State University: SpaceBuoy Penn State University: NittanySat Santa Clara University: Obsidian Texas A&M University: AggieSat3 The University of Texas at Austin: 2-STEP Utah State University: TOROID II Events and Milestones: January 2009. Nanosat-5 Flight Competition Review where DANDE named winner September 29, 2013. DANDE launches on Falcon-9 Nanosat-6 The Nanosat-6 Program Flight Competition Review was sponsored by the American Institute of Aeronautics and Astronautics was held in Albuquerque, New Mexico. A panel of judges from the Air Force Research Laboratory, Space Test Program, Air Force Institute of Technology and industry selected the winners identified in the table below. 1st. Michigan Technological University: Oculus-ASR 2nd. Cornell University: Violet 3rd. University of Hawaii at Manoa: Ho'oponopono University of Central Florida: KnightSat 2 Georgia Institute of Technology: R3 Massachusetts Institute of Technology: CASTOR University of Minnesota: TwinSat Missouri S&T: MR & MRS SAT Montana State University: SpaceBuoy Saint Louis University: COPPER Santa Clara University: IRIS Events and Milestones: January 2009. Kickoff January 2011. Flight Competition Review June 25, 2019. Michigan Tech's Oculus-ASR satellite launches on Falcon-9 Heavy Nanosat-7 Eleven schools were selected to pursue the Nanosat-7 opportunity: 1st- Microsats. Georgia Institute of Technology: Prox-1 2nd- Microsats. Missouri S&T 1st- Cubesats. University of Texas at Austin 2nd- Cubesats. University of Michigan Boston University: BUSat SUNY Buffalo University of Hawaii at Manoa University of Maryland Massachusetts Institute of Technology Montana State University St. Louis University: Argus Nanosat-8 The Nanosat-8 cycle started in late 2012 with the selection of 10 competing schools. AFRL announced the winners of the Nanosat-8 cycle in February 2015. The first four winners included Missouri University of Science and Technology, the University of Colorado at Boulder, Georgia Institute of Technology, and Taylor University respectively. With a tie for fifth spot, Boston University and State University of New York at Buffalo teams will support deep-dive visits from judges to each program for a tie-breaker decision. 1st. Missouri S&T: MR & MRS SAT 2nd. University of Colorado Boulder: PolarCube 3rd. Georgia Institute of Technology: RECONSO 4th. Taylor University: ELEO-Sat 5th (t). Boston University: ANDESITE 5th (t). 
SUNY Buffalo: GLADOS University of California, Los Angeles: ELFIN Embry-Riddle Aeronautical University: ARAPAIMA University of Florida: CHOMPTT New Mexico State University: INCA Nanosat-9 The Nanosat-9 Flight Selection Review process resulted in selection of the University of Georgia MOCI payload as winner with the University of Colorado at Boulder's MAXWELL coming in second. 1st. University of Georgia: MOCI 2nd. University of Colorado Boulder: MAXWELL University of Arizona SUNY Buffalo Massachusetts Institute of Technology Michigan Technological University University of Minnesota Missouri S&T: APEX United States Naval Academy Western Michigan University Nanosat-10 In November 2021, three universities were notified of selection for flight when each program's satellite is ready for launch. 1st. University of Minnesota: EXACT 2nd. Texas A&M University: Aggiesat6 3rd. Michigan Technological University: Auris St. Louis University: DORRE Nanosat-11 The Nanosat-11 competition was announced in August 2021. Participants were notified by AFRL of onward inclusion in the Nanosat-11 effort on November 23, 2021 University of Alaska Fairbanks: CCP Auburn University: QUEST SUNY Buffalo: POLAR University of Colorado Boulder: RALPHIE University of Maryland: THEIA Purdue University: FLaC-Sat Rutgers University: SPICEsat Saint Louis University: DORRE University of Texas at Austin: SERPENT Western Michigan University: PEP-GS See also United States Air Force Research Laboratory#Space Vehicles Directorate - AFRL - SV NASA Educational Launch of Nanosatellites (ELaNa) External links References Spacecraft design CubeSats Nanosatellites Air Force Research Laboratory projects
University Nanosatellite Program
[ "Engineering" ]
1,776
[ "Spacecraft design", "Design", "Aerospace engineering" ]
21,106,847
https://en.wikipedia.org/wiki/Integrated%20operations
In the petroleum industry, Integrated operations (IO) refers to the integration of people, disciplines, organizations, work processes and information and communication technology to make smarter decisions. In short, IO is collaboration with focus on production. Contents of the term The most striking part of IO has been the use of always-on videoconference rooms between offshore platforms and land-based offices. This includes broadband connections for sharing of data and video-surveillance of the platform. This has made it possible to move some personnel onshore and use the existing human resources more efficiently. Instead of having e.g. an expert in geology on duty at every platform, the expert may be stationed on land and be available for consultation for several offshore platforms. It is also possible for a team at an office in a different time zone to be consulting the night-shift of the platform, so that no land-based workers need work at night. Splitting the team between land and sea demands new work processes, which together with ICT is the two main focus points for IO. Tools like videoconferencing and 3D-visualization also creates an opportunity for new, more cross-discipline cooperations. For instance, a shared 3D-visualization may be tailored to each member of the group, so that the geologist gets a visualization of the geological structures while the drilling engineer focuses on visualizing the well. Here, real-time measurements from the well are important but the downhole bandwidth has previously been very restricted. Improvements in bandwidth, better measurement devices, better aggregation and visualization of this information and improved models that simulate the rock formations and wellbore currently all feed on each other. An important task where all these improvements play together is real-time production optimization. In the process industry in general, the term is used to describe the increased cooperation, independent of location, between operators, maintenance personnel, electricians, production management as well as business management and suppliers to provide a more streamlined plant operation. By deploying IO, the petroleum industry draws on lessons from the process industry. This can be seen in a larger focus on the whole production chain and management ideas imported from the production and process industry. A prominent idea in this regard is real-time optimization of the whole value chain, from long term management of the oil reservoir, through capacity allocations in pipe networks and calculations of the net present value of the produced oil. Reviews of the application of Integrated Operations can be found in papers presented in the by-annual society of petroleum engineers Intelligent Energy conferences. A focus on the whole production chain is also seen in debates about how to organize people in an IO organisation, with frequent calls for breaking down the Information silos in the oil companies. A large oil company is typically organized in functional silos corresponding to disciplines such as drilling, production and reservoir management. This is regarded as inefficient by the IO movement, pointing out that the activities in any well or field by any of the silos will involve or affect all of the others. While some companies focus on their inhouse management structure, others also emphasize the integration and coordination of outside suppliers and collaborators in offshore-operations. 
For instance, it is pointed out that the oil and gas industry is lagging behind other industries in terms of Operational intelligence. Ideas and theories that IO management and work processes build on will be familiar from operations research, knowledge management and continual improvement as well as information systems and business transformation. This is perhaps most evident in the repeated referral to "people, process and technology" in IO discussions. Listed as bullet points, these mirror many of the aforementioned fields. Since 2010 major mining companies have become implementers of Integrated Operations, most notably Rio Tinto, BHP Billiton and Codelco. Incentives Common to most companies is that IO leads to cost savings, as fewer people are stationed offshore, and to increased efficiency. Lower costs, more efficient reservoir management and fewer mistakes during well drilling will in turn raise profits and make more oil fields economically viable. IO comes at a time when the oil industry is faced with more "brown fields", also referred to as "tail production", where the cost of extracting the oil will be higher than its market value unless major improvements in technology and work processes are made. It has been estimated that deployment of IO could produce 300 billion NOK of added value to the Norwegian continental shelf alone. On a longer time-scale, onshore control and monitoring of oil production may become a necessity as new fields in deeper waters are based purely on unmanned sub-sea facilities. Moving jobs onshore has also been touted as a way to keep and make better use of an aging workforce, which is regarded as a challenge by western oil and gas companies. As the average age of the industry workforce increases, with many nearing retirement, IO is being leveraged for knowledge sharing and training of a younger workforce. More comfortable onshore jobs, together with "high-tech" tools, have also been promoted as a way to recruit young workers into an industry that is seen as "unsexy", "lowtech" and difficult to combine with a normal family life. Critique The security aspect of reducing the offshore workforce has been raised. Will on-site experience be lost, and can familiarity with the platform and its processes be attained from an onshore office? The new working environment in any case demands changes to HSE routines. Some of the challenges also include clear role and responsibility definitions and clarifications between the onshore and offshore personnel. Who in a given situation has the authority to take decisions, the onshore or the offshore staff? The increased integration of the offshore facilities with the onshore office environment and outside collaborators also exposes work-critical ICT infrastructure to the internet and the hazards of everyday ICT. As for the efficiency aspect, some criticize the onshore-offshore collaboration for creating a more bureaucratic working environment. Naming conventions Both the exact terms and the content used to describe IO vary between companies. The oil company Shell has traditionally branded the term Smart Fields, which was an extension of Smart Wells that only referred to remote-controlled well-valves. BP uses Field of the future to refer to its innovations in oil production. Chevron has i-field, Honeywell has Digital Suites for Oil and Gas (a set of software and services), and Schlumberger terms it Digital Energy. The latter term, understood as referring to oil and gas, is adopted in the title of the digital energy journal. 
This term could have several meanings, as GE Digital Energy, for instance, does not appear to use it in the IO sense. Other terms include e-Field, i-Field, Digital Oilfield, Intelligent Oilfield, Field of the future and Intelligent Energy. Integrated operations has been the term preferred by Statoil, the Norwegian Oil Industry Association (OLF), a professional body and employer's association for oil and supplier companies, and vendors such as ABB. IO is also the preferred term for Petrobras. Intelligent Energy is the dominant term in publications revolving around the biennial SPE Intelligent Energy conference, which has been one of the major conferences for the IO movement, along with the annual IO Science and Practice conference which obviously supports the IO term. See also Integrated asset modelling, a holistic modelling approach in oil and gas, connecting models across disciplines Integrated Operations in the High North, a collaboration project working on the next or second generation of Integrated Operations. ISO 15926, an enabler for the next or second generation of Integrated Operations by integrating data across disciplines and business domains. WITSML, an example of a standardisation effort for real-time drilling data which facilitates integration of disparate computer systems Definition of IO by Global IO References External links Integrated Operations on the StatoilHydro corporate website. Integrated Operations Center for Integrated Operations in the Petroleum Industry Stepchange Global - independent Integrated Operations Advisory Services http://www.stepchangeglobal.com/ Global IO - independent Integrated Operations Management Consultancy Services Petroleum industry Petroleum engineering Petroleum production
Integrated operations
[ "Chemistry", "Engineering" ]
1,634
[ "Petroleum engineering", "Energy engineering", "Petroleum industry", "Petroleum", "Chemical process engineering" ]
21,108,206
https://en.wikipedia.org/wiki/Noise%20and%20vibration%20on%20maritime%20vessels
On maritime vessels, noise and vibration are not the same but they have the same origin and come in many forms. The methods to handle the related problems are similar, to a certain level, where most shipboard noise problems are reduced by controlling vibration. Sources The main producers of mechanically created noise and vibration are the engines, but there are also other sources, like the air conditioning, shaft-line, cargo handling and control equipment and mooring machinery. Diesel engines When looking at diesel-driven vessels, the engines induce large accelerations that travel from the foundation of the engine throughout the ship. In most compartments, this type of vibration normally manifests itself as audible noise. The problem with diesels is that, for a given size, there is a fixed amount of power generated per cylinder. To increase power it is necessary to add cylinders but, when cylinders are added, the crankshaft has to be lengthened and after a very limited number of additions, the lengthened crankshaft begins to flex and vibrate all on its own. This results in an increase of vibrations spread all over the ship's structure. Crankshaft vibration can be reduced by a harmonic balancer. Electrical engines Large vessels sometimes use electrical propulsion motors, the electrical power being provided by a diesel generator. Noise and vibration of electric motors include, besides mechanical and aerodynamic sources, an electromagnetic source due to electromagnetic forces, which is responsible for the "whining noise" of the motor. Turbines Steam turbines and gas turbines, on the other hand, when new and/or in good repair, do not, by themselves, generate excessive vibration as long as the turbine blades are in perfect condition and rotate in a smooth gas flow. But after some time microscopic defects cause small pits to form in the surface of the intake and the blades, which set up eddies in the gas flow, resulting in loss of performance and vibrations. Vibration levels may change with different loading conditions or when doing a manoeuvre. Other sources Besides mechanically produced vibrations, other sources are caused by the motion of the sea, slamming of the vessel on the waves and water depth, to mention just a few. The main problem here is that they are less controllable. The engine-gearbox interaction is usually a source of noise and vibrations. Here, highly flexible couplings can be installed between the engine and the gearbox. This type of coupling is used because of its low torsional stiffness. Exposure limits Exposure to noise and vibrations is regulated and limits for maritime vessels are given in the ISO standard 6954: Guidelines for permissible mechanical vibrations on board seagoing vessels to protect personnel and crew. Because there are different noise regulations from country to country, the International Maritime Organization (IMO) sets some standards for vessels. The table below gives some comparisons of preferred maximum noise levels on board vessels and onshore levels. Noise and vibration control Noise generated on board ships and submarines can have far-reaching effects on the ability of the vessel to operate safely and efficiently. Military vessels in particular need to be quiet to avoid detection by sonar, so many methods have been used to limit a vessel's noise signature. Controlling noise is therefore a defense measure, most acutely for submarines. 
Prevention At the design table, the naval architect makes the necessary choices concerning the ship's structure to achieve an optimized design towards noise and vibration control. Decisions are made about the engine and shaft, what kind of instruments and materials can be used to reduce noise and vibrations throughout the vessel, and what is the best way to implement these. Advanced computer technology tries to simulate these vibrations under different ship conditions to provide an overview of weak spots. The generated vibrations are also compared with the natural frequencies of the different parts/sections, and adaptations can be made to the structure. On board, noise travels through the structure (mainly low frequencies) more than through the air, so insulating the engine room is not enough to avoid the noise travelling through the boat. Control at source To control the mechanical vibrations at the origin, isolating fittings, elastic mounting of engines, elastic holding of pipes or dampers can be installed. These will absorb a part of the vibrations (and the noise) produced by the machines. To control the electromagnetic vibrations at the origin, skewing the electric motor or choosing a better slot/pole combination will reduce electromagnetic force harmonics or avoid resonances between magnetic forces and structural modes of the electric motor. In megayachts, the engines and alternators let out unwanted noise and vibrations. The solution is a double elastic suspension, where the engine and alternator are mounted with vibration dampers on a common frame. Then, the common frame is elastically mounted to the hull. While in megayachts the requirement is the comfort of crew and passengers, in other applications, such as navy ships, the requirement is that the engines or generators keep working under certain shock loads. To achieve this, double elastic suspensions are used and high-deflection mounts are installed between the unit and the base frame. Beforehand, the engineers calculate the torsional vibrations or the 6/12 degrees of freedom to guarantee the optimum combination of couplings and mounts. Maintenance Regular maintenance will have a major influence on the performance of instruments and machines. Lubrication of the joints, tightening of the bolts, good alignment of the stern contour of the vessel, and adjusting of variables following the weekly and monthly schedule are the most effective routes to noise and vibration control. References Lloyd's Register Technical Papers; Ship vibration and noise: some topical aspects by J.S. Carlton and D. Vlašić International Maritime Organization Specific Naval architecture Mechanical vibrations
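As a rough illustration of why the elastic mounting described above works, the following sketch estimates the force transmissibility of a single-degree-of-freedom engine mount; the mass, stiffness, damping and excitation figures are made-up placeholder values, not data for any real installation.

import math

def transmissibility(f_exc, f_nat, zeta):
    """Force transmissibility of a damped single-DOF mount at excitation frequency f_exc."""
    r = f_exc / f_nat
    num = 1 + (2 * zeta * r) ** 2
    den = (1 - r ** 2) ** 2 + (2 * zeta * r) ** 2
    return math.sqrt(num / den)

# Placeholder figures: a 5000 kg engine on mounts with a combined stiffness of 2.0e6 N/m
mass = 5000.0          # kg
stiffness = 2.0e6      # N/m
zeta = 0.05            # damping ratio of the elastomer mounts
f_nat = math.sqrt(stiffness / mass) / (2 * math.pi)   # mount natural frequency in Hz

f_exc = 25.0           # assumed dominant excitation frequency of the engine, Hz
T = transmissibility(f_exc, f_nat, zeta)
print(f"mount natural frequency: {f_nat:.1f} Hz")
print(f"transmissibility at {f_exc:.0f} Hz: {T:.3f}")
# A transmissibility well below 1 means most of the engine's dynamic force is
# kept out of the hull structure; if the excitation were close to the mount's
# natural frequency, the mount would amplify the vibration instead, which is
# exactly the resonance the designer checks against.

In practice the same comparison is made for each major excitation order of the engine and against the natural frequencies of the surrounding structure, which is the purpose of the design-stage simulations mentioned above.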
Noise and vibration on maritime vessels
[ "Physics", "Engineering" ]
1,147
[ "Structural engineering", "Naval architecture", "Mechanics", "Mechanical vibrations", "Marine engineering" ]
27,342,975
https://en.wikipedia.org/wiki/Highly%20structured%20ring%20spectrum
In mathematics, a highly structured ring spectrum or A∞-ring is an object in homotopy theory encoding a refinement of a multiplicative structure on a cohomology theory. A commutative version of an A∞-ring is called an E∞-ring. While originally motivated by questions of geometric topology and bundle theory, they are today most often used in stable homotopy theory. Background Highly structured ring spectra have better formal properties than multiplicative cohomology theories – a point utilized, for example, in the construction of topological modular forms, and which has also allowed new constructions of more classical objects such as Morava K-theory. Besides their formal properties, E∞-structures are also important in calculations, since they allow for operations in the underlying cohomology theory, analogous to (and generalizing) the well-known Steenrod operations in ordinary cohomology. As not every cohomology theory allows such operations, not every multiplicative structure may be refined to an E∞-structure, and even in cases where this is possible it may be a formidable task to prove that. The rough idea of highly structured ring spectra is the following: if multiplication in a cohomology theory (analogous to the multiplication in singular cohomology, inducing the cup product) fulfills associativity (and commutativity) only up to homotopy, this is too lax for many constructions (e.g. for limits and colimits in the sense of category theory). On the other hand, requiring strict associativity (or commutativity) in a naive way is too restrictive for many of the wanted examples. A basic idea is that the relations need only hold up to homotopy, but these homotopies should fulfill again some homotopy relations, whose homotopies again fulfill some further homotopy conditions; and so on. The classical approach organizes this structure via operads, while the more recent approach of Jacob Lurie deals with it using ∞-operads in ∞-categories. The most widely used approaches today employ the language of model categories. All these approaches depend on carefully building an underlying category of spectra. Approaches for the definition Operads The theory of operads is motivated by the study of loop spaces. A loop space ΩX has a multiplication by composition of loops. Here the two loops are sped up by a factor of 2 and the first takes the interval [0,1/2] and the second [1/2,1]. This product is not associative since the scalings are not compatible, but it is associative up to homotopy and the homotopies are coherent up to higher homotopies and so on. This situation can be made precise by saying that ΩX is an algebra over the little interval operad. This is an example of an A∞-operad, i.e. an operad of topological spaces which is homotopy equivalent to the associative operad but which has appropriate "freeness" to allow things only to hold up to homotopy (succinctly: any cofibrant replacement of the associative operad). An A∞-ring spectrum can now be imagined as an algebra over an A∞-operad in a suitable category of spectra with suitable compatibility conditions (see May, 1977). For the definition of E∞-ring spectra essentially the same approach works, where one replaces the A∞-operad by an E∞-operad, i.e. an operad of contractible topological spaces with analogous "freeness" conditions. An example of such an operad can again be motivated by the study of loop spaces. The product of the double loop space is already commutative up to homotopy, but this homotopy fulfills no higher conditions. 
To get full coherence of higher homotopies one must assume that the space is (equivalent to) an n-fold loopspace for all n. This leads to the in -cube operad of infinite-dimensional cubes in infinite-dimensional space, which is an example of an -operad. The above approach was pioneered by J. Peter May. Together with Elmendorf, Kriz and Mandell he developed in the 90s a variant of his older definition of spectra, so called S-modules (see Elmendorf et al., 2007). S-modules possess a model structure, whose homotopy category is the stable homotopy category. In S-modules the category of modules over an -operad and the category of monoids are Quillen equivalent and likewise the category of modules over an -operad and the category of commutative monoids. Therefore, is it possible to define -ring spectra and -ring spectra as (commutative) monoids in the category of S-modules, so called (commutative) S-algebras. Since (commutative) monoids are easier to deal with than algebras over complicated operads, this new approach is for many purposes more convenient. It should, however, be noted that the actual construction of the category of S-modules is technically quite complicated. Diagram spectra Another approach to the goal of seeing highly structured ring spectra as monoids in a suitable category of spectra are categories of diagram spectra. Probably the most famous one of these is the category of symmetric spectra, pioneered by Jeff Smith. Its basic idea is the following: In the most naive sense, a spectrum is a sequence of (pointed) spaces together with maps , where ΣX denotes the suspension. Another viewpoint is the following: one considers the category of sequences of spaces together with the monoidal structure given by a smash product. Then the sphere sequence has the structure of a monoid and spectra are just modules over this monoid. If this monoid was commutative, then a monoidal structure on the category of modules over it would arise (as in algebra the modules over a commutative ring have a tensor product). But the monoid structure of the sphere sequence is not commutative due to different orderings of the coordinates. The idea is now that one can build the coordinate changes into the definition of a sequence: a symmetric sequence is a sequence of spaces together with an action of the n-th symmetric group on . If one equips this with a suitable monoidal product, one gets that the sphere sequence is a commutative monoid. Now symmetric spectra are modules over the sphere sequence, i.e. a sequence of spaces together with an action of the n-th symmetric group on and maps satisfying suitable equivariance conditions. The category of symmetric spectra has a monoidal product denoted by . A highly structured (commutative) ring spectrum is now defined to be a (commutative) monoid in symmetric spectra, called a (commutative) symmetric ring spectrum. This boils down to giving maps which satisfy suitable equivariance, unitality and associativity (and commutativity) conditions (see Schwede 2007). There are several model structures on symmetric spectra, which have as homotopy the stable homotopy category. Also here it is true that the category of modules over an -operad and the category of monoids are Quillen equivalent and likewise the category of modules over an -operad and the category of commutative monoids. A variant of symmetric spectra are orthogonal spectra, where one substitutes the symmetric group by the orthogonal group (see Mandell et al., 2001). 
They have the advantage that the naively defined homotopy groups coincide with those in the stable homotopy category, which is not the case for symmetric spectra. (I.e., the sphere spectrum is now cofibrant.) On the other hand, symmetric spectra have the advantage that they can also be defined for simplicial sets. Symmetric and orthogonal spectra are arguably the simplest ways to construct a sensible symmetric monoidal category of spectra. Infinity-categories Infinity-categories are a variant of classical categories where composition of morphisms is not uniquely defined, but only up to contractible choice. In general, it does not make sense to say that a diagram commutes strictly in an infinity-category, but only that it commutes up to coherent homotopy. One can define an infinity-category of spectra (as done by Lurie). One can also define infinity-versions of (commutative) monoids and then define -ring spectra as monoids in spectra and -ring spectra as commutative monoids in spectra. This is worked out in Lurie's book Higher Algebra. Comparison The categories of S-modules, symmetric and orthogonal spectra and their categories of (commutative) monoids admit comparisons via Quillen equivalences due to work of several mathematicians (including Schwede). In spite of this the model category of S-modules and the model category of symmetric spectra have quite different behaviour: in S-modules every object is fibrant (which is not true in symmetric spectra), while in symmetric spectra the sphere spectrum is cofibrant (which is not true in S-modules). By a theorem of Lewis, it is not possible to construct one category of spectra, which has all desired properties. A comparison of the infinity category approach to spectra with the more classical model category approach of symmetric spectra can be found in Lurie's Higher Algebra 4.4.4.9. Examples It is easiest to write down concrete examples of -ring spectra in symmetric/orthogonal spectra. The most fundamental example is the sphere spectrum with the (canonical) multiplication map . It is also not hard to write down multiplication maps for Eilenberg-MacLane spectra (representing ordinary cohomology) and certain Thom spectra (representing bordism theories). Topological (real or complex) K-theory is also an example, but harder to obtain: in symmetric spectra one uses a C*-algebra interpretation of K-theory, in the operad approach one uses a machine of multiplicative infinite loop space theory. A more recent approach for finding -refinements of multiplicative cohomology theories is Goerss–Hopkins obstruction theory. It succeeded in finding -ring structures on Lubin–Tate spectra and on elliptic spectra. By a similar (but older) method, it could also be shown that Morava K-theory and also other variants of Brown-Peterson cohomology possess an -ring structure (see e.g. Baker and Jeanneret, 2002). Basterra and Mandell have shown that Brown–Peterson cohomology has even an -ring structure, where an -structure is defined by replacing the operad of infinite-dimensional cubes in infinite-dimensional space by 4-dimensional cubes in 4-dimensional space in the definition of -ring spectra. On the other hand, Tyler Lawson has shown that Brown–Peterson cohomology does not have an structure. Constructions Highly structured ring spectra allow many constructions. They form a model category, and therefore (homotopy) limits and colimits exist. Modules over a highly structured ring spectrum form a stable model category. 
In particular, their homotopy category is triangulated. If the ring spectrum has an -structure, the category of modules has a monoidal smash product; if it is at least , then it has a symmetric monoidal (smash) product. One can form group ring spectra. One can define the algebraic K-theory, topological Hochschild homology, and so on, of a highly structured ring spectrum. One can define the space of units, which is crucial for some questions of orientability of bundles. See also Commutative ring spectrum En-ring References References on E∞-ring spectra References about structure of E∞-ring spectra Basterra, M.; Mandell, M.A. (2005). "Homology and Cohomology of E-infinity Ring Spectra" (PDF) References about specific examples General references on related spectra Algebraic topology Spectra (topology)
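To spell out the "monoid in symmetric spectra" description used above, the data of a symmetric ring spectrum R can be written in the commonly used notation below; this is a sketch of the standard definition (in the style of Schwede's account), with the names of the maps chosen here for illustration.

A symmetric ring spectrum consists of pointed spaces \(R_n\) with \(\Sigma_n\)-action, a unit \(\iota\colon S^0 \to R_0\), and \(\Sigma_p\times\Sigma_q\)-equivariant multiplication maps
\[
\mu_{p,q}\colon R_p \wedge R_q \longrightarrow R_{p+q}
\]
such that the associativity condition
\[
\mu_{p+q,r}\circ(\mu_{p,q}\wedge \mathrm{id}) \;=\; \mu_{p,q+r}\circ(\mathrm{id}\wedge \mu_{q,r})
\]
and the corresponding unit conditions hold. The ring spectrum is commutative when \(\mu_{q,p}\) composed with the flip of the smash factors agrees with \(\mu_{p,q}\) up to the block permutation \(\chi_{p,q}\in\Sigma_{p+q}\) acting on \(R_{p+q}\). For the sphere spectrum the maps \(\mu_{p,q}\colon S^p\wedge S^q \xrightarrow{\ \cong\ } S^{p+q}\) are the canonical homeomorphisms, which is the simplest instance of the structure described in the Examples section.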
Highly structured ring spectrum
[ "Mathematics" ]
2,468
[ "Fields of abstract algebra", "Topology", "Algebraic topology" ]
27,343,427
https://en.wikipedia.org/wiki/World%20Renewable%20Energy%20Network
WREN is a major non-profit organization registered in the United Kingdom with charitable status and affiliated to UNESCO, the Deputy Director General of which is its honorary President. It has a Governing Council, an Executive Committee and a Director General. It maintains links with many United Nations, governmental and non-governmental organisations. Established in 1992 during the second World Renewable Energy Congress in Reading, UK, WREN supports and enhances the utilisation and implementation of renewable energy sources that are both environmentally safe and economically sustainable. This is done through a worldwide network of agencies, laboratories, institutions, companies and individuals, all working together towards the international diffusion of renewable energy technologies and applications. Representing most countries in the world, it aims to promote the communication and technical education of scientists, engineers, technicians and managers in this field and to address itself to the energy needs of both developing and developed countries. Over two billion dollars have now been allocated to projects dealing with renewable energy and the environment by the World Solar Summit and World Solar Decade along with the World Bank. Global Activities of WREC/WREN The global activities of the World Renewable Energy Congress / Network encompass: Newsletter Regional meetings Scientific publications Targeted books and annual magazine Workshops on renewable energy topics Journal publication "Renewable Energy" Competitions and awards promoting renewable energy International congresses (World Renewable Energy Congress, WREC) Mission statement With the accelerated approach of the global climate-change point-of-no-return the need to address the pivotal role of renewable energy in the formation of coping strategies, rather than prevention, is more crucial than ever. Sustainability, green buildings, and the development of the large-scale renewable energy industry must be at the top of all development, economic, financial and political agendas. The time for action has arrived. Prevention and questioning how and why we face this great challenge is a luxury we can no longer indulge. We welcome the establishment of the long overdue International Renewable Energy Agency which we hope will work side-by-side with similar intergovernmental agencies striving for the adoption of renewable energies. Major events The major event organised by WREC/WREN is the biennial congress, normally held during the summer of every even year. The congresses are mostly run and organised by the WREC headquarters which are in Brighton, UK. All members of WREC/WREN are entitled to bid to host the Congress. The WREC/WREN Council meets and decides the location based on: availability of local funding and sponsorship; ease of travel to the location; extent of host government and institutional support; benefits to the local country. All local organisation and services must be provided by the host country. The first three congresses were held in the UK (Reading), followed by a move to Denver (United States) and then to Florence (Italy). In the year 2000 the congress returned to the UK (Brighton) with every effort being made to ensure that this event enhanced the recognition of Renewable Energies in the new millennium. In 2002 the congress took place in Cologne (Germany) and 2004 once more in Denver (USA). In 2006 the congress was held in Florence (Italy) and in 2008 in Glasgow (UK). 
The next congresses will be in Abu Dhabi (UAE) in 2010 and in Denver (USA) in 2012 respectively. The following table shows the statistics for the previous WREC conferences: Purpose of WREC At no time in modern history has energy played a more crucial role in the development and well being of nations than at present. The source and nature of energy, the security of supply and the equity of distribution, the environmental impact of its supply and utilization, are all crucial matters to be addressed by suppliers, consumers, governments, industry, academia, and financial institutions. The World Renewable Energy Congress (WREC), a major recognised forum for networking between these sectors, addresses these issues through regular meetings and exhibitions, bringing together representatives of all those involved in the supply, distribution, consumption and development of energy sources which are benign, sustainable, accessible and economically viable. WREC enables policy makers, researchers, manufacturers, economists, financiers, sociologists, environmentalists and others to present their views in Plenary and Technical Sessions and to participate in discussions, both formal and informal, thus facilitating the transfer of knowledge between nations, institutions, disciplines and individuals. WREC Renewable Energy Awards The WREC Renewable Energy Awards were established in 1998, during the 5th edition of the WREC Congress in Florence as a way to recognize outstanding achievement and vision in the global renewable energy sector. The WREC Renewable Energy Awards aim at highlighting the worldwide best-implemented policies, projects and research in the following topics: Fuel Cells and Hydrogen Low Energy Architecture Solar Energy Wind Technology Biomass Sustainable Transport Green Energy Business WREC/WREN Aims and Objectives WREN is a non-profit UK company (reg. no. 1874667) limited by guarantee and not having a share capital, incorporated in 1990 as a registered charity (No. 1009879), with registered offices in England. The aims and objectives of WREC/WREN are as follows: Ensuring renewable energy takes its proper place in the sustainable supply and use of energy for greatest benefit of all, taking due account of research requirements, energy efficiency, conservation, and cost criteria. Assisting and promoting the real local, regional and global environmental benefits of renewable energy. Promoting the innovation, diffusion and efficient application of economic renewable energy technologies. Enhancing energy supply security without damage to the environment. Widening energy availability, especially in developing countries and rural areas. Promoting business opportunities for renewable energy projects and their successful implementation. Ensuring the financing of, and institutional support for, economic renewable energy projects. Encouraging improved information and education on renewable energy. Involving young people in information and education on renewable energy with a parallel, closely #integrated programme. Providing a technical exhibition where manufacturers and others can display their products and services. Strengthen and expand the effectiveness of Networking among nations, institutions, agencies, organizations and individuals in research, application, commercialization and education of renewable energy technology. Providing a forum within which participants voice their achievement and thought at various parts of the world. 
References External links Official Website Renewable Energy Expo The Ultimate Electricity Plans Guide International Renewable Energy Congress The International Solar Energy Society (ISES) Solar Energy and Renewable Energy Events, Fairs and Conferences Renewable energy organizations
World Renewable Energy Network
[ "Engineering" ]
1,269
[ "Renewable energy organizations", "Energy organizations" ]
27,344,508
https://en.wikipedia.org/wiki/Abstract%20elementary%20class
In model theory, a discipline within mathematical logic, an abstract elementary class, or AEC for short, is a class of models with a partial order similar to the relation of an elementary substructure of an elementary class in first-order model theory. They were introduced by Saharon Shelah. Definition , for a class of structures in some language , is an AEC if it has the following properties: is a partial order on . If then is a substructure of . Isomorphisms: is closed under isomorphisms, and if and then Coherence: If and then Tarski–Vaught chain axioms: If is an ordinal and is a chain (i.e. ), then: If , for all , then Löwenheim–Skolem axiom: There exists a cardinal , such that if is a subset of the universe of , then there is in whose universe contains such that and . We let denote the least such and call it the Löwenheim–Skolem number of . Note that we usually do not care about the models of size less than the Löwenheim–Skolem number and often assume that there are none (we will adopt this convention in this article). This is justified since we can always remove all such models from an AEC without influencing its structure above the Löwenheim–Skolem number. A -embedding is a map for such that and is an isomorphism from onto . If is clear from context, we omit it. Examples The following are examples of abstract elementary classes: An Elementary class is the most basic example of an AEC: If T is a first-order theory, then the class of models of T together with elementary substructure forms an AEC with Löwenheim–Skolem number |T|. If is a sentence in the infinitary logic , and is a countable fragment containing , then is an AEC with Löwenheim–Skolem number . This can be generalized to other logics, like , or , where expresses "there exists uncountably many". If T is a first-order countable superstable theory, the set of -saturated models of T, together with elementary substructure, is an AEC with Löwenheim–Skolem number . Zilber's pseudo-exponential fields form an AEC. Common assumptions AECs are very general objects and one usually make some of the assumptions below when studying them: An AEC has joint embedding if any two model can be embedded inside a common model. An AEC has no maximal model if any model has a proper extension. An AEC has amalgamation if for any triple with , , there is and embeddings of and inside that fix pointwise. Note that in elementary classes, joint embedding holds whenever the theory is complete, while amalgamation and no maximal models are well-known consequences of the compactness theorem. These three assumptions allow us to build a universal model-homogeneous monster model , exactly as in the elementary case. Another assumption that one can make is tameness. Shelah's categoricity conjecture Shelah introduced AECs to provide a uniform framework in which to generalize first-order classification theory. Classification theory started with Morley's categoricity theorem, so it is natural to ask whether a similar result holds in AECs. This is Shelah's eventual categoricity conjecture. It states that there should be a Hanf number for categoricity: For every AEC K there should be a cardinal depending only on such that if K is categorical in some (i.e. K has exactly one (up to isomorphism) model of size ), then K is categorical in for all . Shelah also has several stronger conjectures: The threshold cardinal for categoricity is the Hanf number of pseudoelementary classes in a language of cardinality LS(K). 
More specifically when the class is in a countable language and axiomaziable by an sentence the threshold number for categoricity is . This conjecture dates back to 1976. Several approximations have been published (see for example the results section below), assuming set-theoretic assumptions (such as the existence of large cardinals or variations of the generalized continuum hypothesis), or model-theoretic assumptions (such as amalgamation or tameness). As of 2014, the original conjecture remains open. Results The following are some important results about AECs. Except for the last, all results are due to Shelah. Shelah's Presentation Theorem: Any AEC is : it is a reduct of a class of models of a first-order theory omitting at most types. Hanf number for existence: Any AEC which has a model of size has models of arbitrarily large sizes. Amalgamation from categoricity: If K is an AEC categorical in and and , then K has amalgamation for models of size . Existence from categoricity: If K is a AEC with Löwenheim–Skolem number and K is categorical in and , then K has a model of size . In particular, no sentence of can have exactly one uncountable model. Approximations to Shelah's categoricity conjecture: Downward transfer from a successor: If K is an abstract elementary class with amalgamation that is categorical in a "high-enough" successor , then K is categorical in all high-enough . Shelah's categoricity conjecture for a successor from large cardinals: If there are class-many strongly compact cardinals, then Shelah's categoricity conjecture holds when we start with categoricity at a successor. See also Tame abstract elementary class Notes References Model theory Category theory
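In the notation most commonly used for abstract elementary classes, the axioms listed above take the following form; this is a sketch of the standard formulation, with \(\preccurlyeq_{\mathcal K}\) denoting the strong substructure relation.

An abstract elementary class is a pair \(\mathcal K = (K, \preccurlyeq_{\mathcal K})\), with \(K\) a class of structures in a fixed language, such that:
\[
\begin{aligned}
&\text{(1) } \preccurlyeq_{\mathcal K} \text{ is a partial order on } K, \text{ and } M \preccurlyeq_{\mathcal K} N \text{ implies } M \subseteq N;\\
&\text{(2) } K \text{ and } \preccurlyeq_{\mathcal K} \text{ are closed under isomorphism};\\
&\text{(3) (coherence) } M_0 \subseteq M_1,\ M_0 \preccurlyeq_{\mathcal K} M_2,\ M_1 \preccurlyeq_{\mathcal K} M_2 \ \Longrightarrow\ M_0 \preccurlyeq_{\mathcal K} M_1;\\
&\text{(4) (chains) if } \langle M_i : i < \delta\rangle \text{ is } \preccurlyeq_{\mathcal K}\text{-increasing, then } \textstyle\bigcup_{i<\delta} M_i \in K,\ M_j \preccurlyeq_{\mathcal K} \bigcup_{i<\delta} M_i \text{ for all } j<\delta,\\
&\qquad\text{and if } M_i \preccurlyeq_{\mathcal K} N \text{ for all } i<\delta, \text{ then } \textstyle\bigcup_{i<\delta} M_i \preccurlyeq_{\mathcal K} N;\\
&\text{(5) (Löwenheim–Skolem) there is a cardinal } \mathrm{LS}(\mathcal K) \text{ such that for every } N \in K \text{ and } A \subseteq |N|\\
&\qquad\text{there is } M \preccurlyeq_{\mathcal K} N \text{ with } A \subseteq |M| \text{ and } \|M\| \le |A| + \mathrm{LS}(\mathcal K).
\end{aligned}
\]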
Abstract elementary class
[ "Mathematics" ]
1,207
[ "Functions and mappings", "Mathematical structures", "Mathematical logic", "Mathematical objects", "Fields of abstract algebra", "Category theory", "Mathematical relations", "Model theory" ]
27,345,986
https://en.wikipedia.org/wiki/Gold%20to%20Go
Gold to Go is a product brand of the TG Gold-Super-Markt corporation, designed to dispense items made of pure gold from automated banking vending machines. The first gold-plated vending machine, located in the lobby of the Emirates Palace hotel in Abu Dhabi, dispensed 320 items made of gold, including 10-gram gold bars and customized gold coins. There are currently six vending machines installed across Europe and Peru. The first vending machine in the United States was installed in Boca Raton, Florida, in December 2010. The "gold ATMs" are designed to be placed in shopping malls and airports and are meant to make ordinary people comfortable with the idea of investing in gold. The vending machines update their prices to market value every minute over an encrypted internet connection. History The concept was developed by Thomas Geissler, who had previously created an online platform for trading precious metals. He stated that his initial inspiration came from observing the "seemingly endless" line of traditional toiletries vending machines at airports and train stations, and from his search for advertising models for an online marketplace. The initial prototype system was installed in Frankfurt in 2009, where it dispensed 1-gram pieces of gold at a 30% premium above market price. The German corporation planned to distribute 500 "gold ATMs" throughout airports and rail stations in Germany, Austria, and Switzerland. Meanwhile, the company FONDS-Zentrum Nürnberg in Nuremberg, Germany, bought the assets of the Gold ATMs to distribute the gold vending machines to private companies or persons. In 2017, the first gold ATM that had been installed in Dubai was permanently closed. Machines The vending machines are covered in gold leaf, and include a touch screen, cash and credit card slots, and a lighted display showcase. Users must scan identification for purchases that exceed the money laundering limit within a given period. The machines are fitted "like an armored vehicle" and tested with explosives to prevent theft, and include surveillance cameras that record all transactions. Features It has a state-of-the-art ID scanner with AssureTech validation, which determines quality and value and creates an offer within minutes. It monitors real-time prices and gives instant cash for gold, silver, and cryptocurrency. It captures and authenticates the user by examining the ID hologram. A Video Teller Agent examines each customer in real time and provides assistance. In the case of a technical issue, a video call will be automatically made to customer support. Items Gold bars made of 24-carat gold are sold in 1, 5, 10, 20, 50, 100 and 250 gram and 1 oz sizes. Other items for sale include gold coins bearing symbols such as the Krugerrand, a maple leaf, or a kangaroo, which are dispensed in "handsome" gift boxes. Each gold bar is sealed in plastic with an anti-counterfeit hologram label, and comes with a description of its purity and price per gram, as well as information about the sale and the company's 10-day return policy. References External links Gold ATM KIOSK Vending machines Gold investments Commercial machines
Gold to Go
[ "Physics", "Technology", "Engineering" ]
645
[ "Machines", "Commercial machines", "Vending machines", "Automation", "Physical systems" ]
27,346,453
https://en.wikipedia.org/wiki/Cool%20flame
A cool flame is a flame having a typical temperature of about . In contrast to an ordinary hot flame, the reaction is not vigorous and releases little heat, light, or carbon dioxide. Cool flames are difficult to observe and are uncommon in everyday life, but they are responsible for engine knock – the undesirable, erratic, and noisy combustion of low-octane fuels in internal combustion engines. History Cool flames were accidentally discovered in the 1810s by Sir Humphry Davy, who inserted a hot platinum wire into a mixture of air and diethyl ether vapor. "When the experiment on the slow combustion of ether is made in the dark, a pale phosphorescent light is perceived above the wire, which of course is most distinct when the wire ceases to be ignited. This appearance is connected with the formation of a peculiar acrid volatile substance possessed of acid properties." After noticing that certain types of flame did not burn his fingers or ignite a match, he also found that those unusual flames could change into hot flames and that at certain compositions and temperatures, they did not require an external ignition source, such as a spark or hot material. Harry Julius Emeléus was the first to record their emission spectra, and in 1929 he coined the term "cold flame". Parameters Cool flames can occur in hydrocarbons, alcohols, aldehydes, oils, acids, waxes, and even methane. The lowest temperature of a cool flame is poorly defined and is conventionally set as a temperature at which the flame can be detected by eye in a dark room (cool flames are hardly visible in daylight). This temperature slightly depends on the fuel to oxygen ratio and strongly depends on gas pressure – there is a threshold below which cool flame is not formed. A specific example is 50% n-butane–50% oxygen (by volume) which has a cool flame temperature (CFT) of about at . One of the lowest CFTs () was reported for a CHOCH + O + N mixture at . The CFT is significantly lower than the auto-ignition temperature (AIT) of conventional flame (see table). The spectra of cool flames consist of several bands and are dominated by the blue and violet ones – thus the flame usually appears pale blue. The blue component originates from the excited state of formaldehyde (CHO*) which is formed via chemical reactions in the flame: A cool flame does not start instantaneously after the threshold pressure and temperature are applied, but has an induction time. The induction time shortens and the glow intensity increases with increasing pressure. With increasing temperature, the intensity may decrease because of the disappearance of peroxy radicals required for the above glow reactions. Self-sustained, stable cool diffusion flames have been established by adding ozone into oxidizer stream. Mechanism Whereas in a hot flame molecules break down to small fragments and combine with oxygen producing carbon dioxide (i.e., burn), in a cool flame, the fragments are relatively large and easily recombine with each other. Therefore, much less heat, light and carbon dioxide is released; the premixed combustion process is oscillatory and can sustain for a long time. A typical temperature increase upon ignition of a cool flame is a few tens of degrees Celsius whereas it is on the order of for a hot flame. Most experimental data can be explained by the model which considers cool flame just as a slow chemical reaction where the rate of heat generation is higher than the heat loss. 
This model also explains the oscillatory character of the cool premixed flame: the reaction accelerates as it produces more heat until the heat loss becomes appreciable and temporarily quenches the process. Cool diffusion flames Unlike a cool premixed flame, a cool diffusion flame (CDF) burns in the presence of an equivalence ratio gradient. CDFs were first observed in 2012 in droplet experiments aboard the International Space Station. Since then CDFs have been observed in microgravity spherical flames burning droplets and gases and in normal gravity counterflow and stratified flames. Applications Cool flames may contribute to engine knock – the undesirable, erratic, and noisy combustion of low-octane fuels in internal combustion engines. In a normal spark-ignition engine, the hot premixed flame front travels smoothly in the combustion chamber from the spark plug, compressing the fuel/air mixture ahead. However, the concomitant increase in pressure and temperature may produce a cool flame in the last unburned fuel-air mixture (the so-called end gasses) and participate in the autoignition of the end gasses. This sudden, localized heat release generates a shock wave which travels through the combustion chamber, with its sudden pressure rise causing an audible knocking sound. Worse, the shock wave disrupts the thermal boundary layer on the piston surface, causing overheating and eventual melting. The output power decreases and, unless the throttle (or load) is cut off quickly, the engine can be damaged as described in a few minutes. The sensitivity of a fuel to a cool-flame ignition strongly depends on the temperature, pressure and composition. The cool flame initiation of the knock process is likely only in highly throttled operating conditions, since cool flames are observed at low pressures. Under normal operating conditions, autoignition occurs without being triggered by a cool flame. Whereas the temperature and pressure of the combustion are largely determined by the engine, the composition can be controlled by various antiknock additives. The latter mainly aim at removing the radicals (such as CH2O* mentioned above) thereby suppressing the major source of the cool flame. See also Fire Flame Plasma (physics) References Further reading - an explanation of the oscillatory nature of cool flame. Fire
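The heat-balance picture just described (heat release accelerating the reaction until heat loss quenches it) can be explored with a toy numerical model of the Sal'nikov type: constant fuel supply, one exothermic Arrhenius step, and Newtonian heat loss. Every number below is an arbitrary illustrative value, not measured combustion data.

import numpy as np

# Toy thermokinetic model (illustrative parameters only): fuel is supplied at
# a constant rate and consumed by a single exothermic step with Arrhenius
# temperature dependence; heat is lost to the surroundings by Newtonian cooling.
supply = 0.02      # fuel supply rate (arbitrary units / s)
A      = 5.0e4     # pre-exponential factor (1/s)
Ea_R   = 5000.0    # activation temperature Ea/R (K)
q      = 8000.0    # temperature rise per unit of fuel burnt (K)
h      = 1.0       # cooling rate constant (1/s)
T_amb  = 600.0     # ambient / wall temperature (K)

def step(c, T, dt):
    k = A * np.exp(-Ea_R / T)          # Arrhenius rate constant
    dc = supply - k * c                # fuel balance
    dT = q * k * c - h * (T - T_amb)   # heat release vs. heat loss
    return c + dc * dt, T + dT * dt

c, T = 0.0, T_amb
history = []
dt = 1e-3
for _ in range(int(100 / dt)):         # integrate 100 s with a simple Euler scheme
    c, T = step(c, T, dt)
    history.append(T)

late = np.array(history[len(history) // 2:])
print(f"temperature range in second half of run: {late.min():.0f} K to {late.max():.0f} K")
# A wide range indicates repeated ignition/quench cycles of the kind described
# above; a narrow range means that, for these parameters, the balance between
# heat release and heat loss settles to a steady low-temperature glow.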
Cool flame
[ "Chemistry" ]
1,179
[ "Combustion", "Fire" ]
27,347,213
https://en.wikipedia.org/wiki/Radical%20cyclization
Radical cyclization reactions are organic chemical transformations that yield cyclic products through radical intermediates. They usually proceed in three basic steps: selective radical generation, radical cyclization, and conversion of the cyclized radical to product. Introduction Radical cyclization reactions produce mono- or polycyclic products through the action of radical intermediates. Because they are intramolecular transformations, they are often very rapid and selective. Selective radical generation can be achieved at carbons bound to a variety of functional groups, and reagents used to effect radical generation are numerous. The radical cyclization step usually involves the attack of a radical on a multiple bond. After this step occurs, the resulting cyclized radicals are quenched through the action of a radical scavenger, a fragmentation process, or an electron-transfer reaction. Five- and six-membered rings are the most common products; formation of smaller and larger rings is rarely observed. Three conditions must be met for an efficient radical cyclization to take place: A method must be available to generate a radical selectively on the substrate. Radical cyclization must be faster than trapping of the initially formed radical. All steps must be faster than undesired side reactions such as radical recombination or reaction with solvent. Advantages: because radical intermediates are not charged species, reaction conditions are often mild and functional group tolerance is high and orthogonal to that of many polar processes. Reactions can be carried out in a variety of solvents (including arenes, alcohols, and water), as long as the solvent does not have a weak bond that can undergo abstraction, and products are often synthetically useful compounds that can be carried on using existing functionality or groups introduced during radical trapping. Disadvantages: the relative rates of the various stages of radical cyclization reactions (and any side reactions) must be carefully controlled so that cyclization and trapping of the cyclized radical is favored. Side reactions are sometimes a problem, and cyclization is especially slow for small and large rings (although macrocyclizations, which resemble intermolecular radical reactions, are often high yielding). Mechanism and stereochemistry Prevailing mechanism Because many reagents exist for radical generation and trapping, establishing a single prevailing mechanism is not possible. However, once a radical is generated, it can react with multiple bonds in an intramolecular fashion to yield cyclized radical intermediates. The two ends of the multiple bond constitute two possible sites of reaction. If the radical in the resulting intermediate ends up outside of the ring, the attack is termed "exo"; if it ends up inside the newly formed ring, the attack is called "endo." In many cases, exo cyclization is favored over endo cyclization (macrocyclizations constitute the major exception to this rule). 5-hexenyl radicals are the most synthetically useful intermediates for radical cyclizations, because cyclization is extremely rapid and exo selective. Although the exo radical is less thermodynamically stable than the endo radical, the more rapid exo cyclization is rationalized by better orbital overlap in the chair-like exo transition state (see below). (1) Substituents that affect the stability of these transition states can have a profound effect on the site selectivity of the reaction. 
Carbonyl substituents at the 2-position, for instance, encourage 6-endo ring closure. Alkyl substituents at positions 2, 3, 4, or 6 enhance selectivity for 5-exo closure. Cyclization of the homologous 6-heptenyl radical is still selective, but is much slower—as a result, competitive side reactions are an important problem when these intermediates are involved. Additionally, 1,5-shifts can yield stabilized allylic radicals at comparable rates in these systems. In 6-heptenyl radical substrates, polarization of the reactive double bond with electron-withdrawing functional groups is often necessary to achieve high yields. Stabilizing the initially formed radical with electron-withdrawing groups provides preferential access to the more stable 6-endo cyclization products. Cyclization reactions of vinyl, aryl, and acyl radicals are also known. Under conditions of kinetic control, 5-exo cyclization takes place preferentially. However, low concentrations of a radical scavenger establish thermodynamic control and provide access to 6-endo products—not via 6-endo cyclization, but by 5-exo cyclization followed by 3-exo closure and subsequent fragmentation (the Dowd–Beckwith rearrangement). At high scavenger concentrations, by contrast, the exo product is rapidly trapped, preventing its subsequent rearrangement to the endo product. Aryl radicals exhibit similar reactivity. Cyclization can involve heteroatom-containing multiple bonds such as nitriles, oximes, and carbonyls. Attack at the carbon atom of the multiple bond is almost always observed. In the latter case attack is reversible; however, alkoxy radicals can be trapped using a stannane trapping agent. Stereoselectivity The diastereoselectivity of radical cyclizations is often high. In most all-carbon cases, selectivity can be rationalized according to Beckwith's guidelines, which invoke the reactant-like, exo transition state shown above. Placing substituents in pseudoequatorial positions in the transition state leads to cis products from simple secondary radicals. Introducing polar substituents can favor trans products due to steric or electronic repulsion between the polar groups. In more complex systems, the development of transition-state models requires consideration of factors such as allylic strain and boat-like transition states. Chiral auxiliaries have been used in enantioselective radical cyclizations with limited success. Small energy differences between early transition states constitute a profound barrier to success in this arena. In one reported example, diastereoselectivity (for both configurations of the left-hand stereocenter) is low and enantioselectivity is only moderate. Substrates with stereocenters between the radical and the multiple bond are often highly stereoselective. Radical cyclizations to form polycyclic products often take advantage of this property. Scope and limitations Radical generation methods The use of metal hydrides (tin, silicon, and mercury hydrides) is common in radical cyclization reactions; the primary limitation of this method is the possibility of reduction of the initially formed radical by H–M. Fragmentation methods avoid this problem by incorporating the chain-transfer reagent into the substrate itself—the active chain-carrying radical is not released until after cyclization has taken place. The products of fragmentation methods retain a double bond as a result, and extra synthetic steps are usually required to incorporate the chain-carrying group.
Atom-transfer methods rely on the movement of an atom from the acyclic starting material to the cyclic radical to generate the product. These methods use catalytic amounts of weak reagents, preventing problems associated with the presence of strong reducing agents (such as tin hydride). Hydrogen- and halogen-transfer processes are known; the latter tend to be more synthetically useful. (6) Oxidative and reductive cyclization methods also exist. These procedures require fairly electrophilic and nucleophilic radicals, respectively, to proceed effectively. Cyclic radicals are either oxidized or reduced and quenched with either external or internal nucleophiles or electrophiles, respectively. Ring sizes In general, radical cyclization to produce small rings is difficult. However, it is possible to trap the cyclized radical before re-opening. This process can be facilitated by fragmentation (see the three-membered case below) or by stabilization of the cyclized radical (see the four-membered case). Five- and six-membered rings are the most common sizes produced by radical cyclization. (7) Polycycles and macrocycles can also be formed using radical cyclization reactions. In the former case, rings can be pre-formed and a single ring closed with radical cyclization, or multiple rings can be formed in a tandem process (as below). Macrocyclizations, which lack the FMO requirement of cyclizations of smaller substrates, have the unique property of exhibiting endo selectivity. (8) Comparison with other methods In comparison to cationic cyclizations, radical cyclizations avoid issues associated with Wagner-Meerwein rearrangements, do not require strongly acidic conditions, and can be kinetically controlled. Cationic cyclizations are usually thermodynamically controlled. Radical cyclizations are much faster than analogous anionic cyclizations, and avoid β-elimination side reactions. Anionic Michael-type cyclization is an alternative to radical cyclization of activated olefins. Metal-catalyzed cyclization reactions usually require mildly basic conditions, and substrates must be chosen to avoid β-hydride elimination. The primary limitation of radical cyclizations with respect to these other methods is the potential for radical side reactions. Experimental conditions and procedure Typical conditions Radical reactions must be carried out under inert atmosphere as dioxygen is a triplet radical which will intercept radical intermediates. Because the relative rates of a number of processes are important to the reaction, concentrations must be carefully adjusted to optimize reaction conditions. Reactions are generally carried out in solvents whose bonds have high bond dissociation energies (BDEs), including benzene, methanol or benzotrifluoride. Even aqueous conditions are tolerated, since water has a strong O-H bond with a BDE of 494 kJ/mol. This is in contrast to many polar processes, where hydroxylic solvents (or polar X-H bonds in the substrate itself) may not be tolerated due to the nucleophilicity or acidity of the functional group. Example procedure (9) A mixture of bromo acetal 1 (549 mg, 1.78 mmol), AIBN (30.3 mg, 0.185 mmol), and Bu3SnH (0.65 mL, 2.42 mmol) in dry benzene (12 mL) was heated under reflux for 1 hour and then evaporated under reduced pressure. Silicagel column chromatography of the crude product with hexane–EtOAc (92:8) as eluant gave tetrahydropyran 2 (395 mg, 97%) as an oily mixture of two diastereomers. 
(c 0.43, CHCl3); IR (CHCl3): 1732 cm–1; 1H NMR (CDCl3) δ 4.77–4.89 (m, 0.6H), 4.66–4.69 (m, 0.4H), 3.40–4.44 (m, 4H), 3.68 (s, 3H), 2.61 (dd, J = 15.2, 4.2 Hz, 1H), 2.51 (dd, J = 15.2, 3.8 Hz, 1H), 0.73–1.06 (m, 3H); mass spectrum: m/z 215 (M+–Me); Anal. Calcd for C12H22O4: C, 62.6; H, 9.65. Found: C, 62.6; H, 9.7. References Organic reactions
Radical cyclization
[ "Chemistry" ]
2,446
[ "Organic reactions" ]
27,351,235
https://en.wikipedia.org/wiki/Stamped%20circuit%20board
A stamped circuit board (SCB) is used to mechanically support and electrically connect electronic components using conductive pathways, tracks or traces etched from copper sheets laminated onto a non-conductive substrate. This technology is used for small circuits, for instance in the production of LEDs. Similar to printed circuit boards, this layer structure may comprise glass-fibre reinforced epoxy resin and copper. For LED substrates, three variations are possible: the PCB (printed circuit board), plastic-injection molding, and the SCB. Using SCB technology, it is possible to structure and laminate widely differing material combinations in a reel-to-reel production process. Because the layers are structured separately, improved design concepts can be implemented. Consequently, heat is dissipated from within the chip considerably faster. Production The plastic and the metal are initially processed on separate reels: each material is individually structured by stamping ("brought into form") according to the requirements, and the layers are then merged. Advantages The engineering, or rather the choice, of substrate depends on the particular application, the module design and substrate assembly, and the material and its thickness. Given these parameters, SCB technology can achieve good thermal management, because rapid heat dissipation from beneath the chip means a longer service life for the system. Furthermore, SCB technology allows the material to be chosen to match the pertinent requirements and the design to be optimized for a "perfect fit". References Electrical engineering Electronics manufacturing Electronic engineering
Stamped circuit board
[ "Technology", "Engineering" ]
330
[ "Electrical engineering", "Electronic engineering", "Electronics manufacturing", "Computer engineering" ]
27,353,323
https://en.wikipedia.org/wiki/NeuroML
NeuroML is an XML (Extensible Markup Language) based model description language that aims to provide a common data format for defining and exchanging models in computational neuroscience. The focus of NeuroML is on models which are based on the biophysical and anatomical properties of real neurons. History The idea of creating NeuroML as a language for describing neuroscience models was first introduced by Goddard et al. (2001) following meetings in Edinburgh where initial templates for the language structures were discussed. This initial proposal was based on general purpose structures proposed by Gardner et al. (2001). At that time, the concept of NeuroML was closely linked with the idea of developing a software architecture in which a base application loads a range of plug-in components to handle different aspects of a simulation problem. Neosim (2003) was developed based on this goal, and early NeuroML development was closely aligned to this approach. Along with creating Neosim, Howell and Cannon developed a software library, the NeuroML Development Kit (NDK), to simplify the process of serializing models in XML. The NeuroML Development Kit implemented a particular dialect of XML, including the "listOfXXX" structure, which also found its way into SBML (Systems Biology Markup Language), but did not define any particular structures at the model description level. Instead, developers of plug-ins for Neosim were free to invent their own structures and serialize them via the NDK, in the hope that some consensus would emerge around the most useful ones. In practice, few developers beyond the Edinburgh group developed or used such structures and the resulting XML was too application specific to gain wider adoption. The Neosim project ended in 2005. Based on the ideas in Goddard et al. (2001) and discussions with the Edinburgh group, Sharon Crook began a collaborative effort to develop a language for describing neuronal morphologies in XML called MorphML. From the beginning, the idea behind MorphML was to develop a format for describing morphological structures that would include all of the necessary components to serve as a common data format with the added advantages of XML. At the same time, Padraig Gleeson and Angus Silver were developing neuroConstruct for generating neuronal simulations for the NEURON and GENESIS simulators. At that time, neuroConstruct utilized an internal simulator-independent representation for morphologies, channel and networks. It was agreed that these efforts should be merged under the banner of NeuroML, and the current structure of NeuroML was created. The schema was divided into levels (e.g. MorphML, ChannelML, and NetworkML) to allow different applications to support different part of the language. Since 2006 the XML Schema files for this version of the standard have been available from the NeuroML development site. The language Aims The main aims of the NeuroML initiative are to: To create specifications for a language (in XML) to describe the biophysics, anatomy and network architecture of neuronal systems at multiple scales To facilitate the exchange of complex neuronal network models between researchers, allowing for greater transparency and accessibility of models To promote software tools supporting NeuroML and to support the development of new software and databases To encourage researchers who create models within the scope of NeuroML to exchange and publish their models in this format. 
Structure NeuroML is focused on biophysically and anatomically detailed models, i.e. models incorporating real neuronal morphologies and membrane conductances (conductance-based models), and network models based on known anatomical connectivity. The NeuroML structure is composed of Levels, where each Level deals with a particular biophysical scale. The modular nature of the specifications makes them easier to develop, understand, and use since one can focus on one module at a time; however, the modules are designed to fit together seamlessly. There are currently three Levels of NeuroML defined: Level 1 focuses on the anatomical aspects of cells and consists of a schema for Metadata and the main MorphML schema. Tools which model detailed neuronal morphologies (such as NeuronLand) can use the information contained in this Level. Level 2 describes the biophysical properties of cells and also the properties of channel and synaptic mechanisms using ChannelML. Software which simulates neuronal spiking behaviour (such as NEURON and MOOSE) can use this Level of model description. Level 3 describes the positions of cells in space and the network connectivity. This kind of information in NetworkML can be used by software (such as CX3D and PCSIM) to exchange details on network architecture. Level 3 files containing cell morphology and connectivity can also be used by applications such as neuroConstruct for reproducing and analysing networks of conductance-based cell models. Current schemas in readable form are available on the NeuroML specifications page. Application support for NeuroML A list of software packages which support all or part of NeuroML is available on the NeuroML website. Community NeuroML is an international, free and open community effort. The NeuroML Team implements the NeuroML specifications, maintains the website and the validator, organizes annual workshops and other events, and manages specific funding for coordinating the further development of NeuroML. Version 2.0 of the NeuroML language is being developed by the Specification Committees. NeuroML also participates in the International Neuroinformatics Coordinating Facility Program on Multiscale Modeling. See also OpenXDF References External links neuroml.org XML-based standards Neuroinformatics
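As a rough illustration of the kind of declarative, Level 1-style description discussed above, the sketch below uses Python's standard library to assemble a tiny morphology document with two connected segments. The element and attribute names are simplified stand-ins chosen for readability, not the exact tags of the published NeuroML schema, so real tooling should be pointed at the official specifications instead.

```python
import xml.etree.ElementTree as ET

# Build a minimal, schematic "cell with two segments" document.
# Tag and attribute names are illustrative, not the official NeuroML schema.
root = ET.Element("neuroml", {"id": "ExampleMorphology"})
cell = ET.SubElement(root, "cell", {"id": "SimpleCell"})
morphology = ET.SubElement(cell, "morphology", {"id": "morph1"})

soma = ET.SubElement(morphology, "segment", {"id": "0", "name": "soma"})
ET.SubElement(soma, "proximal", {"x": "0", "y": "0", "z": "0", "diameter": "20"})
ET.SubElement(soma, "distal", {"x": "20", "y": "0", "z": "0", "diameter": "20"})

dendrite = ET.SubElement(morphology, "segment",
                         {"id": "1", "name": "dendrite", "parent": "0"})
ET.SubElement(dendrite, "proximal", {"x": "20", "y": "0", "z": "0", "diameter": "3"})
ET.SubElement(dendrite, "distal", {"x": "120", "y": "0", "z": "0", "diameter": "3"})

ET.indent(root)  # pretty-print (Python 3.9+)
print(ET.tostring(root, encoding="unicode"))
```

The point of the exercise is only to show why an XML description of this kind is convenient to exchange: positions, diameters, and parent–child relationships are explicit, so any tool that understands the schema can reconstruct the same morphology.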
NeuroML
[ "Technology", "Biology" ]
1,166
[ "Bioinformatics", "Computer standards", "XML-based standards", "Neuroinformatics" ]
27,353,663
https://en.wikipedia.org/wiki/List%20of%20software%20for%20Monte%20Carlo%20molecular%20modeling
This is a list of computer programs that use Monte Carlo methods for molecular modeling. Abalone – classical, Hybrid MC; BOSS – classical; CASINO – quantum; Cassandra – classical; CP2K; FEASST – classical; GOMC – classical; Internal Coordinate Mechanics (ICM, by MolSoft) – classical; MacroModel – classical; Materials Studio – classical; ms2 – classical; RASPA – classical; QMCPACK – quantum; Spartan – classical; Tinker – classical; TransRot – classical; Towhee – classical. See also List of quantum chemistry and solid state physics software Comparison of software for molecular mechanics modeling Comparison of nucleic acid simulation software Molecular design software Molecule editor www.molsoft.com References Molecular modelling software Monte Carlo molecular modelling software
List of software for Monte Carlo molecular modeling
[ "Chemistry" ]
126
[ "Molecular modelling", "Molecular modelling software", "Computational chemistry software" ]
26,978,338
https://en.wikipedia.org/wiki/Gale%E2%80%93Shapley%20algorithm
In mathematics, economics, and computer science, the Gale–Shapley algorithm (also known as the deferred acceptance algorithm, propose-and-reject algorithm, or Boston Pool algorithm) is an algorithm for finding a solution to the stable matching problem. It is named for David Gale and Lloyd Shapley, who published it in 1962, although it had been used for the National Resident Matching Program since the early 1950s. Shapley and Alvin E. Roth (who pointed out its prior application) won the 2012 Nobel Prize in Economics for work including this algorithm. The stable matching problem seeks to pair up equal numbers of participants of two types, using preferences from each participant. The pairing must be stable: no pair of unmatched participants should mutually prefer each other to their assigned match. In each round of the Gale–Shapley algorithm, unmatched participants of one type propose a match to the next participant on their preference list. Each proposal is accepted if its recipient prefers it to their current match. The resulting procedure is a truthful mechanism from the point of view of the proposing participants, who receive their most-preferred pairing consistent with stability. In contrast, the recipients of proposals receive their least-preferred pairing. The algorithm can be implemented to run in time quadratic in the number of participants, and linear in the size of the input to the algorithm. The stable matching problem, and the Gale–Shapley algorithm solving it, have widespread real-world applications, including matching American medical students to residencies and French university applicants to schools. For more, see . Background The stable matching problem, in its most basic form, takes as input equal numbers of two types of participants ( job applicants and employers, for example), and an ordering for each participant giving their preference for whom to be matched to among the participants of the other type. A matching pairs each participant of one type with a participant of the other type. A matching is not stable if: In other words, a matching is stable when there is no pair (A, B) where both participants prefer each other to their matched partners. If such a pair exists, the matching is not stable, in the sense that the members of this pair would prefer to leave the system and be matched to each other, possibly leaving other participants unmatched. A stable matching always exists, and the algorithmic problem solved by the Gale–Shapley algorithm is to find one. The stable matching problem has also been called the stable marriage problem, using a metaphor of marriage between men and women, and many sources describe the Gale–Shapley algorithm in terms of marriage proposals. However, this metaphor has been criticized as both sexist and unrealistic: the steps of the algorithm do not accurately reflect typical or even stereotypical human behavior. Solution In 1962, David Gale and Lloyd Shapley proved that, for any equal number of participants of each type, it is always possible to find a matching in which all pairs are stable. They presented an algorithm to do so. In 1984, Alvin E. Roth observed that essentially the same algorithm had already been in practical use since the early 1950s, as the "Boston Pool algorithm" used by the National Resident Matching Program. The Gale–Shapley algorithm involves a number of "rounds" (or "iterations"). 
In terms of job applicants and employers, it can be expressed as follows: In each round, one or more employers with open job positions each make a job offer to the applicant they prefer, among the ones they have not yet already made an offer to. Each applicant who has received an offer evaluates it against their current position (if they have one). If the applicant is not yet employed, or if they receive an offer from an employer they like better than their current employer, they accept the best new offer and become matched to the new employer (possibly leaving a previous employer with an open position). Otherwise, they reject the new offer. This process is repeated until all employers have either filled their positions or exhausted their lists of applicants. Implementation details and time analysis To implement the algorithm efficiently, each employer needs to be able to find its next applicant quickly, and each applicant needs to be able to compare employers quickly. One way to do this is to number each applicant and each employer from 1 to , where is the number of employers and applicants, and to store the following data structures: A set of employers with unfilled positions A one-dimensional array indexed by employers, specifying the preference index of the next applicant to whom the employer would send an offer, initially 1 for each employer A one-dimensional array indexed by applicants, specifying their current employer, initially a sentinel value such as 0 indicating they are unemployed A two-dimensional array indexed by an applicant and an employer, specifying the position of that employer in the applicant's preference list A two-dimensional array indexed by an employer and a number from 1 to , naming the applicant who is each employer's preference Setting up these data structures takes time. With these structures it is possible to find an employer with an unfilled position, make an offer from that employer to their next applicant, determine whether the offer is accepted, and update all of the data structures to reflect the results of these steps, in constant time per offer. Once the algorithm terminates, the resulting matching can be read off from the array of employers for each applicant. There can be offers before each employer runs out of offers to make, so the total time is . Although this time bound is quadratic in the number of participants, it may be considered as linear time when measured in terms of the size of the input, two matrices of preferences of size . Correctness guarantees This algorithm guarantees that: Everyone gets matched At the end, there cannot be an applicant and employer both unmatched. An employer left unmatched at the end of the process must have made an offer to all applicants. But an applicant who receives an offer remains employed for the rest of the process, so there can be no unemployed applicants. Since the numbers of applicants and job openings are equal, there can also be no open positions remaining. The matches are stable No applicant X and employer Y can prefer each other over their final match. If Y makes an offer to X, then X would only reject Y after receiving an even better offer, so X cannot prefer Y to their final match. And if Y stops making offers before reaching X in their preference list, Y cannot prefer X to their final match. In either case, X and Y do not form an unstable pair. Optimality of the solution There may be many stable matchings for the same system of preferences. 
This raises the question: which matching is returned by the Gale–Shapley algorithm? Is it the matching better for applicants, for employers, or an intermediate one? As it turns out, the Gale–Shapley algorithm in which employers make offers to applicants always yields the same stable matching (regardless of the order in which job offers are made), and its choice is the stable matching that is the best for all employers and worst for all applicants among all stable matchings. In a reversed form of the algorithm, each round consists of unemployed applicants writing a single job application to their preferred employer, and the employer either accepting the application (possibly firing an existing employee to do so) or rejecting it. This produces a matching that is best for all applicants and worst for all employers among all stable matchings. These two matchings are the top and bottom elements of the lattice of stable matchings. In both forms of the algorithm, one group of participants proposes matches, and the other group decides whether to accept or reject each proposal. The matching is always best for the group that makes the propositions, and worst for the group that decides how to handle each proposal. Strategic considerations The Gale–Shapley algorithm is a truthful mechanism from the point of view of the proposing side. This means that no proposer can get a better matching by misrepresenting their preferences. Moreover, the Gale–Shapley algorithm is even group-strategy proof for proposers, i.e., no coalition of proposers can coordinate a misrepresentation of their preferences such that all proposers in the coalition are strictly better-off. However, it is possible for some coalition to misrepresent their preferences such that some proposers are better-off, and the others retain the same partner. The Gale–Shapley algorithm is non-truthful for the non-proposing participants. Each may be able to misrepresent their preferences and get a better match. A particular form of manipulation is truncation: presenting only the topmost alternatives, implying that the bottom alternatives are not acceptable at all. Under complete information, it is sufficient to consider misrepresentations of the form of truncation strategies. However, successful misrepresentation requires knowledge of the other agents' preferences; without such knowledge, misrepresentation can give an agent a worse assignment. Moreover, even after an agent sees the final matching, they cannot deduce a strategy that would guarantee a better outcome in hindsight. This makes the Gale–Shapley algorithm a regret-free truth-telling mechanism. Moreover, in the Gale–Shapley algorithm, truth-telling is the only strategy that guarantees no regret. The Gale–Shapley algorithm is the only regret-free mechanism in the class of quantile-stable matching mechanisms. Generalizations In their original work on the problem, Gale and Shapley considered a more general form of the stable matching problem, suitable for university and college admission. In this problem, each university or college may have its own quota, a target number of students to admit, and the number of students applying for admission may differ from the sum of the quotas, necessarily causing either some students to remain unmatched or some quotas to remain unfilled. 
Additionally, preference lists may be incomplete: if a university omits a student from their list, it means they would prefer to leave their quota unfilled than to admit that student, and if a student omits a university from their list, it means they would prefer to remain unadmitted than to go to that university. Nevertheless, it is possible to define stable matchings for this more general problem, to prove that stable matchings always exist, and to apply the same algorithm to find one. A form of the Gale–Shapley algorithm, performed through a real-world protocol rather than calculated on computers, has been used for coordinating higher education admissions in France since 2018, through the Parcoursup system. In this process, over the course of the summer before the start of school, applicants receive offers of admission, and must choose in each round of the process whether to accept any new offer (and if so turn down any previous offer that they accepted). The method is complicated by additional constraints that make the problem it solves not exactly the stable matching problem. It has the advantage that the students do not need to commit to their preferences at the start of the process, but rather can determine their own preferences as the algorithm progresses, on the basis of head-to-head comparisons between offers that they have received. It is important that this process performs a small number of rounds of proposals, so that it terminates before the start date of the schools, but although high numbers of rounds can occur in theory, they tend not to occur in practice. It has been shown theoretically that, if the Gale–Shapley algorithm needs to be terminated early, after a small number of rounds in which every vacant position makes a new offer, it nevertheless produces matchings that have a high ratio of matched participants to unstable pairs. Recognition Shapley and Roth were awarded the 2012 Nobel Memorial Prize in Economic Sciences "for the theory of stable allocations and the practice of market design". Gale had died in 2008, making him ineligible for the prize. See also Deferred-acceptance auction Stable roommates problem References Stable matching Lloyd Shapley Combinatorial algorithms
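A minimal Python sketch of the employer-proposing deferred-acceptance procedure described above is given below. The three employers and applicants and their preference lists are hypothetical, and the function assumes equal numbers of participants with complete preference lists.

```python
def gale_shapley(employer_prefs, applicant_prefs):
    """Employer-proposing deferred acceptance.

    employer_prefs[e]  : list of applicants, most preferred first
    applicant_prefs[a] : list of employers, most preferred first
    Returns a dict mapping applicant -> employer.
    """
    # rank[a][e] = position of employer e in applicant a's list (lower = preferred)
    rank = {a: {e: i for i, e in enumerate(prefs)}
            for a, prefs in applicant_prefs.items()}
    next_choice = {e: 0 for e in employer_prefs}   # next applicant to offer to
    match = {}                                     # applicant -> employer
    free = list(employer_prefs)                    # employers with an unfilled position

    while free:
        e = free.pop()
        a = employer_prefs[e][next_choice[e]]      # best applicant not yet offered to
        next_choice[e] += 1
        current = match.get(a)
        if current is None:
            match[a] = e                           # applicant accepts a first offer
        elif rank[a][e] < rank[a][current]:
            match[a] = e                           # applicant trades up
            free.append(current)                   # previous employer is free again
        else:
            free.append(e)                         # offer rejected; employer tries again
    return match

# Hypothetical example with three employers and three applicants.
employers = {"X": ["A", "B", "C"], "Y": ["B", "A", "C"], "Z": ["A", "B", "C"]}
applicants = {"A": ["Y", "X", "Z"], "B": ["X", "Y", "Z"], "C": ["X", "Y", "Z"]}
print(gale_shapley(employers, applicants))
```

Because each employer advances through its preference list at most once, the loop makes a number of offers that is at most quadratic in the number of participants, consistent with the time bound discussed above, and the result is the employer-optimal stable matching.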
Gale–Shapley algorithm
[ "Mathematics" ]
2,464
[ "Combinatorial algorithms", "Computational mathematics", "Combinatorics" ]
26,983,835
https://en.wikipedia.org/wiki/Electrically%20detected%20magnetic%20resonance
Electrically detected magnetic resonance (EDMR) is a materials characterisation technique that improves upon electron spin resonance. It involves measuring the change in electrical resistance of a sample when exposed to certain microwave frequencies. It can be used to identify very small numbers (down to a few hundred atoms) of impurities in semiconductors. Outline of technique To perform a pulsed EDMR experiment, the system is first initialised by placing it in a magnetic field. This orients the spins of the electrons occupying the donor and acceptor in the direction of the magnetic field. To study the donor, we apply a microwave pulse ("γ" in the diagram) at a resonant frequency of the donor. This flips the spin of the electron on the donor. The donor electron can then decay to the acceptor energy state (it was forbidden from doing that before it was flipped due to the Pauli exclusion principle) and from there to the valence band, where it recombines with a hole. With more recombination, there will be fewer conduction electrons in the conduction band and a corresponding increase in the resistance, which can be directly measured. Above-bandgap light is used throughout the experiment to ensure that there are many electrons in the conduction band. By scanning the frequency of the microwave pulse, we can find which frequencies are resonant, and with knowledge of the strength of the magnetic field, we can identify the donor's energy levels from the resonant frequency and knowledge of the Zeeman effect. The donor's energy levels act as a 'fingerprint' by which we can identify the donor and its local electronic environment. By changing the frequency slightly, we can study the acceptor instead. Recent developments EDMR has been demonstrated on a single electron from a quantum dot. Measurements of less than 100 donors and theoretical analyses of such a measurement have been published, relying on the Pb interface defect to act as the acceptor. References Quantum electronics Spectroscopy
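The last step described above—turning a resonant microwave frequency and a known magnetic field into a spin "fingerprint"—comes down to the electron Zeeman condition h·f = g·μB·B. The sketch below evaluates it for an assumed free-electron g-factor at a typical X-band field; real donors and defects have their own characteristic g-values and hyperfine structure, so the numbers are purely illustrative.

```python
# Electron Zeeman resonance condition: h * f = g * mu_B * B
PLANCK = 6.62607015e-34           # J s
BOHR_MAGNETON = 9.2740100783e-24  # J/T

def resonance_frequency_hz(g_factor, field_tesla):
    """Microwave frequency that matches the Zeeman splitting g * mu_B * B."""
    return g_factor * BOHR_MAGNETON * field_tesla / PLANCK

def g_factor_from_resonance(frequency_hz, field_tesla):
    """Invert the condition: infer the g-factor from a measured resonance."""
    return PLANCK * frequency_hz / (BOHR_MAGNETON * field_tesla)

# Assumed example: free-electron g-factor in a 0.35 T field (X-band regime).
f = resonance_frequency_hz(2.0023, 0.35)
print(f"resonance near {f / 1e9:.2f} GHz")            # roughly 9.8 GHz
print(f"g = {g_factor_from_resonance(f, 0.35):.4f}")  # recovers 2.0023
```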
Electrically detected magnetic resonance
[ "Physics", "Chemistry", "Materials_science" ]
406
[ "Molecular physics", "Spectrum (physical sciences)", "Quantum electronics", "Instrumental analysis", "Quantum mechanics", "Condensed matter physics", "Nanotechnology", "Spectroscopy" ]
26,984,136
https://en.wikipedia.org/wiki/Arruda%E2%80%93Boyce%20model
In continuum mechanics, an Arruda–Boyce model is a hyperelastic constitutive model used to describe the mechanical behavior of rubber and other polymeric substances. This model is based on the statistical mechanics of a material with a cubic representative volume element containing eight chains along the diagonal directions. The material is assumed to be incompressible. The model is named after Ellen Arruda and Mary Cunningham Boyce, who published it in 1993. The strain energy density function for the incompressible Arruda–Boyce model is given by where is the number of chain segments, is the Boltzmann constant, is the temperature in kelvins, is the number of chains in the network of a cross-linked polymer, where is the first invariant of the left Cauchy–Green deformation tensor, and is the inverse Langevin function which can be approximated by For small deformations the Arruda–Boyce model reduces to the Gaussian network based neo-Hookean solid model. It can be shown that the Gent model is a simple and accurate approximation of the Arruda–Boyce model. Alternative expressions for the Arruda–Boyce model An alternative form of the Arruda–Boyce model, using the first five terms of the inverse Langevin function, is where is a material constant. The quantity can also be interpreted as a measure of the limiting network stretch. If is the stretch at which the polymer chain network becomes locked, we can express the Arruda–Boyce strain energy density as We may alternatively express the Arruda–Boyce model in the form where and If the rubber is compressible, a dependence on can be introduced into the strain energy density; being the deformation gradient. Several possibilities exist, among which the Kaliske–Rothert extension has been found to be reasonably accurate. With that extension, the Arruda-Boyce strain energy density function can be expressed as where is a material constant and . For consistency with linear elasticity, we must have where is the bulk modulus. Consistency condition For the incompressible Arruda–Boyce model to be consistent with linear elasticity, with as the shear modulus of the material, the following condition has to be satisfied: From the Arruda–Boyce strain energy density function, we have, Therefore, at , Substituting in the values of leads to the consistency condition Stress-deformation relations The Cauchy stress for the incompressible Arruda–Boyce model is given by Uniaxial extension For uniaxial extension in the -direction, the principal stretches are . From incompressibility . Hence . Therefore, The left Cauchy–Green deformation tensor can then be expressed as If the directions of the principal stretches are oriented with the coordinate basis vectors, we have If , we have Therefore, The engineering strain is . The engineering stress is Equibiaxial extension For equibiaxial extension in the and directions, the principal stretches are . From incompressibility . Hence . Therefore, The left Cauchy–Green deformation tensor can then be expressed as If the directions of the principal stretches are oriented with the coordinate basis vectors, we have The engineering strain is . The engineering stress is Planar extension Planar extension tests are carried out on thin specimens which are constrained from deforming in one direction. For planar extension in the directions with the direction constrained, the principal stretches are . From incompressibility . Hence . 
Therefore, The left Cauchy–Green deformation tensor can then be expressed as If the directions of the principal stretches are oriented with the coordinate basis vectors, we have The engineering strain is . The engineering stress is Simple shear The deformation gradient for a simple shear deformation has the form where are reference orthonormal basis vectors in the plane of deformation and the shear deformation is given by In matrix form, the deformation gradient and the left Cauchy–Green deformation tensor may then be expressed as Therefore, and the Cauchy stress is given by Statistical mechanics of polymer deformation The Arruda–Boyce model is based on the statistical mechanics of polymer chains. In this approach, each macromolecule is described as a chain of segments, each of length . If we assume that the initial configuration of a chain can be described by a random walk, then the initial chain length is If we assume that one end of the chain is at the origin, then the probability that a block of size around the origin will contain the other end of the chain, , assuming a Gaussian probability density function, is The configurational entropy of a single chain from Boltzmann statistical mechanics is where is a constant. The total entropy in a network of chains is therefore where an affine deformation has been assumed. Therefore the strain energy of the deformed network is where is the temperature. Notes and references See also Hyperelastic material Rubber elasticity Finite strain theory Continuum mechanics Strain energy density function Neo-Hookean solid Mooney–Rivlin solid Yeoh (hyperelastic model) Gent (hyperelastic model) Continuum mechanics Elasticity (physics) Non-Newtonian fluids Rubber properties Solid mechanics Polymer chemistry
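The uniaxial-extension relations sketched above can be evaluated directly once a specific form of the strain energy is chosen. The snippet below uses the five-term series form of the model with the coefficients commonly quoted for it; the shear-modulus-like constant mu and the limiting stretch lam_m, as well as the numerical values, are illustrative assumptions rather than fitted material data.

```python
# Five-term Arruda-Boyce strain energy and uniaxial engineering stress.
# ALPHA holds the series coefficients commonly quoted for the five-term
# inverse-Langevin expansion; mu and lam_m (limiting network stretch) are
# material constants, and the numbers used below are illustrative only.
ALPHA = [1/2, 1/20, 11/1050, 19/7000, 519/673750]

def strain_energy(i1, mu, lam_m):
    """W(I1) for the incompressible five-term Arruda-Boyce model."""
    return mu * sum(a / lam_m**(2 * i) * (i1**(i + 1) - 3**(i + 1))
                    for i, a in enumerate(ALPHA))

def dW_dI1(i1, mu, lam_m):
    """Derivative of the strain energy with respect to the first invariant."""
    return mu * sum((i + 1) * a / lam_m**(2 * i) * i1**i
                    for i, a in enumerate(ALPHA))

def uniaxial_engineering_stress(stretch, mu, lam_m):
    """P = dW/dlambda for incompressible uniaxial extension, I1 = l^2 + 2/l."""
    i1 = stretch**2 + 2.0 / stretch
    return dW_dI1(i1, mu, lam_m) * (2.0 * stretch - 2.0 / stretch**2)

# Illustrative constants: mu = 1 MPa, limiting stretch lam_m = 3.
for lam in (1.0, 1.5, 2.0, 2.5):
    p = uniaxial_engineering_stress(lam, mu=1.0e6, lam_m=3.0)
    print(f"stretch {lam:.1f}: engineering stress {p / 1e6:.3f} MPa")
```

At a stretch of 1 the stress is exactly zero, and as the stretch approaches the limiting value the series terms stiffen the response, which is the qualitative behaviour the model is designed to capture.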
Arruda–Boyce model
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,066
[ "Physical phenomena", "Solid mechanics", "Continuum mechanics", "Elasticity (physics)", "Deformation (mechanics)", "Classical mechanics", "Materials science", "Mechanics", "Polymer chemistry", "Physical properties" ]
26,984,165
https://en.wikipedia.org/wiki/Antiroll%20tanks
Antiroll tanks are tanks fitted onto ships in order to improve the ship's response to roll motion. Fitted with baffles intended to slow the rate of water transfer from the port side of the tank to the starboard side and the reverse, the tanks are designed such that a larger amount of water is trapped on the higher side of the vessel. This is intended to reduce the roll period of the hull by acting in opposition to the free surface effect. They can be broadly classified into active and passive antiroll tanks. Passive antiroll tanks Free surface tanks A single partially filled tank that extends across the full breadth of the vessel. Its shape, size and internal baffles allow the liquid inside to slosh from side to side in response to the roll motion of the ship. The phasing of the roll moments acting on the ship and the resultant liquid motion will be such that it reduces the roll motion. This type of tank was first investigated by William Froude, but did not receive much attention until the 1950s when it was revived and used in many naval vessels. They have the added advantage that it is possible to vary tank natural frequency by changes in water level and thus accommodate changes in a ship's metacentric height. Free surface tanks are commonly referred to as "flume" tanks. U-tube tanks The use of these tanks was pioneered by Herr H. Frahm in Germany at the start of the 20th century and they are often referred to as Frahm tanks. These partially filled tanks consist of two wing tanks connected at the bottom by a substantial crossover duct. The air columns above the liquid in the two tanks are also connected by a duct. As in the free surface tanks, as the ship begins to roll the fluid flows from wing tank to wing tank causing a time varying roll moment to the ship and with careful design this roll moment is of correct phasing to reduce the roll motion of the ship. They do not restrict fore and aft passage as space above and below the water-crossover duct is available for other purposes. External stabilizer tanks This was another concept introduced by Frahm and used in several ships in the early 1900s. In this concept the two wing tanks are connected only by an air duct at the top. Water flows in and out of each tank via an opening in the hull to the sea. This eliminates the need for a crossover duct as in the other designs, but has its own set of disadvantages. This design promoted corrosion of the tanks due to the explicit interaction with sea water. The holes on the hull cause resistance to forward motion. The force required to accelerate sea water outside the ship (which is initially at rest) to the speed of the ship as it enters the ship is a substantial drag component (momentum drag) as its magnitude increases with the square of ship speed. More recently, a variation of these tanks has been used in oil drilling rig applications where forward motion is of little relevance. Controlled passive antiroll tanks Active u-tube tanks This is similar to a U-tube tank but the water crossover duct is much larger and the air crossover contains a servo-controlled valve system. Since this valve controls the flow of air very little power is required. When the valve is closed, passage of air from one tank to the other is prevented and the resulting compression of air in the tank prevents flow of water also. When the valve opens, free movement of water and air is possible. Active antiroll tanks The border between controlled-passive and active stabilisation is not that distinct. 
Active stabilisation generally implies that the system requires the use of machinery of significant power and the system must be much more effective in reducing roll in order to justify this high cost. Active tank stabilizer This concept utilises an axial flow pump to force the water from one side of the ship to other rather than allowing it to slosh as in passive systems. Webster (1967) studied the design of such a tank in detail. The main disadvantage to this is that when the pump is operated there is a time lag for a sizeable amount of fluid to arrive at a tank, thus limiting instant roll stabilization. Hence, compared to fin stabiliser systems, this is highly inefficient. Active tank stabilizer with energy recovery Instead of consuming energy to control the flow inside the tank, this concept utilises a water turbine to produce electricity using the water that sloshes into the tank as in passive systems. The main advantages to this are operating cost reduction by replacing part of fuel consumption and control of water flow without any mechanical mobile device. See also Slosh dynamics References Principles of Naval Architecture Vol.III, SNAME, 1989, Pg: 127 External links https://web.archive.org/web/20100917163912/http://allatsea.net/article/February_2007/Anti-Roll_Tanks_-_A_Simple_Way_to_Stabilize http://www.hoppe-marine.com/ http://www.geps-techno.com/ Watercraft components Control devices
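To give a feel for the tuning problem behind the passive designs described above, the sketch below uses the textbook result for an idealized, frictionless U-tube of uniform cross-section, whose liquid column of total length L oscillates with angular frequency sqrt(2g/L). Real Frahm tanks have restricted crossover ducts, damping, and non-uniform sections, so this is only a first-order estimate of how geometry sets the tank period relative to the ship's roll period.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def utube_period_s(column_length_m):
    """Natural period of an idealized uniform, frictionless U-tube liquid column."""
    omega = math.sqrt(2.0 * G / column_length_m)  # rad/s
    return 2.0 * math.pi / omega

def column_length_for_period(period_s):
    """Invert the relation: column length needed to match a target roll period."""
    return 2.0 * G * (period_s / (2.0 * math.pi)) ** 2

print(f"20 m column -> period {utube_period_s(20.0):.1f} s")
print(f"10 s roll period -> idealized column length {column_length_for_period(10.0):.0f} m")
```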
Antiroll tanks
[ "Engineering" ]
1,037
[ "Control devices", "Control engineering" ]
1,326,107
https://en.wikipedia.org/wiki/Steady%20state
In systems theory, a system or a process is in a steady state if the variables (called state variables) which define the behavior of the system or the process are unchanging in time. In continuous time, this means that for those properties p of the system, the partial derivative with respect to time is zero and remains so: In discrete time, it means that the first difference of each property is zero and remains so: The concept of a steady state has relevance in many fields, in particular thermodynamics, economics, and engineering. If a system is in a steady state, then the recently observed behavior of the system will continue into the future. In stochastic systems, the probabilities that various states will be repeated will remain constant. For example, see for the derivation of the steady state. In many systems, a steady state is not achieved until some time after the system is started or initiated. This initial situation is often identified as a transient state, start-up or warm-up period. For example, while the flow of fluid through a tube or electricity through a network could be in a steady state because there is a constant flow of fluid or electricity, a tank or capacitor being drained or filled with fluid is a system in transient state, because its volume of fluid changes with time. Often, a steady state is approached asymptotically. An unstable system is one that diverges from the steady state. See for example Linear difference equation#Stability. In chemistry, a steady state is a more general situation than dynamic equilibrium. While a dynamic equilibrium occurs when two or more reversible processes occur at the same rate, and such a system can be said to be in a steady state, a system that is in a steady state may not necessarily be in a state of dynamic equilibrium, because some of the processes involved are not reversible. In other words, dynamic equilibrium is just one manifestation of a steady state. Applications Economics A steady state economy is an economy (especially a national economy but possibly that of a city, a region, or the world) of stable size featuring a stable population and stable consumption that remain at or below carrying capacity. In the economic growth model of Robert Solow and Trevor Swan, the steady state occurs when gross investment in physical capital equals depreciation and the economy reaches economic equilibrium, which may occur during a period of growth. Electrical engineering In electrical engineering and electronic engineering, steady state is an equilibrium condition of a circuit or network that occurs as the effects of transients are no longer important. Steady state is also used as an approximation in systems with on-going transient signals, such as audio systems, to allow simplified analysis of first order performance. Sinusoidal Steady State Analysis is a method for analyzing alternating current circuits using the same techniques as for solving DC circuits. The ability of an electrical machine or power system to regain its original/previous state is called Steady State Stability. The stability of a system refers to the ability of a system to return to its steady state when subjected to a disturbance. As mentioned before, power is generated by synchronous generators that operate in synchronism with the rest of the system. A generator is synchronized with a bus when both of them have same frequency, voltage and phase sequence. 
We can thus define the power system stability as the ability of the power system to return to steady state without losing synchronicity. Usually power system stability is categorized into steady state, transient and dynamic stability. Steady State Stability studies are restricted to small and gradual changes in the system operating conditions. In this we basically concentrate on restricting the bus voltages close to their nominal values. We also ensure that phase angles between two buses are not too large and check for the overloading of the power equipment and transmission lines. These checks are usually done using power flow studies. Transient Stability involves the study of the power system following a major disturbance. Following a large disturbance in the synchronous alternator the machine power (load) angle changes due to sudden acceleration of the rotor shaft. The objective of the transient stability study is to ascertain whether the load angle returns to a steady value following the clearance of the disturbance. The ability of a power system to maintain stability under continuous small disturbances is investigated under the name of Dynamic Stability (also known as small-signal stability). These small disturbances occur due to random fluctuations in loads and generation levels. In an interconnected power system, these random variations can lead catastrophic failure as this may force the rotor angle to increase steadily. Steady state determination is an important topic, because many design specifications of electronic systems are given in terms of the steady-state characteristics. Periodic steady-state solution is also a prerequisite for small signal dynamic modeling. Steady-state analysis is therefore an indispensable component of the design process. In some cases, it is useful to consider constant envelope vibration—vibration that never settles down to motionlessness, but continues to move at constant amplitude—a kind of steady-state condition. Chemical engineering In chemistry, thermodynamics, and other chemical engineering, a steady state is a situation in which all state variables are constant in spite of ongoing processes that strive to change them. For an entire system to be at steady state, i.e. for all state variables of a system to be constant, there must be a flow through the system (compare mass balance). One of the simplest examples of such a system is the case of a bathtub with the tap open but without the bottom plug: after a certain time the water flows in and out at the same rate, so the water level (the state variable being Volume) stabilizes and the system is at steady state. Of course the Volume stabilizing inside the tub depends on the size of the tub, the diameter of the exit hole and the flowrate of water in. Since the tub can overflow, eventually a steady state can be reached where the water flowing in equals the overflow plus the water out through the drain. A steady state flow process requires conditions at all points in an apparatus remain constant as time changes. There must be no accumulation of mass or energy over the time period of interest. The same mass flow rate will remain constant in the flow path through each element of the system. Thermodynamic properties may vary from point to point, but will remain unchanged at any given point. Mechanical engineering When a periodic force is applied to a mechanical system, it will typically reach a steady state after going through some transient behavior. 
This is often observed in vibrating systems, such as a clock pendulum, but can happen with any type of stable or semi-stable dynamic system. The length of the transient state will depend on the initial conditions of the system. Given certain initial conditions, a system may be in steady state from the beginning. Biochemistry In biochemistry, the study of biochemical pathways is an important topic. Such pathways will often display steady-state behavior where the chemical species are unchanging, but there is a continuous dissipation of flux through the pathway. Many, but not all, biochemical pathways evolve to stable, steady states. As a result, the steady state represents an important reference state to study. This is also related to the concept of homeostasis, however, in biochemistry, a steady state can be stable or unstable such as in the case of sustained oscillations or bistable behavior. Physiology Homeostasis (from Greek ὅμοιος, hómoios, "similar" and στάσις, stásis, "standing still") is the property of a system that regulates its internal environment and tends to maintain a stable, constant condition. Typically used to refer to a living organism, the concept came from that of milieu interieur that was created by Claude Bernard and published in 1865. Multiple dynamic equilibrium adjustment and regulation mechanisms make homeostasis possible. Fiber optics In fiber optics, "steady state" is a synonym for equilibrium mode distribution. Pharmacokinetics In pharmacokinetics, steady state is a dynamic equilibrium in the body where drug concentrations consistently stay within a therapeutic limit over time. See also Attractor Carrying capacity Control theory Dynamical system Ecological footprint Economic growth Engine test stand Equilibrium point List of types of equilibrium Evolutionary economics Growth curve Herman Daly Homeostasis Limit cycle Limits to Growth Population dynamics Simulation State function Steady state economy Steady State theory Systems theory Thermodynamic equilibrium Transient state References Systems theory Control theory
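The bathtub example from the chemical-engineering discussion above is easy to reproduce numerically: with a constant inflow and an outflow that grows with the water level, the volume settles to the value at which the two flows balance. The snippet below integrates that balance with simple Euler steps; the inflow, outflow coefficient, and square-root outflow law are illustrative assumptions, not a model of any particular tank.

```python
import math

INFLOW = 2.0   # litres per second (assumed constant tap flow)
K_OUT = 0.5    # outflow coefficient (assumed drain characteristic)
DT = 0.1       # time step, seconds

def outflow(volume):
    """Drain flow assumed to grow with the square root of the stored volume."""
    return K_OUT * math.sqrt(volume)

volume = 0.0   # start with an empty tub (transient / warm-up state)
for step in range(20000):
    volume += DT * (INFLOW - outflow(volume))

analytic = (INFLOW / K_OUT) ** 2  # steady state where inflow equals outflow
print(f"simulated volume ~ {volume:.2f} L, analytic steady state {analytic:.2f} L")
```

The simulated volume approaches the analytic balance point asymptotically, mirroring the general remark above that a steady state is often reached only after a transient start-up period.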
Steady state
[ "Mathematics" ]
1,728
[ "Applied mathematics", "Control theory", "Dynamical systems" ]
1,326,932
https://en.wikipedia.org/wiki/Beamforming
Beamforming or spatial filtering is a signal processing technique used in sensor arrays for directional signal transmission or reception. This is achieved by combining elements in an antenna array in such a way that signals at particular angles experience constructive interference while others experience destructive interference. Beamforming can be used at both the transmitting and receiving ends in order to achieve spatial selectivity. The improvement compared with omnidirectional reception/transmission is known as the directivity of the array. Beamforming can be used for radio or sound waves. It has found numerous applications in radar, sonar, seismology, wireless communications, radio astronomy, acoustics and biomedicine. Adaptive beamforming is used to detect and estimate the signal of interest at the output of a sensor array by means of optimal (e.g. least-squares) spatial filtering and interference rejection. Techniques To change the directionality of the array when transmitting, a beamformer controls the phase and relative amplitude of the signal at each transmitter, in order to create a pattern of constructive and destructive interference in the wavefront. When receiving, information from different sensors is combined in a way where the expected pattern of radiation is preferentially observed. For example, in sonar, to send a sharp pulse of underwater sound towards a ship in the distance, simply simultaneously transmitting that sharp pulse from every sonar projector in an array fails because the ship will first hear the pulse from the speaker that happens to be nearest the ship, then later pulses from speakers that happen to be further from the ship. The beamforming technique involves sending the pulse from each projector at slightly different times (the projector closest to the ship last), so that every pulse hits the ship at exactly the same time, producing the effect of a single strong pulse from a single powerful projector. The same technique can be carried out in air using loudspeakers, or in radar/radio using antennas. In passive sonar, and in reception in active sonar, the beamforming technique involves combining delayed signals from each hydrophone at slightly different times (the hydrophone closest to the target will be combined after the longest delay), so that every signal reaches the output at exactly the same time, making one loud signal, as if the signal came from a single, very sensitive hydrophone. Receive beamforming can also be used with microphones or radar antennas. With narrowband systems the time delay is equivalent to a "phase shift", so in this case the array of antennas, each one shifted a slightly different amount, is called a phased array. A narrow band system, typical of radars, is one where the bandwidth is only a small fraction of the center frequency. With wideband systems this approximation no longer holds, which is typical in sonars. In the receive beamformer the signal from each antenna may be amplified by a different "weight." Different weighting patterns (e.g., Dolph–Chebyshev) can be used to achieve the desired sensitivity patterns. A main lobe is produced together with nulls and sidelobes. As well as controlling the main lobe width (beamwidth) and the sidelobe levels, the position of a null can be controlled. This is useful to ignore noise or jammers in one particular direction, while listening for events in other directions. A similar result can be obtained on transmission. 
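For the narrowband case described above, the delays reduce to per-element phase shifts, and the whole receive beamformer is a weighted sum. The NumPy sketch below steers a uniform linear array of hypothetical sensors to one arrival angle and evaluates the array response over all angles, showing the main lobe at the steered direction and a reduced response elsewhere; the element count, spacing, and angles are illustrative choices.

```python
import numpy as np

N = 8             # number of elements (illustrative)
SPACING = 0.5     # element spacing in wavelengths (half-wavelength)
STEER_DEG = 20.0  # direction the beamformer is steered towards

def steering_vector(angle_deg):
    """Narrowband response of a uniform linear array for one arrival angle."""
    n = np.arange(N)
    phase = 2.0 * np.pi * SPACING * n * np.sin(np.radians(angle_deg))
    return np.exp(1j * phase)

# Conventional delay-and-sum weights: conjugate of the steering vector, normalized.
weights = steering_vector(STEER_DEG).conj() / N

angles = np.linspace(-90.0, 90.0, 181)
response = np.array([np.abs(weights @ steering_vector(a)) for a in angles])

print(f"response at {STEER_DEG:+.0f} deg : {response[angles.searchsorted(STEER_DEG)]:.2f}")
print(f"response at -60 deg : {response[angles.searchsorted(-60.0)]:.2f}")
```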
For the full mathematics on directing beams using amplitude and phase shifts, see the mathematical section in phased array. Beamforming techniques can be broadly divided into two categories: conventional (fixed or switched beam) beamformers adaptive beamformers or phased array Desired signal maximization mode Interference signal minimization or cancellation mode Conventional beamformers, such as the Butler matrix, use a fixed set of weightings and time-delays (or phasings) to combine the signals from the sensors in the array, primarily using only information about the location of the sensors in space and the wave directions of interest. In contrast, adaptive beamforming techniques (e.g., MUSIC, SAMV) generally combine this information with properties of the signals actually received by the array, typically to improve rejection of unwanted signals from other directions. This process may be carried out in either the time or the frequency domain. As the name indicates, an adaptive beamformer is able to automatically adapt its response to different situations. Some criterion has to be set up to allow the adaptation to proceed such as minimizing the total noise output. Because of the variation of noise with frequency, in wide band systems it may be desirable to carry out the process in the frequency domain. Beamforming can be computationally intensive. Sonar phased array has a data rate low enough that it can be processed in real time in software, which is flexible enough to transmit or receive in several directions at once. In contrast, radar phased array has a data rate so high that it usually requires dedicated hardware processing, which is hard-wired to transmit or receive in only one direction at a time. However, newer field programmable gate arrays are fast enough to handle radar data in real time, and can be quickly re-programmed like software, blurring the hardware/software distinction. Sonar beamforming requirements Sonar beamforming utilizes a similar technique to electromagnetic beamforming, but varies considerably in implementation details. Sonar applications vary from 1 Hz to as high as 2 MHz, and array elements may be few and large, or number in the hundreds yet very small. This will shift sonar beamforming design efforts significantly between demands of such system components as the "front end" (transducers, pre-amplifiers and digitizers) and the actual beamformer computational hardware downstream. High frequency, focused beam, multi-element imaging-search sonars and acoustic cameras often implement fifth-order spatial processing that places strains equivalent to Aegis radar demands on the processors. Many sonar systems, such as on torpedoes, are made up of arrays of up to 100 elements that must accomplish beam steering over a 100 degree field of view and work in both active and passive modes. Sonar arrays are used both actively and passively in 1-, 2-, and 3-dimensional arrays. 1-dimensional "line" arrays are usually in multi-element passive systems towed behind ships and in single- or multi-element side-scan sonar. 2-dimensional "planar" arrays are common in active/passive ship hull mounted sonars and some side-scan sonar. 3-dimensional spherical and cylindrical arrays are used in 'sonar domes' in the modern submarine and ships. Sonar differs from radar in that in some applications such as wide-area-search all directions often need to be listened to, and in some applications broadcast to, simultaneously. Thus a multibeam system is needed. 
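The adaptive ("optimal spatial filtering") beamformers mentioned in this section choose their weights from the received data rather than from geometry alone. The sketch below shows one common realization, a minimum-variance distortionless response (MVDR) weighting; it is an illustrative assumption of this example, not a formula prescribed by the article, and it reuses the snapshot and steering-vector conventions of the delay-and-sum sketch above.

import numpy as np

# Minimum-variance (Capon/MVDR) weights: unit gain toward the look direction,
# minimum total output power from noise and interferers (illustrative sketch).
def mvdr_weights(snapshots, steering):
    n_samples = snapshots.shape[1]
    R = snapshots @ snapshots.conj().T / n_samples           # sample covariance matrix
    loading = 1e-3 * np.real(np.trace(R)) / R.shape[0]
    R = R + loading * np.eye(R.shape[0])                      # diagonal loading for numerical stability
    r_inv_a = np.linalg.solve(R, steering)
    return r_inv_a / (steering.conj() @ r_inv_a)

# With the arrays x and a from the previous sketch, the adapted beam output would be
# y_adaptive = mvdr_weights(x, a).conj() @ x

Compared with fixed delay-and-sum weights, this data-dependent weighting automatically places nulls toward strong interferers, at the cost of estimating and inverting a covariance matrix.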
In a narrowband sonar receiver, the phases for each beam can be manipulated entirely by signal processing software, as compared to present radar systems that use hardware to 'listen' in a single direction at a time. Sonar also uses beamforming to compensate for the significant problem of the slower propagation speed of sound as compared to that of electromagnetic radiation. In side-look-sonars, the speed of the towing system or vehicle carrying the sonar is moving at sufficient speed to move the sonar out of the field of the returning sound "ping". In addition to focusing algorithms intended to improve reception, many side scan sonars also employ beam steering to look forward and backward to "catch" incoming pulses that would have been missed by a single sidelooking beam. Schemes A conventional beamformer can be a simple beamformer also known as delay-and-sum beamformer. All the weights of the antenna elements can have equal magnitudes. The beamformer is steered to a specified direction only by selecting appropriate phases for each antenna. If the noise is uncorrelated and there are no directional interferences, the signal-to-noise ratio of a beamformer with antennas receiving a signal of power , (where is Noise variance or Noise power), is: A null-steering beamformer is optimized to have zero response in the direction of one or more interferers. A frequency-domain beamformer treats each frequency bin as a narrowband signal, for which the filters are complex coefficients (that is, gains and phase shifts), separately optimized for each frequency. Evolved Beamformer The delay-and-sum beamforming technique uses multiple microphones to localize sound sources. One disadvantage of this technique is that adjustments of the position or of the number of microphones changes the performance of the beamformer nonlinearly. Additionally, due to the number of combinations possible, it is computationally hard to find the best configuration. One of the techniques to solve this problem is the use of genetic algorithms. Such algorithm searches for the microphone array configuration that provides the highest signal-to-noise ratio for each steered orientation. Experiments showed that such algorithm could find the best configuration of a constrained search space comprising ~33 million solutions in a matter of seconds instead of days. History in wireless communication standards Beamforming techniques used in cellular phone standards have advanced through the generations to make use of more complex systems to achieve higher density cells, with higher throughput. Passive mode: (almost) non-standardized solutions Wideband code division multiple access (WCDMA) supports direction of arrival (DOA) based beamforming Active mode: mandatory standardized solutions 2G – Transmit antenna selection as an elementary beamforming 3G – WCDMA: transmit antenna array (TxAA) beamforming 3G evolution – LTE/UMB: multiple-input multiple-output (MIMO) precoding based beamforming with partial space-division multiple access (SDMA) Beyond 3G (4G, 5G...) – More advanced beamforming solutions to support SDMA such as closed-loop beamforming and multi-dimensional beamforming are expected An increasing number of consumer 802.11ac Wi-Fi devices with MIMO capability can support beamforming to boost data communication rates. Digital, analog, and hybrid To receive (but not transmit), there is a distinction between analog and digital beamforming. 
For example, if there are 100 sensor elements, the "digital beamforming" approach entails that each of the 100 signals passes through an analog-to-digital converter to create 100 digital data streams. Then these data streams are added up digitally, with appropriate scale-factors or phase-shifts, to get the composite signals. By contrast, the "analog beamforming" approach entails taking the 100 analog signals, scaling or phase-shifting them using analog methods, summing them, and then usually digitizing the single output data stream. Digital beamforming has the advantage that the digital data streams (100 in this example) can be manipulated and combined in many possible ways in parallel, to get many different output signals in parallel. The signals from every direction can be measured simultaneously, and the signals can be integrated for a longer time when studying far-off objects and simultaneously integrated for a shorter time to study fast-moving close objects, and so on. This cannot be done as effectively for analog beamforming, not only because each parallel signal combination requires its own circuitry, but more fundamentally because digital data can be copied perfectly but analog data cannot. (There is only so much analog power available, and amplification adds noise.) Therefore, if the received analog signal is split up and sent into a large number of different signal combination circuits, it can reduce the signal-to-noise ratio of each. In MIMO communication systems with large number of antennas, so called massive MIMO systems, the beamforming algorithms executed at the digital baseband can get very complex. In addition, if all beamforming is done at baseband, each antenna needs its own RF feed. At high frequencies and with large number of antenna elements, this can be very costly, and increase loss and complexity in the system. To remedy these issues, hybrid beamforming has been suggested where some of the beamforming is done using analog components and not digital. There are many possible different functions that can be performed using analog components instead of at the digital baseband. Beamforming, whether done digitally, or by means of analog architecture, has recently been applied in integrated sensing and communication technology. For instance, a beamformer was suggested, in imperfect channel state information situations to perform communication tasks, while at the same time performing target detection to sense targets in the scene. For speech audio Beamforming can be used to try to extract sound sources in a room, such as multiple speakers in the cocktail party problem. This requires the locations of the speakers to be known in advance, for example by using the time of arrival from the sources to mics in the array, and inferring the locations from the distances. Compared to carrier-wave telecommunications, natural audio contains a variety of frequencies. It is advantageous to separate frequency bands prior to beamforming because different frequencies have different optimal beamform filters (and hence can be treated as separate problems, in parallel, and then recombined afterward). Properly isolating these bands involves specialized non-standard filter banks. In contrast, for example, the standard fast Fourier transform (FFT) band-filters implicitly assume that the only frequencies present in the signal are exact harmonics; frequencies which lie between these harmonics will typically activate all of the FFT channels (which is not what is wanted in a beamform analysis). 
Instead, filters can be designed in which only local frequencies are detected by each channel (while retaining the recombination property to be able to reconstruct the original signal), and these are typically non-orthogonal unlike the FFT basis. See also References General Louay M. A. Jalloul and Sam. P. Alex, "Evaluation Methodology and Performance of an IEEE 802.16e System", Presented to the IEEE Communications and Signal Processing Society, Orange County Joint Chapter (ComSig), December 7, 2006. Available at: https://web.archive.org/web/20110414143801/http://chapters.comsoc.org/comsig/meet.html H. L. Van Trees, Optimum Array Processing, Wiley, NY, 2002. Jian Li, and Petre Stoica, eds. Robust adaptive beamforming. New Jersey: John Wiley, 2006. M. Soltanalian. Signal Design for Active Sensing and Communications. Uppsala Dissertations from the Faculty of Science and Technology (printed by Elanders Sverige AB), 2014. "A Primer on Digital Beamforming" by Toby Haynes, March 26, 1998 "What Is Beamforming?", an introduction to sonar beamforming by Greg Allen. "Dolph–Chebyshev Weights" antenna-theory.com A collection of pages providing a simple introduction to microphone array beamforming External links MU-MIMO Beamforming by Constructive Interference, Wolfram Demonstrations Project Acoustic measurement Antennas (radio) Signal processing Sonar Speech processing
Beamforming
[ "Technology", "Engineering" ]
3,105
[ "Telecommunications engineering", "Computer engineering", "Signal processing" ]
1,328,116
https://en.wikipedia.org/wiki/Neutrino%20oscillation
Neutrino oscillation is a quantum mechanical phenomenon in which a neutrino created with a specific lepton family number ("lepton flavor": electron, muon, or tau) can later be measured to have a different lepton family number. The probability of measuring a particular flavor for a neutrino varies between three known states, as it propagates through space. First predicted by Bruno Pontecorvo in 1957, neutrino oscillation has since been observed by a multitude of experiments in several different contexts. Most notably, the existence of neutrino oscillation resolved the long-standing solar neutrino problem. Neutrino oscillation is of great theoretical and experimental interest, as the precise properties of the process can shed light on several properties of the neutrino. In particular, it implies that the neutrino has a non-zero mass outside the Einstein-Cartan torsion, which requires a modification to the Standard Model of particle physics. The experimental discovery of neutrino oscillation, and thus neutrino mass, by the Super-Kamiokande Observatory and the Sudbury Neutrino Observatories was recognized with the 2015 Nobel Prize for Physics. Observations A great deal of evidence for neutrino oscillation has been collected from many sources, over a wide range of neutrino energies and with many different detector technologies. The 2015 Nobel Prize in Physics was shared by Takaaki Kajita and Arthur B. McDonald for their early pioneering observations of these oscillations. Neutrino oscillation is a function of the ratio, where is the distance traveled and is the neutrino's energy. (Details in below.) All available neutrino sources produce a range of energies, and oscillation is measured at a fixed distance for neutrinos of varying energy. The limiting factor in measurements is the accuracy with which the energy of each observed neutrino can be measured. Because current detectors have energy uncertainties of a few percent, it is satisfactory to know the distance to within 1%. Solar neutrino oscillation The first experiment that detected the effects of neutrino oscillation was Ray Davis' Homestake experiment in the late 1960s, in which he observed a deficit in the flux of solar neutrinos with respect to the prediction of the Standard Solar Model, using a chlorine-based detector. This gave rise to the solar neutrino problem. Many subsequent radiochemical and water Cherenkov detectors confirmed the deficit, but neutrino oscillation was not conclusively identified as the source of the deficit until the Sudbury Neutrino Observatory provided clear evidence of neutrino flavor change in 2001. Solar neutrinos have energies below 20 MeV. At energies above 5 MeV, solar neutrino oscillation actually takes place in the Sun through a resonance known as the MSW effect, a different process from the vacuum oscillation described later in this article. Atmospheric neutrino oscillation Following the theories that were proposed in the 1970s suggesting unification of electromagnetic, weak, and strong forces, a few experiments on proton decay followed in the 1980s. Large detectors such as IMB, MACRO, and Kamiokande II have observed a deficit in the ratio of the flux of muon to electron flavor atmospheric neutrinos (see muon decay). The Super-Kamiokande experiment provided a very precise measurement of neutrino oscillation in an energy range of hundreds of MeV to a few TeV, and with a baseline of the diameter of the Earth; the first experimental evidence for atmospheric neutrino oscillations was announced in 1998. 
Reactor neutrino oscillation Many experiments have searched for oscillation of electron anti-neutrinos produced in nuclear reactors. No oscillations were found until a detector was installed at a distance 1–2 km. Such oscillations give the value of the parameter . Neutrinos produced in nuclear reactors have energies similar to solar neutrinos, of around a few MeV. The baselines of these experiments have ranged from tens of meters to over 100 km (parameter ). Mikaelyan and Sinev proposed to use two identical detectors to cancel systematic uncertainties in reactor experiment to measure the parameter . In December 2011, the Double Chooz experiment found that Then, in 2012, the Daya Bay experiment found thatwith a significance of These results have since been confirmed by RENO. Beam neutrino oscillation Neutrino beams produced at a particle accelerator offer the greatest control over the neutrinos being studied. Many experiments have taken place that study the same oscillations as in atmospheric neutrino oscillation using neutrinos with a few GeV of energy and several-hundred-km baselines. The MINOS, K2K, and Super-K experiments have all independently observed muon neutrino disappearance over such long baselines. Data from the LSND experiment appear to be in conflict with the oscillation parameters measured in other experiments. Results from the MiniBooNE appeared in Spring 2007 and contradicted the results from LSND, although they could support the existence of a fourth neutrino type, the sterile neutrino. In 2010, the INFN and CERN announced the observation of a tauon particle in a muon neutrino beam in the OPERA detector located at Gran Sasso, 730 km away from the source in Geneva. T2K, using a neutrino beam directed through 295 km of earth and the Super-Kamiokande detector, measured a non-zero value for the parameter in a neutrino beam. NOνA, using the same beam as MINOS with a baseline of 810 km, is sensitive to the same. Theory Neutrino oscillation arises from mixing between the flavor and mass eigenstates of neutrinos. That is, the three neutrino states that interact with the charged leptons in weak interactions are each a different superposition of the three (propagating) neutrino states of definite mass. Neutrinos are emitted and absorbed in weak processes in flavor eigenstates but travel as mass eigenstates. As a neutrino superposition propagates through space, the quantum mechanical phases of the three neutrino mass states advance at slightly different rates, due to the slight differences in their respective masses. This results in a changing superposition mixture of mass eigenstates as the neutrino travels; but a different mixture of mass eigenstates corresponds to a different mixture of flavor states. For example, a neutrino born as an electron neutrino will be some mixture of electron, mu, and tau neutrino after traveling some distance. Since the quantum mechanical phase advances in a periodic fashion, after some distance the state will nearly return to the original mixture, and the neutrino will be again mostly electron neutrino. The electron flavor content of the neutrino will then continue to oscillate – as long as the quantum mechanical state maintains coherence. Since mass differences between neutrino flavors are small in comparison with long coherence lengths for neutrino oscillations, this microscopic quantum effect becomes observable over macroscopic distances. 
In contrast, due to their larger masses, the charged leptons (electrons, muons, and tau leptons) have never been observed to oscillate. In nuclear beta decay, muon decay, pion decay, and kaon decay, when a neutrino and a charged lepton are emitted, the charged lepton is emitted in incoherent mass eigenstates such as because of its large mass. Weak-force couplings compel the simultaneously emitted neutrino to be in a "charged-lepton-centric" superposition such as which is an eigenstate for a "flavor" that is fixed by the electron's mass eigenstate, and not in one of the neutrino's own mass eigenstates. Because the neutrino is in a coherent superposition that is not a mass eigenstate, the mixture that makes up that superposition oscillates significantly as it travels. No analogous mechanism exists in the Standard Model that would make charged leptons detectably oscillate. In the four decays mentioned above, where the charged lepton is emitted in a unique mass eigenstate, the charged lepton will not oscillate, as single mass eigenstates propagate without oscillation. The case of (real) W boson decay is more complicated: W boson decay is sufficiently energetic to generate a charged lepton that is not in a mass eigenstate; however, the charged lepton would lose coherence, if it had any, over interatomic distances (0.1 nm) and would thus quickly cease any meaningful oscillation. More importantly, no mechanism in the Standard Model is capable of pinning down a charged lepton into a coherent state that is not a mass eigenstate, in the first place; instead, while the charged lepton from the W boson decay is not initially in a mass eigenstate, neither is it in any "neutrino-centric" eigenstate, nor in any other coherent state. It cannot meaningfully be said that such a featureless charged lepton oscillates or that it does not oscillate, as any "oscillation" transformation would just leave it the same generic state that it was before the oscillation. Therefore, detection of a charged lepton oscillation from W boson decay is infeasible on multiple levels. Pontecorvo–Maki–Nakagawa–Sakata matrix The idea of neutrino oscillation was first put forward in 1957 by Bruno Pontecorvo, who proposed that neutrino–antineutrino transitions may occur in analogy with neutral kaon mixing. Although such matter–antimatter oscillation had not been observed, this idea formed the conceptual foundation for the quantitative theory of neutrino flavor oscillation, which was first developed by Maki, Nakagawa, and Sakata in 1962 and further elaborated by Pontecorvo in 1967. One year later the solar neutrino deficit was first observed, and that was followed by the famous article by Gribov and Pontecorvo published in 1969 titled "Neutrino astronomy and lepton charge". The concept of neutrino mixing is a natural outcome of gauge theories with massive neutrinos, and its structure can be characterized in general. In its simplest form it is expressed as a unitary transformation relating the flavor and mass eigenbasis and can be written as where is a neutrino with definite flavor = (electron), (muon) or (tauon), is a neutrino with definite mass the superscript asterisk () represents a complex conjugate; for antineutrinos, the complex conjugate should be removed from the first equation and inserted into the second. The symbol represents the Pontecorvo–Maki–Nakagawa–Sakata matrix (also called the PMNS matrix, lepton mixing matrix, or sometimes simply the MNS matrix). 
It is the analogue of the CKM matrix describing the analogous mixing of quarks. If this matrix were the identity matrix, then the flavor eigenstates would be the same as the mass eigenstates. However, experiment shows that it is not. When the standard three-neutrino theory is considered, the matrix is 3×3. If only two neutrinos are considered, a 2×2 matrix is used. If one or more sterile neutrinos are added (see later), it is 4×4 or larger. In the 3×3 form, it is given by where and The phase factors and are physically meaningful only if neutrinos are Majorana particles—i.e. if the neutrino is identical to its antineutrino (whether or not they are is unknown)—and do not enter into oscillation phenomena regardless. If neutrinoless double beta decay occurs, these factors influence its rate. The phase factor is non-zero only if neutrino oscillation violates CP symmetry; this has not yet been observed experimentally. If experiment shows this 3×3 matrix to be not unitary, a sterile neutrino or some other new physics is required. Propagation and interference Since are mass eigenstates, their propagation can be described by plane wave solutions of the form where quantities are expressed in natural units and is the energy of the mass-eigenstate , is the time from the start of the propagation, is the three-dimensional momentum, is the current position of the particle relative to its starting position In the ultrarelativistic limit, we can approximate the energy as where is the energy of the wavepacket (particle) to be detected. This limit applies to all practical (currently observed) neutrinos, since their masses are less than 1 eV and their energies are at least 1 MeV, so the Lorentz factor, , is greater than in all cases. Using also where is the distance traveled and also dropping the phase factors, the wavefunction becomes Eigenstates with different masses propagate with different frequencies. The heavier ones oscillate faster compared to the lighter ones. Since the mass eigenstates are combinations of flavor eigenstates, this difference in frequencies causes interference between the corresponding flavor components of each mass eigenstate. Constructive interference causes it to be possible to observe a neutrino created with a given flavor to change its flavor during its propagation. The probability that a neutrino originally of flavor will later be observed as having flavor is This is more conveniently written as where The phase that is responsible for oscillation is often written as (with and restored) where 1.27 is unitless. In this form, it is convenient to plug in the oscillation parameters since: The mass differences, , are known to be on the order of  eV = ( eV) Oscillation distances, , in modern experiments are on the order of kilometers Neutrino energies, , in modern experiments are typically on order of MeV or GeV. If there is no CP-violation ( is zero), then the second sum is zero. Otherwise, the CP asymmetry can be given as In terms of Jarlskog invariant the CP asymmetry is expressed as Two-neutrino case The above formula is correct for any number of neutrino generations. Writing it explicitly in terms of mixing angles is extremely cumbersome if there are more than two neutrinos that participate in mixing. Fortunately, there are several meaningful cases in which only two neutrinos participate significantly. 
In this case, it is sufficient to consider the mixing matrix Then the probability of a neutrino changing its flavor is Or, using SI units and the convention introduced above This formula is often appropriate for discussing the transition in atmospheric mixing, since the electron neutrino plays almost no role in this case. It is also appropriate for the solar case of where is a mix (superposition) of and These approximations are possible because the mixing angle is very small and because two of the mass states are very close in mass compared to the third. Classical analogue of neutrino oscillation The basic physics behind neutrino oscillation can be found in any system of coupled harmonic oscillators. A simple example is a system of two pendulums connected by a weak spring (a spring with a small spring constant). The first pendulum is set in motion by the experimenter while the second begins at rest. Over time, the second pendulum begins to swing under the influence of the spring, while the first pendulum's amplitude decreases as it loses energy to the second. Eventually all of the system's energy is transferred to the second pendulum and the first is at rest. The process then reverses. The energy oscillates between the two pendulums repeatedly until it is lost to friction. The behavior of this system can be understood by looking at its normal modes of oscillation. If the two pendulums are identical then one normal mode consists of both pendulums swinging in the same direction with a constant distance between them, while the other consists of the pendulums swinging in opposite (mirror image) directions. These normal modes have (slightly) different frequencies because the second involves the (weak) spring while the first does not. The initial state of the two-pendulum system is a combination of both normal modes. Over time, these normal modes drift out of phase, and this is seen as a transfer of motion from the first pendulum to the second. The description of the system in terms of the two pendulums is analogous to the flavor basis of neutrinos. These are the parameters that are most easily produced and detected (in the case of neutrinos, by weak interactions involving the W boson). The description in terms of normal modes is analogous to the mass basis of neutrinos. These modes do not interact with each other when the system is free of outside influence. When the pendulums are not identical the analysis is slightly more complicated. In the small-angle approximation, the potential energy of a single pendulum system is , where g is the standard gravity, L is the length of the pendulum, m is the mass of the pendulum, and x is the horizontal displacement of the pendulum. As an isolated system the pendulum is a harmonic oscillator with a frequency of . The potential energy of a spring is where k is the spring constant and x is the displacement. With a mass attached it oscillates with a period of . With two pendulums (labeled a and b) of equal mass but possibly unequal lengths and connected by a spring, the total potential energy is This is a quadratic form in xa and xb, which can also be written as a matrix product: The 2×2 matrix is real symmetric and so (by the spectral theorem) it is orthogonally diagonalizable. That is, there is an angle θ such that if we define then where λ1 and λ2 are the eigenvalues of the matrix. The variables x1 and x2 describe normal modes which oscillate with frequencies of and . When the two pendulums are identical (La = Lb), θ is 45°. 
The angle θ is analogous to the Cabibbo angle (though that angle applies to quarks rather than neutrinos). When the number of oscillators (particles) is increased to three, the orthogonal matrix can no longer be described by a single angle; instead, three are required (Euler angles). Furthermore, in the quantum case, the matrices may be complex. This requires the introduction of complex phases in addition to the rotation angles, which are associated with CP violation but do not influence the observable effects of neutrino oscillation. Theory, graphically Two neutrino probabilities in vacuum In the approximation where only two neutrinos participate in the oscillation, the probability of oscillation follows a simple pattern: The blue curve shows the probability of the original neutrino retaining its identity. The red curve shows the probability of conversion to the other neutrino. The maximum probability of conversion is equal to sin22θ. The frequency of the oscillation is controlled by Δm2. Three neutrino probabilities If three neutrinos are considered, the probability for each neutrino to appear is somewhat complex. The graphs below show the probabilities for each flavor, with the plots in the left column showing a long range to display the slow "solar" oscillation, and the plots in the right column zoomed in, to display the fast "atmospheric" oscillation. The parameters used to create these graphs (see below) are consistent with current measurements, but since some parameters are still quite uncertain, some aspects of these plots are only qualitatively correct. The illustrations were created using the following parameter values: sin2(2θ13) = 0.10 (Determines the size of the small wiggles.) sin2(2θ23) = 0.97 sin2(2θ12) = 0.861 δ = 0 (If the actual value of this phase is large, the probabilities will be somewhat distorted, and will be different for neutrinos and antineutrinos.) Normal mass hierarchy: m1 ≤ m2 ≤ m3 Δm = Δm ≈ Δm = Observed values of oscillation parameters  . PDG combination of Daya Bay, RENO, and Double Chooz results.  . This corresponds to θsol (solar), obtained from KamLand, solar, reactor and accelerator data. at 90% confidence level, corresponding to (atmospheric) (normal mass hierarchy) and the sign of are currently unknown. Solar neutrino experiments combined with KamLAND have measured the so-called solar parameters and Atmospheric neutrino experiments such as Super-Kamiokande together with the K2K and MINOS long baseline accelerator neutrino experiment have determined the so-called atmospheric parameters and The last mixing angle, 13, has been measured by the experiments Daya Bay, Double Chooz and RENO as For atmospheric neutrinos the relevant difference of masses is about and the typical energies are ; for these values the oscillations become visible for neutrinos traveling several hundred kilometres, which would be those neutrinos that reach the detector traveling through the earth, from below the horizon. The mixing parameter 13 is measured using electron anti-neutrinos from nuclear reactors. The rate of anti-neutrino interactions is measured in detectors sited near the reactors to determine the flux prior to any significant oscillations and then it is measured in far detectors (placed kilometres from the reactors). The oscillation is observed as an apparent disappearance of electron anti-neutrinos in the far detectors (i.e. the interaction rate at the far site is lower than predicted from the observed rate at the near site). 
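As a numerical companion to the two-neutrino formula given earlier, the sketch below evaluates the two-flavour oscillation probability P = sin²(2θ)·sin²(1.27·Δm²·L/E), with Δm² in eV², L in km and E in GeV. The mixing value follows the representative sin²(2θ23) ≈ 0.97 quoted above, while the mass splitting and the baseline/energy choices are assumptions of this example rather than results from the article.

import numpy as np

# Two-flavour oscillation probability in the usual mixed units (illustrative sketch).
def oscillation_probability(L_km, E_GeV, sin2_2theta=0.97, dm2_eV2=2.4e-3):
    phase = 1.27 * dm2_eV2 * L_km / E_GeV
    return sin2_2theta * np.sin(phase) ** 2

# Example: a 1 GeV muon neutrino over a 295 km baseline (a T2K-like distance).
p = oscillation_probability(295.0, 1.0)
print(f"P(nu_mu -> nu_tau) ~ {p:.2f}")  # roughly 0.6 for these assumed values

Scanning L or E with this function reproduces the sinusoidal survival and appearance curves described in the "Theory, graphically" discussion.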
From atmospheric and solar neutrino oscillation experiments, it is known that two mixing angles of the MNS matrix are large and the third is smaller. This is in sharp contrast to the CKM matrix in which all three angles are small and hierarchically decreasing. The CP-violating phase of the MNS matrix is as of April 2020 to lie somewhere between −2 and −178 degrees, from the T2K experiment. If the neutrino mass proves to be of Majorana type (making the neutrino its own antiparticle), it is then possible that the MNS matrix has more than one phase. Since experiments observing neutrino oscillation measure the squared mass difference and not absolute mass, one might claim that the lightest neutrino mass is exactly zero, without contradicting observations. This is however regarded as unlikely by theorists. Origins of neutrino mass The question of how neutrino masses arise has not been answered conclusively. In the Standard Model of particle physics, fermions only have intrinsic mass because of interactions with the Higgs field (see Higgs boson). These interactions require both left- and right-handed versions of the fermion (see chirality). However, only left-handed neutrinos have been observed so far. Neutrinos may have another source of mass through the Majorana mass term. This type of mass applies for electrically neutral particles since otherwise it would allow particles to turn into anti-particles, which would violate conservation of electric charge. The smallest modification to the Standard Model, which only has left-handed neutrinos, is to allow these left-handed neutrinos to have Majorana masses. The problem with this is that the neutrino masses are surprisingly smaller than the rest of the known particles (at least 600,000 times smaller than the mass of an electron), which, while it does not invalidate the theory, is widely regarded as unsatisfactory as this construction offers no insight into the origin of the neutrino mass scale. The next simplest addition would be to add into the Standard Model right-handed neutrinos that interact with the left-handed neutrinos and the Higgs field in an analogous way to the rest of the fermions. These new neutrinos would interact with the other fermions solely in this way and hence would not be directly observable, so are not phenomenologically excluded. The problem of the disparity of the mass scales remains. Seesaw mechanism The most popular conjectured solution currently is the seesaw mechanism, where right-handed neutrinos with very large Majorana masses are added. If the right-handed neutrinos are very heavy, they induce a very small mass for the left-handed neutrinos, which is proportional to the reciprocal of the heavy mass. If it is assumed that the neutrinos interact with the Higgs field with approximately the same strengths as the charged fermions do, the heavy mass should be close to the GUT scale. Because the Standard Model has only one fundamental mass scale, all particle masses must arise in relation to this scale. There are other varieties of seesaw and there is currently great interest in the so-called low-scale seesaw schemes, such as the inverse seesaw mechanism. The addition of right-handed neutrinos has the effect of adding new mass scales, unrelated to the mass scale of the Standard Model, hence the observation of heavy right-handed neutrinos would reveal physics beyond the Standard Model. Right-handed neutrinos would help to explain the origin of matter through a mechanism known as leptogenesis. 
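For reference, the "small mass from a heavy partner" statement of the seesaw mechanism is often summarized by the schematic type-I mass matrix below, written here in common notation (m_D for the Dirac mass, M_R for the heavy Majorana mass); these symbols are illustrative and are not taken from this article.

% Schematic type-I seesaw relation (common notation; an illustrative sketch).
M_\nu =
\begin{pmatrix}
0 & m_D \\
m_D & M_R
\end{pmatrix},
\qquad
m_{\text{light}} \simeq \frac{m_D^{2}}{M_R},
\qquad
m_{\text{heavy}} \simeq M_R
\qquad (m_D \ll M_R),

so that the heavier the right-handed scale M_R, the lighter the observed neutrino, consistent with the inverse proportionality described above.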
Other sources There are alternative ways to modify the standard model that are similar to the addition of heavy right-handed neutrinos (e.g., the addition of new scalars or fermions in triplet states) and other modifications that are less similar (e.g., neutrino masses from loop effects and/or from suppressed couplings). One example of the last type of models is provided by certain versions of supersymmetric extensions of the standard model of fundamental interactions, where R parity is not a symmetry. There, the exchange of supersymmetric particles such as squarks and sleptons can break the lepton number and lead to neutrino masses. These interactions are normally excluded from theories as they come from a class of interactions that lead to unacceptably rapid proton decay if they are all included. These models have little predictive power and are not able to provide a cold dark matter candidate. Oscillations in the early universe During the early universe, when particle concentrations and temperatures were high, neutrino oscillations could have behaved differently. Depending on neutrino mixing-angle parameters and masses, a broad spectrum of behavior may arise, including vacuum-like neutrino oscillations, smooth evolution, or self-maintained coherence. The physics of this system is non-trivial and involves neutrino oscillations in a dense neutrino gas. See also MSW effect Majoron Neutral kaon mixing Lorentz-violating neutrino oscillations Neutral particle oscillation Neutrino astronomy Notes References Further reading External links Review Articles on arxiv.org Neutrinos Standard Model Electroweak theory Physics beyond the Standard Model
Neutrino oscillation
[ "Physics" ]
5,778
[ "Standard Model", "Physical phenomena", "Unsolved problems in physics", "Electroweak theory", "Fundamental interactions", "Particle physics", "Physics beyond the Standard Model" ]
24,094,494
https://en.wikipedia.org/wiki/C21H22O9
The molecular formula C21H22O9 (molar mass: 418.39 g/mol, exact mass: 418.1264 u) may refer to: Aloin, also known as barbaloin Liquiritin Natsudaidain, a flavanol Molecular formulas
C21H22O9
[ "Physics", "Chemistry" ]
79
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,094,910
https://en.wikipedia.org/wiki/C21H20O12
The molecular formula C21H20O12 (molar mass: 464.37 g/mol, exact mass: 464.095476 u) may refer to: Hyperoside Isoquercetin Myricitrin, a flavonol Spiraeoside, a flavonol Molecular formulas
C21H20O12
[ "Physics", "Chemistry" ]
84
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,098,258
https://en.wikipedia.org/wiki/Lattice%20density%20functional%20theory
Lattice density functional theory (LDFT) is a statistical theory used in physics and thermodynamics to model a variety of physical phenomena with simple lattice equations. Description Lattice models with nearest-neighbor interactions have been used extensively to model a wide variety of systems and phenomena, including the lattice gas, binary liquid solutions, order-disorder phase transitions, ferromagnetism, and antiferromagnetism. Most calculations of correlation functions for nonrandom configurations are based on statistical mechanical techniques, which lead to equations that usually need to be solved numerically. In 1925, Ising gave an exact solution to the one-dimensional (1D) lattice problem. In 1944 Onsager was able to get an exact solution to a two-dimensional (2D) lattice problem at the critical density. However, to date, no three-dimensional (3D) problem has had a solution that is both complete and exact. Over the last ten years, Aranovich and Donohue have developed lattice density functional theory (LDFT) based on a generalization of the Ono-Kondo equations to three-dimensions, and used the theory to model a variety of physical phenomena. The theory starts by constructing an expression for free energy, A=U-TS, where internal energy U and entropy S can be calculated using mean field approximation. The grand potential is then constructed as Ω=A-μΦ, where μ is a Lagrange multiplier which equals to the chemical potential, and Φ is a constraint given by the lattice. It is then possible to minimize the grand potential with respect to the local density, which results in a mean-field expression for local chemical potential. The theory is completed by specifying the chemical potential for a second (possibly bulk) phase. In an equilibrium process, μI=μII. Lattice density functional theory has several advantages over more complicated free volume techniques such as Perturbation theory and the statistical associating fluid theory, including mathematical simplicity and ease of incorporating complex boundary conditions. Although this approach is known to give only qualitative information about the thermodynamic behavior of a system, it provides important insights about the mechanisms of various complex phenomena such as phase transition, aggregation, configurational distribution, surface-adsorption, self-assembly, crystallization, as well as steady state diffusion. References B. Bakhti, "Development of lattice density functionals and applications to structure formation in condensed matter systems". PhD thesis, Universität Osnabrück, Germany. Statistical mechanics Density functional theory Lattice models
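A minimal numerical sketch of the kind of mean-field lattice calculation described above is given below: a one-dimensional stack of lattice layers next to an attractive wall, with the local density obtained by iterating the mean-field Euler–Lagrange condition that follows from minimizing a grand potential of the form Ω = A − μΦ. The geometry, parameter names and values are assumptions of this example and are not the published LDFT or Ono–Kondo equations.

import numpy as np

# Mean-field lattice-gas density profile near a wall (illustrative sketch).
# Fixed-point form of dOmega/drho_i = 0 for a nearest-neighbour lattice gas:
#   rho_i = 1 / (1 + exp[(eps * sum of neighbour densities + V_i - mu) / kT])
def density_profile(n_layers=30, mu=-3.0, eps=-1.0, wall_energy=-2.5, kT=1.0,
                    n_iter=2000, mix=0.1):
    rho = np.full(n_layers, 0.1)
    v_ext = np.zeros(n_layers)
    v_ext[0] = wall_energy                        # attractive wall felt by the first layer
    for _ in range(n_iter):
        # neighbour densities; edge layers reuse their own density for the missing neighbour
        left = np.concatenate(([rho[0]], rho[:-1]))
        right = np.concatenate((rho[1:], [rho[-1]]))
        field = eps * (left + right) + v_ext
        rho_new = 1.0 / (1.0 + np.exp((field - mu) / kT))
        rho = (1 - mix) * rho + mix * rho_new     # damped fixed-point iteration
    return rho

profile = density_profile()
# profile[0] is strongly enhanced by the wall, and the density decays toward its bulk value,
# a lattice analogue of the surface-adsorption behaviour mentioned above.

Despite its simplicity, this kind of self-consistent loop already reproduces qualitative features such as surface enrichment and, with suitable parameters, layering and phase transitions.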
Lattice density functional theory
[ "Physics", "Chemistry", "Materials_science" ]
521
[ "Density functional theory", "Quantum chemistry", "Quantum mechanics", "Lattice models", "Computational physics", "Condensed matter physics", "Statistical mechanics" ]
8,487,678
https://en.wikipedia.org/wiki/Pyrimidine%20dimer
Pyrimidine dimers are molecular lesions of thymine or cytosine bases within DNA that result from photochemical reactions. These lesions, commonly linked to direct DNA damage, are induced by ultraviolet light (UV), particularly UVC, and consist of covalent bonds formed between adjacent nitrogenous bases along the nucleotide chain near their carbon–carbon double bonds; the photo-coupled dimers are fluorescent. Such dimerization, which can also occur in double-stranded RNA (dsRNA) involving uracil or cytosine, leads to the creation of cyclobutane pyrimidine dimers (CPDs) and 6–4 photoproducts. These pre-mutagenic lesions modify the DNA helix structure, resulting in abnormal non-canonical base pairing: adjacent thymines or cytosines joined in a cyclobutane ring distort the DNA. This distortion blocks the DNA replication and transcription machinery beyond the dimerization site. While up to 100 such reactions per second may occur in a skin cell exposed to sunlight, resulting in DNA damage, they are typically rectified promptly through DNA repair, such as photolyase reactivation or nucleotide excision repair, with the latter being prevalent in humans. Conversely, certain bacteria utilize photolyase, powered by sunlight, to repair pyrimidine dimer-induced DNA damage. Unrepaired lesions may lead to erroneous nucleotide incorporation by polymerase machinery. Overwhelming DNA damage can precipitate mutations within an organism's genome, potentially culminating in cancer cell formation. Unrectified lesions may also interfere with polymerase function, induce transcription or replication errors, or halt replication. Notably, pyrimidine dimers contribute to sunburn and melanin production, and are a primary factor in melanoma development in humans. Types of pyrimidine dimers Pyrimidine dimers encompass several types, each with distinct structures and implications for DNA integrity. The cyclobutane pyrimidine dimer (CPD) features a four-membered ring formed by the coupling of the carbon–carbon double bonds of adjacent pyrimidines. CPDs disrupt base pairing during DNA replication, potentially leading to mutations. The 6–4 photoproduct (6–4 pyrimidine–pyrimidone, or 6–4 pyrimidine–pyrimidinone) is an alternate dimer configuration consisting of a single covalent bond linking the carbon at the 6 (C6) position of one pyrimidine ring and the carbon at the 4 (C4) position of the adjoining base's ring. This type of conversion occurs at one third the frequency of CPDs but carries a higher mutagenic risk. A third type of molecular lesion is the Dewar pyrimidinone, resulting from the reversible isomerization of a 6–4 photoproduct under further light exposure. Mutagenesis Mutagenesis, the process of mutation formation, is significantly influenced by translesion polymerases, which often introduce mutations at sites of pyrimidine dimers. This occurs both in prokaryotes, through SOS mutagenesis, and in eukaryotes. Although thymine–thymine CPDs are the most common lesions induced by UV, translesion polymerases tend to incorporate adenines opposite them, so thymine dimers are usually replicated accurately. Conversely, cytosines that are part of CPDs are susceptible to deamination, leading to a cytosine-to-thymine transition and thereby contributing to the mutation process.
DNA repair Pyrimidine dimers introduce local conformational changes in the DNA structure, which allow recognition of the lesion by repair enzymes. In most organisms (excluding placental mammals such as humans) they can be repaired by photoreactivation. Photoreactivation is a repair process in which photolyase enzymes reverse CPDs using photochemical reactions. In addition, some photolyases can also repair 6-4 photoproducts of UV induced DNA damage. Photolyase enzymes utilize flavin adenine dinucleotide (FAD) as a cofactor in the repair process. The UV dose that reduces a population of wild-type yeast cells to 37% survival is equivalent (assuming a Poisson distribution of hits) to the UV dose that causes an average of one lethal hit to each of the cells of the population. The number of pyrimidine dimers induced per haploid genome at this dose was measured as 27,000. A mutant yeast strain defective in the three pathways by which pyrimidine dimers were known to be repaired in yeast was also tested for UV sensitivity. It was found in this case that only one or, at most, two unrepaired pyrimidine dimers per haploid genome are lethal to the cell. These findings thus indicate that the repair of thymine dimers in wild-type yeast is highly efficient. Nucleotide excision repair, sometimes termed "dark reactivation", is a more general mechanism for repair of lesions and is the most common form of DNA repair for pyrimidine dimers in humans. This process works by using cellular machinery to locate the dimerized nucleotides and excise the lesion. Once the CPD is removed, there is a gap in the DNA strand that must be filled. DNA machinery uses the undamaged complementary strand to synthesize nucleotides off of and consequently fill in the gap on the previously damaged strand. Xeroderma pigmentosum (XP) is a rare genetic disease in humans in which genes that encode for NER proteins are mutated and result in decreased ability to combat pyrimidine dimers that form as a result of UV damage. Individuals with XP are also at a much higher risk of cancer than others, with a greater than 5,000 fold increased risk of developing skin cancers. Some common features and symptoms of XP include skin discoloration, and the formation of multiple tumors proceeding UV exposure. A few organisms have other ways to perform repairs: Spore photoproduct lyase is found in spore-forming bacteria. It returns thymine dimers to their original state. Deoxyribodipyrimidine endonucleosidase is found in bacteriophage T4. It is a base excision repair enzyme specific for pyrimidine dimers. It is then able to cut open the AP site. Another type of repair mechanism that is conserved in humans and other non-mammals is translesion synthesis. Typically, the lesion associated with the pyrimidine dimer blocks cellular machinery from synthesizing past the damaged site. However, in translesion synthesis, the CPD is bypassed by translesion polymerases, and replication and or transcription machinery can continue past the lesion. One specific translesion DNA polymerase, DNA polymerase η, is deficient in individuals with XPD. Effect of topical sunscreen and effect of absorbed sunscreen Direct DNA damage is reduced by sunscreen, which also reduces the risk of developing a sunburn. When the sunscreen is at the surface of the skin, it filters the UV rays, which attenuates the intensity. 
Even when the sunscreen molecules have penetrated into the skin, they protect against direct DNA damage, because the UV light is absorbed by the sunscreen and not by the DNA. Sunscreen primarily works by absorbing the UV light from the sun through the use of organic compounds, such as oxybenzone or avobenzone. These compounds are able to absorb UV energy from the sun and transition into higher-energy states. Eventually, these molecules return to lower energy states, and in doing so, the initial energy from the UV light can be transformed into heat. This process of absorption works to reduce the risk of DNA damage and the formation of pyrimidine dimers. UVA light makes up 95% of the UV light that reaches earth, whereas UVB light makes up only about 5%. UVB light is the form of UV light that is responsible for tanning and burning. Sunscreens work to protect from both UVA and UVB rays. Overall, sunburns exemplify DNA damage caused by UV rays, and this damage can come in the form of free radical species, as well as dimerization of adjacent nucleotides. See also DNA repair References DNA Mutation Dimers (chemistry) DNA replication and repair-deficiency disorders Senescence Cyclobutanes
Pyrimidine dimer
[ "Chemistry", "Materials_science", "Biology" ]
1,810
[ "Dimers (chemistry)", "Senescence", "Cellular processes", "Metabolism", "Polymer chemistry", "DNA replication and repair-deficiency disorders" ]
8,491,096
https://en.wikipedia.org/wiki/Poynting%20effect
The Poynting effect may refer to two unrelated physical phenomena. Neither should be confused with the Poynting–Robertson effect. All of these effects are named after John Henry Poynting, an English physicist. Solid mechanics In solid mechanics, the Poynting effect is a finite strain theory effect observed when an elastic cube is sheared between two plates and stress is developed in the direction normal to the sheared faces, or when a cylinder is subjected to torsion and the axial length changes. The Poynting phenomenon in torsion was noticed experimentally by J. H. Poynting. Chemistry and thermodynamics In thermodynamics, the Poynting effect generally refers to the change in the fugacity of a liquid when a non-condensable gas is mixed with the vapor at saturated conditions. Equivalently in terms of vapor pressure, if one assumes that the vapor and the non-condensable gas behave as ideal gases and form an ideal mixture, it can be shown that: p_v′ = p_v exp[v_l (p_T − p_v) / (R T)] where p_v′ is the modified vapor pressure, p_v is the unmodified (saturation) vapor pressure, v_l is the liquid molar volume, R is the gas constant, T is the temperature, and p_T is the total pressure (vapor pressure + non-condensable gas). A common example is the production of the medicine Entonox, a high-pressure mixture of nitrous oxide and oxygen. The ability to combine nitrous oxide and oxygen at high pressure while remaining in the gaseous form is due to the Poynting effect. References Elasticity (physics) Rubber properties Gases
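As a rough numerical illustration of the relation above (with assumed, not sourced, values): for water at 298 K, with a saturation pressure of about 3.17 kPa and a liquid molar volume of about 1.8e-5 m³/mol, pressurizing the system to 10 MPa with a non-condensable gas gives

import math

# Poynting correction with assumed example values (water at 298 K under 10 MPa of inert gas).
R = 8.314        # J/(mol*K)
T = 298.0        # K
p_sat = 3.17e3   # Pa, approximate saturation vapor pressure of water at 298 K
v_liq = 1.8e-5   # m^3/mol, approximate liquid molar volume of water
p_total = 1.0e7  # Pa, assumed total pressure
enhancement = math.exp(v_liq * (p_total - p_sat) / (R * T))
print(f"vapor pressure enhanced by a factor of about {enhancement:.3f}")  # ~1.075

i.e. an increase of the equilibrium vapor pressure by roughly 7–8 percent, which illustrates why the effect only becomes significant at high total pressures.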
Poynting effect
[ "Physics", "Chemistry", "Materials_science" ]
315
[ "Physical phenomena", "Matter", "Elasticity (physics)", "Deformation (mechanics)", "Phases of matter", "Statistical mechanics", "Physical properties", "Gases" ]
8,491,596
https://en.wikipedia.org/wiki/Davydov%20soliton
In quantum biology, the Davydov soliton (after the Soviet Ukrainian physicist Alexander Davydov) is a quasiparticle representing an excitation propagating along the self-trapped amide I groups within the α-helices of proteins. It is a solution of the Davydov Hamiltonian. The Davydov model describes the interaction of the amide I vibrations with the hydrogen bonds that stabilize the α-helices of proteins. The elementary excitations within the α-helix are given by the phonons which correspond to the deformational oscillations of the lattice, and the excitons which describe the internal amide I excitations of the peptide groups. Referring to the atomic structure of an α-helix region of protein the mechanism that creates the Davydov soliton (polaron, exciton) can be described as follows: vibrational energy of the C=O stretching (or amide I) oscillators that is localized on the α-helix acts through a phonon coupling effect to distort the structure of the α-helix, while the helical distortion reacts again through phonon coupling to trap the amide I oscillation energy and prevent its dispersion. This effect is called self-localization or self-trapping. Solitons in which the energy is distributed in a fashion preserving the helical symmetry are dynamically unstable, and such symmetrical solitons once formed decay rapidly when they propagate. On the other hand, an asymmetric soliton which spontaneously breaks the local translational and helical symmetries possesses the lowest energy and is a robust localized entity. Davydov Hamiltonian Davydov Hamiltonian is formally similar to the Fröhlich-Holstein Hamiltonian for the interaction of electrons with a polarizable lattice. Thus the Hamiltonian of the energy operator is where is the exciton Hamiltonian, which describes the motion of the amide I excitations between adjacent sites; is the phonon Hamiltonian, which describes the vibrations of the lattice; and is the interaction Hamiltonian, which describes the interaction of the amide I excitation with the lattice. The exciton Hamiltonian is where the index counts the peptide groups along the α-helix spine, the index counts each α-helix spine, zJ is the energy of the amide I vibration (CO stretching), zJ is the dipole-dipole coupling energy between a particular amide I bond and those ahead and behind along the same spine, zJ is the dipole-dipole coupling energy between a particular amide I bond and those on adjacent spines in the same unit cell of the protein α-helix, and are respectively the boson creation and annihilation operator for an amide I exciton at the peptide group . The phonon Hamiltonian is where is the displacement operator from the equilibrium position of the peptide group , is the momentum operator of the peptide group , is the mass of the peptide group , N/m is an effective elasticity coefficient of the lattice (the spring constant of a hydrogen bond) and N/m is the lateral coupling between the spines. Finally, the interaction Hamiltonian is where pN is an anharmonic parameter arising from the coupling between the exciton and the lattice displacements (phonon) and parameterizes the strength of the exciton-phonon interaction. The value of this parameter for α-helix has been determined via comparison of the theoretically calculated absorption line shapes with the experimentally measured ones. 
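For orientation, the structure of the Hamiltonian described above can be written schematically for a single spine in the notation most often used in the literature (an illustrative reconstruction; the symbols ε0, J, M, w and χ below are generic and do not reproduce the numerical values referred to in the text):

% Schematic single-spine Davydov Hamiltonian in common notation (illustrative).
\hat{H} = \hat{H}_{\mathrm{ex}} + \hat{H}_{\mathrm{ph}} + \hat{H}_{\mathrm{int}},
\qquad
\hat{H}_{\mathrm{ex}} = \sum_n \Big[\varepsilon_0\,\hat{B}_n^{\dagger}\hat{B}_n
  - J\big(\hat{B}_n^{\dagger}\hat{B}_{n+1} + \hat{B}_n^{\dagger}\hat{B}_{n-1}\big)\Big],
\qquad
\hat{H}_{\mathrm{ph}} = \frac{1}{2}\sum_n \Big[\frac{\hat{p}_n^{\,2}}{M}
  + w\,(\hat{u}_{n+1}-\hat{u}_n)^{2}\Big],
\qquad
\hat{H}_{\mathrm{int}} = \chi \sum_n \big(\hat{u}_{n+1}-\hat{u}_{n-1}\big)\,
  \hat{B}_n^{\dagger}\hat{B}_n ,

with B_n†, B_n the amide I creation and annihilation operators and u_n, p_n the displacement and momentum of peptide group n; the full three-spine Hamiltonian described in the text adds the lateral inter-spine coupling.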
Davydov soliton properties There are three possible fundamental approaches for deriving equations of motion from Davydov Hamiltonian: quantum approach, in which both the amide I vibration (excitons) and the lattice site motion (phonons) are treated quantum mechanically; mixed quantum-classical approach, in which the amide I vibration is treated quantum mechanically but the lattice is classical; classical approach, in which both the amide I and the lattice motions are treated classically. The mathematical techniques that are used to analyze the Davydov soliton are similar to some that have been developed in polaron theory. In this context, the Davydov soliton corresponds to a polaron that is: large so the continuum limit approximation is justified, acoustic because the self-localization arises from interactions with acoustic modes of the lattice, weakly coupled because the anharmonic energy is small compared with the phonon bandwidth. The Davydov soliton is a quantum quasiparticle and it obeys Heisenberg's uncertainty principle. Thus any model that does not impose translational invariance is flawed by construction. Supposing that the Davydov soliton is localized to 5 turns of the α-helix results in significant uncertainty in the velocity of the soliton m/s, a fact that is obscured if one models the Davydov soliton as a classical object. References Biological matter Biophysics Proteins Quantum biology
Davydov soliton
[ "Physics", "Chemistry", "Biology" ]
1,042
[ "Biomolecules by chemical classification", "Applied and interdisciplinary physics", "Quantum mechanics", "Biophysics", "nan", "Molecular biology", "Proteins", "Quantum biology" ]
8,494,049
https://en.wikipedia.org/wiki/Donald%20F.%20Hunt
Donald F. Hunt is University Professor of Chemistry and Pathology at the University of Virginia. He is known for his research in the field of mass spectrometry; among other contributions, he developed electron capture negative ion mass spectrometry. He has received multiple awards for his work, including the Distinguished Contribution Award from the American Society for Mass Spectrometry and the Thomson Medal from the International Mass Spectrometry Society. Early life and education He received his B.S. and Ph.D. from the University of Massachusetts Amherst and was a National Institutes of Health postdoctoral trainee under Klaus Biemann at MIT. The Hunt laboratory The Hunt laboratory develops new methodology and instrumentation centered on mass spectrometry-based proteomics for the characterization of proteins and their modifications. Research interests Among his many research interests, Hunt investigates how the immune system uses peptides to kill diseased cells, and how modifications to chromatin-associated proteins called histones create a "Code" that may be involved in many gene regulation events. Awards Hunt has been awarded several honors, including the Distinguished Contribution Award from the American Society for Mass Spectrometry in 1994; the Christian B. Anfinsen Award from the Protein Society; the Chemical Instrumentation Award (1997) and the Field and Franklin Award from the American Chemical Society; the Thomson Medal from the International Mass Spectrometry Society; the Human Proteome Organization's Distinguished Achievement Award in Proteomics; and the Association of Biomolecular Resource Facilities 2007 Award. In addition, he received the Charles H. Stone Award (American Chemical Society) and the Pehr Edman Award for outstanding achievements in the application of mass spectrometry. References University of Virginia faculty Living people Year of birth missing (living people) Thomson Medal recipients Mass spectrometrists 21st-century American chemists
Donald F. Hunt
[ "Physics", "Chemistry" ]
382
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
8,495,142
https://en.wikipedia.org/wiki/Eckart%20conditions
The Eckart conditions, named after Carl Eckart, simplify the nuclear motion (rovibrational) Hamiltonian that arises in the second step of the Born–Oppenheimer approximation. They make it possible to approximately separate rotation from vibration. Although the rotational and vibrational motions of the nuclei in a molecule cannot be fully separated, the Eckart conditions minimize the coupling close to a reference (usually equilibrium) configuration. The Eckart conditions are explained by Louck and Galbraith. Definition of Eckart conditions The Eckart conditions can only be formulated for a semi-rigid molecule, which is a molecule with a potential energy surface V(R1, R2,..RN) that has a well-defined minimum for RA0 (). These equilibrium coordinates of the nuclei—with masses MA—are expressed with respect to a fixed orthonormal principal axes frame and hence satisfy the relations Here λi0 is a principal inertia moment of the equilibrium molecule. The triplets RA0 = (RA10, RA20, RA30) satisfying these conditions, enter the theory as a given set of real constants. Following Biedenharn and Louck, we introduce an orthonormal body-fixed frame, the Eckart frame, . If we were tied to the Eckart frame, which—following the molecule—rotates and translates in space, we would observe the molecule in its equilibrium geometry when we would draw the nuclei at the points, . Let the elements of RA be the coordinates with respect to the Eckart frame of the position vector of nucleus A (). Since we take the origin of the Eckart frame in the instantaneous center of mass, the following relation holds. We define displacement coordinates . Clearly the displacement coordinates satisfy the translational Eckart conditions, The rotational Eckart conditions for the displacements are: where indicates a vector product. These rotational conditions follow from the specific construction of the Eckart frame, see Biedenharn and Louck, loc. cit., page 538. Finally, for a better understanding of the Eckart frame it may be useful to remark that it becomes a principal axes frame in the case that the molecule is a rigid rotor, that is, when all N displacement vectors are zero. Separation of external and internal coordinates The N position vectors of the nuclei constitute a 3N dimensional linear space R3N: the configuration space. The Eckart conditions give an orthogonal direct sum decomposition of this space The elements of the 3N-6 dimensional subspace Rint are referred to as internal coordinates, because they are invariant under overall translation and rotation of the molecule and, thus, depend only on the internal (vibrational) motions. The elements of the 6-dimensional subspace Rext are referred to as external coordinates, because they are associated with the overall translation and rotation of the molecule. To clarify this nomenclature we define first a basis for Rext. To that end we introduce the following 6 vectors (i=1,2,3): An orthogonal, unnormalized, basis for Rext is, A mass-weighted displacement vector can be written as For i=1,2,3, where the zero follows because of the translational Eckart conditions. For i=4,5,6 where the zero follows because of the rotational Eckart conditions. We conclude that the displacement vector belongs to the orthogonal complement of Rext, so that it is an internal vector. We obtain a basis for the internal space by defining 3N-6 linearly independent vectors The vectors could be Wilson's s-vectors or could be obtained in the harmonic approximation by diagonalizing the Hessian of V. 
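For reference, the two sets of conditions just described can be stated compactly. Writing $\mathbf{d}_A = \mathbf{R}_A - \mathbf{R}_A^{0}$ for the displacement of nucleus $A$ (standard textbook notation, assumed to match the article's), the translational and rotational Eckart conditions read:

$$\sum_{A=1}^{N} M_A\,\mathbf{d}_A = \mathbf{0} \qquad\text{and}\qquad \sum_{A=1}^{N} M_A\,\mathbf{R}_A^{0}\times\mathbf{d}_A = \mathbf{0}.$$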
We next introduce internal (vibrational) modes, The physical meaning of qr depends on the vectors . For instance, qr could be a symmetric stretching mode, in which two C—H bonds are simultaneously stretched and contracted. We already saw that the corresponding external modes are zero because of the Eckart conditions, Overall translation and rotation The vibrational (internal) modes are invariant under translation and infinitesimal rotation of the equilibrium (reference) molecule if and only if the Eckart conditions apply. This will be shown in this subsection. An overall translation of the reference molecule is given by ' for any arbitrary 3-vector . An infinitesimal rotation of the molecule is given by where Δφ is an infinitesimal angle, Δφ >> (Δφ)², and is an arbitrary unit vector. From the orthogonality of to the external space follows that the satisfy Now, under translation Clearly, is invariant under translation if and only if because the vector is arbitrary. So, the translational Eckart conditions imply the translational invariance of the vectors belonging to internal space and conversely. Under rotation we have, Rotational invariance follows if and only if The external modes, on the other hand, are not invariant and it is not difficult to show that they change under translation as follows: where M is the total mass of the molecule. They change under infinitesimal rotation as follows where I0 is the inertia tensor of the equilibrium molecule. This behavior shows that the first three external modes describe the overall translation of the molecule, while the modes 4, 5, and, 6 describe the overall rotation. Vibrational energy The vibrational energy of the molecule can be written in terms of coordinates with respect to the Eckart frame as Because the Eckart frame is non-inertial, the total kinetic energy comprises also centrifugal and Coriolis energies. These stay out of the present discussion. The vibrational energy is written in terms of the displacement coordinates, which are linearly dependent because they are contaminated by the 6 external modes, which are zero, i.e., the dA's satisfy 6 linear relations. It is possible to write the vibrational energy solely in terms of the internal modes qr (r =1, ..., 3N-6) as we will now show. We write the different modes in terms of the displacements The parenthesized expressions define a matrix B relating the internal and external modes to the displacements. The matrix B may be partitioned in an internal (3N-6 x 3N) and an external (6 x 3N) part, We define the matrix M by and from the relations given in the previous sections follow the matrix relations and We define By using the rules for block matrix multiplication we can show that where G−1 is of dimension (3N-6 x 3N-6) and N−1 is (6 x 6). The kinetic energy becomes where we used that the last 6 components of v are zero. This form of the kinetic energy of vibration enters Wilson's GF method. It is of some interest to point out that the potential energy in the harmonic approximation can be written as follows where H is the Hessian of the potential in the minimum and F, defined by this equation, is the F matrix of the GF method. Relation to the harmonic approximation In the harmonic approximation to the nuclear vibrational problem, expressed in displacement coordinates, one must solve the generalized eigenvalue problem where H is a 3N × 3N symmetric matrix of second derivatives of the potential . H is the Hessian matrix of V in the equilibrium . 
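In the usual matrix notation this generalized eigenvalue problem, together with the generalized orthogonality of its eigenvectors, reads as follows (the symbol $\boldsymbol{\Phi}$ for the diagonal matrix of eigenvalues is an assumed, conventional choice):

$$\mathbf{H}\,\mathbf{C} = \mathbf{M}\,\mathbf{C}\,\boldsymbol{\Phi}, \qquad \mathbf{C}^{\mathrm{T}}\,\mathbf{M}\,\mathbf{C} = \mathbf{I}.$$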
The diagonal matrix M contains the masses on the diagonal. The diagonal matrix Φ contains the eigenvalues, while the columns of C contain the eigenvectors. It can be shown that the invariance of V under simultaneous translation over t of all nuclei implies that the vectors T = (t, ..., t) are in the kernel of H. From the invariance of V under an infinitesimal rotation of all nuclei around s, it can be shown that the vectors S = (s × R10, ..., s × RN0) are also in the kernel of H. Thus, six columns of C corresponding to eigenvalue zero are determined algebraically. (If the generalized eigenvalue problem is solved numerically, one will in general find six linearly independent linear combinations of S and T.) The eigenspace corresponding to eigenvalue zero is at least of dimension 6 (often it is exactly of dimension 6, since the other eigenvalues, which are force constants, are never zero for molecules in their ground state). Thus, T and S correspond to the overall (external) motions: translation and rotation, respectively. They are zero-energy modes because space is homogeneous (force-free) and isotropic (torque-free). By the definition in this article, the non-zero frequency modes are internal modes, since they lie within the orthogonal complement of Rext. The generalized orthogonalities, applied to the "internal" (non-zero eigenvalue) and "external" (zero-eigenvalue) columns of C, are equivalent to the Eckart conditions. References Further reading Molecular physics Quantum chemistry
Eckart conditions
[ "Physics", "Chemistry" ]
1,877
[ "Quantum chemistry", "Molecular physics", "Quantum mechanics", "Theoretical chemistry", " molecular", "nan", "Atomic", " and optical physics" ]
4,971,619
https://en.wikipedia.org/wiki/Log%20wind%20profile
The log wind profile is a semi-empirical relationship commonly used to describe the vertical distribution of horizontal mean wind speeds within the lowest portion of the planetary boundary layer. The relationship is well described in the literature. The logarithmic profile of wind speeds is generally limited to the lowest 100 m of the atmosphere (i.e., the surface layer of the atmospheric boundary layer). The rest of the atmosphere is composed of the remaining part of the planetary boundary layer (up to around 1000 m) and the troposphere or free atmosphere. In the free atmosphere, geostrophic wind relationships should be used. Definition The equation to estimate the mean wind speed () at height (meters) above the ground is: where is the friction velocity (m s−1), is the Von Kármán constant (~0.41), is the zero plane displacement (in metres), is the surface roughness (in meters), and is a stability term where is the Obukhov length from Monin-Obukhov similarity theory. Under neutral stability conditions, and drops out and the equation is simplified to, Zero-plane displacement () is the height in meters above the ground at which zero mean wind speed is achieved as a result of flow obstacles such as trees or buildings. This displacement can be approximated as 2/3 to 3/4 of the average height of the obstacles. For example, if estimating winds over a forest canopy of height 30 m, the zero-plane displacement could be estimated as d = 20 m. Roughness length () is a corrective measure to account for the effect of the roughness of a surface on wind flow. That is, the value of the roughness length depends on the terrain. The exact value is subjective and references indicate a range of values, making it difficult to give definitive values. In most cases, references present a tabular format with the value of given for certain terrain descriptions. For example, for very flat terrain (snow, desert) the roughness length may be in the range 0.001 to 0.005 m. Similarly, for open terrain (grassland) the typical range is 0.01-0.05 m. For cropland, and brush/forest the ranges are 0.1-0.25 m and 0.5-1.0 m respectively. When estimating wind loads on structures the terrains may be described as suburban or dense urban, for which the ranges are typically 0.1-0.5 m and 1-5 m respectively. In order to estimate the mean wind speed at one height () based on that at another (), the formula would be rearranged, where is the mean wind speed at height . Limits The log wind profile is generally considered to be a more reliable estimator of mean wind speed than the wind profile power law in the lowest 10–20 m of the planetary boundary layer. Between 20 m and 100 m both methods can produce reasonable predictions of mean wind speed in neutral atmospheric conditions. From 100 m to near the top of the atmospheric boundary layer the power law produces more accurate predictions of mean wind speed (assuming neutral atmospheric conditions). The neutral atmospheric stability assumption discussed above is reasonable when the hourly mean wind speed at a height of 10 m exceeds 10 m/s where turbulent mixing overpowers atmospheric instability. Applications Log wind profiles are generated and used in many atmospheric pollution dispersion models. See also Wind profile power law List of atmospheric dispersion models References Atmospheric dispersion modeling Boundary layer meteorology
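The neutral-stability profile and its rearranged form for transferring a measurement from one height to another can be illustrated with a short script. This is a minimal sketch assuming neutral conditions; the function names and the sample roughness, displacement, and friction-velocity values are illustrative, not taken from the article.

```python
import math

KARMAN = 0.41  # von Karman constant


def log_wind_speed(z, u_star, z0, d=0.0):
    """Mean wind speed at height z (m) under neutral stability:
    u(z) = (u*/k) * ln((z - d) / z0)."""
    return (u_star / KARMAN) * math.log((z - d) / z0)


def transfer_wind_speed(u1, z1, z2, z0, d=0.0):
    """Estimate the wind speed at height z2 from a measurement u1 at z1,
    assuming the same logarithmic profile:
    u2 = u1 * ln((z2 - d)/z0) / ln((z1 - d)/z0)."""
    return u1 * math.log((z2 - d) / z0) / math.log((z1 - d) / z0)


# Illustrative values: grassland (z0 ~ 0.03 m), no displacement, u* = 0.4 m/s
print(round(log_wind_speed(10.0, u_star=0.4, z0=0.03), 2))            # wind at 10 m
print(round(transfer_wind_speed(5.0, z1=10.0, z2=50.0, z0=0.03), 2))  # 10 m -> 50 m
```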
Log wind profile
[ "Chemistry", "Mathematics", "Engineering", "Environmental_science" ]
723
[ "Functions and mappings", "Mathematical objects", "Vertical distributions", "Atmospheric dispersion modeling", "Mathematical relations", "Environmental engineering", "Environmental modelling" ]
25,486,644
https://en.wikipedia.org/wiki/Shields%20parameter
The Shields parameter, also called the Shields criterion or Shields number, is a nondimensional number used to calculate the initiation of motion of sediment in a fluid flow. It is a nondimensionalization of a shear stress, and is typically denoted $\tau_\ast$ or $\theta$. The parameter was developed by Albert F. Shields and was later named after him. The Shields parameter is the main parameter of the Shields formula. The Shields parameter is given by: $\theta = \frac{\tau}{(\rho_s - \rho)\, g\, D}$ where: $\tau$ is a dimensional shear stress; $\rho_s$ is the density of the sediment; $\rho$ is the density of the fluid; $g$ is acceleration due to gravity; $D$ is a characteristic particle diameter of the sediment. The critical shear stress and the critical Shields number ($\tau_c$ and $\theta_c$) describe the conditions when the sediment starts moving. Note that the shear stress is a property of the current, while the critical shear stress is a property of the sediment. Physical meaning By multiplying the top and bottom of the Shields parameter by D², you can see that it is proportional to the ratio of the fluid force on the particle to the weight of the particle. References External links Sedimentology Dimensionless numbers of physics Fluid dynamics
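As a numerical illustration of the formula above, the following sketch computes the Shields parameter for assumed, purely illustrative values (quartz sand in water); none of the numbers come from the article.

```python
def shields_parameter(tau, rho_s, rho, d, g=9.81):
    """Shields parameter: theta = tau / ((rho_s - rho) * g * d)."""
    return tau / ((rho_s - rho) * g * d)


# Illustrative values: bed shear stress 0.5 Pa, quartz sand (2650 kg/m^3)
# in water (1000 kg/m^3), grain diameter 0.5 mm.
theta = shields_parameter(tau=0.5, rho_s=2650.0, rho=1000.0, d=0.0005)
print(round(theta, 3))  # dimensionless
```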
Shields parameter
[ "Chemistry", "Engineering" ]
225
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
25,488,366
https://en.wikipedia.org/wiki/Holographic%20screen
A holographic screen is a two-dimensional display technology that uses coated glass media for the projection surface of a video projector. "Holographic" refers not to a stereoscopic effect, but to the coating that bundles light using formed microlenses. The lens design and attributes match the holographic area. The lenses may appear similar to the Fresnel lenses used in overhead projectors. The resulting effect is that of a free-space display, because the image carrier appears very transparent. Additionally, the beam manipulation by the lenses can be used to make the image appear to be floating in front of or behind the glass, rather than directly on it. However, this display is only two-dimensional and not true three-dimensional. It is unclear if such a technology will be able to provide acceptable three-dimensional images in the future. Working principle The display design can use either front or rear projection, in which one or more video projectors are directed at the glass plate. Each projector's beam widens as it approaches the surface and then is bundled again by the lenses' arrangement on the glass. This forms a virtual point of origin, so that the image source appears to be an imaginary object somewhere close to the glass. In rear projection (the common use case), the light passes through the glass; in front projection it is reflected. Interactive holographic screens Interactive holographic screens add gesture support to holographic screens. These systems contain three basic components: A projector A computer Two films The computer sends the image to the projector. The projector generates light beams which form the image on the screen. When the user touches the screen, a tactile membrane film reacts to these movements, generating electrical impulses that are sent back to the computer. The computer interprets the received impulses and modifies the projected image according to the information. The projector generates the beams of light that will form the image on the screen's film, which is adhered to the crystal support. These crystal lenses can be a maximum of across. The projector is usually located behind the screen and must be placed at a certain angle above or below the user's line of sight to avoid dazzling the user. Therefore, it must be a trapezoidal projector, so it can compensate for the deformation of the images at this angle of displacement. The films are thin sheets of plastic applied to the crystal that allow both visualization and interactivity. There are two types of films: Screen film: This film can be opaque or transparent. It is possible to work with different degrees of opacity that can vary between 90% and 98%, depending on the application (interior, exterior, natural lighting, artificial lighting, etc.). Tactile membrane: This film enables interactivity. Projected capacitive technology detects user gestures and sends impulses to the computer. Uses Most initial uses of this technology are advertising-related, such as shop windows. An interactive holographic screen can be mounted on shop windows so that passersby can interact with it. Non-interactive holographic screens in shop windows can be coupled with artificial vision software to adapt ads based on the viewer's characteristics (age, sex, etc.). These types of screens are often used for the display of Vocaloids at concerts because they simulate the illusion of a holographically-projected virtual performer.
See also 3D display Free-space display Large-screen television technology Phantasmagoria Rear-projection television Video projector References Adwindow adhesive projection screens (In English, Spanish, French) Eresmultimedia (In Spanish) Globalzepp (In Spanish) Iberhermes Orizom (In Spanish) MediaScreen GmbH - Germany Holography Display technology Touchscreens
Holographic screen
[ "Engineering" ]
781
[ "Electronic engineering", "Display technology" ]
25,489,830
https://en.wikipedia.org/wiki/Classification%20of%20Pharmaco-Therapeutic%20Referrals
The Classification of Pharmaco-Therapeutic Referrals (CPR) is a taxonomy focused on defining and grouping together situations requiring a referral from pharmacists to physicians (and vice versa) regarding the pharmacotherapy used by the patients. It has been published in 2008. It is bilingual: English/Spanish (Clasificación de Derivaciones Fármaco-terapéuticas). It is a simple and efficient classification of pharmaco-therapeutic referrals between physicians and pharmacists permitting a common inter-professional language. It is adapted to any type of referrals among health professionals, and to increase its specificity it can be combined with ATC codes, ICD-10, and ICPC-2 PLUS. It is a part of the MEDAFAR Project, whose objective is to improve, through different scientific activities, the coordination processes between physicians and pharmacists working in primary health care. Supporting institutions Pharmaceutical Care Foundation of Spain (Fundación Pharmaceutical Care España) Spanish Society of Primary Care Doctors (Sociedad Española de Médicos de Atención Primaria) (SEMERGEN) Authors Raimundo Pastor Sánchez (Family practice, "Miguel de Cervantes" Primary Health Centre SERMAS Alcalá de Henares – Madrid – Spain) Carmen Alberola Gómez-Escolar (Pharmacist, Vice-President Fundación Pharmaceutical Care España) Flor Álvarez de Toledo Saavedra (Community pharmacist, Past-President Fundación Pharmaceutical Care España) Nuria Fernández de Cano Martín (Family practice, "Daroca" Primary Health Centre SERMAS Madrid – Spain) Nancy Solá Uthurry (Doctor in Pharmacy, Fundación Pharmaceutical Care España) Structure It is structured in 4 chapters (E, I, N, S) and 38 rubrics. The terminology used follows the rules of ICPC-2. Each rubric consists in an alphanumeric code (the letter corresponds to the chapters and the number to the component) and each title of the rubric (the assigned name) is expressed and explained by: – A series of terms related with the title of the rubric. – A definition expressing the meaning of the rubric. – A list of inclusion criteria and another list with exclusion criteria to select and qualify the contents corresponding to a rubric. – Some example to illustrate every term. It also includes a glossary of 51 terms defined by consensus, an alphabetical index with 350 words used in the rubrics; and a standardized model of inter-professional referral form, to facilitate referrals from community pharmacists to primary care physicians. Classification of Pharmaco-Therapeutic Referrals MEDAFAR E. Effectiveness / efficiency E 0. Effectiveness / Efficiency, unspecified E 1. Indication E 2. Prescription and dispensing conditions E 3. Active substance / excipient E 4. Pharmaceutical form / how supplied E 5. Dosage E 6. Quality E 7. Storage E 8. Consumption E 9. Outcome. I. Information / health education I 0. Information / Health education, unspecified I 1. Situation / reason for encounter I 2. Health problem I 3. Complementary examination I 4. Risk I 5. Pharmacological treatment I 6. No pharmacological treatment I 7. Treatment goal I 8. Socio-healthcare system. N. Need N 0. Need, unspecified N 1. Treatment based on symptoms and/or signs N 2. Treatment based on socio–economic-work issues N 3. Treatment based on public health issues N 4. Prevention N 5. Healthcare provision N 6. Complementary test for treatment control N 7. Administrative activity N 8. On patient request (fears, doubts, wants). S. Safety S 0. Safety, unspecified S 1. Toxicity S 2. Interaction S 3. Allergy S 4. 
Addiction (dependence) S 5. Other side effects S 6. Contraindication S 7. Medicalisation S 8. Non-regulate substance S 9. Data / confidentiality. See also Pharmaceutical care Referral (medicine) References Bibliography Pastor Sánchez R, Alberola Gómez-Escolar C, Álvarez de Toledo Saavedra F, Fernández de Cano Martín N, Solá Uthurry N. Clasificación de Derivaciones Fármaco-terapéuticas (CDF). MEDAFAR. Madrid: IMC; 2008. Álvarez de Toledo Saavedra F, Fernández de Cano Martín N, coordinadores. MEDAFAR Asma. Madrid: IMC; 2007. Álvarez de Toledo Saavedra F, Fernández de Cano Martín N, coordinadores. MEDAFAR Hipertensión. Madrid: IMC; 2007. Aranaz JM, Aibar C, Vitaller J, Mira JJ, Orozco D, Terol E, Agra Y. Estudio sobre la seguridad de los pacientes en atención primaria de salud (Estudio APEAS). Madrid: Ministerio de Sanidad y Consumo; 2008. Aranaz JM, Aibar C, Vitaller J, Ruiz P. Estudio Nacional sobre los Efectos Adversos ligados a la Hospitalización. ENEAS 2005. Madrid: Ministerio de Sanidad y Consumo; 2006. Criterios de derivación del farmacéutico al médico general/familia, ante mediciones esporádicas de presión arterial. Consenso entre la Sociedad Valenciana de Hipertensión y Riesgo Vascular (SVHTAyFV) y la Sociedad de Farmacia Comunitaria de la Comunidad Valenciana (SFaC-CV). 2007. Fleming DM (ed). The European study of referrals from primary to secondary care. Exeter: Royal College of General Practitioners; 1992. Foro de Atención Farmacéutica. Documento de consenso 2008. Madrid: MSC, RANF, CGCOF, SEFAP, SEFAC, SEFH, FPCE, GIAFUG. 2008. García Olmos L. Análisis de la demanda derivada en las consultas de medicina general en España. Tesis doctoral. Madrid: Universidad Autónoma de Madrid; 1993. Garjón Parra J, Gorricho Mendívil J. Seguridad del paciente: cuidado con los errores de medicación. Boletín de Información Farmacoterapéutica de Navarra. 2010;18(3) Gérvas J. Introducción a las classificaciones en Atención Primaria, con una valoración técnica de los "Consensos de Granada". Pharm Care Esp. 2003; 5(2):98-104. Hospital Ramón y Cajal, Área 4 Atención Primaria de Madrid. Guía Farmacoterapéutica. Madrid; 2005. CD-ROM. Ley 29/2006, de 26 de julio, de garantías y uso racional de los medicamentos y productos sanitarios. BOE. 2006 julio 27; (178): 28122-65. Ley 41/2002, de 14 de noviembre, básica reguladora de la autonomía del paciente y de derechos y obligaciones en materia de información y documentación clínica. BOE. 2002 noviembre 15; (274): 40126-32. Ley Orgánica 15/1999, de 13 de diciembre, de Protección de Datos de Carácter Personal. BOE. 1999 diciembre 14; (298): 43088-99. Organización Médica Colegial. Código de ética y deontología médica. Madrid: OMC; 1999. Palacio Lapuente F. Actuaciones para la mejora de la seguridad del paciente en atención primaria [editorial]. FMC. 2008; 15(7): 405-7. Panel de consenso ad hoc. Consenso de Granada sobre Problemas Relacionados con medicamentos. Pharm Care Esp. 1999; 1(2):107-12. Prado Prieto L, García Olmos L, Rodríguez Salvanés F, Otero Puime A. Evaluación de la demanda derivada en atención primaria. Aten Primaria. 2005; 35:146-51. Starfield B. Research in general practice: co-morbidity, referrals, and the roles of general practitioners and specialists. SEMERGEN. 2003; 29(Supl 1):7-16. WONCA Classification Committee. An international glossary for general/family practice. Fam Pract. 1995; 12(3): 341-69. World Alliance for Patient Safety. International Classification for Patient Safety (ICPS). 2007. 
External links Classification of Pharmaco-Terapeutic Referrals (CPR) Clasificación de Derivaciones Fármaco-terapéuticas (CDF) MEDAFAR SEMERGEN Fundación Pharmaceutical Care España ICPC-2e (by the Norwegian Centre for Informatics in Health and Social Care)] International Classification of Diseases (ICD) ICD-10 Código ATC (Anatomical Therapeutic Chemical drug classification) Medical manuals Pharmacological classification systems Primary care Clinical procedure classification
Classification of Pharmaco-Therapeutic Referrals
[ "Chemistry" ]
2,012
[ "Pharmacological classification systems", "Pharmacology" ]
25,491,298
https://en.wikipedia.org/wiki/Thymus%20transplantation
Thymus transplantation is a form of organ transplantation where the thymus is moved from one body to another. It is used in certain immunodeficiencies, such as DiGeorge Syndrome. Indications Thymus transplantation is used to treat infants with DiGeorge syndrome, which results in an absent or hypoplastic thymus, in turn causing problems with the immune system's T-cell mediated response. It is used in people with complete DiGeorge anomaly, which are entirely athymic. This subgroup represents less than 1% of DiGeorge syndrome patients. Nezelof syndrome is another thymus-related disease where it can be used. Thymus transplantation can also be used in pediatric patients with a Foxn1 deficiency. Co-transplantation with other organs In the 2000s, promising animal experiments into transplanting thymic tissue and another organ at the same time were carried out, in order to improve the recipient's tolerance of the transplanted organ, and to reduce the need for immunosuppressing drugs like tacrolimus. Such trials have been performed with kidney and heart transplants, drastically extending the time the animals were surviving without immunosuppressing drugs. The first human heart-and-thymus co-transplantation was performed on Easton Sinnamon in 2022, a newborn who suffered from both a lack of T cells, and a serious heart defect. Depending on the development, it is planned to wean him off immunosuppressant drugs, but it remains to be seen whether the same technique is viable in adults, as the thymus shrinks with age, with the bone marrow taking over T cell production. Effects and prognosis A study of 54 DiGeorge syndrome infants resulted in all tested subjects having developed polyclonal T-cell repertoires and proliferative responses to mitogens. The procedure was well tolerated and resulted in stable immunoreconstitution in these infants. It had a survival rate of 75%, having a follow-up as long as 13 years. Complications include an increased susceptibility to infections while the T cells have not yet developed, rashes and erythema. Graft-versus-host disease Theoretically, thymus transplantation could cause two types of graft-versus-host disease (GVHD): First, it could cause a donor T cell-related GVHD, because of T cells from the donor that are present in the transplanted thymus that recognizes the recipient as foreign. Donor T cells can be detected in the recipient after transplantation, but there is no evidence of any donor T cell-related graft-versus-host disease. Second, a thymus transplantation can cause a non-donor T cell-related GVHD because the recipients thymocytes would use the donor thymus cells as models when going through the negative selection to recognize self-antigens, and could therefore still mistake own structures in the rest of the body for being non-self. This is a rather indirect GVHD because it is not directly cells in the graft itself that causes it, but cells in the graft that make the recipient's T cells act like donor T cells. It would also be of relatively late-onset because it requires the formation of new T cells. It can be seen as a multiple-organ autoimmunity in xenotransplantation experiments of the thymus between different species. Autoimmune disease is a frequent complication after human allogeneic thymus transplantation, found in 42% of subjects over 1 year post transplantation. However, this is partially explained by that the indication itself, that is, complete DiGeorge syndrome, increases the risk of autoimmune disease. 
References Organ transplantation Immunology Lymphatic organ surgery
Thymus transplantation
[ "Biology" ]
790
[ "Immunology" ]
25,491,839
https://en.wikipedia.org/wiki/Sheer%20%28ship%29
The sheer is a measure of longitudinal main deck curvature in naval architecture. The sheer forward is usually twice that aft. Increases in the rise of the sheer forward and aft build volume into the hull, and in turn increase its buoyancy forward and aft, thereby keeping the ends from diving into an oncoming wave and slowing the ship. In the early days of sail, one discussed a hull's sheer in terms of how much "hang" it had. William Sutherland's The Ship-builders Assistant (1711) covers this information in more detail. The practice of building sheer into a ship dates back to the era of small sailing ships. These vessels were built with the decks curving upwards at the bow and stern in order to increase stability by preventing the ship from pitching up and down. Sheer on exposed decks also makes a ship more seaworthy by raising the deck at fore and aft ends further from the water and by reducing the volume of water coming on deck. See also Camber (ship) References Naval architecture
Sheer (ship)
[ "Engineering" ]
205
[ "Naval architecture", "Marine engineering" ]
22,589,574
https://en.wikipedia.org/wiki/Instance-based%20learning
In machine learning, instance-based learning (sometimes called memory-based learning) is a family of learning algorithms that, instead of performing explicit generalization, compare new problem instances with instances seen in training, which have been stored in memory. Because computation is postponed until a new instance is observed, these algorithms are sometimes referred to as "lazy." It is called instance-based because it constructs hypotheses directly from the training instances themselves. This means that the hypothesis complexity can grow with the data: in the worst case, a hypothesis is a list of n training items and the computational complexity of classifying a single new instance is O(n). One advantage that instance-based learning has over other methods of machine learning is its ability to adapt its model to previously unseen data. Instance-based learners may simply store a new instance or throw an old instance away. Examples of instance-based learning algorithms are the k-nearest neighbors algorithm, kernel machines and RBF networks. These store (a subset of) their training set; when predicting a value/class for a new instance, they compute distances or similarities between this instance and the training instances to make a decision. To battle the memory complexity of storing all training instances, as well as the risk of overfitting to noise in the training set, instance reduction algorithms have been proposed. See also Analogical modeling References Machine learning
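As an illustration of the lazy, memory-based strategy described above, the following sketch implements a tiny k-nearest-neighbors classifier: "training" only stores the instances, and all computation (distances to every stored instance) happens at prediction time. It is a minimal, hypothetical example, not a reference implementation.

```python
import math
from collections import Counter


class KNNClassifier:
    """Instance-based (lazy) learner: fitting only memorizes the data."""

    def __init__(self, k=3):
        self.k = k
        self.instances = []  # list of (feature_vector, label)

    def fit(self, X, y):
        # No explicit generalization here -- just store the training instances.
        self.instances = list(zip(X, y))

    def predict(self, x):
        # Computation is deferred to query time: O(n) distance evaluations.
        dists = sorted(
            (math.dist(x, xi), label) for xi, label in self.instances
        )
        top_labels = [label for _, label in dists[: self.k]]
        return Counter(top_labels).most_common(1)[0][0]


# Toy usage
clf = KNNClassifier(k=3)
clf.fit([(0, 0), (0, 1), (1, 0), (5, 5), (6, 5), (5, 6)],
        ["a", "a", "a", "b", "b", "b"])
print(clf.predict((0.5, 0.5)))  # -> "a"
print(clf.predict((5.5, 5.5)))  # -> "b"
```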
Instance-based learning
[ "Engineering" ]
282
[ "Artificial intelligence engineering", "Machine learning" ]
22,590,101
https://en.wikipedia.org/wiki/Hasse%E2%80%93Arf%20theorem
In mathematics, specifically in local class field theory, the Hasse–Arf theorem is a result concerning jumps of the upper numbering filtration of the Galois group of a finite Galois extension. A special case of it when the residue fields are finite was originally proved by Helmut Hasse, and the general result was proved by Cahit Arf. Statement Higher ramification groups The theorem deals with the upper numbered higher ramification groups of a finite abelian extension . So assume is a finite Galois extension, and that is a discrete normalised valuation of K, whose residue field has characteristic p > 0, and which admits a unique extension to L, say w. Denote by the associated normalised valuation ew of L and let be the valuation ring of L under . Let have Galois group G and define the s-th ramification group of for any real s ≥ −1 by So, for example, G−1 is the Galois group G. To pass to the upper numbering one has to define the function ψL/K which in turn is the inverse of the function ηL/K defined by The upper numbering of the ramification groups is then defined by Gt(L/K) = Gs(L/K) where s = ψL/K(t). These higher ramification groups Gt(L/K) are defined for any real t ≥ −1, but since vL is a discrete valuation, the groups will change in discrete jumps and not continuously. Thus we say that t is a jump of the filtration {Gt(L/K) : t ≥ −1} if Gt(L/K) ≠ Gu(L/K) for any u > t. The Hasse–Arf theorem tells us the arithmetic nature of these jumps. Statement of the theorem With the above set up, the theorem states that the jumps of the filtration {Gt(L/K) : t ≥ −1} are all rational integers. Example Suppose G is cyclic of order , residue characteristic and be the subgroup of of order . The theorem says that there exist positive integers such that ... Non-abelian extensions For non-abelian extensions the jumps in the upper filtration need not be at integers. Serre gave an example of a totally ramified extension with Galois group the quaternion group of order 8 with The upper numbering then satisfies   for   for   for so has a jump at the non-integral value . Notes References Galois theory Theorems in algebraic number theory Turkish inventions
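For reference, the standard definitions behind the objects named above (as in Serre's Local Fields; the notation is assumed to agree with the article's) are:

$$G_s(L/K) = \{\sigma\in G : v_L(\sigma(x)-x)\ge s+1 \ \text{ for all } x\in\mathcal{O}_L\},\qquad s\ge -1,$$

$$\eta_{L/K}(s) = \int_{0}^{s}\frac{dt}{[G_0:G_t]},\qquad \psi_{L/K}=\eta_{L/K}^{-1},\qquad G^{t}(L/K)=G_{\psi_{L/K}(t)}(L/K).$$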
Hasse–Arf theorem
[ "Mathematics" ]
521
[ "Theorems in algebraic number theory", "Theorems in number theory" ]
22,590,455
https://en.wikipedia.org/wiki/Sonom%C3%A8tre%20of%20Louli%C3%A9
The () is a tuning device that French musician Étienne Loulié invented to facilitate the tuning of stringed instruments. Sébastien de Brossard considered this device to be "one of the finest inventions" of the 17th century. On July 4, 1699, Étienne Loulié had the honor of presenting two inventions before the Royal Academy of Sciences in Paris. The records of the Academy (Procès verbaux) for that day read: "Monsieur Loulié showed the Company a new machine he has invented, which he calls the sonomètre, by means of which anyone who has never tuned a harpsichord, as long as he has a sufficiently good ear to tune one string in unison with another, and one octave in unison with another, will on the first attempt tune the harpsichord as quickly, more easily, and in less time than the best music master tuning it by the ordinary method." The devices Loulié invented both a table-top and a portable model of the sonomètre. Both versions are based on the same components. The first component is an instrument string (shown in red on the modified engravings) that is strung from one end to the other of a shallow rectangular wooden box. This string represents the "monochord" with which the sources equate the invention. The second component consists of one or more movable pieces of wood (shown in yellow on the engravings) that are marked to represent the 11 notes of the musical scale. The third component consists of something with which to pluck the string (shown in green on the engravings), in one case the user's finger, in the other a lever. The box of the portable sonomètre is only 5 inches long. It could fit into a large pocket. To obtain a specific note of the scale – for example the F shown in the engraving – a graduated strip of wood (highlighted in yellow) is slid out from at opening at the of the box and is held in position (at "Fa") with a pin inserted in a hole in the strip. Pulling out this graduated strip of wood causes the inner end of the movable piece to rest on the string, shortening or lengthening its length so that the desired pitch will be sounded. When everything is in place, the user plucks the string with his finger, at the point marked N (circled in green), and F is sounded. He plucks the string as often as needed, until the F-sharp key of the instrument being tuned matches the sound being produced by the sonomètre. At that point he tunes all the F-sharp strings on the instrument, to put them in unison with the original F. And so on, for the entire octave. At first glance the larger sonomètre resembles the keyboard of a harpsichord, with its dark and light keys. However, the 11 raised dark objects are not keys, they are low triangular wedges that function like the frets of a viol. That is to say, when one of the 11 levers is pulled forward – as UT, C, is in the engraving (yellow) – the little wedge comes into contact with the string (red). A small L-shaped tool is then positioned over the string, pressing it down firmly, as a finger presses the string of a viol onto a fret. When everything is ready, the user presses a lever (green) in the front of the box and activates a harpsichord-like jack that rises and plucks the string, sounding a C. And so on, until the user has set the 11 pitches that will permit him to tune the entire keyboard. The original illustration of this table-top device included the "proportions" of an "exact division of the notes" into the larger or smaller intervals characteristic of the unequal temperament preferred by Loulié. 
In 1701 Loulié's former colleague, Joseph Sauveur, pointed out to his colleagues at the Academy of Sciences the shortcomings of Loulié's sonomètre and presented his own version of the device. The Academy's description The sonomètre invented by Monsieur Loulié: [In Figure 1] AB is a box that contains a sliding piece, DEF, that runs along the other piece, LM, that is attached to the bottom of the box. The end at ED comes out through an opening similar to the one cut in B. The other end [of the sliding piece], F, has a sort of right-angle bar [équerre] that is attached to it with a screw and that is pushed by a spring so that this right angle will pluck the string HNG at the place marked I. The second figure [the sliding piece of wood] is the actual size and is divided according to the necessary proportions that will permit the string to produce the sound one desires for tuning any instrument whatsoever, which is done as follows: At each dividing line on piece DE, which one passes through the opening B in the box. there is a little pin. When one wishes to hear a note, one pulls on the piece, to move the pin for that note. Then, applying that pin with precision against the opening in the box, one plucks the string with one's finger at N, and the string produces the desired sound. This effect is produced by the different positions in which one puts the sliding piece (DE) and the right-angle piece over the string (HG) The different distances from H create the different sounds. This instrument is portable. It fits easily into a pocket. It is even used by harpsichord makers, who employ it to tune those sorts of instruments. Another sonomètre invented by Monsieur Loulié The top of the box (ABCD) has, in its mid-length, several bridges fixed to the ends of that same number of little movable boards, which fit between [the horizontal sticks] FG and EH. There are twelve of these little boards, which mark the divisions of all the notes of the entire octave, including flats and sharps. A string (OQP) produces the sound of these different notes when it is pinched by the little jack [sauterau] that is activated inside the box by the key R, where it rests upon a little fulcrum. Note that do [ut or C] is exactly in the middle of the two fixed points O and P. When one wishes to tune an instrument, one pulls toward oneself the [little board] of the note one wishes to produce, so that the raised bridge is under the string. And to ensure that the string will touch the bridge firmly, one places over it a piece in the form of a right angle square [équerre]. For example, if one wants a do, one pulls board LI and the bridge affixed to it (MN); and one puts the square behind the bridge, then plucks the string with the jack. The third figure shows the exact division of the notes, the different distances being marked out. References Further reading Étienne Loulié, Nouveau Sistème de musique... avec la description et l'usage du sonomètre (Paris: Ballard, 1698) Musical instrument parts and accessories Acoustics Musical tuning String instruments
Sonomètre of Loulié
[ "Physics", "Technology" ]
1,476
[ "Musical instrument parts and accessories", "Classical mechanics", "Acoustics", "Components" ]
22,590,461
https://en.wikipedia.org/wiki/Plant%20disease%20resistance
Plant disease resistance protects plants from pathogens in two ways: by pre-formed structures and chemicals, and by infection-induced responses of the immune system. Relative to a susceptible plant, disease resistance is the reduction of pathogen growth on or in the plant (and hence a reduction of disease), while the term disease tolerance describes plants that exhibit little disease damage despite substantial pathogen levels. Disease outcome is determined by the three-way interaction of the pathogen, the plant, and the environmental conditions (an interaction known as the disease triangle). Defense-activating compounds can move cell-to-cell and systemically through the plant's vascular system. However, plants do not have circulating immune cells, so most cell types exhibit a broad suite of antimicrobial defenses. Although obvious qualitative differences in disease resistance can be observed when multiple specimens are compared (allowing classification as "resistant" or "susceptible" after infection by the same pathogen strain at similar inoculum levels in similar environments), a gradation of quantitative differences in disease resistance is more typically observed between plant strains or genotypes. Plants consistently resist certain pathogens but succumb to others; resistance is usually specific to certain pathogen species or pathogen strains. Background Plant disease resistance is crucial to the reliable production of food, and it provides significant reductions in agricultural use of land, water, fuel, and other inputs. Plants in both natural and cultivated populations carry inherent disease resistance, but this has not always protected them. The Great Famine of Ireland in the 1840s resulted from potato late blight, caused by the oomycete Phytophthora infestans. The world's first mass-cultivated banana cultivar, Gros Michel, was lost in the 1920s to Panama disease, caused by the fungus Fusarium oxysporum. The current wheat stem rust, leaf rust, and yellow stripe rust epidemics spreading from East Africa into the Indian subcontinent are caused by the rust fungi Puccinia graminis and P. striiformis. Other epidemics include chestnut blight, as well as recurrent severe plant diseases such as rice blast, soybean cyst nematode, and citrus canker. Plant pathogens can spread rapidly over great distances, vectored by water, wind, insects, and humans. Across large regions and many crop species, it is estimated that diseases typically reduce plant yields by 10% every year in more developed nations or agricultural systems, but yield loss to diseases often exceeds 20% in less developed settings. However, disease control is reasonably successful for most crops. Disease control is achieved by use of plants that have been bred for good resistance to many diseases, and by plant cultivation approaches such as crop rotation, pathogen-free seed, appropriate planting date and plant density, control of field moisture, and pesticide use.
Common disease resistance mechanisms Pre-formed structures and compounds Plant cuticle/surface Plant cell walls Antimicrobial chemicals (for example: polyphenols, sesquiterpene lactones, saponins) Antimicrobial peptides Enzyme inhibitors Detoxifying enzymes that break down pathogen-derived toxins Receptors that perceive pathogen presence and activate inducible plant defences Inducible post-infection plant defenses Cell wall reinforcement (cellulose, lignin, suberin, callose, cell wall proteins) Antimicrobial chemicals, including reactive oxygen species such as hydrogen peroxide or peroxynitrite, or more complex phytoalexins such as genistein or camalexin Antimicrobial proteins such as defensins, thionins, or PR-1 Antimicrobial enzymes such as chitinases, beta-glucanases, or peroxidases Hypersensitive response – a rapid host cell death response associated with defence induction. Immune system The plant immune system carries two interconnected tiers of receptors, one most frequently sensing molecules outside the cell and the other most frequently sensing molecules inside the cell. Both systems sense the intruder and respond by activating antimicrobial defenses in the infected cell and neighboring cells. In some cases, defense-activating signals spread to the rest of the plant or even to neighboring plants. The two systems detect different types of pathogen molecules and classes of plant receptor proteins. The first tier is primarily governed by pattern recognition receptors that are activated by recognition of evolutionarily conserved pathogen or microbial–associated molecular patterns (PAMPs or MAMPs). Activation of PRRs leads to intracellular signaling, transcriptional reprogramming, and biosynthesis of a complex output response that limits colonization. The system is known as PAMP-triggered immunity or as pattern-triggered immunity (PTI). The second tier, primarily governed by R gene products, is often termed effector-triggered immunity (ETI). ETI is typically activated by the presence of specific pathogen "effectors" and then triggers strong antimicrobial responses (see R gene section below). In addition to PTI and ETI, plant defenses can be activated by the sensing of damage-associated compounds (DAMP), such as portions of the plant cell wall released during pathogenic infection. Responses activated by PTI and ETI receptors include ion channel gating, oxidative burst, cellular redox changes, or protein kinase cascades that directly activate cellular changes (such as cell wall reinforcement or antimicrobial production), or activate changes in gene expression that then elevate other defensive responses. Plant immune systems show some mechanistic similarities with the immune systems of insects and mammals, but also exhibit many plant-specific characteristics. The two above-described tiers are central to plant immunity but do not fully describe plant immune systems. In addition, many specific examples of apparent PTI or ETI violate common PTI/ETI definitions, suggesting a need for broadened definitions and/or paradigms. The term quantitative resistance (discussed below) refers to plant disease resistance that is controlled by multiple genes and multiple molecular mechanisms that each have small effects on the overall resistance trait. Quantitative resistance is often contrasted to ETI resistance mediated by single major-effect R genes. Pattern-triggered immunity PAMPs, conserved molecules that inhabit multiple pathogen genera, are referred to as MAMPs by many researchers. 
The defenses induced by MAMP perception are sufficient to repel most pathogens. However, pathogen effector proteins (see below) are adapted to suppress basal defenses such as PTI. Many receptors for MAMPs (and DAMPs) have been discovered. MAMPs and DAMPs are often detected by transmembrane receptor-kinases that carry LRR or LysM extracellular domains. Effector triggered immunity Effector triggered immunity (ETI) is activated by the presence of pathogen effectors. The ETI response is reliant on R genes, and is activated by specific pathogen strains. Plant ETI often causes an apoptotic hypersensitive response. R genes and R proteins Plants have evolved R genes (resistance genes) whose products mediate resistance to specific virus, bacteria, oomycete, fungus, nematode or insect strains. R gene products are proteins that allow recognition of specific pathogen effectors, either through direct binding or by recognition of the effector's alteration of a host protein. Many R genes encode NB-LRR proteins (proteins with nucleotide-binding and leucine-rich repeat domains, also known as NLR proteins or STAND proteins, among other names). Most plant immune systems carry a repertoire of 100–600 different R gene homologs. Individual R genes have been demonstrated to mediate resistance to specific virus, bacteria, oomycete, fungus, nematode or insect strains. R gene products control a broad set of disease resistance responses whose induction is often sufficient to stop further pathogen growth/spread. Studied R genes usually confer specificity for particular strains of a pathogen species (those that express the recognized effector). As first noted by Harold Flor in his mid-20th century formulation of the gene-for-gene relationship, a plant R gene has specificity for a pathogen avirulence gene (Avr gene). Avirulence genes are now known to encode effectors. The pathogen Avr gene must have matched specificity with the R gene for that R gene to confer resistance, suggesting a receptor/ligand interaction for Avr and R genes. Alternatively, an effector can modify its host cellular target (or a molecular decoy of that target), and the R gene product (NLR protein) activates defenses when it detects the modified form of the host target or decoy. Effector biology Effectors are central to the pathogenic or symbiotic potential of microbes and microscopic plant-colonizing animals such as nematodes. Effectors typically are proteins that are delivered outside the microbe and into the host cell. These colonist-derived effectors manipulate the host's cell physiology and development. As such, effectors offer examples of co-evolution (example: a fungal protein that functions outside of the fungus but inside of plant cells has evolved to take on plant-specific functions). Pathogen host range is determined, among other things, by the presence of appropriate effectors that allow colonization of a particular host. Pathogen-derived effectors are a powerful tool to identify plant functions that play key roles in disease and in disease resistance. Apparently most effectors function to manipulate host physiology to allow disease to occur. Well-studied bacterial plant pathogens typically express a few dozen effectors, often delivered into the host by a Type III secretion apparatus. Fungal, oomycete and nematode plant pathogens apparently express a few hundred effectors. 
So-called "core" effectors are defined operationally by their wide distribution across the population of a particular pathogen and their substantial contribution to pathogen virulence. Genomics can be used to identify core effectors, which can then be used to discover new R gene alleles, which can be used in plant breeding for disease resistance. Small RNAs and RNA interference Plant sRNA pathways are understood to be important components of pathogen-associated molecular pattern (PAMP)-triggered immunity (PTI) and effector-triggered immunity (ETI). Bacteria‐induced microRNAs (miRNAs) in Arabidopsis have been shown to influence hormonal signalling including auxin, abscisic acid (ABA), jasmonic acid (JA) and salicylic acid (SA). Advances in genome‐wide studies revealed a massive adaptation of host miRNA expression patterns after infection by fungal pathogens Fusarium virguliforme, Erysiphe graminis, Verticillium dahliae, and Cronartium quercuum, and the oomycete Phytophthora sojae. Changes to sRNA expression in response to fungal pathogens indicate that gene silencing may be involved in this defense pathway. However, there is also evidence that the antifungal defense response to Colletotrichum spp. infection in maize is not entirely regulated by specific miRNA induction, but may instead act to fine-tune the balance between genetic and metabolic components upon infection. Transport of sRNAs during infection is likely facilitated by extracellular vesicles (EVs) and multivesicular bodies (MVBs). The composition of RNA in plant EVs has not been fully evaluated, but it is likely that they are, in part, responsible for trafficking RNA. Plants can transport viral RNAs, mRNAs, miRNAs and small interfering RNAs (siRNAs) systemically through the phloem. This process is thought to occur through the plasmodesmata and involves RNA-binding proteins that assist RNA localization in mesophyll cells. Although they have been identified in the phloem with mRNA, there is no determinate evidence that they mediate long-distant transport of RNAs. EVs may therefore contribute to an alternate pathway of RNA loading into the phloem, or could possibly transport RNA through the apoplast. There is also evidence that plant EVs can allow for interspecies transfer of sRNAs by RNA interference such as Host-Induced Gene Silencing (HIGS). The transport of RNA between plants and fungi seems to be bidirectional as sRNAs from the fungal pathogen Botrytis cinerea have been shown to target host defense genes in Arabidopsis and tomato. Species-level resistance In a small number of cases, plant genes are effective against an entire pathogen species, even though that species is pathogenic on other genotypes of that host species. Examples include barley MLO against powdery mildew, wheat Lr34 against leaf rust and wheat Yr36 against wheat stripe rust. An array of mechanisms for this type of resistance may exist depending on the particular gene and plant-pathogen combination. Other reasons for effective plant immunity can include a lack of coadaptation (the pathogen and/or plant lack multiple mechanisms needed for colonization and growth within that host species), or a particularly effective suite of pre-formed defenses. Signaling mechanisms Perception of pathogen presence Plant defense signaling is activated by the pathogen-detecting receptors that are described in an above section. 
The activated receptors frequently elicit reactive oxygen and nitric oxide production, calcium, potassium and proton ion fluxes, altered levels of salicylic acid and other hormones and activation of MAP kinases and other specific protein kinases. These events in turn typically lead to the modification of proteins that control gene transcription, and the activation of defense-associated gene expression. Transcription factors and the hormone response Numerous genes and/or proteins as well as other molecules have been identified that mediate plant defense signal transduction. Cytoskeleton and vesicle trafficking dynamics help to orient plant defense responses toward the point of pathogen attack. Mechanisms of transcription factors and hormones Plant immune system activity is regulated in part by signaling hormones such as: Salicylic acid Jasmonic acid Ethylene There can be substantial cross-talk among these pathways. Regulation by degradation As with many signal transduction pathways, plant gene expression during immune responses can be regulated by degradation. This often occurs when hormone binding to hormone receptors stimulates ubiquitin-associated degradation of repressor proteins that block expression of certain genes. The net result is hormone-activated gene expression. Examples: Auxin: binds to receptors that then recruit and degrade repressors of transcriptional activators that stimulate auxin-specific gene expression. Jasmonic acid: similar to auxin, except with jasmonate receptors impacting jasmonate-response signaling mediators such as JAZ proteins. Gibberellic acid: Gibberellin causes receptor conformational changes and binding and degradation of Della proteins. Ethylene: Inhibitory phosphorylation of the EIN2 ethylene response activator is blocked by ethylene binding. When this phosphorylation is reduced, EIN2 protein is cleaved and a portion of the protein moves to the nucleus to activate ethylene-response gene expression. Ubiquitin and E3 signaling Ubiquitination plays a central role in cell signaling that regulates processes including protein degradation and immunological response. Although one of the main functions of ubiquitin is to target proteins for destruction, it is also useful in signaling pathways, hormone release, apoptosis and translocation of materials throughout the cell. Ubiquitination is a component of several immune responses. Without ubiquitin's proper functioning, the invasion of pathogens and other harmful molecules would increase dramatically due to weakened immune defenses. E3 signaling The E3 ubiquitin ligase enzyme is a main component that provides specificity in protein degradation pathways, including immune signaling pathways. The E3 enzyme components can be grouped by which domains they contain and include several types. These include the Ring and U-box single subunit, HECT, and CRLs. Plant signaling pathways including immune responses are controlled by several feedback pathways, which often include negative feedback; and they can be regulated by De-ubiquitination enzymes, degradation of transcription factors and the degradation of negative regulators of transcription. Quantitative resistance Differences in plant disease resistance are often incremental or quantitative rather than qualitative. The term quantitative resistance (QR) refers to plant disease resistance that is controlled by multiple genes and multiple molecular mechanisms that each have small or minor effects on the overall resistance trait. 
QR is important in plant breeding because the resulting resistance is often more durable (effective for more years), and more likely to be effective against most or all strains of a particular pathogen. QR is typically effective against one pathogen species or a group of closely related species, rather than being broadly effective against multiple pathogens. QR is often obtained through plant breeding without knowledge of the causal genetic loci or molecular mechanisms. QR is likely to depend on many of the plant immune system components discussed in this article, as well as traits that are unique to certain plant-pathogen pairings (such as sensitivity to certain pathogen effectors), as well as general plant traits such as leaf surface characteristics or root system or plant canopy architecture. The term QR is synonymous with minor gene resistance. Adult plant resistance and seedling resistance Adult plant resistance (APR) is a specialist term referring to quantitative resistance that is not effective in the seedling stage but is effective throughout many remaining plant growth stages. The difference between adult plant resistance and seedling resistance is especially important in annual crops. Seedling resistance is resistance which begins in the seedling stage of plant development and continues throughout its lifetime. When used by specialists, the term does not refer to resistance that is only active during the seedling stage. "Seedling resistance" is meant to be synonymous with major gene resistance or all stage resistance (ASR), and is used as a contrast to "adult plant resistance". Seedling resistance is often mediated by single R genes, but not all R genes encode seedling resistance. Plant breeding for disease resistance Plant breeders emphasize selection and development of disease-resistant plant lines. Plant diseases can also be partially controlled by use of pesticides and by cultivation practices such as crop rotation, tillage, planting density, disease-free seeds and cleaning of equipment, but plant varieties with inherent (genetically determined) disease resistance are generally preferred. Breeding for disease resistance began when plants were first domesticated. Breeding efforts continue because pathogen populations are under selection pressure and evolve increased virulence, pathogens move (or are moved) to new areas, changing cultivation practices or climate favor some pathogens and can reduce resistance efficacy, and plant breeding for other traits can disrupt prior resistance. A plant line with acceptable resistance against one pathogen may lack resistance against others. Breeding for resistance typically includes: Identification of plants that may be less desirable in other ways, but which carry a useful disease resistance trait, including wild plant lines that often express enhanced resistance. Crossing of a desirable but disease-susceptible variety to a plant that is a source of resistance. Growth of breeding candidates in a disease-conducive setting, possibly including pathogen inoculation. Attention must be paid to the specific pathogen isolates, to address variability within a single pathogen species. Selection of disease-resistant individuals that retain other desirable traits such as yield, quality and including other disease resistance traits. Resistance is termed durable if it continues to be effective over multiple years of widespread use as pathogen populations evolve. 
"Vertical resistance" is specific to certain races or strains of a pathogen species, is often controlled by single R genes and can be less durable. Horizontal or broad-spectrum resistance against an entire pathogen species is often only incompletely effective, but more durable, and is often controlled by many genes that segregate in breeding populations. Durability of resistance is important even when future improved varieties are expected to be on the way: The average time from human recognition of a new fungal disease threat to the release of a resistant crop for that pathogen is at least twelve years. Crops such as potato, apple, banana, and sugarcane are often propagated by vegetative reproduction to preserve highly desirable plant varieties, because for these species, outcrossing seriously disrupts the preferred traits. See also asexual propagation. Vegetatively propagated crops may be among the best targets for resistance improvement by the biotechnology method of plant transformation to manage genes that affect disease resistance. Scientific breeding for disease resistance originated with Sir Rowland Biffen, who identified a single recessive gene for resistance to wheat yellow rust. Nearly every crop was then bred to include disease resistance (R) genes, many by introgression from compatible wild relatives. GM or transgenic engineered disease resistance The term GM ("genetically modified") is often used as a synonym of transgenic to refer to plants modified using recombinant DNA technologies. Plants with transgenic/GM disease resistance against insect pests have been extremely successful as commercial products, especially in maize and cotton, and are planted annually on over 20 million hectares in over 20 countries worldwide (see also genetically modified crops). Transgenic plant disease resistance against microbial pathogens was first demonstrated in 1986. Expression of viral coat protein gene sequences conferred virus resistance via small RNAs. This proved to be a widely applicable mechanism for inhibiting viral replication. Combining coat protein genes from three different viruses, scientists developed squash hybrids with field-validated, multiviral resistance. Similar levels of resistance to this variety of viruses had not been achieved by conventional breeding. A similar strategy was deployed to combat papaya ringspot virus, which by 1994 threatened to destroy Hawaii's papaya industry. Field trials demonstrated excellent efficacy and high fruit quality. By 1998 the first transgenic virus-resistant papaya was approved for sale. Disease resistance has been durable for over 15 years. Transgenic papaya accounts for ~85% of Hawaiian production. The fruit is approved for sale in the U.S., Canada, and Japan. Potato lines expressing viral replicase sequences that confer resistance to potato leafroll virus were sold under the trade names NewLeaf Y and NewLeaf Plus, and were widely accepted in commercial production in 1999–2001, until McDonald's Corp. decided not to purchase GM potatoes and Monsanto decided to close their NatureMark potato business. NewLeaf Y and NewLeaf Plus potatoes carried two GM traits, as they also expressed Bt-mediated resistance to Colorado potato beetle. No other crop with engineered disease resistance against microbial pathogens had reached the market by 2013, although more than a dozen were in some state of development and testing. PRR transfer Research aimed at engineered resistance follows multiple strategies. 
One is to transfer useful PRRs into species that lack them. Identification of functional PRRs and their transfer to a recipient species that lacks an orthologous receptor could provide a general pathway to additional broadened PRR repertoires. For example, the Arabidopsis PRR EF-Tu receptor (EFR) recognizes the bacterial translation elongation factor EF-Tu. Research performed at Sainsbury Laboratory demonstrated that deployment of EFR into either Nicotiana benthamiana or Solanum lycopersicum (tomato), which cannot recognize EF-Tu, conferred resistance to a wide range of bacterial pathogens. EFR expression in tomato was especially effective against the widespread and devastating soil bacterium Ralstonia solanacearum. Conversely, the tomato PRR Verticillium 1 (Ve1) gene can be transferred from tomato to Arabidopsis, where it confers resistance to race 1 Verticillium isolates. Stacking The second strategy attempts to deploy multiple NLR genes simultaneously, a breeding strategy known as stacking. Cultivars generated by either DNA-assisted molecular breeding or gene transfer will likely display more durable resistance, because pathogens would have to mutate multiple effector genes. DNA sequencing allows researchers to functionally “mine” NLR genes from multiple species/strains. The avrBs2 effector gene comes from Xanthomonas perforans, the causal agent of bacterial spot disease of pepper and tomato. The first “effector-rationalized” search for a potentially durable R gene followed the finding that avrBs2 is found in most disease-causing Xanthomonas species and is required for pathogen fitness. The Bs2 NLR gene from the wild pepper, Capsicum chacoense, was moved into tomato, where it inhibited pathogen growth. Field trials demonstrated robust resistance without bactericidal chemicals. However, rare strains of Xanthomonas overcame Bs2-mediated resistance in pepper by acquisition of avrBs2 mutations that avoid recognition but retain virulence. Stacking R genes that each recognize a different core effector could delay or prevent adaptation. More than 50 loci in wheat strains confer disease resistance against wheat stem, leaf and yellow stripe rust pathogens. The Stem rust 35 (Sr35) NLR gene, cloned from a diploid relative of cultivated wheat, Triticum monococcum, provides resistance to wheat rust isolate Ug99. Similarly, Sr33, from the wheat relative Aegilops tauschii, encodes a wheat ortholog to barley Mla powdery mildew–resistance genes. Both genes are unusual in wheat and its relatives. Combined with the Sr2 gene that acts additively with at least Sr33, they could provide durable disease resistance to Ug99 and its derivatives. Executor genes Another class of plant disease resistance genes opens a “trap door” that quickly kills invaded cells, stopping pathogen proliferation. Xanthomonas and Ralstonia transcription activator–like (TAL) effectors are DNA-binding proteins that activate host gene expression to enhance pathogen virulence. Both the rice and pepper lineages independently evolved TAL-effector binding sites that instead act as an executioner, inducing hypersensitive host cell death when up-regulated. Xa27 from rice and Bs3 and Bs4c from pepper are such “executor” (or "executioner") genes that encode non-homologous plant proteins of unknown function. Executor genes are expressed only in the presence of a specific TAL effector. 
Engineered executor genes were demonstrated by successfully redesigning the pepper Bs3 promoter to contain two additional binding sites for TAL effectors from disparate pathogen strains. Subsequently, an engineered executor gene was deployed in rice by adding five TAL effector binding sites to the Xa27 promoter. The synthetic Xa27 construct conferred resistance against Xanthomonas bacterial blight and bacterial leaf streak species. Host susceptibility alleles Most plant pathogens reprogram host gene expression patterns to directly benefit the pathogen. Reprogrammed genes required for pathogen survival and proliferation can be thought of as “disease-susceptibility genes.” Recessive resistance genes are disease-susceptibility candidates. For example, a mutation disabled an Arabidopsis gene encoding pectate lyase (involved in cell wall degradation), conferring resistance to the powdery mildew pathogen Golovinomyces cichoracearum. Similarly, the barley MLO gene and spontaneously mutated pea and tomato MLO orthologs also confer powdery mildew resistance. Lr34 is a gene that provides partial resistance to leaf and yellow rusts and powdery mildew in wheat. Lr34 encodes an adenosine triphosphate (ATP)–binding cassette (ABC) transporter. The dominant allele that provides disease resistance was recently found in cultivated wheat (not in wild strains) and, like MLO, provides broad-spectrum resistance in barley. Natural alleles of the host translation initiation factors eif4e and eif4g are also recessive viral-resistance genes. Some have been deployed to control potyviruses in barley, rice, tomato, pepper, pea, lettuce, and melon. The discovery prompted a successful mutant screen for chemically induced eif4e alleles in tomato. Natural promoter variation can lead to the evolution of recessive disease-resistance alleles. For example, the recessive resistance gene xa13 in rice is an allele of Os-8N3. Os-8N3 is transcriptionally activated by Xanthomonas oryzae pv. oryzae strains that express the TAL effector PthXo1. The xa13 gene has a mutated effector-binding element in its promoter that eliminates PthXo1 binding and renders these lines resistant to strains that rely on PthXo1. This finding also demonstrated that Os-8N3 is required for susceptibility. Xa13/Os-8N3 is required for pollen development, showing that such mutant alleles can be problematic should the disease-susceptibility phenotype alter function in other processes. However, mutations in the Os11N3 (OsSWEET14) TAL effector–binding element were made by fusing TAL effectors to nucleases (TALENs). Genome-edited rice plants with altered Os11N3 binding sites remained resistant to Xanthomonas oryzae pv. oryzae, but still provided normal developmental function. Gene silencing RNA silencing-based resistance is a powerful tool for engineering resistant crops. The advantage of RNAi as a novel gene therapy against fungal, viral, and bacterial infection in plants lies in the fact that it regulates gene expression via messenger RNA degradation, translation repression and chromatin remodelling through small non-coding RNAs. Mechanistically, the silencing processes are guided by processing products of the double-stranded RNA (dsRNA) trigger, which are known as small interfering RNAs and microRNAs. Temperature effects on virus resistance Temperature significantly affects plant resistance to viruses. For example, tobacco plants carrying the N gene are resistant to tobacco mosaic virus (TMV) but become systemically infected at temperatures above 28°C. 
Similarly, Capsicum chinense plants carrying the Tsw gene can become systemically infected with Tomato spotted wilt virus (TSWV) at 32°C. In the case of Beet necrotic yellow vein virus (BNYVV), plants expressing the BvGLYR1 gene showed higher virus accumulation at 22°C compared to 30°C, indicating that temperature influences the effectiveness of this gene in virus resistance. Host range Among the thousands of species of plant pathogenic microorganisms, only a small minority have the capacity to infect a broad range of plant species. Most pathogens instead exhibit a high degree of host-specificity. Non-host plant species are often said to express non-host resistance. The term host resistance is used when a pathogen species can be pathogenic on the host species but certain strains of that plant species resist certain strains of the pathogen species. The causes of host resistance and non-host resistance can overlap. Pathogen host range is determined, among other things, by the presence of appropriate effectors that allow colonization of a particular host. Pathogen host range can change quite suddenly if, for example, the pathogen's capacity to synthesize a host-specific toxin or effector is gained by gene shuffling/mutation, or by horizontal gene transfer. Epidemics and population biology Native populations are often characterized by substantial genotype diversity and dispersed populations (growth in a mixture with many other plant species). They also have a long history of plant-pathogen coevolution. Hence, as long as novel pathogens are not introduced or do not evolve, such populations generally exhibit only a low incidence of severe disease epidemics. Monocrop agricultural systems provide an ideal environment for pathogen evolution, because they offer a high density of target specimens with similar/identical genotypes. The rise in mobility stemming from modern transportation systems provides pathogens with access to more potential targets. Climate change can alter the viable geographic range of pathogen species and cause some diseases to become a problem in areas where the disease was previously less important. These factors make modern agriculture more prone to disease epidemics. Common solutions include constant breeding for disease resistance, use of pesticides, use of border inspections and plant import restrictions, maintenance of significant genetic diversity within the crop gene pool (see crop diversity), and constant surveillance to accelerate initiation of appropriate responses. Some pathogen species have much greater capacity to overcome plant disease resistance than others, often because of their ability to evolve rapidly and to disperse broadly. Case study of American chestnut blight Chestnut blight was first noticed in 1904 in American chestnut trees growing in what is now the Bronx Zoo. For years afterwards, both the identity of the pathogen and the appropriate approach to its control were debated. The earliest attempts to control the disease were chemical or physical: fungicides were applied, limbs were cut from infected trees to stop the spread of the infection, and infected trees were removed entirely so that they could not infect others. All of these strategies proved unsuccessful. Quarantine measures were also put into place, aided by the passage of the Plant Quarantine Act. 
Chestnut blight nevertheless remained a huge problem, moving rapidly through forests densely populated with chestnut trees. In 1914, the idea of inducing blight resistance in the trees through various means, including breeding, was first considered. See also Gene-for-gene relationship Induced systemic resistance Plant defense against herbivory Plant pathology Plant use of endophytic fungi in defense Systemic acquired resistance References Further reading Lucas, J.A., "Plant Defence." Chapter 9 in Plant Pathology and Plant Pathogens, 3rd ed. 1998 Blackwell Science. Hammond-Kosack, K. and Jones, J.D.G. "Responses to plant pathogens." In: Buchanan, Gruissem and Jones, eds. Biochemistry and Molecular Biology of Plants, Second Edition. 2015. Wiley-Blackwell, Hoboken, NJ. Schumann, G. Plant Diseases: Their Biology and Social Impact. 1991 APS Press, St. Paul, Minnesota External links APS Home Resistance Plant immunity Chemical ecology
Plant disease resistance
[ "Chemistry", "Biology" ]
7,051
[ "Biochemistry", "Chemical ecology" ]
22,594,732
https://en.wikipedia.org/wiki/Process%20simulation
Process simulation is used for the design, development, analysis, and optimization of technical processes such as: chemical plants, chemical processes, environmental systems, power stations, complex manufacturing operations, biological processes, and similar technical functions. Main principle Process simulation is a model-based representation of chemical, physical, biological, and other technical processes and unit operations in software. Basic prerequisites for the model are chemical and physical properties of pure components and mixtures, of reactions, and of mathematical models which, in combination, allow the calculation of process properties by the software. Process simulation software describes processes in flow diagrams where unit operations are positioned and connected by product or educt streams. The software solves the mass and energy balance to find a stable operating point at the specified parameters. The goal of a process simulation is to find optimal conditions for a process. This is essentially an optimization problem which has to be solved in an iterative process. In a typical distillation example, the feed stream to the column is defined in terms of its chemical and physical properties. This includes the composition of individual molecular species in the stream; the overall mass flowrate; and the stream's pressure and temperature. For hydrocarbon systems, the Vapor-Liquid Equilibrium Ratios (K-Values) or the models used to define them are specified by the user. The properties of the column, such as the inlet pressure and the number of theoretical plates, are defined. The duties of the reboiler and overhead condenser are calculated by the model to achieve a specified composition or other parameter of the bottom and/or top product. The simulation calculates the chemical and physical properties of the product streams; each stream is assigned a unique number which is used in the mass and energy diagram. Process simulation uses models which introduce approximations and assumptions but allow the description of a property over a wide range of temperatures and pressures which might not be covered by available real data. Models also allow interpolation and extrapolation, within certain limits, and enable the search for conditions outside the range of known properties. Modelling The development of models for a better representation of real processes is the core of the further development of the simulation software. Model development draws on the principles of chemical engineering, but also on control engineering and on improvements in mathematical simulation techniques. Process simulation is therefore a field where practitioners from chemistry, physics, computer science, mathematics, and engineering work together. Efforts are made to develop new and improved models for the calculation of properties. This includes, for example, the description of: thermophysical properties such as vapor pressures, viscosities, and caloric data of pure components and mixtures; the behaviour of different apparatus such as reactors, distillation columns, and pumps; chemical reactions and kinetics; and environmental and safety-related data. There are two main types of models: simple equations and correlations, where parameters are fitted to experimental data, and predictive methods, where properties are estimated. The equations and correlations are normally preferred because they describe the property (almost) exactly. 
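As a simple illustration of the correlation-type models just described, the sketch below evaluates the Antoine equation, a classic vapor-pressure correlation (discussed further under History), with coefficients commonly quoted for water. The coefficients and temperatures are illustrative values chosen here, not data from this article; a real flowsheet simulator would take such parameters from its component data bank.

```python
def antoine_vapor_pressure_mmHg(T_celsius, A, B, C):
    """Antoine correlation: log10(P) = A - B / (C + T), with T in degC and P in mmHg."""
    return 10 ** (A - B / (C + T_celsius))

# Coefficients commonly quoted for water, roughly valid between 1 and 100 degC (illustrative)
A, B, C = 8.07131, 1730.63, 233.426

for T in (25.0, 60.0, 100.0):
    P = antoine_vapor_pressure_mmHg(T, A, B, C)
    print(f"T = {T:5.1f} degC  ->  vapor pressure ~ {P:6.1f} mmHg")
```

With these coefficients the correlation returns roughly 760 mmHg at 100 degC, i.e. atmospheric pressure, which is the kind of consistency check a simulator performs before relying on a fitted property model.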
To obtain reliable parameters, it is necessary to have experimental data, which are usually obtained from factual data banks or, if no data are publicly available, from measurements. Using predictive methods is more cost-effective than experimental work and than obtaining data from data banks. Despite this advantage, predicted properties are normally only used in the early stages of process development to find first approximate solutions and to exclude false pathways, because these estimation methods normally introduce higher errors than correlations obtained from real data. Process simulation has encouraged the development of mathematical models in the fields of numerics and the solving of complex problems. History The history of process simulation is related to the development of computer science and of computer hardware and programming languages. Early implementations of partial aspects of chemical processes were introduced in the 1970s when suitable hardware and software (here mainly the programming languages FORTRAN and C) became available. The modelling of chemical properties began much earlier; notably, the cubic equations of state and the Antoine equation were precursor developments of the 19th century. Steady state and dynamic process simulation Initially, process simulation was used to simulate steady state processes. Steady-state models perform a mass and energy balance of a steady state process (a process in an equilibrium state) independent of time. Dynamic simulation is an extension of steady-state process simulation whereby time-dependence is built into the models via derivative terms, i.e. accumulation of mass and energy. The advent of dynamic simulation means that the time-dependent description, prediction and control of real processes in real time has become possible. This includes the description of starting up and shutting down a plant, changes of conditions during a reaction, holdups, thermal changes and more. Dynamic simulations require increased calculation time and are mathematically more complex than a steady state simulation. A dynamic simulation can be seen as a repeated steady state simulation (based on a fixed time step) with constantly changing parameters. Dynamic simulation can be used in both an online and offline fashion. The online case is model predictive control, where the real-time simulation results are used to predict the changes that would occur for a control input change, and the control parameters are optimised based on the results. Offline process simulation can be used in the design, troubleshooting and optimisation of process plant as well as the conduct of case studies to assess the impacts of process modifications. Dynamic simulation is also used for operator training. See also Advanced Simulation Library Computer simulation List of chemical process simulators Software Process simulation References Chemical process engineering Simulation Industrial design Process engineering
Process simulation
[ "Chemistry", "Engineering" ]
1,108
[ "Industrial design", "Process engineering", "Design engineering", "Chemical engineering", "Mechanical engineering by discipline", "Chemical process engineering", "Design" ]
22,596,736
https://en.wikipedia.org/wiki/Structure-based%20combinatorial%20protein%20engineering
Structure-based combinatorial protein engineering (SCOPE) is a synthetic biology technique for creating gene libraries (lineages) of defined composition designed from structural and probabilistic constraints of the encoded proteins. The development of this technique was driven by fundamental questions about protein structure, function, and evolution, although the technique is generally applicable for the creation of engineered proteins with commercially desirable properties. Combinatorial travel through sequence spacetime is the goal of SCOPE. Description At its inception, SCOPE was developed as a homology-independent recombination technique to enable the creation of multiple crossover libraries from distantly related genes. In this application, an “exon plate tectonics” design strategy was devised to assemble “equivalent” elements of structure (continental plates) with variability in the junctions linking them (fault lines) to explore global protein space. To create the corresponding library of genes, the breeding scheme of Gregor Mendel was adapted into a PCR strategy to selectively cross hybrid genes, a process of iterative inbreeding to create all possible combinations of coding segments with variable linkages. Genetic complementation in temperature-sensitive E. coli was used as the selection system to successfully identify functional hybrid DNA polymerases of minimal architecture with enhanced phenotypes. SCOPE was then used to construct a synthetic enzyme lineage, which was biochemically characterized to recapitulate the evolutionary divergence of two modern day enzymes. The rapid evolvability of chemical diversity in terpene synthases were demonstrated through processes akin to both Darwinian gradualism and saltation: some mutational pathways show steady, additive changes, whereas others show drastic jumps between contrasting product specificities with single mutational steps. Further, a metric was devised to describe the chemical distance of mutational steps to derive a chemical-based phylogeny relating sequence variation to chemical output. These examples establish SCOPE as a standardized method for the construction of synthetic gene libraries from close or distantly related parental sequences to identify functional novelty among the encoded proteins. See also Directed evolution Enzymology Expanded genetic code Gene synthesis Genome Nucleic acid analogues Protein design Protein engineering Protein folding Proteomics Proteome Structural biology Synthetic biology Further reading External links SCOPE Patent Combinatorial chemistry Evolutionary biology Protein engineering
Structure-based combinatorial protein engineering
[ "Chemistry", "Materials_science", "Mathematics", "Engineering", "Biology" ]
458
[ "Synthetic biology", "Evolutionary biology", "Combinatorial chemistry", "Biological engineering", "Materials science", "Combinatorics", "Bioinformatics", "Molecular genetics" ]
22,597,036
https://en.wikipedia.org/wiki/In%20situ%20polymerization
In polymer chemistry, in situ polymerization is a preparation method that occurs "in the polymerization mixture" and is used to develop polymer nanocomposites from nanoparticles. There are numerous unstable oligomers (molecules) which must be synthesized in situ (i.e. in the reaction mixture, because they cannot be isolated on their own) for use in various processes. The in situ polymerization process consists of an initiation step followed by a series of polymerization steps, which results in the formation of a hybrid between polymer molecules and nanoparticles. Nanoparticles are initially spread out in a liquid monomer or a precursor of relatively low molecular weight. Upon the formation of a homogeneous mixture, initiation of the polymerization reaction is carried out by addition of an adequate initiator, which is exposed to a source of heat, radiation, etc. After the polymerization mechanism is completed, a nanocomposite is produced, which consists of polymer molecules bound to nanoparticles. In order to perform the in situ polymerization of precursor polymer molecules to form a polymer nanocomposite, certain conditions must be fulfilled, which include the use of low-viscosity pre-polymers (typically less than 1 Pa·s), a short period of polymerization, the use of a polymer with advantageous mechanical properties, and no formation of side products during the polymerization process. Advantages and Disadvantages There are several advantages of the in situ polymerization process, which include the use of cost-effective materials, being easy to automate, and the ability to integrate with many other heating and curing methods. Some downsides of this preparation method, however, include the limited availability of usable materials, the short time period available to execute the polymerization process, and the need for expensive equipment. The next sections will cover various examples of polymer nanocomposites produced using the in situ polymerization technique, and their real life applications. Clay Nanocomposites Towards the end of the 20th century, Toyota Motor Corp devised the first commercial application of the clay-polyamide-6 nanocomposite, which was prepared via in situ polymerization. After Toyota laid the groundwork for polymer layered silicate nanocomposites, extensive research was conducted in this particular area. Clay nanocomposites can experience a significant increase in strength, thermal stability, and barrier properties upon addition of a minute portion of nanofiller into the polymer matrix. A standard technique to prepare clay nanocomposites is in situ polymerization, which consists of intercalation of the monomer with the clay surface, followed by initiation by the functional group in the organic cation and then polymerization. A study by Zeng and Lee investigated the role of the initiator in the in situ polymerization process of clay nanocomposites. One of the major findings was that the more favorable nanocomposite product was produced with a more polar monomer and initiator. Carbon Nanotubes (CNT) In situ polymerization is an important method of preparing polymer grafted nanotubes using carbon nanotubes. Properties Due to their remarkable mechanical, thermal and electronic properties, including high conductivity, large surface area, and excellent thermal stability, carbon nanotubes (CNT) have been heavily studied since their discovery to develop various real world applications. 
Two particular applications that carbon nanotubes have made major contributions to include strengthening composites as filler material and energy production via thermally conductive composites. Types of CNT Currently, the two principal types of carbon nanotubes are single walled nanotubes (SWNT) and multi-walled nanotubes (MWNT). Advantages of In Situ Polymerization Using CNT In situ polymerization offers several advantages in the preparation of polymer grafted nanotubes compared to other methods. First and foremost, it allows polymer macromolecules to attach to CNT walls. Additionally, the resulting composite is miscible with most types of polymers. Unlike solution or melt processing, in situ polymerization can prepare insoluble and thermally unstable polymers. Lastly, in situ polymerization can achieve stronger covalent interactions between polymer and CNTs earlier in the process. Applications Recent improvements in the in situ polymerization process have led to the production of polymer-carbon nanotube composites with enhanced mechanical properties. With regards to their energy-related applications, carbon nanotubes have been used to make electrodes, with one specific example being the CNT/PMMA composite electrode. In situ polymerization has been studied to streamline the construction process of such electrodes. Huang, Vanhaecke, and Chen found that in situ polymerization can potentially produce composites of conductive CNTs on a grand scale. Some aspects of in situ polymerization that can help achieve this feat are that it is cost effective with regards to operation, requires minimal sample, has high sensitivity, and offers many promising environmental and bioanalytical applications. Biopharmaceuticals Proteins, DNAs, and RNAs are just a few examples of biopharmaceuticals that hold the potential to treat various disorders and diseases, ranging from cancer to infectious diseases. However, due to certain undesirable properties such as poor stability, susceptibility to enzyme degradation, and insufficient capability to penetrate biological barriers, the application of such biopharmaceuticals in delivering medical treatment has been severely hindered. The formation of polymer-biomacromolecule nanocomposites via in situ polymerization offers an innovative means of overcoming these obstacles and improving the overall effectiveness of biopharmaceuticals. Recent studies have demonstrated how in situ polymerization can be implemented to improve the stability, bioactivity, and ability to cross biological barriers of biopharmaceuticals. Types of Biomolecule Polymer Nanocomposites The two main types of nanocomposites formed by in situ polymerization are 1) biomolecule-linear polymer hybrids, which are linear or have a star-like shape, and contain covalent bonds between individual polymer chains and the biomolecular surface and 2) biomolecule-crosslinked polymer nanocapsules, which are nanocapsules with biomacromolecules centered within the polymer shells. In Situ Polymerization Methods for Biomolecules Biomolecule-linear polymer hybrids are formed via “grafting-from” polymerization, which is an in situ approach that differs from the standard “grafting to” polymerization. Whereas “grafting to” polymerization involves the straightforward attachment of polymers to the biomolecule of choice, the “grafting from” method takes place on proteins that are pre-modified with initiators. 
Some examples of “grafting from” polymerization include atom transfer radical polymerization (ATRP) and reversible addition-fragmentation chain transfer (RAFT). These methods are similar in that they both lead to narrow molecular weight distributions and can make block copolymers. On the other hand, they each have distinct properties that need to be analyzed on a case-by-case basis. For example, ATRP is sensitive to oxygen whereas RAFT is insensitive to oxygen; in addition, RAFT has a much greater compatibility with monomers than ATRP. Radical polymerization with crosslinkers is the other in situ polymerization method, and this process leads to the formation of biomolecule-crosslinked polymer nanocapsules. This process produces nanogels/nanocapsules via a covalent or non-covalent approach. In the covalent approach, the two steps are the conjugation of acryloyl groups to the protein followed by in situ free radical polymerization. In the non-covalent approach, proteins are entrapped within nanocapsules. Protein Nanogels Nanogels, which are microscopic hydrogel particles held together by a cross-linked polymer network, offer a desirable mode of drug delivery that has a variety of biomedical applications. In situ polymerization can be used to prepare protein nanogels that help facilitate the storage and delivery of protein. The preparation of such nanogels via the in situ polymerization method begins with free proteins dispersed in an aqueous solution along with cross-linkers and monomers, followed by addition of radical initiators, which leads to the polymerization of a nanogel polymer shell that encloses a protein core. Additional modification of the polymeric nanogel enables delivery to specific target cells. Three classes of in situ polymerized nanogels are 1) direct covalent conjugation via chemical modifications, 2) noncovalent encapsulation, and 3) cross-linking of preformed crosslinkable polymers. Protein nanogels have tremendous applications for cancer treatment, vaccination, diagnosis, regenerative medicine, and therapies for loss-of-function genetic diseases. In situ polymerized nanogels are capable of delivering the appropriate amount of protein to the site of treatment; certain chemical and physical factors including pH, temperature, and redox potential manage the protein delivery process of nanogels. Urea formaldehyde and melamine formaldehyde Urea-formaldehyde (UF) and melamine formaldehyde (MF) encapsulation systems are other examples that utilize in situ polymerization. This type of in situ polymerization involves a chemical encapsulation technique very similar to interfacial coating. The distinguishing characteristic of in situ polymerization is that no reactants are included in the core material. All polymerization occurs in the continuous phase, rather than on both sides of the interface between the continuous phase and the core material. In situ polymerization of such formaldehyde systems usually involves the emulsification of an oil phase in water. Then, water-soluble urea/melamine formaldehyde resin monomers are added, which are allowed to disperse. The initiation step occurs when acid is added to lower the pH of the mixture. Crosslinking of the resins completes the polymerization process and results in a shell of polymer-encapsulated oil droplets. References Polymers
In situ polymerization
[ "Chemistry", "Materials_science" ]
2,100
[ "Polymers", "Polymer chemistry" ]
22,600,725
https://en.wikipedia.org/wiki/TSE%20buffer
TSE, or Tris/Saline/EDTA, is a buffer solution containing a mixture of Tris base, sodium chloride and EDTA. In molecular biology, TSE buffers are often used in procedures involving nucleic acids. Tris-acid solutions are effective buffers for slightly basic conditions, which keep DNA deprotonated and soluble in water. The concentration of Tris in the solution is kept near 25 mM. EDTA is a chelator of divalent cations, particularly of magnesium (Mg2+). As these ions are necessary co-factors for many enzymes, including contaminant nucleases, the role of the EDTA is to protect the nucleic acids against enzymatic degradation. But since Mg2+ is also a co-factor for many useful DNA-modifying enzymes such as restriction enzymes and DNA polymerases, its concentration in TSE buffers is generally kept low (typically at around 2.5 mM). The sodium chloride is generally kept at a concentration of 0.05 M. References Buffer solutions Genetics techniques
TSE buffer
[ "Chemistry", "Engineering", "Biology" ]
221
[ "Genetics techniques", "Buffer solutions", "Biotechnology stubs", "Genetic engineering", "Biochemistry stubs", "Biochemistry" ]
1,952,635
https://en.wikipedia.org/wiki/Ion%20trap
An ion trap is a combination of electric and/or magnetic fields used to capture charged particles — known as ions — often in a system isolated from an external environment. Atomic and molecular ion traps have a number of applications in physics and chemistry such as precision mass spectrometry, improved atomic frequency standards, and quantum computing. In comparison to neutral atom traps, ion traps have deeper trapping potentials (up to several electronvolts) that do not depend on the internal electronic structure of a trapped ion. This makes ion traps more suitable for the study of light interactions with single atomic systems. The two most popular types of ion traps are the Penning trap, which forms a potential via a combination of static electric and magnetic fields, and the Paul trap, which forms a potential via a combination of static and oscillating electric fields. Penning traps can be used for precise magnetic measurements in spectroscopy. Studies of quantum state manipulation most often use the Paul trap. This may lead to a trapped ion quantum computer and has already been used to create the world's most accurate atomic clocks. Electron guns (devices emitting high-speed electrons, used in CRTs) can use an ion trap to prevent degradation of the cathode by positive ions. History The physical principles of ion traps were first explored by F. M. Penning (1894–1953), who observed that electrons released by the cathode of an ionization vacuum gauge follow a long cycloidal path to the anode in the presence of a sufficiently strong magnetic field. A scheme for confining charged particles in three dimensions without the use of magnetic fields was developed by W. Paul based on his work with quadrupole mass spectrometers. Ion traps were used in television receivers prior to the introduction of aluminized CRT faces around 1958, to protect the phosphor screen from ions. The ion trap must be delicately adjusted for maximum brightness. Theory Any charged particle, such as an ion, feels a force from an electric or magnetic field. Ion traps work by using this force to confine ions in a small, isolated volume of space so that they can be studied or manipulated. Although any static (constant in time) electromagnetic field produces a force on an ion, it is not possible to confine an ion using only a static electric field. This is a consequence of Earnshaw's theorem. However, physicists have various ways of working around this theorem by using combinations of static magnetic and electric fields (as in a Penning trap) or by an oscillating electric field and a static electric field (as in a Paul trap). Ion motion and confinement in the trap is generally divided into axial and radial components, which are typically addressed separately by different fields. In both Paul and Penning traps, axial ion motion is confined by a static electric field. Paul traps use an oscillating electric field to confine the ion radially and Penning traps generate radial confinement with a static magnetic field. Paul Trap A Paul trap uses an oscillating quadrupole field to trap ions radially and a static potential to confine ions axially. The quadrupole field is realized by four parallel electrodes lying along the z-axis, positioned at the corners of a square in the xy-plane. Electrodes diagonally opposite each other are connected and an a.c. voltage is applied. Using Maxwell's equations, the electric field produced by this potential has, along each transverse direction, the form \(E = E_0 \cos(\Omega t)\), where \(E_0\) is the field amplitude and \(\Omega\) the drive frequency. Applying Newton's second law to an ion of charge \(Q\) and mass \(m\) in this a.c. electric field, we can find the force on the ion using \(\vec{F} = Q\vec{E}\). We wind up with \(m\ddot{x} = Q E_0 \cos(\Omega t)\). Assuming that the ion has zero initial velocity, two successive integrations give the velocity and displacement as \(\dot{x} = \frac{Q E_0}{m \Omega}\sin(\Omega t)\) and \(x = -\frac{Q E_0}{m \Omega^2}\cos(\Omega t) + C\), where \(C\) is a constant of integration. Thus, the ion oscillates with angular frequency \(\Omega\) and amplitude proportional to the electric field strength and is confined radially. Working specifically with a linear Paul trap, we can write more specific equations of motion. Along the x-axis, an analysis of the radial symmetry yields a potential of the form \(\phi = \alpha + \beta x^2\). The constants \(\alpha\) and \(\beta\) are determined by boundary conditions on the electrodes, and \(\phi\) satisfies Laplace's equation \(\nabla^2 \phi = 0\). Assuming the length of the electrodes is much greater than their separation \(r_0\), it can be shown that \(\phi = \frac{\phi_0}{2 r_0^2}\left(x^2 - y^2\right)\), where \(\phi_0\) is the voltage applied between the electrode pairs. Since the electric field is given by the gradient of the potential, we get that \(\vec{E} = -\nabla\phi = -\frac{\phi_0}{r_0^2}\left(x\,\hat{x} - y\,\hat{y}\right)\). Defining \(\phi_0 = U + V\cos(\Omega t)\), the equations of motion in the xy-plane are a simplified form of the Mathieu equation, \(\frac{d^2 u}{d\xi^2} + \left(a_u - 2 q_u \cos 2\xi\right)u = 0\), where \(u\) stands for \(x\) or \(y\), \(\xi = \Omega t/2\), and the dimensionless parameters \(a_u\) and \(q_u\) are proportional to the d.c. voltage \(U\) and the a.c. amplitude \(V\), respectively. Penning Trap A standard configuration for a Penning trap consists of a ring electrode and two end caps. A static voltage differential between the ring and end caps confines ions along the axial direction (between end caps). However, as expected from Earnshaw's theorem, the static electric potential is not sufficient to trap an ion in all three dimensions. To provide the radial confinement, a strong axial magnetic field is applied. For a uniform electric field \(\vec{E} = E\,\hat{x}\), the force \(\vec{F} = Q\vec{E}\) accelerates a positively charged ion along the x-axis. For a uniform magnetic field \(\vec{B} = B\,\hat{z}\), the Lorentz force causes the ion to move in circular motion with cyclotron frequency \(\omega_c = QB/m\). Assuming an ion with zero initial velocity placed in a region with \(\vec{E} = E\,\hat{x}\) and \(\vec{B} = B\,\hat{z}\), the equations of motion are \(x = \frac{E}{\omega_c B}\left(1 - \cos(\omega_c t)\right)\), \(y = -\frac{E}{\omega_c B}\left(\omega_c t - \sin(\omega_c t)\right)\), \(z = 0\). The resulting motion is a combination of circular motion about the z-axis with frequency \(\omega_c\) and a drift velocity along the y-axis. The drift velocity is perpendicular to the direction of the electric field. For the radial electric field produced by the electrodes in a Penning trap, the drift velocity will precess around the axial direction with some frequency \(\omega_m\), called the magnetron frequency. An ion will also have a third characteristic frequency \(\omega_z\) for its oscillation between the two end cap electrodes. The frequencies usually have widely different values, with \(\omega_m \ll \omega_z \ll \omega_c\). Ion trap mass spectrometers An ion trap mass spectrometer may incorporate a Penning trap (Fourier-transform ion cyclotron resonance), Paul trap or the Kingdon trap. The Orbitrap, introduced in 2005, is based on the Kingdon trap. Other types of mass spectrometers may also use a linear quadrupole ion trap as a selective mass filter. Penning ion trap A Penning trap stores charged particles using a strong homogeneous axial magnetic field to confine particles radially and a quadrupole electric field to confine the particles axially. Penning traps are well suited for measurements of the properties of ions and stable charged subatomic particles. Precision studies of the electron magnetic moment by Dehmelt and others are an important topic in modern physics. Penning traps can be used in quantum computation and quantum information processing and are used at CERN to store antimatter. Penning traps form the basis of Fourier-transform ion cyclotron resonance mass spectrometry for determining the mass-to-charge ratio of ions. The Penning Trap was invented by Frans Michel Penning and Hans Georg Dehmelt, who built the first trap in the 1950s. Paul ion trap A Paul trap is a type of quadrupole ion trap that uses static direct current (DC) and radio frequency (RF) oscillating electric fields to trap ions. 
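As a rough numerical illustration of the radial confinement described in the Theory section above, the following sketch integrates the transverse motion of a single ion driven by a DC-plus-RF quadrupole field and checks that the motion stays bounded. The trap parameters are illustrative values assumed here for a calcium ion; they are not figures taken from this article, and the standard expression used for the Mathieu q parameter applies to the idealised linear quadrupole geometry.

```python
import math

# Illustrative (assumed) parameters for a 40Ca+ ion in a linear RF quadrupole trap
Q = 1.602e-19              # ion charge (C)
m = 40 * 1.66054e-27       # ion mass (kg)
r0 = 3.0e-3                # electrode half-separation (m)
U, V = 0.0, 200.0          # DC and RF voltage amplitudes (V)
Omega = 2 * math.pi * 5e6  # RF drive angular frequency (rad/s)

# Mathieu stability parameter for this geometry; with U = 0, motion is stable for q below ~0.908
q = 2 * Q * V / (m * r0**2 * Omega**2)
print(f"Mathieu q parameter: {q:.3f}")

# Semi-implicit Euler integration of  x'' = -(Q / (m r0^2)) (U + V cos(Omega t)) x
x, vx = 1.0e-4, 0.0                   # start 0.1 mm off axis, at rest
dt = (2 * math.pi / Omega) / 200      # 200 time steps per RF period
x_max = 0.0
for step in range(200 * 200):         # follow 200 RF periods
    t = step * dt
    ax = -(Q / (m * r0**2)) * (U + V * math.cos(Omega * t)) * x
    vx += ax * dt
    x += vx * dt
    x_max = max(x_max, abs(x))

print(f"largest excursion over 200 RF periods: {x_max * 1e3:.3f} mm")
```

With these assumed numbers q is about 0.11, well inside the stable region, and the printed excursion stays close to the initial 0.1 mm displacement; raising V enough to push q well beyond roughly 0.9 makes the excursion grow rapidly, illustrating the stability boundary of the Mathieu equation.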
Paul traps are commonly used as components of a mass spectrometer. The invention of the 3D quadrupole ion trap itself is attributed to Wolfgang Paul who shared the Nobel Prize in Physics in 1989 for this work. The trap consists of two hyperbolic metal electrodes with their foci facing each other and a hyperbolic ring electrode halfway between the other two electrodes. Ions are trapped in the space between these three electrodes by the oscillating and static electric fields. Kingdon trap and orbitrap A Kingdon trap consists of a thin central wire, an outer cylindrical electrode and isolated end cap electrodes at both ends. A static applied voltage results in a radial logarithmic potential between the electrodes. In a Kingdon trap there is no potential minimum to store the ions; however, they are stored with a finite angular momentum about the central wire and the applied electric field in the device allows for the stability of the ion trajectories. In 1981, Knight introduced a modified outer electrode that included an axial quadrupole term that confines the ions on the trap axis. The dynamic Kingdon trap has an additional AC voltage that uses strong defocusing to permanently store charged particles. The dynamic Kingdon trap does not require the trapped ions to have angular momentum with respect to the filament. An Orbitrap is a modified Kingdon trap that is used for mass spectrometry. Though the idea has been suggested and computer simulations performed neither the Kingdon nor the Knight configurations were reported to produce mass spectra, as the simulations indicated mass resolving power would be problematic. Trapped ion quantum computer Some experimental work towards developing quantum computers use trapped ions. Units of quantum information called qubits are stored in stable electronic states of each ion, and quantum information can be processed and transferred through the collective quantized motion of the ions, interacting by the Coulomb force. Lasers are applied to induce coupling between the qubit states (for single qubit operations) or between the internal qubit states and external motional states (for entanglement between qubits). See also Laser cooling Mass spectrometry Quantum jump References External links VIAS Science Cartoons A cranky view of an ion trap... Paul trap Mass spectrometry Ions
Ion trap
[ "Physics", "Chemistry" ]
1,909
[ "Matter", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Particle traps", "Mass spectrometry", "Ions" ]
1,952,861
https://en.wikipedia.org/wiki/Shock%20hardening
Shock hardening is a process used to strengthen metals and alloys, wherein a shock wave produces atomic-scale defects in the material's crystalline structure. As in cold work, these defects interfere with the normal processes by which metallic materials yield (plasticity), making materials stiffer, but more brittle. When compared to traditional cold work, such an extremely rapid process results in a different class of defect, producing a much harder material for a given change in shape. If the shock wave applies too great a force for too long, however, the rarefaction front that follows it can form voids in the material due to hydrostatic tension, weakening the material and often causing it to spall. Since voids nucleate at large defects, such as oxide inclusions and grain boundaries, high-purity samples with a large grain size (especially single crystals) are able to withstand greater shock without spalling, and can therefore be made much harder. Shock hardening has been observed in many contexts: Explosive forging uses the detonation of a high explosive charge to create a shockwave. This effect is used to harden rail track cast components and, coupled with the Misnay-Schardin effect, in the operation of explosively forged penetrators. Greater hardening can be achieved by using a lower quantity of an explosive with greater brisance, so that the force applied is greater but the material spends less time in hydrostatic tension. Laser shock, similar to inertial confinement fusion, uses the ablation plume caused by a laser pulse to apply force to the laser's target. The rebound from the expelled matter can create very high pressures, and the pulse length of lasers is often quite short, meaning that good hardening can be achieved with little risk of spallation. Surface effects can also be achieved by laser treatment, including amorphization. Light-gas guns have been used to study shock hardening. Although too labor-intensive for widespread industrial application, they do provide a versatile research testbed. They allow precise control of both magnitude and profile of the shock wave through adjustments to the projectile's muzzle velocity and density profile, respectively. Studies of various projectile types have been crucial in overturning a prior theory that spallation occurs at a threshold of pressure, independent of time. Instead, experiments show longer-lasting shocks of a given magnitude produce more material damage. See also List of laser articles References Metalworking Metallurgical processes
Shock hardening
[ "Chemistry", "Materials_science" ]
498
[ "Metallurgical processes", "Metallurgy" ]
1,953,581
https://en.wikipedia.org/wiki/Redundancy%20%28engineering%29
In engineering and systems theory, redundancy is the intentional duplication of critical components or functions of a system with the goal of increasing reliability of the system, usually in the form of a backup or fail-safe, or to improve actual system performance, such as in the case of GNSS receivers, or multi-threaded computer processing. In many safety-critical systems, such as fly-by-wire and hydraulic systems in aircraft, some parts of the control system may be triplicated, which is formally termed triple modular redundancy (TMR). An error in one component may then be out-voted by the other two. In a triply redundant system, the system has three sub components, all three of which must fail before the system fails. Since each one rarely fails, and the sub components are designed to preclude common failure modes (which can then be modelled as independent failure), the probability of all three failing is calculated to be extraordinarily small; it is often outweighed by other risk factors, such as human error. Electrical surges arising from lightning strikes are an example of a failure mode which is difficult to fully isolate, unless the components are powered from independent power busses and have no direct electrical pathway in their interconnect (communication by some means is required for voting). Redundancy may also be known by the terms "majority voting systems" or "voting logic". Redundancy sometimes produces less, instead of greater, reliability: it creates a more complex system which is prone to various issues, it may lead to human neglect of duty, and it may lead to higher production demands which, by overstressing the system, may make it less safe. Redundancy is one form of robustness as practiced in computer science. Geographic redundancy has become important in the data center industry, to safeguard data against natural disasters and political instability (see below). Forms of redundancy In computer science, there are four major forms of redundancy: hardware redundancy, such as dual modular redundancy and triple modular redundancy; information redundancy, such as error detection and correction methods; time redundancy, performing the same operation multiple times, such as multiple executions of a program or multiple copies of data transmitted; and software redundancy, such as N-version programming. A modified form of software redundancy, applied to hardware, is distinct functional redundancy, such as both mechanical and hydraulic braking in a car. Applied in the case of software, this means code written independently and distinctly differently but producing the same results for the same inputs. Structures are usually designed with redundant parts as well, ensuring that if one part fails, the entire structure will not collapse. A structure without redundancy is called fracture-critical, meaning that a single broken component can cause the collapse of the entire structure. Bridges that failed due to lack of redundancy include the Silver Bridge and the Interstate 5 bridge over the Skagit River. Parallel and combined systems demonstrate different levels of redundancy. The models are the subject of studies in reliability and safety engineering. Dissimilar redundancy Unlike traditional redundancy, which uses more than one of the same thing, dissimilar redundancy uses different things. The idea is that the different things are unlikely to contain identical flaws. The voting method may involve additional complexity if the two things take different amounts of time. 
Dissimilar redundancy is often used with software, because identical software contains identical flaws. The chance of failure is reduced by using at least two different types of each of the following: processors, operating systems, software, sensors, types of actuators (electric, hydraulic, pneumatic, manual mechanical, etc.) communications protocols, communications hardware, communications networks, communications paths Geographic redundancy Geographic redundancy corrects the vulnerabilities of redundant devices deployed at a single site by geographically separating backup devices. Geographic redundancy reduces the likelihood of events such as power outages, floods, HVAC failures, lightning strikes, tornadoes, building fires, wildfires, and mass shootings disabling most of the system if not the entirety of it. Geographic redundancy locations can be more than continental, more than 62 miles apart and less than apart, less than 62 miles apart, but not on the same campus, or different buildings that are more than apart on the same campus. The following methods can reduce the risks of damage by a fire conflagration: large buildings at least to apart, but sometimes a minimum of apart. high-rise buildings at least apart open spaces clear of flammable vegetation within on each side of objects different wings on the same building, in rooms that are separated by more than different floors on the same wing of a building in rooms that are horizontally offset by a minimum of with fire walls between the rooms that are on different floors two rooms separated by another room, leaving at least a 70-foot gap between the two rooms there should be a minimum of two separated fire walls and on opposite sides of a corridor Geographic redundancy is used by Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Netflix, Dropbox, Salesforce, LinkedIn, PayPal, Twitter, Facebook, Apple iCloud, Cisco Meraki, and many others to provide high availability and fault tolerance and to ensure availability and reliability for their cloud services. As another example, to minimize risk of damage from severe windstorms or water damage, buildings can be located at least 2 miles (3.2 km) away from the shore, with an elevation of at least 5 feet (1.5 m) above sea level. For additional protection, they can be located at least 100 feet (30 m) away from flood plain areas. Functions of redundancy The two functions of redundancy are passive redundancy and active redundancy. Both functions use extra capacity to prevent performance decline from exceeding specification limits without human intervention. Passive redundancy uses excess capacity to reduce the impact of component failures. One common form of passive redundancy is the extra strength of cabling and struts used in bridges. This extra strength allows some structural components to fail without bridge collapse. The extra strength used in the design is called the margin of safety. Eyes and ears provide working examples of passive redundancy. Vision loss in one eye does not cause blindness but depth perception is impaired. Hearing loss in one ear does not cause deafness but directionality is lost. Performance decline is commonly associated with passive redundancy when a limited number of failures occur. Active redundancy eliminates performance declines by monitoring the performance of individual devices, and this monitoring is used in voting logic. The voting logic is linked to switching that automatically reconfigures the components. 
Error detection and correction and the Global Positioning System (GPS) are two examples of active redundancy. Electrical power distribution provides an example of active redundancy. Several power lines connect each generation facility with customers. Each power line includes monitors that detect overload. Each power line also includes circuit breakers. The combination of power lines provides excess capacity. Circuit breakers disconnect a power line when monitors detect an overload. Power is redistributed across the remaining lines. At the Toronto Airport, there are 4 redundant electrical lines. Each of the 4 lines supplies enough power for the entire airport. A spot network substation uses reverse current relays to open breakers to lines that fail, but lets power continue to flow to the airport. Electrical power systems use power scheduling to reconfigure active redundancy. Computing systems adjust the production output of each generating facility when other generating facilities are suddenly lost. This prevents blackout conditions during major events such as an earthquake. Disadvantages Charles Perrow, author of Normal Accidents, has said that sometimes redundancies backfire and produce less, not more, reliability. This may happen in three ways: First, redundant safety devices result in a more complex system, more prone to errors and accidents. Second, redundancy may lead to shirking of responsibility among workers. Third, redundancy may lead to increased production pressures, resulting in a system that operates at higher speeds, but less safely. Voting logic Voting logic uses performance monitoring to determine how to reconfigure individual components so that operation continues without violating specification limitations of the overall system. Voting logic often involves computers, but systems composed of items other than computers may be reconfigured using voting logic. Circuit breakers are an example of a form of non-computer voting logic. The simplest voting logic in computing systems involves two components: primary and alternate. They both run similar software, but the output from the alternate remains inactive during normal operation. The primary monitors itself and periodically sends an activity message to the alternate as long as everything is OK. All outputs from the primary stop, including the activity message, when the primary detects a fault. The alternate activates its output and takes over from the primary after a brief delay when the activity message ceases. Errors in voting logic can cause both outputs to be active or inactive at the same time, or cause outputs to flutter on and off. A more reliable form of voting logic involves an odd number of three devices or more. All perform identical functions and the outputs are compared by the voting logic. The voting logic establishes a majority when there is a disagreement, and the majority will act to deactivate the output from other device(s) that disagree. A single fault will not interrupt normal operation. This technique is used with avionics systems, such as those responsible for operation of the Space Shuttle. Calculating the probability of system failure Each duplicate component added to the system decreases the probability of system failure according to the formula p = p1 × p2 × ... × pn, where: n – number of components, pi – probability of component i failing, p – the probability of all components failing (system failure). This formula assumes independence of failure events. 
That means that the probability of a component B failing given that a component A has already failed is the same as that of B failing when A has not failed. There are situations where this is unreasonable, such as using two power supplies connected to the same socket in such a way that if one power supply failed, the other would too. It also assumes that only one component is needed to keep the system running. Redundancy and high availability You can achieve higher availability through redundancy. Suppose you have three redundant components: A, B and C. You can use the following formula to calculate the availability of the overall system: Availability of redundant components = 1 − (1 − availability of component A) × (1 − availability of component B) × (1 − availability of component C) As a corollary, if you have N parallel components each having availability X, then: Availability of parallel components = 1 − (1 − X)^N Using redundant components can exponentially increase the availability of the overall system. For example, if each of your hosts has only 50% availability, by using 10 hosts in parallel you can achieve 99.9023% availability. Note that redundancy doesn't always lead to higher availability. In fact, redundancy increases complexity, which in turn reduces availability. According to Marc Brooker, to take advantage of redundancy, ensure that: You achieve a net-positive improvement in the overall availability of your system Your redundant components fail independently Your system can reliably detect healthy redundant components Your system can reliably scale out and scale in redundant components. See also References External links Secure Propulsion using Advanced Redundant Control Using powerline as a redundant communication channel Engineering concepts Reliability engineering Safety Fault-tolerant computer systems
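The failure-probability and availability formulas described above can be illustrated with a minimal Python sketch (assuming independent failures; the 50%-availability, ten-host figures are the example quoted in the text, and all function names are illustrative):

def failure_probability(component_failure_probs):
    """Probability that every redundant component fails (independent failures assumed)."""
    p = 1.0
    for p_i in component_failure_probs:
        p *= p_i
    return p

def parallel_availability(component_availability, n):
    """Availability of n parallel components, each with the same availability."""
    return 1.0 - (1.0 - component_availability) ** n

# Ten hosts, each only 50% available, as in the example above:
print(f"{parallel_availability(0.50, 10):.6f}")  # 0.999023, i.e. about 99.9023%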
Redundancy (engineering)
[ "Technology", "Engineering" ]
2,379
[ "Systems engineering", "Reliability engineering", "Computer systems", "Fault-tolerant computer systems", "nan" ]
1,953,690
https://en.wikipedia.org/wiki/Amphora%20%28unit%29
An amphora (/ˈæmfərə/; Ancient Greek: ἀμφορεύς) was a unit of measurement of volume in the Greco-Roman era. The term is derived from the ancient Greek use of the amphora, a tall terracotta or ceramic jar-like shipping container with two opposed handles near the top. Amphora means "two handled". An amphora is equal to 48 sextarii, which is about 34 litres, 9 gallons in US customary units, or 7.494 gallons in the imperial system of units. The Roman amphora quadrantal (≈25.9 litres) was one cubic pes, holding 80 libra of wine, and was used to measure liquids, bulk goods, the cargo capacity of ships, and the production of vineyards. Along with other standardized Roman measures and currency, this gave an added advantage to Roman commerce. The related amphora capitolina standard was kept in the temple of Jupiter on the Capitoline Hill in Rome. A typical Greek amphora, based on a cubic pous, was ≈38.3 litres. The Greek talent, an ancient unit of weight, was roughly the mass of the amount of water that would fill an amphora. The French amphora, also called the minot de Paris, is muid or one cubic pied du roi and therefore ≈34.277 litres. References Units of volume Ancient Greek units of measurement Ancient Roman units of measurement External links Systems of Measurement
Amphora (unit)
[ "Mathematics" ]
308
[ "Units of volume", "Quantity", "Units of measurement" ]
1,955,806
https://en.wikipedia.org/wiki/New%20Valley%20Project
The New Valley Project or Toshka Project consists of building a system of canals to carry water from Lake Nasser to irrigate part of the sandy wastes of the Western Desert of Egypt, which is part of the Sahara Desert. History In 1997, the Egyptian government decided to develop a new valley (as opposed to the existing Nile Valley) where agricultural and industrial communities would develop. It has been an ambitious project, meant to help Egypt cope with its rapidly growing population. Project The canal inlet starts from a site 8 km to the north of Toshka Bay (Khor) on Lake Nasser. The canal is meant to continue westwards until it reaches the Darb el-Arbe'ien route, then northwards along the Darb el-Arbe'ien to the Baris Oasis, covering a distance of 310 km. But as of April 2012, the canal was still 60 km short of the Baris Oasis. The Mubarak Pumping Station in Toshka is the centerpiece of the project and was inaugurated in March 2005. It pumps water from Lake Nasser to be transported by way of a canal through the valley, with the idea of transforming 2340 km2 (588,000 acres) of desert into agricultural land. The Toshka Project has now been revived by President Abdel Fattah el-Sisi. Half of the land will be given to college graduates, 1 acre each, funded by the Long Live Egypt Fund. The essential problem is that the Western Desert's high salinity and the presence of underground aquifers in the area act as a major obstacle to any irrigation project. As the land is irrigated, the salt would mix with the aquifers and would reduce access to potable water. There is also the difficulty that clay minerals in the soil pose technical problems for the big wheeled structures that move around autonomously to irrigate the land: their wheels often get stuck in depressions formed where wet clay has dried, and the irrigation machines come to a standstill. The only objective met up to April 2012 was the diversion of water from Lake Nasser into what little of the Sheikh Zayed Canal has been built. The Toshka Lakes are a by-product of the rising level of Lake Nasser and lie in the same general region as much of the New Valley Project. See also New Valley Governorate Baris Oasis Kharga Oasis Dakhla Oasis Farafra Oasis Bahariya Oasis Siwa Oasis External links South Valley Development Project in Toshka, Egyptian Ministry of Water Resources and Irrigation Egypt's new Nile Valley grand plan gone bad, The National, 22 April 2012 On Toshka New Valley's mega-failure Toshka Project - Mubarak Pumping Station / Sheikh Zayed Canal, Egypt Photographs Gallery New Valley Governorate Geography of Egypt Agriculture in Egypt Irrigation in Egypt Interbasin transfer Western Desert (Egypt)
New Valley Project
[ "Environmental_science" ]
593
[ "Hydrology", "Interbasin transfer" ]
20,003,993
https://en.wikipedia.org/wiki/Photodetection
In his historic paper entitled "The Quantum Theory of Optical Coherence," Roy J. Glauber set a solid foundation for the quantum electronics/quantum optics enterprise. The experimental development of the optical maser and later the laser at that time had made the classical concept of optical coherence inadequate. Glauber started from the quantum theory of light detection by considering the process of photoionization, in which a photodetector is triggered by an ionizing absorption of a photon. In the quantum theory of radiation, the electric field operator in the Coulomb gauge may be written as the sum of positive and negative frequency parts, E(r, t) = E(+)(r, t) + E(−)(r, t), where E(−)(r, t) is the Hermitian conjugate of E(+)(r, t). One may expand E(+)(r, t) in terms of the normal modes of the field, with coefficients involving the unit vectors of polarization and the mode annihilation operators; this expansion has the same form as the classical expansion except that now the field amplitudes are operators. Glauber showed that, for an ideal photodetector situated at a point r in a radiation field, the probability of observing a photoionization event in this detector between time t and t + dt is proportional to ⟨i|E(−)(r, t) · E(+)(r, t)|i⟩ dt, where |i⟩ specifies the state of the field. Since the radiation field is a quantum-mechanical one, we do not know the exact properties of the incident light, and the probability should be averaged, as in the classical theory, to be proportional to ⟨E(−)(r, t) · E(+)(r, t)⟩, where the angular brackets mean an average over the light field. The significance of the quantum theory of coherence is in the ordering of the creation and destruction operators E(−) and E(+): since ⟨E(−)E(+)⟩ is not equal to ⟨E(+)E(−)⟩ for a quantum light field, this ordering makes quantum statistical measurements (such as photon counting) quite different from the classical ones, giving rise to nonclassical properties of light such as photon antibunching. Moreover, Glauber's theory of photodetection is of far-reaching fundamental significance to the interpretation of quantum mechanics. The Glauber detection theory differs from the Born probabilistic interpretation in that it expresses the meaning of physical law in terms of measured facts (relationships), counting events in the detection processes, without assuming the particle model of matter. These concepts quite naturally lead to a relational approach to quantum physics. References Quantum optics
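To make the role of operator ordering concrete, here is a short single-mode illustration in LaTeX (the reduction to a single mode is a simplifying assumption of this sketch, not part of Glauber's general treatment):

% For a single mode, E^{(+)} is proportional to the annihilation operator a and
% E^{(-)} to the creation operator a^\dagger; the commutator [a, a^\dagger] = 1 gives
\[
  \langle \hat{a}\,\hat{a}^{\dagger} \rangle = \langle \hat{a}^{\dagger}\hat{a} \rangle + 1 .
\]
% In the vacuum state the normally ordered (Glauber) average vanishes, so an ideal
% detector registers no counts, whereas the opposite ordering would not vanish:
\[
  \langle 0 | \hat{a}^{\dagger}\hat{a} | 0 \rangle = 0 ,
  \qquad
  \langle 0 | \hat{a}\,\hat{a}^{\dagger} | 0 \rangle = 1 .
\]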
Photodetection
[ "Physics" ]
437
[ "Quantum optics", "Quantum mechanics" ]
20,010,851
https://en.wikipedia.org/wiki/Monad%20%28nonstandard%20analysis%29
In nonstandard analysis, a monad (also called a halo) is the set of points infinitesimally close to a given point. Given a hyperreal number x in R∗, the monad of x is the set monad(x) = {y ∈ R∗ : x − y is infinitesimal}. If x is finite (limited), the unique real number in the monad of x is called the standard part of x. References Nonstandard analysis
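A short worked example in LaTeX may help fix the definition (the particular values are illustrative and not taken from the article):

% The monad of a hyperreal x collects everything infinitesimally close to x:
\[
  \operatorname{monad}(x) = \{\, y \in {}^{*}\mathbb{R} : x - y \ \text{is infinitesimal} \,\}.
\]
% For instance, monad(0) is exactly the set of infinitesimals, and for any
% infinitesimal \varepsilon the finite hyperreal 3 + \varepsilon lies in monad(3), so
\[
  \operatorname{st}(3 + \varepsilon) = 3 .
\]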
Monad (nonstandard analysis)
[ "Mathematics" ]
79
[ "Mathematical analysis", "Mathematical analysis stubs", "Mathematical objects", "Infinity", "Nonstandard analysis", "Mathematics of infinitesimals", "Model theory" ]
6,527,803
https://en.wikipedia.org/wiki/Self-interacting%20dark%20matter
In astrophysics and particle physics, self-interacting dark matter (SIDM) is an alternative class of dark matter particles which have strong interactions, in contrast to the standard cold dark matter model (CDM). SIDM was postulated in 2000 as a solution to the core-cusp problem. In the simplest models of DM self-interactions, the interaction between two dark matter particles is mediated by a force carrier φ through a Yukawa-type potential. On galactic scales, DM self-interaction leads to energy and momentum exchange between DM particles. Over cosmological time scales this results in isothermal cores in the central region of dark matter haloes. If the self-interacting dark matter is in hydrostatic equilibrium, its pressure and density satisfy a relation involving the gravitational potentials of the dark matter and of the baryons, respectively. The equation naturally correlates the dark matter distribution with the baryonic matter distribution. With this correlation, self-interacting dark matter can explain phenomena such as the Tully–Fisher relation. Self-interacting dark matter has also been postulated as an explanation for the DAMA annual modulation signal. Moreover, it has been shown that it can serve as the seed of supermassive black holes at high redshift. SIDM may have originated in a so-called "Dark Big Bang". In July 2024 a study proposed that SIDM solves the "final-parsec problem"; two months later another study proposed the same for fuzzy cold dark matter. See also MACS J0025.4-1222, astronomical observations that constrain DM self-interaction ESO 146-5, the core of Abell 3827 that was claimed as the first evidence of SIDM Strongly interacting massive particle (SIMP), proposed to explain cosmic ray data Lambda-CDM model References Further reading Astroparticle physics Dark matter
Self-interacting dark matter
[ "Physics", "Astronomy" ]
383
[ "Dark matter", "Unsolved problems in astronomy", "Concepts in astronomy", "Astroparticle physics", "Unsolved problems in physics", "Astrophysics", "Particle physics", "Exotic matter", "Physics beyond the Standard Model", "Matter" ]
6,529,589
https://en.wikipedia.org/wiki/Tinker%20%28software%29
Tinker, previously stylized as TINKER, is a suite of computer software applications for molecular dynamics simulation. The codes provide a complete and general set of tools for molecular mechanics and molecular dynamics, with some special features for biomolecules. The core of the software is a modular set of callable routines which allow manipulating coordinates and evaluating potential energy and derivatives via straightforward means. Tinker works on Windows, macOS, Linux and Unix. The source code is available free of charge to non-commercial users under a proprietary license. The code is written in portable FORTRAN 77, Fortran 95 or CUDA with common extensions, and some C. Core developers are: (a) the Jay Ponder lab, at the Department of Chemistry, Washington University in St. Louis, St. Louis, Missouri. Laboratory head Ponder is Full Professor of Chemistry, and of Biochemistry & Molecular Biophysics; (b) the Pengyu Ren lab, at the Department of Biomedical Engineering, University of Texas at Austin, Austin, Texas. Laboratory head Ren is Full Professor of Biomedical Engineering; (c) Jean-Philip Piquemal's research team at Laboratoire de Chimie Théorique, Department of Chemistry, Sorbonne University, Paris, France. Research team head Piquemal is Full Professor of Theoretical Chemistry. Features The Tinker package is based on several related codes: (a) the canonical Tinker, version 8, (b) the Tinker9 package as a direct extension of canonical Tinker to GPU systems, (c) the Tinker-HP package for massively parallel MPI applications on hybrid CPU and GPU-based systems, (d) Tinker-FFE for visualization of Tinker calculations via a Java-based graphical interface, and (e) the Tinker-OpenMM package for Tinker's use with GPUs via an interface for the OpenMM software. All of the Tinker codes are available from the TinkerTools organization site on GitHub. Additional information is available from the TinkerTools community web site. Programs are provided to perform many functions including: energy minimization over Cartesian coordinates, torsional angles, or rigid bodies via conjugate gradient, variable metric or a truncated Newton method; molecular, stochastic, and rigid body dynamics with periodic boundaries and control of temperature and pressure; normal mode vibrational analysis; distance geometry including an efficient random pairwise metrization; building protein and nucleic acid structures from sequence; simulated annealing with various cooling protocols; analysis and breakdown of single point potential energies; verification of analytical derivatives of standard and user defined potentials; location of a transition state between two minima; full energy surface search via a Conformation Scanning method; free energy calculations via free energy perturbation or weighted histogram analysis; fitting of intermolecular potential parameters to structural and thermodynamic data; and global optimization via energy surface smoothing, including a Potential Smoothing and Search (PSS) method. Awards Tinker-HP received the 2018 Atos-Joseph Fourier Prize in High Performance Computing. See also List of software for Monte Carlo molecular modeling Comparison of software for molecular mechanics modeling Molecular dynamics Molecular geometry Molecular design software Comparison of force field implementations References License External links Science software Molecular dynamics software Monte Carlo molecular modelling software Washington University in St. Louis
Tinker (software)
[ "Chemistry" ]
668
[ "Molecular dynamics", "Molecular dynamics software", "Computational chemistry software" ]
6,529,735
https://en.wikipedia.org/wiki/Bubble%20%28physics%29
A bubble is a globule of gas in a liquid. In the opposite case, a globule of liquid in a gas is called a drop. Due to the Marangoni effect, bubbles may remain intact when they reach the surface of the immersive substance. Common examples Bubbles are seen in many places in everyday life, for example: As spontaneous nucleation of supersaturated carbon dioxide in soft drinks As vapor in boiling water As air mixed into agitated water, such as below a waterfall As sea foam As a soap bubble As given off in chemical reactions, e.g., baking soda + vinegar As a gas trapped in glass during its manufacture As the indicator in a spirit level As bubble gum Physics and chemistry Bubbles form and coalesce into globular shapes because those shapes are at a lower energy state. For the physics and chemistry behind it, see nucleation. Appearance Bubbles are visible because they have a different refractive index (RI) than the surrounding substance. For example, the RI of air is approximately 1.0003 and the RI of water is approximately 1.333. Snell's Law describes how electromagnetic waves change direction at the interface between two media with different RI; thus bubbles can be identified from the accompanying refraction and internal reflection even though both the immersed and immersing media are transparent. The above explanation only holds for bubbles of one medium submerged in another medium (e.g. bubbles of gas in a soft drink); the volume of a membrane bubble (e.g. soap bubble) will not distort light very much, and one can only see a membrane bubble due to thin-film diffraction and reflection. Applications Nucleation can be intentionally induced, for example, to create a bubblegram in a solid. In medical ultrasound imaging, small encapsulated bubbles called contrast agents are used to enhance the contrast. In thermal inkjet printing, vapor bubbles are used as actuators. They are occasionally used in other microfluidics applications as actuators. The violent collapse of bubbles (cavitation) near solid surfaces and the resulting impinging jet constitute the mechanism used in ultrasonic cleaning. The same effect, but on a larger scale, is used in focused energy weapons such as the bazooka and the torpedo. The pistol shrimp also uses a collapsing cavitation bubble as a weapon. The same effect is used to treat kidney stones in a lithotripter. Marine mammals such as dolphins and whales use bubbles for entertainment or as hunting tools. Aerators cause the dissolution of gas in the liquid by injecting bubbles. Bubbles are used by chemical and metallurgical engineers in processes such as distillation, absorption, flotation and spray drying. The complex processes involved often require consideration of mass and heat transfer and are modeled using fluid dynamics. The star-nosed mole and the American water shrew can smell underwater by rapidly breathing through their nostrils and creating a bubble. Research on the origin of life on Earth suggests that bubbles may have played an integral role in confining and concentrating precursor molecules for life, a function currently performed by cell membranes. Bubble lasers use bubbles as the optical resonator. They can be used as highly sensitive pressure sensors. Pulsation When bubbles are disturbed (for example when a gas bubble is injected underwater), the wall oscillates. Although it is often visually masked by much larger deformations in shape, a component of the oscillation changes the bubble volume (i.e. 
it is a pulsation) which, in the absence of an externally-imposed sound field, occurs at the bubble's natural frequency. The pulsation is the most important component of the oscillation, acoustically, because by changing the gas volume, it changes its pressure, and leads to the emission of sound at the bubble's natural frequency. For air bubbles in water, large bubbles (negligible surface tension and thermal conductivity) undergo adiabatic pulsations, which means that no heat is transferred either from the liquid to the gas or vice versa. The natural frequency of such bubbles is determined by the equation f0 = (1 / (2πR0)) √(3γp0 / ρ), where: γ is the specific heat ratio of the gas, R0 is the steady state radius, p0 is the steady state pressure, and ρ is the mass density of the surrounding liquid. For air bubbles in water, smaller bubbles undergo isothermal pulsations. The corresponding equation for small bubbles of surface tension σ (and negligible liquid viscosity) adds a surface-tension correction to this expression. Excited bubbles trapped underwater are the major source of liquid sounds, such as those produced inside our knuckles during knuckle cracking or when a rain droplet impacts a surface of water. Physiology and medicine Injury by bubble formation and growth in body tissues is the mechanism of decompression sickness, which occurs when supersaturated dissolved inert gases leave the solution as bubbles during decompression. The damage can be due to mechanical deformation of tissues caused by bubble growth in situ, or to blockage of blood vessels where a bubble has lodged. Arterial gas embolism can occur when a gas bubble is introduced to the circulatory system and lodges in a blood vessel that is too small for it to pass through under the available pressure difference. This can occur as a result of decompression after hyperbaric exposure, a lung overexpansion injury, during intravenous fluid administration, or during surgery. In foods Foods containing bubbles include bread, cakes, cereals and chocolate, and drinks including beer, champagne, mineral water and soft drinks, as well as more experimental applications in foams as made by chefs. See also Antibubble Bubble fusion Bubble sensor Foam Minnaert resonance Nanobubble Sonoluminescence Underwater acoustics References Sources Fluid mechanics
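As a minimal Python sketch of the adiabatic natural-frequency expression given above (assuming an air bubble in water at atmospheric pressure; the parameter values and function name are illustrative defaults, not taken from the article):

import math

def bubble_natural_frequency(radius_m, gamma=1.4, p0=101_325.0, rho=1000.0):
    """Adiabatic pulsation frequency of a gas bubble in a liquid.

    radius_m -- steady state bubble radius in metres
    gamma    -- specific heat ratio of the gas (about 1.4 for air)
    p0       -- steady state ambient pressure in Pa
    rho      -- mass density of the surrounding liquid in kg/m^3
    """
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius_m)

# A 1 mm radius air bubble in water rings at roughly 3.3 kHz.
print(f"{bubble_natural_frequency(1e-3):.0f} Hz")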
Bubble (physics)
[ "Chemistry", "Engineering" ]
1,177
[ "Bubbles (physics)", "Foams", "Civil engineering", "Fluid mechanics", "Fluid dynamics" ]
6,531,976
https://en.wikipedia.org/wiki/Chemokine%20receptor
Chemokine receptors are cytokine receptors found on the surface of certain cells that interact with a type of cytokine called a chemokine. There have been 20 distinct chemokine receptors discovered in humans. Each has a rhodopsin-like 7-transmembrane (7TM) structure and couples to G-protein for signal transduction within a cell, making them members of a large protein family of G protein-coupled receptors. Following interaction with their specific chemokine ligands, chemokine receptors trigger a flux in intracellular calcium (Ca2+) ions (calcium signaling). This causes cell responses, including the onset of a process known as chemotaxis that traffics the cell to a desired location within the organism. Chemokine receptors are divided into different families, CXC chemokine receptors, CC chemokine receptors, CX3C chemokine receptors and XC chemokine receptors that correspond to the 4 distinct subfamilies of chemokines they bind. The four subfamilies of chemokines differ in the spacing of structurally important cysteine residues near the N-terminal of the chemokine. Structural characteristics Chemokine receptors are G protein-coupled receptors containing 7 transmembrane helices that are found predominantly on the surface of leukocytes. Approximately 19 different chemokine receptors have been characterized to date, which share many common structural features. They are composed of about 350 amino acids that are divided into a short and acidic N-terminal end, seven transmembrane helices with three intracellular and three extracellular hydrophilic loops, and an intracellular C-terminus containing serine and threonine residues that act as phosphorylation sites during receptor regulation. The first two extracellular loops of chemokine receptors are linked together by disulfide bonding between two conserved cysteine residues. The N-terminal end of a chemokine receptor binds to chemokines and is important for ligand specificity. G-proteins couple to the C-terminal end, which is important for receptor signaling following ligand binding. Although chemokine receptors share high amino acid identity in their primary sequences, they typically bind a limited number of ligands. Chemokine receptors are redundant in their function as more than one chemokine is able to bind to a single receptor. Signal transduction Intracellular signaling by chemokine receptors is dependent on neighbouring G-proteins. G-proteins exist as a heterotrimer; they are composed of three distinct subunits. When the molecule GDP is bound to the G-protein subunit, the G-protein is in an inactive state. Following binding of the chemokine ligand, chemokine receptors associate with G-proteins, allowing the exchange of GDP for another molecule called GTP, and the dissociation of the different G protein subunits. The subunit called Gα activates an enzyme known as Phospholipase C (PLC) that is associated with the cell membrane. PLC cleaves Phosphatidylinositol (4,5)-bisphosphate (PIP2) to form two second messenger molecules called inositol triphosphate (IP3) and diacylglycerol (DAG); DAG activates another enzyme called protein kinase C (PKC), and IP3 triggers the release of calcium from intracellular stores. These events promote many signaling cascades, effecting a cellular response. For example, when CXCL8 (IL-8) binds to its specific receptors, CXCR1 or CXCR2, a rise in intracellular calcium activates the enzyme phospholipase D (PLD) that goes on to initiate an intracellular signaling cascade called the MAP kinase pathway. 
At the same time, the G-protein subunit Gα directly activates an enzyme called protein tyrosine kinase (PTK), which phosphorylates serine and threonine residues in the tail of the chemokine receptor, causing its desensitisation or inactivation. The initiated MAP kinase pathway activates specific cellular mechanisms involved in chemotaxis, degranulation, release of superoxide anions, and changes in the avidity of cell adhesion molecules called integrins. Chemokines and their receptors play a crucial role in cancer metastasis as they are involved in extravasation, migration, micrometastasis, and angiogenesis. This role of chemokines is strikingly similar to their normal function of localizing leukocytes to an inflammatory site. Selective pressures on Chemokine receptor 5 (CCR5) The human immunodeficiency virus (HIV) uses the CCR5 receptor to target and infect host T-cells in humans. It weakens the immune system by destroying the CD4+ T-helper cells, making the body more susceptible to other infections. CCR5-Δ32 is an allelic variant of the CCR5 gene with a 32 base pair deletion that results in a truncated receptor. People with this allele are resistant to AIDS as HIV cannot bind to the non-functional CCR5 receptor. An unusually high frequency of this allele is found in the European Caucasian population, with an observed cline towards the north. Most researchers have attributed the current frequency of this allele to two major epidemics of human history: plague and smallpox. Although this allele originated much earlier, its frequency rose dramatically about 700 years ago. This led scientists to believe that bubonic plague acted as a selective pressure that drove CCR5-Δ32 to high frequency. It was speculated that the allele may have provided protection against Yersinia pestis, the causative agent of plague. Many in vivo mouse studies have refuted this claim by showing no protective effects of the CCR5-Δ32 allele in mice infected with Y. pestis. Another theory that has gained more scientific support links the current frequency of the allele to the smallpox epidemic. Although plague has killed a greater number of people in a given time period, smallpox has collectively taken more lives. As smallpox dates back some 2000 years, the longer time period would have given smallpox enough time to exert selective pressure, given the earlier origin of CCR5-Δ32. Population genetic models that analyzed the geographic and temporal distribution of both plague and smallpox provide much stronger evidence for smallpox as the driving factor of CCR5-Δ32. Smallpox has a higher mortality rate than plague, and it mostly affects children under the age of ten. From an evolutionary viewpoint, this results in a greater loss of reproductive potential from a population, which may explain the increased selective pressure exerted by smallpox. Smallpox was more prevalent in regions where higher CCR5-Δ32 frequencies are seen. Myxoma and variola major belong to the same family of viruses, and myxoma has been shown to use the CCR5 receptor to enter its host. Moreover, Yersinia is a bacterium which is biologically distinct from viruses and is unlikely to have a similar mechanism of transmission. Recent evidence provides strong support for smallpox as the selective agent for CCR5-Δ32. Families CXC chemokine receptors (six members) CC chemokine receptors (ten/eleven members) C chemokine receptors (one member, XCR1) CX3C chemokine receptors (one member, CX3CR1) Fifty chemokines have been discovered so far, and most bind to receptors of the CXC and CC families. 
Two types of chemokines that bind to these receptors are inflammatory chemokines and homeostatic chemokines. Inflammatory chemokines are expressed upon leukocyte activation, whereas homeostatic chemokines show continual expression. References External links The Cytokine Receptor Database Cell biology Integral membrane proteins
Chemokine receptor
[ "Biology" ]
1,662
[ "Cell biology" ]
6,532,921
https://en.wikipedia.org/wiki/Microecology
Microecology means microbial ecology or ecology of a microhabitat. It is a large field that includes many topics such as evolution, biodiversity, exobiology, ecology, bioremediation, recycling, and food microbiology. It can also refer to a hybrid urban network at the scale of the neighbourhood. It is the study of the interactions between living organisms and their environment, and of how these interactions affect both. Additionally, it is a multidisciplinary area of study, combining elements of biology, chemistry, physics, mathematics and urban planning. It focuses on the study of the interactions between microorganisms and the environment they inhabit, their effects on the environment, and their effects on other organisms. Microecology also studies the effects of human activity on the environment and how this affects the growth and development of microorganisms or organic structures. Microecology has many applications in the fields of medicine, agriculture, biotechnology and design. It is also important for understanding the cycling of nutrients in the environment, and the behavior of microorganisms or actors in various environments. In humans, gut microecology is the study of the microbial ecology of the human gut, which includes gut microbiota composition, its metabolic activity, and the interactions between the microbiota, the host, and the environment. Research in human gut microecology is important because the microbiome can have profound effects on human health. The microbiome is known to influence the immune system, digestion, and metabolism, and is thought to play a role in a variety of diseases, including diabetes, obesity, inflammatory bowel disease, and cancer. Studying the microbiome can help us better understand these diseases and develop treatments. Intestinal microecology is a newer area of microecology study. The intestine hosts a complex microflora that is directly related to human health; therefore, regulation of intestinal microecology can help in the treatment of many diseases. It has been reported that the intestinal flora is involved in anti-tumor immunotherapy and affects the curative effect of anti-tumor therapies to varying degrees. The metabolite activity and microbial composition of the intestinal microbiota are associated with various diseases, including gastrointestinal diseases and cancer. Similar to the intestinal microecosystem, the vaginal microecosystem is also complicated and plays an important role in women's health. Maintaining microecological balance and the acidic environment of the vagina inhibits the proliferation of pathogenic bacteria. Microecology in the Urban Context At the urban scale, the term micro-ecology has been used by Mueller-Wolfertshofer and Boucsein to describe the interdependence and interrelation of various activities within a neighbourhood. The synergy formed through socioeconomic processes, often with collaboration, benefits all the actors involved and improves conditions not just in the immediate neighbourhood but at times even in the city they are part of. References Microbiology Subfields of ecology
Microecology
[ "Chemistry", "Biology" ]
629
[ "Microbiology", "Microscopy" ]
6,533,848
https://en.wikipedia.org/wiki/Micro%20perforated%20plate
A micro perforated plate (MPP) is a device used to absorb sound, reducing its intensity. It consists of a thin flat plate, made from one of several different materials, with small holes punched in it. An MPP offers an alternative to traditional sound absorbers made from porous materials. Structure An MPP is normally 0.5–2 mm thick. The holes typically cover 0.5 to 2% of the plate, depending on the application and the environment in which the MPP is to be mounted. Hole diameter is usually less than 1 millimeter, typically 0.05 to 0.5 mm. They are usually made using the microperforation process. Operating principle The goal of a sound absorber is to convert acoustical energy into heat. In a traditional absorber, the sound wave propagates into the absorber. Because of the proximity of the porous material, the oscillating air molecules inside the absorber lose their acoustical energy due to friction. An MPP works in almost the same way. When the oscillating air molecules penetrate the MPP, the friction between the air in motion and the surface of the MPP dissipates the acoustical energy. Comparison with other materials Traditional sound absorbers are porous materials such as mineral wool, glass or polyester fibres. It is not possible to use these materials in harsh environments such as engine compartments. Traditional absorbers have many drawbacks, including pollution, the risk of fire, and problems with the useful lifetime of the absorbing material. The main reason micro perforated plates have become so popular among acousticians is that they offer good absorption performance without the disadvantages of a porous material. Furthermore, an MPP is also preferable from an aesthetic point of view. History For some time, perforated metal panels with holes in the 1–10 mm range have been used as a cage for sound-absorbing glass-fiber batts, where large holes let the sound waves reach into the absorbent fiber. Another use has been the creation of narrowband Helmholtz absorbers, which can be tuned by the hole size, the hole spacing, and the air gap behind the panel. However, when the hole dimensions are in the region of 0.05–0.5 mm, the narrow absorption peaks become much wider, making the additional fiber absorber more or less unnecessary, while still maintaining a very high absorption factor. By varying geometrical and material parameters, the acoustical performance can be tailored to meet a multitude of specifications in various applications. One early contributor to the theory of micro perforated plates as sound absorbers was Professor Dah-You Maa. Further possibilities aiming to improve the accuracy of Maa's original model are currently being investigated. Another major phenomenon currently being investigated is the nonlinear effect, i.e. an MPP behaves differently depending on the magnitude of the incident sound wave. References External links Acoustical Society of America Sound Acoustics Waves
Micro perforated plate
[ "Physics" ]
612
[ "Physical phenomena", "Classical mechanics", "Acoustics", "Waves", "Motion (physics)" ]
26,987,628
https://en.wikipedia.org/wiki/Graphon
In graph theory and statistics, a graphon (also known as a graph limit) is a symmetric measurable function , that is important in the study of dense graphs. Graphons arise both as a natural notion for the limit of a sequence of dense graphs, and as the fundamental defining objects of exchangeable random graph models. Graphons are tied to dense graphs by the following pair of observations: the random graph models defined by graphons give rise to dense graphs almost surely, and, by the regularity lemma, graphons capture the structure of arbitrary large dense graphs. Statistical formulation A graphon is a symmetric measurable function . Usually a graphon is understood as defining an exchangeable random graph model according to the following scheme: Each vertex of the graph is assigned an independent random value Edge is independently included in the graph with probability . A random graph model is an exchangeable random graph model if and only if it can be defined in terms of a (possibly random) graphon in this way. The model based on a fixed graphon is sometimes denoted , by analogy with the Erdős–Rényi model of random graphs. A graph generated from a graphon in this way is called a -random graph. It follows from this definition and the law of large numbers that, if , exchangeable random graph models are dense almost surely. Examples The simplest example of a graphon is for some constant . In this case the associated exchangeable random graph model is the Erdős–Rényi model that includes each edge independently with probability . If we instead start with a graphon that is piecewise constant by: dividing the unit square into blocks, and setting equal to on the block, the resulting exchangeable random graph model is the community stochastic block model, a generalization of the Erdős–Rényi model. We can interpret this as a random graph model consisting of distinct Erdős–Rényi graphs with parameters respectively, with bigraphs between them where each possible edge between blocks and is included independently with probability . Many other popular random graph models can be understood as exchangeable random graph models defined by some graphon, a detailed survey is included in Orbanz and Roy. Jointly exchangeable adjacency matrices A random graph of size can be represented as a random adjacency matrix. In order to impose consistency (in the sense of projectivity) between random graphs of different sizes it is natural to study the sequence of adjacency matrices arising as the upper-left sub-matrices of some infinite array of random variables; this allows us to generate by adding a node to and sampling the edges for . With this perspective, random graphs are defined as random infinite symmetric arrays . Following the fundamental importance of exchangeable sequences in classical probability, it is natural to look for an analogous notion in the random graph setting. One such notion is given by jointly exchangeable matrices; i.e. random matrices satisfying for all permutations of the natural numbers, where means equal in distribution. Intuitively, this condition means that the distribution of the random graph is unchanged by a relabeling of its vertices: that is, the labels of the vertices carry no information. There is a representation theorem for jointly exchangeable random adjacency matrices, analogous to de Finetti’s representation theorem for exchangeable sequences. 
This is a special case of the Aldous–Hoover theorem for jointly exchangeable arrays and, in this setting, asserts that the random matrix is generated by: Sample independently independently at random with probability where is a (possibly random) graphon. That is, a random graph model has a jointly exchangeable adjacency matrix if and only if it is a jointly exchangeable random graph model defined in terms of some graphon. Graphon estimation Due to identifiability issues, it is impossible to estimate either the graphon function or the node latent positions and there are two main directions of graphon estimation. One direction aims at estimating up to an equivalence class, or estimate the probability matrix induced by . Analytic formulation Any graph on vertices can be identified with its adjacency matrix . This matrix corresponds to a step function , defined by partitioning into intervals such that has interior and for each , setting equal to the entry of . This function is the associated graphon of the graph . In general, if we have a sequence of graphs where the number of vertices of goes to infinity, we can analyze the limiting behavior of the sequence by considering the limiting behavior of the functions . If these graphs converge (according to some suitable definition of convergence), then we expect the limit of these graphs to correspond to the limit of these associated functions. This motivates the definition of a graphon (short for "graph function") as a symmetric measurable function which captures the notion of a limit of a sequence of graphs. It turns out that for sequences of dense graphs, several apparently distinct notions of convergence are equivalent and under all of them the natural limit object is a graphon. Examples Constant graphon Take a sequence of Erdős–Rényi random graphs with some fixed parameter . Intuitively, as tends to infinity, the limit of this sequence of graphs is determined solely by edge density of these graphs. In the space of graphons, it turns out that such a sequence converges almost surely to the constant , which captures the above intuition. Half graphon Take the sequence of half-graphs, defined by taking to be the bipartite graph on vertices and such that is adjacent to precisely when . If the vertices are listed in the presented order, then the adjacency matrix has two corners of "half square" block matrices filled with ones, with the rest of the entries equal to zero. For example, the adjacency matrix of is given by As gets large, these corners of ones "smooth" out. Matching this intuition, the sequence converges to the half-graphon defined by when and otherwise. Complete bipartite graphon Take the sequence of complete bipartite graphs with equal sized parts. If we order the vertices by placing all vertices in one part at the beginning and placing the vertices of the other part at the end, the adjacency matrix of looks like a block off-diagonal matrix, with two blocks of ones and two blocks of zeros. For example, the adjacency matrix of is given by As gets larger, this block structure of the adjacency matrix remains constant, so that this sequence of graphs converges to a "complete bipartite" graphon defined by whenever and , and setting otherwise. If we instead order the vertices of by alternating between parts, the adjacency matrix has a chessboard structure of zeros and ones. For example, under this ordering, the adjacency matrix of is given by As gets larger, the adjacency matrices become a finer and finer chessboard. 
Despite this behavior, we still want the limit of to be unique and result in the graphon from example 3. This means that when we formally define convergence for a sequence of graphs, the definition of a limit should be agnostic to relabelings of the vertices. Limit of W-random graphs Take a random sequence of -random graphs by drawing for some fixed graphon . Then just like in the first example from this section, it turns out that converges to almost surely. Recovering graph parameters from graphons Given graph with associated graphon , we can recover graph theoretic properties and parameters of by integrating transformations of . For example, the edge density (i.e. average degree divided by number of vertices) of is given by the integral This is because is -valued, and each edge in corresponds to a region of area where equals . Similar reasoning shows that the triangle density in is equal to Notions of convergence There are many different ways to measure the distance between two graphs. If we are interested in metrics that "preserve" extremal properties of graphs, then we should restrict our attention to metrics that identify random graphs as similar. For example, if we randomly draw two graphs independently from an Erdős–Rényi model for some fixed , the distance between these two graphs under a "reasonable" metric should be close to zero with high probability for large . Naively, given two graphs on the same vertex set, one might define their distance as the number of edges that must be added or removed to get from one graph to the other, i.e. their edit distance. However, the edit distance does not identify random graphs as similar; in fact, two graphs drawn independently from have an expected (normalized) edit distance of . There are two natural metrics that behave well on dense random graphs in the sense that we want. The first is a sampling metric, which says that two graphs are close if their distributions of subgraphs are close. The second is an edge discrepancy metric, which says two graphs are close when their edge densities are close on all their corresponding subsets of vertices. Miraculously, a sequence of graphs converges with respect to one metric precisely when it converges with respect to the other. Moreover, the limit objects under both metrics turn out to be graphons. The equivalence of these two notions of convergence mirrors how various notions of quasirandom graphs are equivalent. Homomorphism densities One way to measure the distance between two graphs and is to compare their relative subgraph counts. That is, for each graph we can compare the number of copies of in and in . If these numbers are close for every graph , then intuitively and are similar looking graphs. Rather than dealing directly with subgraphs, however, it turns out to be easier to work with graph homomorphisms. This is fine when dealing with large, dense graphs, since in this scenario the number of subgraphs and the number of graph homomorphisms from a fixed graph are asymptotically equal. Given two graphs and , the homomorphism density of in is defined to be the number of graph homomorphisms from to . In other words, is the probability a randomly chosen map from the vertices of to the vertices of sends adjacent vertices in to adjacent vertices in . Graphons offer a simple way to compute homomorphism densities. Indeed, given a graph with associated graphon and another , we have where the integral is multidimensional, taken over the unit hypercube . 
This follows from the definition of an associated graphon, by considering when the above integrand is equal to . We can then extend the definition of homomorphism density to arbitrary graphons , by using the same integral and defining for any graph . Given this setup, we say a sequence of graphs is left-convergent if for every fixed graph , the sequence of homomorphism densities converges. Although not evident from the definition alone, if converges in this sense, then there always exists a graphon such that for every graph , we have simultaneously. Cut distance Take two graphs and on the same vertex set. Because these graphs share the same vertices, one way to measure their distance is to restrict to subsets of the vertex set, and for each such pair subsets compare the number of edges from to in to the number of edges between and in . If these numbers are similar for every pair of subsets (relative to the total number of vertices), then that suggests and are similar graphs. As a preliminary formalization of this notion of distance, for any pair of graphs and on the same vertex set of size , define the labeled cut distance between and to be In other words, the labeled cut distance encodes the maximum discrepancy of the edge densities between and . We can generalize this concept to graphons by expressing the edge density in terms of the associated graphon , giving the equality where are unions of intervals corresponding to the vertices in and . Note that this definition can still be used even when the graphs being compared do not share a vertex set. This motivates the following more general definition. Definition 1. For any symmetric, measurable function , define the cut norm of to be the quantity taken over all measurable subsets of the unit interval. This captures our earlier notion of labeled cut distance, as we have the equality . This distance measure still has one major limitation: it can assign nonzero distance to two isomorphic graphs. To make sure isomorphic graphs have distance zero, we should compute the minimum cut norm over all possible "relabellings" of the vertices. This motivates the following definition of the cut distance. Definition 2. For any pair of graphons and , define their cut distance to be where is the composition of with the map , and the infimum is taken over all measure-preserving bijections from the unit interval to itself. The cut distance between two graphs is defined to be the cut distance between their associated graphons. We now say that a sequence of graphs is convergent under the cut distance if it is a Cauchy sequence under the cut distance . Although not a direct consequence of the definition, if such a sequence of graphs is Cauchy, then it always converges to some graphon . Equivalence of convergence As it turns out, for any sequence of graphs , left-convergence is equivalent to convergence under the cut distance, and furthermore, the limit graphon is the same. We can also consider convergence of graphons themselves using the same definitions, and the same equivalence is true. In fact, both notions of convergence are related more strongly through what are called counting lemmas. Counting Lemma. For any pair of graphons and , we have for all graphs . The name "counting lemma" comes from the bounds that this lemma gives on homomorphism densities , which are analogous to subgraph counts of graphs. 
This lemma is a generalization of the graph counting lemma that appears in the field of regularity partitions, and it immediately shows that convergence under the cut distance implies left-convergence. Inverse Counting Lemma. For every real number , there exist a real number and a positive integer such that for any pair of graphons and with for all graphs satisfying , we must have . This lemma shows that left-convergence implies convergence under the cut distance. The space of graphons We can make the cut-distance into a metric by taking the set of all graphons and identifying two graphons whenever . The resulting space of graphons is denoted , and together with forms a metric space. This space turns out to be compact. Moreover, it contains the set of all finite graphs, represented by their associated graphons, as a dense subset. These observations show that the space of graphons is a completion of the space of graphs with respect to the cut distance. One immediate consequence of this is the following. Corollary 1. For every real number , there is an integer such that for every graphon , there is a graph with at most vertices such that . To see why, let be the set of graphs. Consider for each graph the open ball containing all graphons such that . The set of open balls for all graphs covers , so compactness implies that there is a finite subcover for some finite subset . We can now take to be the largest number of vertices among the graphs in . Applications Regularity lemma Compactness of the space of graphons can be thought of as an analytic formulation of Szemerédi's regularity lemma; in fact, a stronger result than the original lemma. Szemeredi's regularity lemma can be translated into the language of graphons as follows. Define a step function to be a graphon that is piecewise constant, i.e. for some partition of , is constant on for all . The statement that a graph has a regularity partition is equivalent to saying that its associated graphon is close to a step function. The proof of compactness requires only the weak regularity lemma: Weak Regularity Lemma for Graphons. For every graphon and , there is a step function with at most steps such that . but it can be used to prove stronger regularity results, such as the strong regularity lemma: Strong Regularity Lemma for Graphons. For every sequence of positive real numbers, there is a positive integer such that for every graphon , there is a graphon and a step function with steps such that and The proof of the strong regularity lemma is similar in concept to Corollary 1 above. It turns out that every graphon can be approximated with a step function in the norm, showing that the set of balls cover . These sets are not open in the metric, but they can be enlarged slightly to be open. Now, we can take a finite subcover, and one can show that the desired condition follows. Sidorenko's conjecture The analytic nature of graphons allows greater flexibility in attacking inequalities related to homomorphisms. For example, Sidorenko's conjecture is a major open problem in extremal graph theory, which asserts that for any graph on vertices with average degree (for some ) and bipartite graph on vertices and edges, the number of homomorphisms from to is at least . Since this quantity is the expected number of labeled subgraphs of in a random graph , the conjecture can be interpreted as the claim that for any bipartite graph , the random graph achieves (in expectation) the minimum number of copies of over all graphs with some fixed edge density. 
Many approaches to Sidorenko's conjecture formulate the problem as an integral inequality on graphons, which then allows the problem to be attacked using other analytical approaches. Generalizations Graphons are naturally associated with dense simple graphs. There are extensions of this model to dense directed weighted graphs, often referred to as decorated graphons. There are also recent extensions to the sparse graph regime, from both the perspective of random graph models and graph limit theory. References Graph theory Probability theory
Graphon
[ "Mathematics" ]
3,664
[ "Discrete mathematics", "Mathematical relations", "Graph theory", "Combinatorics" ]
26,991,816
https://en.wikipedia.org/wiki/Kronecker%20sum%20of%20discrete%20Laplacians
In mathematics, the Kronecker sum of discrete Laplacians, named after Leopold Kronecker, is a discrete version of the separation of variables for the continuous Laplacian in a rectangular cuboid domain. General form of the Kronecker sum of discrete Laplacians In a general situation of the separation of variables in the discrete case, the multidimensional discrete Laplacian is a Kronecker sum of 1D discrete Laplacians. Example: 2D discrete Laplacian on a regular grid with the homogeneous Dirichlet boundary condition Mathematically, using the Kronecker sum: where and are 1D discrete Laplacians in the x- and y-directions, correspondingly, and are the identities of appropriate sizes. Both and must correspond to the case of the homogeneous Dirichlet boundary condition at end points of the x- and y-intervals, in order to generate the 2D discrete Laplacian L corresponding to the homogeneous Dirichlet boundary condition everywhere on the boundary of the rectangular domain. Here is a sample OCTAVE/MATLAB code to compute L on the regular 10×15 2D grid: nx = 10; % number of grid points in the x-direction; ny = 15; % number of grid points in the y-direction; ex = ones(nx,1); Dxx = spdiags([ex -2*ex ex], [-1 0 1], nx, nx); %1D discrete Laplacian in the x-direction ; ey = ones(ny,1); Dyy = spdiags([ey, -2*ey ey], [-1 0 1], ny, ny); %1D discrete Laplacian in the y-direction ; L = kron(Dyy, speye(nx)) + kron(speye(ny), Dxx) ; Eigenvalues and eigenvectors of multidimensional discrete Laplacian on a regular grid Knowing all eigenvalues and eigenvectors of the factors, all eigenvalues and eigenvectors of the Kronecker product can be explicitly calculated. Based on this, eigenvalues and eigenvectors of the Kronecker sum can also be explicitly calculated. The eigenvalues and eigenvectors of the standard central difference approximation of the second derivative on an interval for traditional combinations of boundary conditions at the interval end points are well known. Combining these expressions with the formulas of eigenvalues and eigenvectors for the Kronecker sum, one can easily obtain the required answer. Example: 3D discrete Laplacian on a regular grid with the homogeneous Dirichlet boundary condition where and are 1D discrete Laplacians in every of the 3 directions, and are the identities of appropriate sizes. Each 1D discrete Laplacian must correspond to the case of the homogeneous Dirichlet boundary condition, in order to generate the 3D discrete Laplacian L corresponding to the homogeneous Dirichlet boundary condition everywhere on the boundary. The eigenvalues are where , and the corresponding eigenvectors are where the multi-index pairs the eigenvalues and the eigenvectors, while the multi-index determines the location of the value of every eigenvector at the regular grid. The boundary points, where the homogeneous Dirichlet boundary condition are imposed, are just outside the grid. Available software An OCTAVE/MATLAB code http://www.mathworks.com/matlabcentral/fileexchange/27279-laplacian-in-1d-2d-or-3d is available under a BSD License, which computes the sparse matrix of the 1, 2D, and 3D negative Laplacians on a rectangular grid for combinations of Dirichlet, Neumann, and Periodic boundary conditions using Kronecker sums of discrete 1D Laplacians. The code also provides the exact eigenvalues and eigenvectors using the explicit formulas given above. Operator theory Matrix theory Numerical differential equations Finite differences Articles with example MATLAB/Octave code
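For readers who want to reproduce the construction above outside of MATLAB/Octave, here is a rough Python/SciPy equivalent. It is a hedged sketch: the grid sizes and variable names (nx, ny, Dxx, Dyy, L) mirror the MATLAB snippet, while the explicit eigenvalue formula used in the cross-check, lambda_k = -4*sin^2(k*pi/(2*(n+1))) for the 1D Dirichlet Laplacian, is the standard textbook result and is supplied here as an assumption rather than quoted from the formulas above.

import numpy as np
from scipy.sparse import diags, identity, kron

nx, ny = 10, 15  # grid points in the x- and y-directions

def lap1d(n):
    # 1D discrete Laplacian with homogeneous Dirichlet boundary conditions
    return diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr")

Dxx, Dyy = lap1d(nx), lap1d(ny)
# Kronecker sum of the two 1D Laplacians gives the 2D discrete Laplacian
L = kron(Dyy, identity(nx)) + kron(identity(ny), Dxx)

# Cross-check: eigenvalues of the Kronecker sum are all pairwise sums of the
# 1D eigenvalues lambda_k = -4*sin(k*pi/(2*(n+1)))**2, k = 1..n (Dirichlet case)
lam_x = -4 * np.sin(np.arange(1, nx + 1) * np.pi / (2 * (nx + 1))) ** 2
lam_y = -4 * np.sin(np.arange(1, ny + 1) * np.pi / (2 * (ny + 1))) ** 2
analytic = np.sort(np.add.outer(lam_y, lam_x).ravel())
numeric = np.sort(np.linalg.eigvalsh(L.toarray()))
print(np.allclose(analytic, numeric))  # expected: True

Sparse storage keeps the assembled matrix cheap even for large grids; the dense eigendecomposition in the last two lines is intended only for this small 10×15 test problem.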
Kronecker sum of discrete Laplacians
[ "Mathematics" ]
867
[ "Mathematical analysis", "Finite differences" ]
512,075
https://en.wikipedia.org/wiki/Advisory%20Opinion%20on%20the%20Legality%20of%20the%20Threat%20or%20Use%20of%20Nuclear%20Weapons
Legality of the Threat or Use of Nuclear Weapons [1996] ICJ 3 is a landmark international law case, where the International Court of Justice gave an advisory opinion stating that while the threat or use of nuclear weapons would generally be contrary to international humanitarian law, it cannot be concluded whether or not such a threat or use of nuclear weapons would be lawful in extreme circumstances where the very survival of a state would be at stake. The Court held that there is no source of international law that explicitly authorises or prohibits the threat or use of nuclear weapons, but that any such threat or use must be in conformity with the UN Charter and the principles of international humanitarian law. The Court also concluded that there was a general obligation to pursue nuclear disarmament. The World Health Organization requested the opinion on 3 September 1993, but it was initially refused because the WHO was acting outside its legal capacity (ultra vires). The United Nations General Assembly therefore requested another opinion in December 1994, which the Court accepted in January 1995. As well as determining the illegality of nuclear weapon use, the court discussed the proper role of international judicial bodies, the ICJ's advisory function, international humanitarian law (jus in bello), and rules governing the use of force (jus ad bellum). It explored the status of the "Lotus approach", and employed the concept of non liquet. There were also strategic questions such as the legality of the practice of nuclear deterrence or the meaning of Article VI of the 1968 Treaty on the Non-Proliferation of Nuclear Weapons. The possibility of outlawing the threat or use of nuclear weapons in an armed conflict was raised on 30 June 1950 by the Dutch representative to the International Law Commission (ILC), who suggested this "would in itself be an advance". In addition, the Polish government requested that this issue be examined by the ILC as a crime against the peace of mankind. However, the issue was delayed during the Cold War. The New START treaty is an agreement between the US and Russian governments to limit the deployment of nuclear ballistic missiles. Signed in 2010 and in force since 5 February 2011, it gave the Russian government seven years to meet the requirements set by the treaty. The treaty was extended in 2021 for another five years, until 2026. Request of the World Health Organization An advisory opinion on this issue was originally requested by the World Health Organization (WHO) on 3 September 1993: The ICJ considered the WHO's request, in a case known as the Legality of the Use by a State of Nuclear Weapons in Armed Conflict (General List No. 93), and also known as the WHO Nuclear Weapons case, between 1993 and 1996. The ICJ fixed 10 June 1994 as the time limit for written submissions, but after receiving many written and oral submissions, later extended this date to 20 September 1994. After considering the case the Court refused to give an advisory opinion on the WHO question. On 8 July 1996 it held, by 11 votes to three, that the question did not fall within the scope of WHO's activities, as is required by Article 96(2) of the UN Charter. Request of the UN General Assembly On 15 December 1994 the UN General Assembly adopted resolution A/RES/49/75K. This asked the ICJ urgently to render its advisory opinion on the following question: The resolution, submitted to the Court on 19 December 1994, was adopted by 78 states voting in favour, 43 against, 38 abstaining and 26 not voting.
The General Assembly had considered asking a similar question in the autumn of 1993, at the instigation of the Non-Aligned Movement (NAM), which ultimately did not push its request that year. NAM was more willing the following year, in the face of written statements submitted in the WHO proceedings from a number of nuclear-weapon states indicating strong views to the effect that the WHO lacked competence in the matter. The Court subsequently fixed 20 June 1995 as the filing date for written statements. Altogether, 42 states participated in the written phase of the pleadings, the largest number ever to join in proceedings before the Court. Of the five declared nuclear weapon states (the P5), only the People's Republic of China did not participate. Of the three "threshold" nuclear-weapon states, only India participated. Many of the participants were developing states which had not previously contributed to proceedings before the ICJ, a reflection perhaps of the unparalleled interest in this matter and the growing willingness of developing states to engage in international judicial proceedings in the "post-colonial" period. Oral hearings were held from 30 October to 15 November 1995. Twenty-two states participated: Australia, Egypt, France, Germany, Indonesia, Mexico, Iran, Italy, Japan, Malaysia, New Zealand, Philippines, Qatar, Russian Federation, San Marino, Samoa, Marshall Islands, Solomon Islands, Costa Rica, United Kingdom, United States, Zimbabwe; as did the WHO. The secretariat of the UN did not appear, but filed with the Court a dossier explaining the history of resolution 49/75K. Each state was allocated 90 minutes to make its statement. On 8 July 1996, nearly eight months after the close of the oral phase, the ICJ rendered its opinion. Decision of the International Court of Justice Composition of the Court The ICJ is composed of fifteen judges elected to nine year terms by the UN General Assembly and the UN Security Council. The court's "advisory opinion" can be requested only by specific United Nations organisations, and is inherently non-binding under the Statute of the court. The fifteen judges asked to give their advisory opinion regarding the legality of the threat or use of nuclear weapons were: Court's analysis Deterrence and "threat" The court considered the matter of deterrence, which involves a threat to use nuclear weapons under certain circumstances on a potential enemy or an enemy. Was such a threat illegal? The court decided, with some judges dissenting, that, if a threatened retaliatory strike was consistent with military necessity and proportionality, it would not necessarily be illegal. (Judgement paragraphs 37–50) The legality of the possession of nuclear weapons The court then considered the legality of the possession, as opposed to actual use, of nuclear weapons. The Court looked at various treaties, including the UN Charter, and found no treaty language that specifically forbade the possession of nuclear weapons in a categorical way. The UN Charter was examined in paragraphs 37–50 (paragraph 37: "The Court will now address the question of the legality or illegality of recourse to nuclear weapons in the light of the provisions of the Charter relating to the threat or use of force"). Paragraph 39 mentions: "These provisions [i.e. those of the Charter] do not refer to specific weapons. They apply to any use of force, regardless of the weapons employed. The Charter neither expressly prohibits, nor permits, the use of any specific weapon, including nuclear weapons. 
A weapon that is already unlawful per se, whether by treaty or custom, does not become lawful by reason of its being used for a legitimate purpose under the Charter." Treaties were examined in paragraphs 53–63 (paragraph 53: "The Court must therefore now examine whether there is any prohibition of recourse to nuclear weapons as such; it will first ascertain whether there is a conventional prescription to this effect"), as part of the law applicable in situations of armed conflict (paragraph 51, first sentence: "Having dealt with the Charter provisions relating to the threat or use of force, the Court will now turn to the law applicable in situations of armed conflict"). In particular, with respect to "the argument [that] has been advanced that nuclear weapons should be treated in the same way as poisoned weapons", the Court concluded that "it does not seem to the Court that the use of nuclear weapons can be regarded as specifically prohibited on the basis of the [...] provisions of the Second Hague Declaration of 1899, the Regulations annexed to the Hague Convention IV of 1907 or the 1925 Protocol" (paragraphs 54 and 56)". It was also argued by some that the Hague Conventions concerning the use of bacteriological or chemical weapons would also apply to nuclear weapons, but the Court was unable to adopt this argument ("The Court does not find any specific prohibition of recourse to nuclear weapons in treaties expressly prohibiting the use of certain weapons of mass destruction", paragraph 57 in fine). With respect to treaties that "deal [...] exclusively with acquisition, manufacture, possession, deployment and testing of nuclear weapons, without specifically addressing their threat or use," the Court notes that those treaties "certainly point to an increasing concern in the international community with these weapons; the Court concludes from this that these treaties could therefore be seen as foreshadowing a future general prohibition of the use of such weapons, but they do not constitute such a prohibition by themselves" (paragraph 62). Also, regarding regional treaties prohibiting resource, namely those of Tlatelolco (Latin America) and Rarotonga (South Pacific) the Court notes that while those "testify to a growing awareness of the need to liberate the community of States and the international public from the dangers resulting from the existence of nuclear weapons", "[i]t [i.e. the Court] does not, however, view these elements as amounting to a comprehensive and universal conventional prohibition on the use, or the threat of use, of those weapons as such." (paragraph 63). Customary international law also provided insufficient evidence that the possession of nuclear weapons had come to be universally regarded as illegal. Ultimately, the court was unable to find an opinio juris (that is, legal consensus) that nuclear weapons are illegal to possess. (paragraph 65) However, in practice, nuclear weapons have not been used in war since 1945 and there have been numerous UN resolutions condemning their use (however, such resolutions are not universally supported—most notably, the nuclear powers object to them).(paragraph 68–73) The ICJ did not find that these facts demonstrated a new and clear customary law absolutely forbidding nuclear weapons. However, there are many universal humanitarian laws applying to war. For instance, it is illegal for a combatant specifically to target civilians and certain types of weapons that cause indiscriminate damage are categorically outlawed. 
All states seem to observe these rules, making them a part of customary international law, so the court ruled that these laws would also apply to the use of nuclear weapons.(paragraph 86) The Court decided not to pronounce on the matter of whether the use of nuclear weapons might possibly be legal, if exercised as a last resort in extreme circumstances (such as if the very existence of the state was in jeopardy).(paragraph 97) Decision The court undertook seven separate votes, all of which were passed: The court decided to comply with the request for an advisory opinion; The court replied that "There is in neither customary nor conventional international law any specific authorization of the threat or use of nuclear weapons"; The court replied that "There is in neither customary nor conventional international law any comprehensive and universal prohibition of the threat or use of nuclear weapons as such"; The court replied that "A threat or use of force by means of nuclear weapons that is contrary to Article 2, paragraph 4, of the United Nations Charter and that fails to meet all the requirements of Article 51, is unlawful"; The court replied that "A threat or use of nuclear weapons should also be compatible with the requirements of the international law applicable in armed conflict, particularly those of the principles and rules of humanitarian law, as well as with specific obligations under treaties and other undertakings which expressly deal with nuclear weapons" The court replied that "the threat or use of nuclear weapons would generally be contrary to the rules of international law applicable in armed conflict, and in particular the principles and rules of humanitarian law; However, in view of the current state of international law, and of the elements of fact at its disposal, the Court cannot conclude definitively whether the threat or use of nuclear weapons would be lawful or unlawful in an extreme circumstance of self-defence, in which the very survival of a State would be at stake" The court replied that "There exists an obligation to pursue in good faith and bring to a conclusion negotiations leading to nuclear disarmament in all its aspects under strict and effective international control". The court voted as follows: Split decision The only significantly split decision was on the matter of whether "the threat or use of nuclear weapons would generally be contrary to the rules of international law applicable in armed conflict", not including "in an extreme circumstance of self-defence, in which the very survival of a State would be at stake". However, three of the seven "dissenting" judges (namely, Judge Shahabuddeen of Guyana, Judge Weeramantry of Sri Lanka, and Judge Koroma of Sierra Leone) wrote separate opinions explaining that the reason they were dissenting was their view that there is no exception under any circumstances (including that of ensuring the survival of a State) to the general principle that use of nuclear weapons is illegal. A fourth dissenter, Judge Oda of Japan, dissented largely on the ground that the Court simply should not have taken the case. 
Vice President Schwebel remarked in his dissenting opinion that And Higgins noted that she did not Nevertheless, the Court's opinion did not conclude definitively and categorically, under the existing state of international law at the time, whether in an extreme circumstance of self-defence in which the very survival of a State would be at stake, the threat or use of nuclear weapons would necessarily be unlawful in all possible cases. However, the court's opinion unanimously clarified that the world's states have a binding duty to negotiate in good faith and to accomplish nuclear disarmament. International reaction United Kingdom The Government of the United Kingdom has announced plans to renew Britain's only nuclear weapon, the Trident missile system. They have published a white paper The Future of the United Kingdom's Nuclear Deterrent in which they state that the renewal is fully compatible with the United Kingdom's treaty commitments and international law. These arguments are summarised in a question and answer briefing published by the UK Permanent Representative to the Conference on Disarmament. The white paper The Future of the United Kingdom's Nuclear Deterrent stands in contrast to two legal opinions. The first, commissioned by Peacerights, was given on 19 December 2005 by Rabinder Singh QC and Professor Christine Chinkin of Matrix Chambers. It addressed Drawing on the International Court of Justice (ICJ) opinion, Singh and Chinkin argued that: The second legal opinion was commissioned by Greenpeace and given by Philippe Sands QC and Helen Law, also of Matrix Chambers, on 13 November 2006. The opinion addressed With regard to the jus ad bellum, Sands and Law found that The phrase "very survival of the state" is a direct quote from paragraph 97 of the ICJ ruling. With regard to international humanitarian law, they found that Finally, with reference to the NPT, Sands and Law found that Scots law In 1999 a legal case was put forward to attempt to use the ICJ's Opinion in establishing the illegality of nuclear weapons. On 27 September 1999, three Trident Ploughshares activists Ulla Røder from Denmark, Angie Zelter from England, and Ellen Moxley from Scotland, were acquitted of charges of malicious damage at Greenock Sheriff Court. The three women had boarded Maytime, a barge moored in Loch Goil and involved in scientific work connected with the Trident submarines berthed in the nearby Gareloch, and caused £80,000 worth of damage. As is often the case in trials relating to such actions, the defendants attempted to establish that their actions were necessary, in that they had prevented what they saw as "nuclear crime". The acquittal of the Trident Three resulted in the High Court of Justiciary, the supreme criminal court in Scots law, considering a Lord Advocate's Reference, and presenting the first detailed analysis of the ICJ Opinion by another judicial body. The High Court was asked to answer four questions: In a trial under Scottish criminal procedure, is it competent to lead evidence as to the content of customary international law as it applies in the United Kingdom? Does any rule of customary international law justify a private individual in Scotland in damaging or destroying property in pursuit of his or her objection to the United Kingdom's possession of nuclear weapons, its action in placing such weapons at locations within Scotland or its policies in relation to such weapons?
Does the belief of an accused person that his or her actions are justified in law constitute a defence to a charge of malicious mischief or theft? Is it a general defence to a criminal charge that the offence was committed in order to prevent or bring to an end the commission of an offence by another person? The four collective answers given by Lord Prosser, Lord Kirkwood and Lord Penrose were all negative. This did not have the effect of overturning the acquittals of Roder, Zelter and Moxley (Scots law, like many other jurisdictions, does not allow for an acquittal to be appealed); however, it does have the effect of invalidating the ratio decidendi under which the three women were able to argue for their acquittal, and ensures that similar defences cannot be present in Scots Law. See also Global Security Institute International humanitarian law List of International Court of Justice cases Martens Clause Mutual assured destruction Nuclear warfare Nuclear weapons convention Humanitarian Initiative Parliamentarians for Nuclear Non-Proliferation and Disarmament Treaty on the Prohibition of Nuclear Weapons Notes References Sands, Philippe; and Law, Helen; The United Kingdom's nuclear deterrent:Current and future issies of legality (PDF) for Greenpeace. Singh, Rabinder; and Chinkin, Christine; The Maintenance and Possible Replacement of the Trident Nuclear Missile System Introduction and Summary of Advice for Peacerights United Kingdom Permanent Representative to the Conference on Disarmament Britain's Nuclear Deterrent United Nations General Assembly A/RES/49/75/K: Request for an advisory opinion from the International Court of Justice on the legality of the threat or use of nuclear weapons 90th plenary meeting 15 December 1994. Weiss, Peter; Notes on a Misunderstood Decision: The World Court's Near Perfect Advisory Opinion in the Nuclear Weapons Case , website of the Lawyers' Committee on Nuclear Policy (LCNP) July 22, 1996 ICJ documents ICJ documents relating to the case Legality of the threat or use of nuclear weapons (General List No. 95) 8 July 1996 Summary of the Advisory Opinion Declarations of individual judges: Declaration of President Bedjaoui Declaration of Judge Herczegh Declaration of Judge Shi Declaration of Judge Vereshchetin Declaration of Judge Ferrari Bravo Separate Opinions of individual judges: Separate Opinion of Judge Guillaume Separate Opinion of Judge Ranjeva Separate Opinion of Judge Fleischhauer Dissenting Opinions of individual judges: Dissenting Opinion of Vice-President Schwebel Dissenting Opinion of Judge Oda Dissenting Opinion of Judge Shahabuddeen Dissenting Opinion of Judge Weeramantry Dissenting Opinion of Judge Koroma Dissenting Opinion of Judge Higgins Further reading David, Eric; "The Opinion of the International Court of Justice on the Legality of the Use of Nuclear Weapons" (1997) 316 International Review of the Red Cross 21. Condorelli, Luigi; "Nuclear Weapons: A Weighty Matter for the International Court of Justice" (1997) 316 International Review of the Red Cross 9, 11. Ginger, Ann Fagan; "Looking at the United Nations through The Prism of National Peace Law," 36(2) UN Chronicle62 (Summer 1999). Greenwood, Christopher; "The Advisory Opinion on Nuclear Weapons and the Contribution of the International Court to International Humanitarian Law" (1997) 316 International Review of the Red Cross 65. 
Greenwood, Christopher; "Jus ad Bellum and Jus in Bello in the Nuclear Weapons Advisory Opinion" in Laurence Boisson de Chazournes and Philippe Sands (eds), International Law, the International Court of Justice and Nuclear Weapons (1999) 247, 249. Holdstock, Douglas; and Waterston, Lis; "Nuclear weapons, a continuing threat to health," 355(9214) The Lancet 1544 (29 April 2000). Jeutner, Valentin; "Irresolvable Norm Conflicts in International Law: The Concept of a Legal Dilemma" (Oxford University Press 2017). McNeill, John; "The International Court of Justice Advisory Opinion in the Nuclear Weapons Cases – A First Appraisal" (1997) 316 International Review of the Red Cross 103, 117. Mohr, Manfred; "Advisory Opinion of the International Court of Justice on the Legality of the Use of Nuclear Weapons Under International Law – A Few Thoughts on its Strengths and Weaknesses" (1997) 316 International Review of the Red Cross 92, 94. Moore, Mike; "World Court says mostly no to nuclear weapons," 52(5) Bulletin of the Atomic Scientists, 39 (Sept-October 1996). Moxley, Charles J.; Nuclear Weapons and International Law in the Post Cold War World (Austin & Winfield 2000). Nuclear weapons policy International Court of Justice cases 1996 in case law 1996 in international relations Aggression in international law
Advisory Opinion on the Legality of the Threat or Use of Nuclear Weapons
[ "Biology" ]
4,391
[ "Behavior", "Aggression", "Aggression in international law" ]
512,093
https://en.wikipedia.org/wiki/Sedimentation%20equilibrium
Sedimentation equilibrium in a suspension of different particles, such as molecules, exists when the rate of transport of each material in any one direction due to sedimentation equals the rate of transport in the opposite direction due to diffusion. Sedimentation is due to an external force, such as gravity or centrifugal force in a centrifuge. It was discovered for colloids by Jean Baptiste Perrin for which he received the Nobel Prize in Physics in 1926. Colloid In a colloid, the colloidal particles are said to be in sedimentation equilibrium if the rate of sedimentation is equal to the rate of movement from Brownian motion. For dilute colloids, this is described using the Laplace-Perrin distribution law: where is the colloidal particle volume fraction as a function of vertical distance above reference point , is the colloidal particle volume fraction at reference point , is the buoyant mass of the colloidal particles, is the standard acceleration due to gravity, is the Boltzmann constant, is the absolute temperature, and is the sedimentation length. The buoyant mass is calculated using where is the difference in mass density between the colloidal particles and the suspension medium, and is the colloidal particle volume found using the volume of a sphere ( is the radius of the colloidal particle). Sedimentation length The Laplace-Perrin distribution law can be rearranged to give the sedimentation length . The sedimentation length describes the probability of finding a colloidal particle at a height above the point of reference . At the length above the reference point, the concentration of colloidal particles decreases by a factor of . If the sedimentation length is much greater than the diameter of the colloidal particles (), the particles can diffuse a distance greater than this diameter, and the substance remains a suspension. However, if the sedimentation length is less than the diameter (), the particles can only diffuse by a much shorter length. They will sediment under the influence of gravity and settle to the bottom of the container. The substance can no longer be considered a colloidal suspension. It may become a colloidal suspension again if an action to undertaken to suspend the colloidal particles again, such as stirring the colloid. Example The difference in mass density between the colloidal particles of mass density and the medium of suspension of mass density , and the diameter of the particles, have an influence on the value of . As an example, consider a colloidal suspension of polyethylene particles in water, and three different values for the diameter of the particles: 0.1 μm, 1 μm and 10 μm. The volume of a colloidal particles can be calculated using the volume of a sphere . is the mass density of polyethylene, which is approximately on average 920 kg/m3 and is the mass density of water, which is approximately 1000 kg/m3 at room temperature (293K). Therefore is -80 kg/m3. Generally, decreases with . For the 0.1 μm diameter particle, is larger than the diameter, and the particles will be able to diffuse. For the 10 μm diameter particle, is much smaller than the diameter. As is negative the particles will cream, and the substance will no longer be a colloidal suspension. In this example, the difference is mass density is relatively small. Consider a colloid with particles much denser than polyethylene, for example silicon with a mass density of approximately 2330 kg/m3. If these particles are suspended in water, will be 1330 kg/m3. will decrease as increases. 
For example, if the particles had a diameter of 10 μm the sedimentation length would be 5.92×10−4 μm, one order of magnitude smaller than for polyethylene particles. Also, because the particles are more dense than water, is positive and the particles will sediment. Ultracentrifuge Modern applications use the analytical ultracentrifuge. The theoretical basis for the measurements is developed from the Mason-Weaver equation. The advantage of using analytical sedimentation equilibrium analysis for determining the molecular weight of proteins and their interacting mixtures is that it avoids the need to derive a frictional coefficient, which is otherwise required for the interpretation of dynamic sedimentation. Sedimentation equilibrium can be used to determine molecular mass. It forms the basis for an analytical ultracentrifugation method for measuring molecular masses, such as those of proteins, in solution. References External links Reversible Associations in Structural and Molecular Biology Biochemistry methods
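The worked examples above are easy to reproduce numerically. The following Python sketch is hedged: it assumes the approximate densities quoted in the article (polyethylene about 920 kg/m3, water about 1000 kg/m3, silicon about 2330 kg/m3), room temperature of 293 K and standard gravity, and it simply reports the signed sedimentation length, which comes out negative for particles less dense than the medium (which cream rather than sediment).

import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
g = 9.81            # standard acceleration due to gravity, m/s^2
T = 293.0           # room temperature, K

def sedimentation_length(diameter_m, delta_rho):
    # l_g = k_B*T / (m_b*g), with buoyant mass m_b = delta_rho * particle volume
    volume = (4.0 / 3.0) * math.pi * (diameter_m / 2.0) ** 3
    return k_B * T / (delta_rho * volume * g)

# Polyethylene in water: delta_rho is about -80 kg/m3 (particles cream)
for d in (0.1e-6, 1e-6, 10e-6):
    print("polyethylene, d = %.1f um: l_g = %.3e um" % (d * 1e6, sedimentation_length(d, -80.0) * 1e6))

# Silicon in water: delta_rho is about +1330 kg/m3, d = 10 um
print("silicon, d = 10 um: l_g = %.3e um" % (sedimentation_length(10e-6, 1330.0) * 1e6))
# prints roughly 5.92e-04 um, matching the value quoted in the example above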
Sedimentation equilibrium
[ "Chemistry", "Biology" ]
932
[ "Biochemistry methods", "Biochemistry" ]
512,243
https://en.wikipedia.org/wiki/Lakshmi%20Mittal
Lakshmi Niwas Mittal (; born 15 June 1950 in Sadulpur, Rajasthan, India) is an Indian-born British steel magnate, based in the United Kingdom. He is the executive chairman of ArcelorMittal, the world's second largest steelmaking company, as well as chairman of stainless steel manufacturer Aperam. Mittal owns 38% of ArcelorMittal and holds a 3% stake in EFL Championship side Queens Park Rangers. In 2005, Forbes ranked Mittal as the third-richest person in the world, making him the first Indian citizen to be ranked in the top ten in the publication's annual list of the world's richest people. He was ranked the sixth-richest person in the world by Forbes in 2011, but dropped to 82nd place in March 2015, and only to 130th by October 2024. He is also the "57th-most powerful person" of the 72 individuals named in Forbes''' "Most Powerful People" list for 2015. His daughter Vanisha Mittal's wedding (in 2005) was the second-most expensive in recorded history. Mittal has been a member of the board of directors of Goldman Sachs since 2008. He sits on the World Steel Association's executive committee, and is a member of the Global CEO Council of the Chinese People's Association for Friendship with Foreign Countries, the Foreign Investment Council in Kazakhstan, the World Economic Forum's International Business Council, and the European Round Table of Industrialists. He is also a member of the board of trustees of the Cleveland Clinic. In 2005, The Sunday Times named him "Business Person of 2006", the Financial Times named him "Person of the Year", and Time magazine named him "International Newsmaker of the Year 2006". In 2007, Time magazine included him in their "Time 100" list. Early life and career Mittal was born in a Marwari Hindu family. He studied at Shri Daulatram Nopany Vidyalaya, Calcutta from 1957 to 1964. He graduated from St. Xavier's College, affiliated to the University of Calcutta, with a B.Com degree in the first class. Lakshmi's father, Mohanlal Mittal, ran a steel business, Nippon Denro Ispat. In 1976, because of the curb of steel production by the Indian government, the 26-year-old Mittal opened his first steel factory PT Ispat Indo in Sidoarjo, East Java, Indonesia. In 1989 Mittal purchased the state-owned steel works in Trinidad and Tobago, which were operating at an enormous loss. He turned them into profitable ventures in a year. Until the 1990s, the family's main assets in India were a cold-rolling mill for sheet steels in Nagpur and an alloy steels plant near Pune. Today, the family business, including a large integrated steel plant near Mumbai, is run by his younger brothers Pramod Mittal and Vinod Mittal, but Lakshmi has no connection with it. In 1995 Mittal purchased the Irish Steel plant based in Cork, Ireland, from the government for a nominal fee of IR£1. Only six years later in 2001 it was closed, leaving over 400 people redundant. Subsequent environmental issues at the site have been a cause for criticism. The Irish government sought a High Court judgement that Mittal's company should contribute to the cost of the clean-up of Cork Harbour, but failed. The clean up was expected to cost €70 million. Prior to December 2001, Mittal had acquired assets which he renamed Ispat Mexicana and his Kazakhstani operation Ispat Karmet. That month he renamed Sidex Galati to Ispat Sidex, which he had acquired in November 2001. 
In October 2003, the LNM Group succeeded in concluding the $155 million transaction to pry loose the Romanian government from the control of steel assets Siderurgica Hunedoara and Petrotub Roman, the day after it took over the PHS Steel Group which include Huta Sendzimira, Huta Katowice, Huta Florian and Huta Cedler from the Polish government. Petrotub Roman was renamed Ispat Tepro in the sequel. Mittal successfully employed Marek Dochnal's consultancy to influence Polish officials in the 2003 privatisation of PHS steel group, which was then Poland's largest. Dochnal was later arrested for bribing Polish officials on behalf of Russian agents in a separate affair. In March 2007, the Polish government said it wanted to renegotiate the 2004 sale to ArcelorMittal. Employees of Mittal have accused him of allowing "slave labour" conditions after multiple fatalities in his mines. For example, during December 2004, 23 miners died in explosions in his mines in Kazakhstan caused by faulty gas detectors. In 2006–07, Mittal succeeded in a hostile takeover bid for Arcelor, which he renamed Arcelor Mittal. In so doing he obtained control of amongst others the Usinor steel assets of France, the Arbed steel assets of Luxembourg, and the Aceralia steel assets of Spain. Controversies The Mittal Affair: "Cash for Influence" In 2002, Plaid Cymru MP Adam Price obtained a letter written by Tony Blair to the Romanian Government in support of Mittal's LNM Group steel company, which was in the process of bidding to buy Romania's state-owned industry. This revelation caused controversy, because Mittal had given £125,000 to the British Labour Party the previous year. Although Blair defended his letter as simply "celebrating the success" of a British company, he was criticised because LNM was registered in the Dutch Antilles and employed less than 1% of its workforce in the UK. LNM was a "major global competitor of Britain's own struggling steel industry". Blair's letter hinted that the privatisation of the firm and sale to Mittal might help smooth the way for Romania's entry into the European Union. It also had a passage, removed just prior to Blair's signing of it, describing Mittal as "a friend". In October 2003, the LNM Group succeeded in concluding the transaction to pry loose the Romanian government from the control of steel assets. Social work Sports After witnessing India win only one medal, bronze, in the 2000 Summer Olympics, and one medal, silver, at the 2004 Summer Olympics, Mittal decided to set up the Mittal Champions Trust with $9 million to support ten Indian athletes with world-beating potential. In 2008, Mittal awarded Abhinav Bindra with Rs. 1.5 Crore (Rs. 15 million), for getting India its first individual Olympic gold medal in shooting. ArcelorMittal also provided steel for the construction of the ArcelorMittal Orbit for the 2012 Summer Olympics. For Comic Relief he matched the money raised (~£1 million) on the celebrity special BBC programme, The Apprentice. Mittal had emerged as a leading contender to buy and sell Barclays Premiership clubs Wigan and Everton. However, on 20 December 2007, it was announced that the Mittal family had purchased a 20 per cent shareholding in Queens Park Rangers football club joining Flavio Briatore and Mittal's friend Bernie Ecclestone. As part of the investment Mittal's son-in-law, Amit Bhatia, took a place on the board of directors. 
The combined investment in the struggling club sparked suggestions that Mittal might be looking to join the growing ranks of wealthy individuals investing heavily in English football and emulating similar benefactors such as Roman Abramovich. On 19 February 2010, Briatore resigned as QPR chairman, and sold further shares in the club to Ecclestone, making Ecclestone the single largest shareholder. Education In 2003, the Lakshmi Niwas Mittal, Usha Mittal Foundation and the Government of Rajasthan partnered together to establish a university, the LNM Institute of Information Technology (LNMIIT) in Jaipur as an autonomous non-profit organisation. In 2009, the Foundation along with Bharatiya Vidya Bhavan founded the Usha Lakshmi Mittal Institute of Management in New Delhi. SNDT Women's University renamed the Institute of Technology for Women (ITW) as Usha Mittal Institute of Technology after a large donation from the Lakshmi Niwas Mittal Foundation. He completed his primary and secondary school from Nopany High formerly known as Shri Daulatram Nopany Vidyalaya. Medical In 2008, the Mittals made a donation of £15 million to Great Ormond Street Hospital in London, the largest private contribution the hospital had ever received. The donation was used to help fund their new facility, the Mittal Children's Medical Centre. COVID-19 pandemic He made a donation of ₹100 crores to PM cares fund during the COVID-19 pandemic in India in 2020. Personal life Mittal is married to Usha Dalmia. They have a son Aditya Mittal and a daughter Vanisha Mittal. Lakshmi Mittal has two brothers, Pramod Mittal and Vinod Mittal, and a sister, Seema Lohia, who married Indonesian businessman, Sri Prakash Lohia. His residence at 18–19 Kensington Palace Gardens—which was purchased from Formula One boss Bernie Ecclestone in 2004 for £67 million (US$128 million)—made it the world's most expensive house at the time. The house is decorated with marble taken from the same quarry that supplied the Taj Mahal. The extravagant show of wealth has been referred to as the "Taj Mittal". It has 12 bedrooms, an indoor pool, Turkish baths and parking for 20 cars. He is a lacto-vegetarian. Mittal bought No. 9A Palace Greens, Kensington Gardens, formerly the Philippines Embassy, for £70 million in 2008 for his daughter Vanisha Mittal who is married to Amit Bhatia, a businessman and philanthropist. Mittal threw a lavish "vegetarian reception" for Vanisha in the Palace of Versailles, France. In 2005, he also bought a colonial bungalow for $30 million at No. 22, Dr APJ Abdul Kalam Road, New Delhi, one of the most exclusive streets in India, occupied by embassies and billionaires, and rebuilt it as a house. Personal wealth According to the Sunday Times Rich List 2016, Mittal and his family had an estimated personal net worth of 7.12 billion, a decrease of $2.08 billion on the previous year. Meanwhile, in 2016 Forbes magazine's annual billionaires list assessed estimated Mittal's wealth in 2016 at as the 135th-wealthiest billionaire with a net worth of 8.4 billion. Mittal's net worth peaked in 2008, assessed by The Sunday Times at £27.70 billion, and by Forbes at 45.0 billion, and rated as the fourth-wealthiest individual in the world. As of 2022, he was ranked as the 15th richest man in India by Forbes with a net-worth of US$17.8 billion. According to Forbes, Lakshmi Mittal’s net worth is currently estimated at US$16.4 billion as of 2024. 
He holds the position of the 113th richest individual globally and ranks as the 13th wealthiest person in India. In October 2024, Mittal was ranked 15th on Forbes list of India’s 100 richest tycoons, with a net worth of $16.7 billion. Wealth rankings Awards and honours Tim Bouquet and Byron Ousey – Cold Steel (Little, Brown, 2008) Navalpreet Rangi – Documentary Film (The Man with a Mission, 2010) References External links Profile at Forbes Profile at BBC News Article on Mittal with background on Arcelor takeover bid – Time BBC News Online – "Glimpsing a Fairytale Wedding" Mittal Steel Cleveland Works Article on Mittal Family purchase of Escada – Bloomberg 1950 births Living people Marwari people ArcelorMittal Businesspeople in steel Directors of Goldman Sachs Fellows of King's College London Indian billionaires Indian chief executives Indian emigrants to England Labour Party (UK) donors People from Churu district St. Xavier's College, Kolkata alumni Indian Institute of Social Welfare and Business Management alumni University of Calcutta alumni Rajasthani people Bessemer Gold Medal Queens Park Rangers F.C. directors and chairmen Recipients of the Padma Vibhushan in trade & industry 20th-century Indian businesspeople Businesspeople from Rajasthan Lakshmi Conservative Party (UK) donors
Lakshmi Mittal
[ "Chemistry" ]
2,638
[ "Bessemer Gold Medal", "Chemical engineering awards" ]
512,741
https://en.wikipedia.org/wiki/Cell%20fractionation
In cell biology, cell fractionation is the process used to separate cellular components while preserving the individual functions of each component. This is a method that was originally used to demonstrate the cellular location of various biochemical processes. Other uses of subcellular fractionation are to provide an enriched source of a protein for further purification, and to facilitate the diagnosis of various disease states. Homogenization Tissue is typically homogenized in a buffer solution that is isotonic, to prevent osmotic damage. Mechanisms for homogenization include grinding, mincing, chopping, pressure changes, osmotic shock, freeze-thawing, and ultrasound. The samples are then kept cold to prevent enzymatic damage. Homogenization is the formation of a homogeneous mass of cells (a cell homogenate or cell suspension). It involves grinding cells in a suitable medium, in the presence of certain enzymes, with the correct pH, ionic composition, and temperature; pectinase, for example, digests the middle lamella between plant cells. Filtration This step may not be necessary depending on the source of the cells. Animal tissue, however, is likely to yield connective tissue, which must be removed. Commonly, filtration is achieved either by pouring through gauze or with a suction filter and the relevant grade of ceramic filter. Purification Purification is achieved by differential centrifugation – the sequential increase in gravitational force results in the sequential separation of organelles according to their density. See also Cell disruption Media for cell separation by density: Percoll Ficoll References Biochemical separation processes Fractionation Laboratory techniques
Cell fractionation
[ "Chemistry", "Biology" ]
315
[ "Biochemistry methods", "Fractionation", "Separation processes", "Biochemical separation processes", "nan" ]
512,768
https://en.wikipedia.org/wiki/Differential%20centrifugation
In biochemistry and cell biology, differential centrifugation (also known as differential velocity centrifugation) is a common procedure used to separate organelles and other sub-cellular particles based on their sedimentation rate. Although often applied in biological analysis, differential centrifugation is a general technique also suitable for crude purification of non-living suspended particles (e.g. nanoparticles, colloidal particles, viruses). In a typical case where differential centrifugation is used to analyze cell-biological phenomena (e.g. organelle distribution), a tissue sample is first lysed to break the cell membranes and release the organelles and cytosol. The lysate is then subjected to repeated centrifugations, where particles that sediment sufficiently quickly at a given centrifugal force for a given time form a compact "pellet" at the bottom of the centrifugation tube. After each centrifugation, the supernatant (non-pelleted solution) is removed from the tube and re-centrifuged at an increased centrifugal force and/or time. Differential centrifugation is suitable for crude separations on the basis of sedimentation rate, but more fine grained purifications may be done on the basis of density through equilibrium density-gradient centrifugation. Thus, the differential centrifugation method is the successive pelleting of particles from the previous supernatant, using increasingly higher centrifugation forces. Cellular organelles separated by differential centrifugation maintain a relatively high degree of normal functioning, as long as they are not subject to denaturing conditions during isolation. Theory In a viscous fluid, the rate of sedimentation of a given suspended particle (as long as the particle is denser than the fluid) is largely a function of the following factors: Gravitational force Difference in density Fluid viscosity Particle size and shape Larger particles sediment more quickly and at lower centrifugal forces. If a particle is less dense than the fluid (e.g., fats in water), the particle will not sediment, but rather will float, regardless of strength of the g-force experienced by the particle. Centrifugal force separates components not only on the basis of density, but also of particle size and shape. In contrast, a more specialized equilibrium density-gradient centrifugation produces a separation profile dependent on particle-density alone, and therefore is suitable for more fine-grained separations. High g-force makes sedimentation of small particles much faster than Brownian diffusion, even for very small (nanoscale) particles. When a centrifuge is used, Stokes' law must be modified to account for the variation in g-force with distance from the center of rotation. where D is the minimum diameter of the particles expected to sediment (m) η (or μ) is the fluid dynamic viscosity (Pa.s) Rf is the final radius of rotation (m) Ri is the initial radius of rotation (m) ρp is particle volumetric mass density (kg/m3) ρf is the fluid volumetric mass density (kg/m3) ω is the angular velocity (radian/s) t is the time required to sediment from Ri to Rf (s) Procedure Differential centrifugation can be used with intact particles (e.g. biological cells, microparticles, nanoparticles), or used to separate the component parts of a given particle. 
Using the example of a separation of eukaryotic organelles from intact cells, the cell must first be lysed and homogenized (ideally by a gentle technique, such as Dounce homogenization; harsher techniques or over-homogenization will lead to a lower proportion of intact organelles). Once the crude organelle extract is obtained, it may be subjected to varying centrifugation speeds to separate the organelles: Ultracentrifugation The lysed sample is now ready for centrifugation in an ultracentrifuge. An ultracentrifuge consists of a refrigerated, low-pressure chamber containing a rotor which is driven by an electrical motor capable of high-speed rotation. Samples are placed in tubes within or attached to the rotor. Rotational speed may reach up to 100,000 rpm for floor models, or 150,000 rpm for bench-top models (Beckman Optima Max-XP or Sorvall MTX150 or himac CS150NX), creating centrifugal forces of 800,000 g to 1,000,000 g. This force causes sedimentation of macromolecules, and can even cause non-uniform distributions of small molecules. Since different fragments of a cell have different sizes and densities, each fragment will settle into a pellet at a different minimum centrifugal force. Thus, separation of the sample into different layers can be done by first centrifuging the original lysate under weak forces, removing the pellet, then exposing the subsequent supernatants to sequentially greater centrifugal fields. Each time a portion of different density is sedimented to the bottom of the container and extracted, and repeated application produces a rank of layers which includes different parts of the original sample. Additional steps can be taken to further refine each of the obtained pellets. Sedimentation depends on the mass, shape, and partial specific volume of a macromolecule, as well as solvent density, rotor size and rate of rotation. The sedimentation velocity can be monitored during the experiment to calculate molecular weight. Values of the sedimentation coefficient (S) can be calculated. Large values of S (faster sedimentation rate) correspond to larger molecular weight. Denser particles sediment more rapidly. Elongated proteins have larger frictional coefficients and sediment more slowly. Differences between differential and density gradient centrifugation The difference between differential and density gradient centrifugation techniques is that the latter method uses solutions of different densities (e.g. sucrose, Ficoll, Percoll) or gels through which the sample passes. This separates the sample into layers by relative density, based on the principle that molecules settle down under a centrifugal force until they reach a medium with the same density as their own. The degree of separation or number of layers depends on the solution or gel. Differential centrifugation, on the other hand, does not utilize a density gradient, and the centrifugation is carried out at increasing speeds. The different centrifugation speeds often create separation into not more than two fractions, so the supernatant can be separated further in additional centrifugation steps. For that, the centrifugation speed has to be increased at each step until the desired particles are separated. In contrast, density gradient centrifugation is usually performed at just one centrifugation speed. See also Buoyant density ultracentrifugation Jerome Vinograd Svedberg References Cell biology Centrifugation Industrial processes Fractionation Laboratory techniques
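The Theory section above lists the variables of the modified Stokes' law, but the displayed equation itself did not survive extraction. The sketch below therefore uses the standard form consistent with that variable list, D = sqrt(18*eta*ln(Rf/Ri) / ((rho_p - rho_f)*omega^2*t)); treat it as an assumption rather than a quotation, and note that the numerical inputs are invented purely for illustration.

import math

def min_sedimenting_diameter(eta, R_i, R_f, rho_p, rho_f, rpm, t):
    # Smallest particle diameter (m) expected to travel from radius R_i to R_f
    # (both in metres) within time t (s) at the given rotor speed (rpm)
    omega = rpm * 2.0 * math.pi / 60.0  # angular velocity, rad/s
    return math.sqrt(18.0 * eta * math.log(R_f / R_i)
                     / ((rho_p - rho_f) * omega ** 2 * t))

# Hypothetical numbers: water-like buffer and a small benchtop rotor
d = min_sedimenting_diameter(eta=1.0e-3,                  # Pa*s
                             R_i=0.05, R_f=0.08,          # m
                             rho_p=1100.0, rho_f=1000.0,  # kg/m3
                             rpm=15000, t=600.0)          # 10-minute spin
print("minimum pelleted diameter: %.2f um" % (d * 1e6))

Particles larger than the printed diameter are expected to pellet under these conditions, while smaller ones remain in the supernatant for the next, faster spin, which is exactly the successive-pelleting scheme described above.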
Differential centrifugation
[ "Chemistry", "Biology" ]
1,461
[ "Fractionation", "Centrifugation", "Cell biology", "Separation processes", "nan" ]
513,091
https://en.wikipedia.org/wiki/Single-nucleotide%20polymorphism
In genetics and bioinformatics, a single-nucleotide polymorphism (SNP ; plural SNPs ) is a germline substitution of a single nucleotide at a specific position in the genome. Although certain definitions require the substitution to be present in a sufficiently large fraction of the population (e.g. 1% or more), many publications do not apply such a frequency threshold. For example, a G nucleotide present at a specific location in a reference genome may be replaced by an A in a minority of individuals. The two possible nucleotide variations of this SNP – G or A – are called alleles. SNPs can help explain differences in susceptibility to a wide range of diseases across a population. For example, a common SNP in the CFH gene is associated with increased risk of age-related macular degeneration. Differences in the severity of an illness or response to treatments may also be manifestations of genetic variations caused by SNPs. For example, two common SNPs in the APOE gene, rs429358 and rs7412, lead to three major APO-E alleles with different associated risks for development of Alzheimer's disease and age at onset of the disease. Single nucleotide substitutions with an allele frequency of less than 1% are sometimes called single-nucleotide variants (SNVs). "Variant" may also be used as a general term for any single nucleotide change in a DNA sequence, encompassing both common SNPs and rare mutations, whether germline or somatic. The term SNV has therefore been used to refer to point mutations found in cancer cells. DNA variants must also commonly be taken into consideration in molecular diagnostics applications such as designing PCR primers to detect viruses, in which the viral RNA or DNA sample may contain SNVs. However, this nomenclature uses arbitrary distinctions (such as an allele frequency of 1%) and is not used consistently across all fields; the resulting disagreement has prompted calls for a more consistent framework for naming differences in DNA sequences between two samples. Types Single-nucleotide polymorphisms may fall within coding sequences of genes, non-coding regions of genes, or in the intergenic regions (regions between genes). SNPs within a coding sequence do not necessarily change the amino acid sequence of the protein that is produced, due to degeneracy of the genetic code. SNPs in the coding region are of two types: synonymous SNPs and nonsynonymous SNPs. Synonymous SNPs do not affect the protein sequence, while nonsynonymous SNPs change the amino acid sequence of protein. SNPs in non-coding regions can manifest in a higher risk of cancer, and may affect mRNA structure and disease susceptibility. Non-coding SNPs can also alter the level of expression of a gene, as an eQTL (expression quantitative trait locus). SNPs in coding regions: synonymous substitutions by definition do not result in a change of amino acid in the protein, but still can affect its function in other ways. An example would be a seemingly silent mutation in the multidrug resistance gene 1 (MDR1), which codes for a cellular membrane pump that expels drugs from the cell, can slow down translation and allow the peptide chain to fold into an unusual conformation, causing the mutant pump to be less functional (in MDR1 protein e.g. C1236T polymorphism changes a GGC codon to GGT at amino acid position 412 of the polypeptide (both encode glycine) and the C3435T polymorphism changes ATC to ATT at position 1145 (both encode isoleucine)). 
nonsynonymous substitutions: missense – single change in the base results in change in amino acid of protein and its malfunction which leads to disease (e.g. c.1580G>T SNP in LMNA gene – position 1580 (nt) in the DNA sequence (CGT codon) causing the guanine to be replaced with the thymine, yielding CTT codon in the DNA sequence, results at the protein level in the replacement of the arginine by the leucine in the position 527, at the phenotype level this manifests in overlapping mandibuloacral dysplasia and progeria syndrome) nonsense – point mutation in a sequence of DNA that results in a premature stop codon, or a nonsense codon in the transcribed mRNA, and in a truncated, incomplete, and usually nonfunctional protein product (e.g. Cystic fibrosis caused by the G542X mutation in the cystic fibrosis transmembrane conductance regulator gene). SNPs that are not in protein-coding regions may still affect gene splicing, transcription factor binding, messenger RNA degradation, or the sequence of noncoding RNA. Gene expression affected by this type of SNP is referred to as an eSNP (expression SNP) and may be upstream or downstream from the gene. Frequency More than 600 million SNPs have been identified across the human genome in the world's population. A typical genome differs from the reference human genome at 4–5 million sites, most of which (more than 99.9%) consist of SNPs and short indels. Within a genome The genomic distribution of SNPs is not homogenous; SNPs occur in non-coding regions more frequently than in coding regions or, in general, where natural selection is acting and "fixing" the allele (eliminating other variants) of the SNP that constitutes the most favorable genetic adaptation. Other factors, like genetic recombination and mutation rate, can also determine SNP density. SNP density can be predicted by the presence of microsatellites: AT microsatellites in particular are potent predictors of SNP density, with long (AT)(n) repeat tracts tending to be found in regions of significantly reduced SNP density and low GC content. Within a population Since there are variations between human populations, a SNP allele that is common in one geographical or ethnic group may be rarer in another. However, this pattern of variation is relatively rare; in a global sample of 67.3 million SNPs, the Human Genome Diversity Project "found no such private variants that are fixed in a given continent or major region. The highest frequencies are reached by a few tens of variants present at >70% (and a few thousands at >50%) in Africa, the Americas, and Oceania. By contrast, the highest frequency variants private to Europe, East Asia, the Middle East, or Central and South Asia reach just 10 to 30%." Within a population, SNPs can be assigned a minor allele frequency (MAF)—the lowest allele frequency at a locus that is observed in a particular population. This is simply the lesser of the two allele frequencies for single-nucleotide polymorphisms. With this knowledge, scientists have developed new methods in analyzing population structures in less studied species. By using pooling techniques, the cost of the analysis is significantly lowered. These techniques are based on sequencing a population in a pooled sample instead of sequencing every individual within the population by itself. With new bioinformatics tools, there is a possibility of investigating population structure, gene flow, and gene migration by observing the allele frequencies within the entire population. 
With these protocols there is a possibility for combining the advantages of SNPs with microsatellite markers. However, there is information lost in the process, such as linkage disequilibrium and zygosity information. Applications Association studies (such as GWAS, see below) can determine whether a genetic variant is associated with a disease or trait. A tag SNP is a representative single-nucleotide polymorphism in a region of the genome with high linkage disequilibrium (the non-random association of alleles at two or more loci). Tag SNPs are useful in whole-genome SNP association studies, in which hundreds of thousands of SNPs across the entire genome are genotyped. Haplotype mapping: sets of alleles or DNA sequences can be clustered so that a single SNP can identify many linked SNPs. Linkage disequilibrium (LD), a term used in population genetics, indicates non-random association of alleles at two or more loci, not necessarily on the same chromosome. It refers to the phenomenon that SNP alleles or DNA sequences that are close together in the genome tend to be inherited together. LD can be affected by two parameters (among other factors, such as population stratification): The distance between the SNPs (the larger the distance, the lower the LD) Recombination rate (the lower the recombination rate, the higher the LD) In genetic epidemiology SNPs are used to estimate transmission clusters. Importance Variations in the DNA sequences of humans can affect how humans develop diseases and respond to pathogens, chemicals, drugs, vaccines, and other agents. SNPs are also critical for personalized medicine. Examples include biomedical research, forensics, pharmacogenetics, and disease causation, as outlined below. Clinical research Genome-wide association study (GWAS) One of the main contributions of SNPs in clinical research is the genome-wide association study (GWAS). Genome-wide genetic data can be generated by multiple technologies, including SNP arrays and whole genome sequencing. GWAS is commonly used to identify SNPs associated with diseases or clinical phenotypes or traits. Since GWAS is a genome-wide assessment, a large sample size is required to obtain sufficient statistical power to detect all possible associations. Some SNPs have relatively small effects on diseases or clinical phenotypes or traits. To estimate study power, the genetic model for the disease needs to be considered, such as dominant, recessive, or additive effects. Due to genetic heterogeneity, GWAS analysis must be adjusted for race. Candidate gene association study The candidate gene association study was commonly used in genetic research before the advent of high-throughput genotyping and sequencing technologies. A candidate gene association study investigates a limited number of pre-specified SNPs for association with diseases or clinical phenotypes or traits, making it a hypothesis-driven approach. Since only a limited number of SNPs are tested, a relatively small sample size is sufficient to detect an association. The candidate gene approach is also commonly used to confirm findings from GWAS in independent samples. Homozygosity mapping in disease Genome-wide SNP data can be used for homozygosity mapping. Homozygosity mapping is a method used to identify homozygous autosomal recessive loci, which can be a powerful tool to map genomic regions or genes that are involved in disease pathogenesis.
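To make the minor allele frequency and linkage disequilibrium quantities above concrete, the following short Python sketch computes a MAF from genotype counts and D, D′ and r² from two-locus haplotype counts. It is an illustration only; the counts, allele labels, and function names are made-up examples for this article, not values from any of the studies discussed here.

def minor_allele_frequency(n_aa, n_ab, n_bb):
    """Minor allele frequency at a biallelic SNP from genotype counts."""
    n_a = 2 * n_aa + n_ab            # copies of allele A
    n_b = 2 * n_bb + n_ab            # copies of allele B
    p_a = n_a / (n_a + n_b)
    return min(p_a, 1.0 - p_a)       # the lesser of the two allele frequencies

def linkage_disequilibrium(hap_counts):
    """D, D' and r^2 for two biallelic loci, given haplotype counts
    keyed as 'AB', 'Ab', 'aB', 'ab'."""
    n = sum(hap_counts.values())
    p_ab = hap_counts['AB'] / n
    p_a = (hap_counts['AB'] + hap_counts['Ab']) / n   # frequency of allele A at locus 1
    p_b = (hap_counts['AB'] + hap_counts['aB']) / n   # frequency of allele B at locus 2
    d = p_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = d / d_max if d_max else 0.0
    r2 = d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d, d_prime, r2

print(minor_allele_frequency(n_aa=70, n_ab=25, n_bb=5))                  # MAF = 0.175
print(linkage_disequilibrium({'AB': 60, 'Ab': 10, 'aB': 10, 'ab': 20}))  # D = 0.11, D' ~ 0.52, r^2 ~ 0.27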
Methylation patterns Recently, preliminary results have reported SNPs as important components of the epigenetic program in organisms. Moreover, cosmopolitan studies in European and South Asiatic populations have revealed the influence of SNPs in the methylation of specific CpG sites. In addition, meQTL enrichment analysis using GWAS databases has demonstrated that those associations are important for the prediction of biological traits. Forensic sciences SNPs have historically been used to match a forensic DNA sample to a suspect, but this use has largely been made obsolete by advancing STR-based DNA fingerprinting techniques. However, the development of next-generation sequencing (NGS) technology may allow more opportunities for the use of SNPs to provide phenotypic clues such as ethnicity, hair color, and eye color with a good probability of a match. This can additionally be applied to increase the accuracy of facial reconstructions by providing information that may otherwise be unknown, and this information can be used to help identify suspects even without an STR DNA profile match. Drawbacks of using SNPs rather than STRs are that SNPs yield less information than STRs, so more SNPs are needed for analysis before a profile of a suspect can be created. Additionally, SNPs heavily rely on the presence of a database for comparative analysis of samples. However, in instances with degraded or small volume samples, SNP techniques are an excellent alternative to STR methods. SNPs (as opposed to STRs) offer an abundance of potential markers, can be fully automated, and allow a possible reduction of the required fragment length to less than 100 bp. Pharmacogenetics Pharmacogenetics focuses on identifying genetic variations, including SNPs, associated with differential responses to treatment. Many drug metabolizing enzymes, drug targets, or target pathways can be influenced by SNPs. The SNPs involved in drug metabolizing enzyme activities can change drug pharmacokinetics, while the SNPs involved in a drug target or its pathway can change drug pharmacodynamics. Therefore, SNPs are potential genetic markers that can be used to predict drug exposure or effectiveness of the treatment. Genome-wide pharmacogenetic study is called pharmacogenomics. Pharmacogenetics and pharmacogenomics are important in the development of precision medicine, especially for life-threatening diseases such as cancers. Disease Only a small proportion of the SNPs in the human genome may have an impact on human diseases. Large-scale GWAS have been done for the most important human diseases, including heart diseases, metabolic diseases, autoimmune diseases, and neurodegenerative and psychiatric disorders. Most of the SNPs with relatively large effects on these diseases have been identified. These findings have significantly improved understanding of disease pathogenesis and molecular pathways, and facilitated development of better treatments. Further GWAS with larger sample sizes will reveal SNPs with relatively small effects on diseases. For common and complex diseases, such as type-2 diabetes, rheumatoid arthritis, and Alzheimer's disease, multiple genetic factors are involved in disease etiology. In addition, gene-gene interaction and gene-environment interaction also play an important role in disease initiation and progression. Examples rs6311 and rs6313 are SNPs in the serotonin 5-HT2A receptor gene on human chromosome 13. The SNP −3279C/A (rs3761548), located in the promoter region of the Foxp3 gene, might be involved in cancer progression.
A SNP in the F5 gene causes Factor V Leiden thrombophilia. rs3091244 is an example of a triallelic SNP in the CRP gene on human chromosome 1. TAS2R38 codes for PTC tasting ability, and contains 6 annotated SNPs. rs148649884 and rs138055828 in the FCN1 gene encoding M-ficolin crippled the ligand-binding capability of the recombinant M-ficolin. rs12821256 on a cis-regulatory module changes the amount of transcription of the KIT ligand gene. Among northern Europeans, high levels of transcription lead to brown hair, and low levels lead to blond hair. This is an example of an overt but non-pathological phenotype change caused by a single SNP. An intronic SNP in the DNA mismatch repair gene PMS2 (rs1059060, Ser775Asn) is associated with increased sperm DNA damage and risk of male infertility. Databases As there are for genes, bioinformatics databases exist for SNPs. dbSNP is a SNP database from the National Center for Biotechnology Information (NCBI); dbSNP has listed 149,735,377 SNPs in humans. Kaviar is a compendium of SNPs from multiple data sources including dbSNP. SNPedia is a wiki-style database supporting personal genome annotation, interpretation and analysis. The OMIM database describes the association between polymorphisms and diseases (e.g., gives diseases in text form) dbSAP – single amino-acid polymorphism database for protein variation detection The Human Gene Mutation Database provides gene mutations causing or associated with human inherited diseases and functional SNPs The International HapMap Project, where researchers are identifying Tag SNPs to be able to determine the collection of haplotypes present in each subject. GWAS Central allows users to visually interrogate the actual summary-level association data in one or more genome-wide association studies. The International SNP Map working group mapped the sequence flanking each SNP by alignment to the genomic sequence of large-insert clones in GenBank. These alignments were converted to the chromosomal coordinates shown in Table 1. This list has greatly increased since, with, for instance, the Kaviar database now listing 162 million single nucleotide variants (SNVs). Nomenclature The nomenclature for SNPs includes several variations for an individual SNP, while lacking a common consensus. The rs### standard has been adopted by dbSNP and uses the prefix "rs", for "reference SNP", followed by a unique and arbitrary number. SNPs are frequently referred to by their dbSNP rs number, as in the examples above. The Human Genome Variation Society (HGVS) uses a standard which conveys more information about the SNP. Examples are: c.76A>T: "c." for coding region, followed by a number for the position of the nucleotide, followed by a one-letter abbreviation for the nucleotide (A, C, G, T, or U), followed by a greater than sign (">") to indicate substitution, followed by the abbreviation of the nucleotide which replaces the former. p.Ser123Arg: "p." for protein, followed by a three-letter abbreviation for the amino acid, followed by a number for the position of the amino acid, followed by the abbreviation of the amino acid which replaces the former. SNP analysis SNPs can be easily assayed because they contain only two possible alleles and three possible genotypes involving the two alleles: homozygous A, homozygous B and heterozygous AB, leading to many possible techniques for analysis.
Some include: DNA sequencing; capillary electrophoresis; mass spectrometry; single-strand conformation polymorphism (SSCP); single base extension; electrochemical analysis; denaturing HPLC and gel electrophoresis; restriction fragment length polymorphism; and hybridization analysis. Programs for prediction of SNP effects An important group of SNPs are those that correspond to missense mutations causing amino acid changes at the protein level. A point mutation of a particular residue can have a range of effects on protein function (from no effect to complete disruption of its function). Usually, a change between amino acids of similar size and physico-chemical properties (e.g. substitution of leucine by valine) has a mild effect, and the opposite holds for dissimilar substitutions. Similarly, if a SNP disrupts secondary structure elements (e.g. substitution to proline in an alpha helix region), such a mutation may affect the whole protein structure and function. Using these simple rules, and many other machine-learning-derived rules, a group of programs for the prediction of SNP effects has been developed: SIFT This program provides insight into how a laboratory-induced missense or nonsynonymous mutation will affect protein function based on physical properties of the amino acid and sequence homology. LIST (Local Identity and Shared Taxa) estimates the potential deleteriousness of mutations resulting from alteration of their protein functions. It is based on the assumption that variations observed in closely related species are more significant when assessing conservation compared to those in distantly related species. SNAP2 SuSPect PolyPhen-2 PredictSNP MutationTaster: official website Variant Effect Predictor from the Ensembl project SNPViz: This program provides a 3D representation of the protein affected, highlighting the amino acid change so doctors can determine pathogenicity of the mutant protein. PROVEAN PhyreRisk is a database which maps variants to experimental and predicted protein structures. Missense3D is a tool which provides a stereochemical report on the effect of missense variants on protein structure. See also Affymetrix HapMap Illumina International HapMap Project Short tandem repeat (STR) Single-base extension SNP array SNP genotyping SNPedia Snpstr SNV calling from NGS data Suspension array technology Tag SNP TaqMan Variome References Further reading Human Genome Project Information — SNP Fact Sheet External links NCBI resources – Introduction to SNPs from NCBI The SNP Consortium LTD – SNP search NCBI dbSNP database – "a central repository for both single base nucleotide substitutions and short deletion and insertion polymorphisms" HGMD – the Human Gene Mutation Database, includes rare mutations and functional SNPs GWAS Central – a central database of summary-level genetic association findings 1000 Genomes Project – A Deep Catalog of Human Genetic Variation WatCut – an online tool for the design of SNP-RFLP assays SNPStats – SNPStats, a web tool for analysis of genetic association studies Restriction HomePage – a set of tools for DNA restriction and SNP detection, including design of mutagenic primers American Association for Cancer Research Cancer Concepts Factsheet on SNPs PharmGKB – The Pharmacogenetics and Pharmacogenomics Knowledge Base, a resource for SNPs associated with drug response and disease outcomes. GEN-SNiP – Online tool that identifies polymorphisms in test DNA sequences.
Rules for Nomenclature of Genes, Genetic Markers, Alleles, and Mutations in Mouse and Rat HGNC Guidelines for Human Gene Nomenclature SNP effect predictor with galaxy integration Open SNP – a portal for sharing own SNP test results dbSAP – SNP database for protein variation detection Molecular biology Population genetics DNA Genetic genealogy Biotechnology Mutation
Single-nucleotide polymorphism
[ "Chemistry", "Biology" ]
4,627
[ "Single-nucleotide polymorphisms", "Biodiversity", "Molecular biology" ]
513,128
https://en.wikipedia.org/wiki/Airy%20disk
In optics, the Airy disk (or Airy disc) and Airy pattern are descriptions of the best-focused spot of light that a perfect lens with a circular aperture can make, limited by the diffraction of light. The Airy disk is of importance in physics, optics, and astronomy. The diffraction pattern resulting from a uniformly illuminated, circular aperture has a bright central region, known as the Airy disk, which together with the series of concentric rings around it is called the Airy pattern. Both are named after George Biddell Airy. The disk and rings phenomenon had been known prior to Airy; John Herschel described the appearance of a bright star seen through a telescope under high magnification in an 1828 article on light for the Encyclopedia Metropolitana. Airy wrote the first full theoretical treatment explaining the phenomenon (his 1835 "On the Diffraction of an Object-glass with Circular Aperture"). Mathematically, the diffraction pattern is characterized by the wavelength of light illuminating the circular aperture, and the aperture's size. The appearance of the diffraction pattern is additionally characterized by the sensitivity of the eye or other detector used to observe the pattern. The most important application of this concept is in cameras, microscopes and telescopes. Due to diffraction, the smallest point to which a lens or mirror can focus a beam of light is the size of the Airy disk. Even if one were able to make a perfect lens, there is still a limit to the resolution of an image created by such a lens. An optical system in which the resolution is no longer limited by imperfections in the lenses but only by diffraction is said to be diffraction limited. Size Far from the aperture, the angle at which the first minimum occurs, measured from the direction of incoming light, is given by the approximate formula sin θ ≈ 1.22 λ/d or, for small angles, simply θ ≈ 1.22 λ/d, where θ is in radians, λ is the wavelength of the light in meters, and d is the diameter of the aperture in meters. The full width at half maximum of the central peak is given approximately by 1.03 λ/d (in radians, for small angles). Airy expressed this relation in terms of s, the angle of first minimum in seconds of arc, and a, the radius of the aperture in inches, with the wavelength of light assumed to be 0.000022 inches (560 nm; the mean of visible wavelengths). This is equal to the angular resolution of a circular aperture. The Rayleigh criterion for barely resolving two objects that are point sources of light, such as stars seen through a telescope, is that the center of the Airy disk for the first object occurs at the first minimum of the Airy disk of the second. This means that the angular resolution of a diffraction-limited system is given by the same formulae. However, while the angle at which the first minimum occurs (which is sometimes described as the radius of the Airy disk) depends only on wavelength and aperture size, the appearance of the diffraction pattern will vary with the intensity (brightness) of the light source. Because any detector (eye, film, digital) used to observe the diffraction pattern can have an intensity threshold for detection, the full diffraction pattern may not be apparent. In astronomy, the outer rings are frequently not apparent even in a highly magnified image of a star. It may be that none of the rings are apparent, in which case the star image appears as a disk (central maximum only) rather than as a full diffraction pattern. Furthermore, fainter stars will appear as smaller disks than brighter stars, because less of their central maximum reaches the threshold of detection.
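As a numerical illustration of the size formulas above, the short Python sketch below evaluates the first-minimum angle and the approximate FWHM for an assumed example aperture (a 100 mm objective observing at 550 nm; these example values are chosen for illustration and are not taken from Airy's work or from any cited measurement).

import math

wavelength = 550e-9   # metres (assumed example value)
diameter = 0.100      # metres (assumed example aperture)

theta_min = 1.22 * wavelength / diameter    # first minimum, radians (small-angle form)
theta_fwhm = 1.03 * wavelength / diameter   # full width at half maximum, radians

to_arcsec = 180.0 / math.pi * 3600.0
print(f"first minimum: {theta_min:.2e} rad ({theta_min * to_arcsec:.2f} arcsec)")
print(f"FWHM:          {theta_fwhm:.2e} rad ({theta_fwhm * to_arcsec:.2f} arcsec)")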
While in theory all stars or other "point sources" of a given wavelength and seen through a given aperture have the same Airy disk radius characterized by the above equation (and the same diffraction pattern size), differing only in intensity, the appearance is that fainter sources appear as smaller disks, and brighter sources appear as larger disks. This was described by Airy in his original work: The rapid decrease of light in the successive rings will sufficiently explain the visibility of two or three rings with a very bright star and the non-visibility of rings with a faint star. The difference of the diameters of the central spots (or spurious disks) of different stars ... is also fully explained. Thus the radius of the spurious disk of a faint star, where light of less than half the intensity of the central light makes no impression on the eye, is determined by [s = 1.17/a], whereas the radius of the spurious disk of a bright star, where light of 1/10 the intensity of the central light is sensible, is determined by [s = 1.97/a]. Despite this feature of Airy's work, the radius of the Airy disk is often given as being simply the angle of first minimum, even in standard textbooks. In reality, the angle of first minimum is a limiting value for the size of the Airy disk, and not a definite radius. Examples Cameras If two objects imaged by a camera are separated by an angle small enough that their Airy disks on the camera detector start overlapping, the objects cannot be clearly separated any more in the image, and they start blurring together. Two objects are said to be just resolved when the maximum of the first Airy pattern falls on top of the first minimum of the second Airy pattern (the Rayleigh criterion). Therefore, the smallest angular separation two objects can have before they significantly blur together is given, as stated above, by θ ≈ 1.22 λ/d. Thus, the ability of the system to resolve detail is limited by the ratio of λ/d. The larger the aperture for a given wavelength, the finer the detail that can be distinguished in the image. This can also be expressed as x/f ≈ 1.22 λ/d, where x is the separation of the images of the two objects on the film and f is the distance from the lens to the film. If we take the distance from the lens to the film to be approximately equal to the focal length of the lens, we find x ≈ 1.22 λ f/d, but f/d is the f-number N of a lens, so x ≈ 1.22 λ N. A typical setting for use on an overcast day would be f/8 (see Sunny 16 rule). For violet, the shortest wavelength visible light, the wavelength λ is about 420 nanometers (see cone cells for sensitivity of S cone cells). This gives a value for x of about 4 μm. In a digital camera, making the pixels of the image sensor smaller than half this value (one pixel for each object, one for each space between) would not significantly increase the captured image resolution. However, it may improve the final image by over-sampling, allowing noise reduction. The human eye The fastest f-number for the human eye is about 2.1, corresponding to a diffraction-limited point spread function with approximately 1 μm diameter. However, at this f-number, spherical aberration limits visual acuity, while a 3 mm pupil diameter (f/5.7) approximates the resolution achieved by the human eye. The maximum density of cones in the human fovea is approximately 170,000 per square millimeter, which implies that the cone spacing in the human eye is about 2.5 μm, approximately the diameter of the point spread function at f/5.
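The camera figure quoted above can be checked with a few lines of Python. This is only an illustrative sketch: the 420 nm wavelength and the f/8 setting are the values already given in the text, while the other f-numbers are added purely for comparison and are not implied by the article.

wavelength = 420e-9                 # metres, violet light as in the text
for f_number in (2.8, 8, 22):       # f/8 is the overcast-day case above; others for comparison
    x = 1.22 * wavelength * f_number
    print(f"f/{f_number}: diffraction-limited separation ~ {x * 1e6:.1f} micrometres")
# The f/8 line reproduces the ~4 um figure quoted above.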
Focused laser beam A circular laser beam with uniform intensity across the circle (a flat-top beam) focused by a lens will form an Airy disk pattern at the focus. The size of the Airy disk determines the laser intensity at the focus. Aiming sight Some weapon aiming sights (e.g. FN FNC) require the user to align a peep sight (rear, nearby sight, i.e. which will be out of focus) with a tip (which should be focused and overlaid on the target) at the end of the barrel. When looking through the peep sight, the user will notice an Airy disk that will help center the sight over the pin. Conditions for observation Light from a uniformly illuminated circular aperture (or from a uniform, flattop beam) will exhibit an Airy diffraction pattern far away from the aperture due to Fraunhofer diffraction (far-field diffraction). The conditions for being in the far field and exhibiting an Airy pattern are: the incoming light illuminating the aperture is a plane wave (no phase variation across the aperture), the intensity is constant over the area of the aperture, and the distance from the aperture where the diffracted light is observed (the screen distance) is large compared to the aperture size, and the radius of the aperture is not too much larger than the wavelength of the light. The last two conditions can be formally combined into the requirement that the screen distance be large compared with a²/λ, where a is the radius of the aperture. In practice, the conditions for uniform illumination can be met by placing the source of the illumination far from the aperture. If the conditions for far field are not met (for example if the aperture is large), the far-field Airy diffraction pattern can also be obtained on a screen much closer to the aperture by using a lens right after the aperture (or the lens itself can form the aperture). The Airy pattern will then be formed at the focus of the lens rather than at infinity. Hence, the focal spot of a uniform circular laser beam (a flattop beam) focused by a lens will also be an Airy pattern. In a camera or imaging system an object far away gets imaged onto the film or detector plane by the objective lens, and the far field diffraction pattern is observed at the detector. The resulting image is a convolution of the ideal image with the Airy diffraction pattern due to diffraction from the iris aperture or due to the finite size of the lens. This leads to the finite resolution of a lens system described above. Mathematical formulation The intensity of the Airy pattern follows the Fraunhofer diffraction pattern of a circular aperture, given by the squared modulus of the Fourier transform of the circular aperture: I(θ) = I0 [2 J1(ka sin θ)/(ka sin θ)]², where I0 is the maximum intensity of the pattern at the Airy disc center, J1 is the Bessel function of the first kind of order one, k = 2π/λ is the wavenumber, a is the radius of the aperture, and θ is the angle of observation, i.e. the angle between the axis of the circular aperture and the line between aperture center and observation point. In terms of distances, sin θ = q/√(q² + R²) ≈ q/R, where q is the radial distance from the observation point to the optical axis and R is its distance to the aperture. Note that the Airy disk as given by the above expression is only valid for large R, where Fraunhofer diffraction applies; calculation of the shadow in the near-field must rather be handled using Fresnel diffraction. However the exact Airy pattern does appear at a finite distance if a lens is placed at the aperture. Then the Airy pattern will be perfectly focussed at the distance given by the lens's focal length (assuming collimated light incident on the aperture) given by the above equations.
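For readers who want to evaluate the pattern numerically, the sketch below computes the normalised intensity (2 J1(x)/x)² on a grid using SciPy's Bessel functions and locates the first few dark rings; it is an illustration written for this article, not part of the original treatment, and the encircled-energy fractions it prints are the same ones quoted in the following paragraphs.

import numpy as np
from scipy.special import j0, j1

def airy_intensity(x):
    """Normalised Airy intensity (2 J1(x)/x)^2, with x = k a sin(theta)."""
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)                       # limiting value 1 at x = 0
    nz = x != 0
    out[nz] = (2.0 * j1(x[nz]) / x[nz]) ** 2
    return out

def encircled_energy(x):
    """Fraction of the total power inside radius x: 1 - J0(x)^2 - J1(x)^2."""
    return 1.0 - j0(x) ** 2 - j1(x) ** 2

x = np.linspace(0.0, 12.0, 120001)
intensity = airy_intensity(x)

# Local minima of the intensity are the dark rings of the pattern.
di = np.diff(intensity)
minima = np.where((di[:-1] < 0) & (di[1:] >= 0))[0] + 1
for i, idx in enumerate(minima[:3], start=1):
    print(f"dark ring {i}: x ~ {x[idx]:.4f}, encircled energy ~ {encircled_energy(x[idx]):.3f}")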
The zeros of J1(x) occur at x ≈ 3.8317, 7.0156, 10.1735, and so on. From this, it follows that the first dark ring in the diffraction pattern occurs where ka sin θ ≈ 3.8317, or sin θ ≈ 3.8317/(ka) = 1.22 λ/d. If a lens is used to focus the Airy pattern at a finite distance, then the radius q1 of the first dark ring on the focal plane is solely given by the numerical aperture A (closely related to the f-number) by q1 ≈ 1.22 λ/(2A), where the numerical aperture A is equal to the aperture's radius d/2 divided by R', the distance from the center of the Airy pattern to the edge of the aperture. Viewing the aperture of radius d/2 and lens as a camera (see diagram above) projecting an image onto a focal plane at distance f, the numerical aperture A is related to the commonly-cited f-number N = f/d (ratio of the focal length to the lens diameter) according to A = 1/√(4N² + 1); for N ≫ 1 it is simply approximated as A ≈ 1/(2N), so that q1 ≈ 1.22 λ N. This shows that the best possible image resolution of a camera is limited by the numerical aperture (and thus f-number) of its lens due to diffraction. The half maximum of the central Airy disk (where 2 J1(x)/x = 1/√2) occurs at x ≈ 1.616; the 1/e² point (where 2 J1(x)/x = 1/e) occurs at x ≈ 2.584, and the maximum of the first ring occurs at x ≈ 5.136. The intensity I0 at the center of the diffraction pattern is related to the total power P0 incident on the aperture by I0 = ε_A A²/(λ² R²) = P0 A/(λ² R²), where ε_A is the source strength per unit area at the aperture, P0 = ε_A A is the total power, A is the area of the aperture (A = πa²) and R is the distance from the aperture. At the focal plane of a lens, I0 = P0 A/(λ² f²). The intensity at the maximum of the first ring is about 1.75% of the intensity at the center of the Airy disk. The expression for I(θ) above can be integrated to give the total power contained in the diffraction pattern within a circle of given size: P(θ) = P0 [1 − J0²(ka sin θ) − J1²(ka sin θ)], where J0 and J1 are Bessel functions. Hence the fractions of the total power contained within the first, second, and third dark rings (where J1(ka sin θ) = 0) are 83.8%, 91.0%, and 93.8% respectively. Classical treatments of the Airy disk and diffraction pattern assume that the incident light is a plane wave that consists of coherent (in phase) photons of the same wavelength that interfere with each other. The famous double slit experiment showed that diffraction patterns could arise even when the coherent photons were so spread out in time that they could not interfere with each other. This led to the quantum mechanical picture that each photon effectively takes all possible paths from a source to a detector. Richard Feynman explained that each path has a complex amplitude that can be thought of as a unit vector that is perpendicular to the path and makes one complete rotation for each wavelength of advance. The detection probability is the square of the modulus of the sum of the complex amplitudes at the detector. Diffraction patterns arise because the paths sum differently at different detector positions. According to these principles the Airy disk and diffraction pattern can be computed numerically by using Feynman photon path integrals to determine the detection probability at different points in the focal plane of a parabolic mirror. Approximation using a Gaussian profile The Airy pattern falls rather slowly to zero with increasing distance from the center, with the outer rings containing a significant portion of the integrated intensity of the pattern. As a result, the root mean square (RMS) spotsize is undefined (i.e. infinite).
An alternative measure of the spot size is to ignore the relatively small outer rings of the Airy pattern and to approximate the central lobe with a Gaussian profile, such that I(q) ≈ I0′ exp(−q²/(2ω²)), where I0′ is the irradiance at the center of the pattern, q represents the radial distance from the center of the pattern, and ω is the Gaussian RMS width (in one dimension). If we equate the peak amplitude of the Airy pattern and Gaussian profile, that is, I0′ = I0, and find the value of ω giving the optimal approximation to the pattern, we obtain ω ≈ 0.42 λN, where N is the f-number. If, on the other hand, we wish to enforce that the Gaussian profile has the same volume as does the Airy pattern, then this becomes ω ≈ 0.45 λN. In optical aberration theory, it is common to describe an imaging system as diffraction-limited if the Airy disk radius is larger than the RMS spotsize determined from geometric ray tracing (see Optical lens design). The Gaussian profile approximation provides an alternative means of comparison: using the approximation above shows that the Gaussian waist of the Gaussian approximation to the Airy disk is about two-thirds the Airy disk radius, i.e. about 0.84 λN as opposed to 1.22 λN. Obscured Airy pattern Similar equations can also be derived for the obscured Airy diffraction pattern, which is the diffraction pattern from an annular aperture or beam, i.e. a uniform circular aperture (beam) obscured by a circular block at the center. This situation is relevant to many common reflector telescope designs that incorporate a secondary mirror, including Newtonian telescopes and Schmidt–Cassegrain telescopes. The intensity is I(x) = I0 [2 J1(x)/x − 2ε J1(εx)/x]²/(1 − ε²)², where ε is the annular aperture obscuration ratio, or the ratio of the diameter of the obscuring disk and the diameter of the aperture (beam), and x is defined as above: x = πq/(λN), where q is the radial distance in the focal plane from the optical axis, λ is the wavelength and N is the f-number of the system. The fractional encircled energy (the fraction of the total energy contained within a circle of radius q centered at the optical axis in the focal plane) can be written in terms of Bessel functions in a similar way. For ε = 0 the formulas reduce to the unobscured versions above. The practical effect of having a central obstruction in a telescope is that the central disc becomes slightly smaller, and the first bright ring becomes brighter at the expense of the central disc. This becomes more problematic with short focal length telescopes which require larger secondary mirrors. Comparison to Gaussian beam focus A circular laser beam with uniform intensity profile, focused by a lens, will form an Airy pattern at the focal plane of the lens. The intensity at the center of the focus will be I0 = P0 A/(λ² f²), where P0 is the total power of the beam, A = πD²/4 is the area of the beam (D is the beam diameter), λ is the wavelength, and f is the focal length of the lens. A Gaussian beam transmitted through a hard aperture will be clipped. Energy is lost and edge diffraction occurs, effectively increasing the divergence. Because of these effects there is a Gaussian beam diameter which maximizes the intensity in the far field. This occurs when the diameter of the Gaussian is 89% of the aperture diameter, and the on axis intensity in the far field will be 81% of that produced by a uniform intensity profile.
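The Gaussian approximation above can be explored numerically. The sketch below compares the Airy central lobe with a Gaussian of RMS width 0.42 λN for one assumed wavelength and f-number; both example values are chosen purely for illustration and are not taken from the text.

import numpy as np
from scipy.special import j1

lam = 550e-9    # wavelength in metres (assumed example value)
N = 4.0         # f-number (assumed example value)
w = 0.42 * lam * N                            # Gaussian RMS width from the approximation above

q = np.linspace(1e-12, 1.22 * lam * N, 500)   # radial positions out to the first dark ring
x = np.pi * q / (lam * N)                     # x = k a sin(theta) ~ pi q / (lambda N)
airy = (2.0 * j1(x) / x) ** 2
gauss = np.exp(-q ** 2 / (2.0 * w ** 2))

print("largest |Airy - Gaussian| inside the central lobe:",
      f"{np.max(np.abs(airy - gauss)):.3f}")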
Elliptical aperture The Fourier integral of the circular cross section of radius R is F(k) = 2π R² J1(kR)/(kR). This is the special case of the Fourier integral of the elliptical cross section with half axes a and b: F(k_x, k_y) = 2π a b J1(q)/q, where q = √(a² k_x² + b² k_y²). See also Amateur astronomy Apodization Fraunhofer diffraction Bloom (shader effect) Newton's rings Optical unit Point spread function Debye-Scherrer ring Strehl ratio Speckle pattern Notes and references External links Nikon MicroscopyU (website). (Interactive Java Tutorial) Molecular Expressions (website). (Interactive Java Tutorial) Molecular Expressions. Connexions (website), November 8, 2005. – Mathematical details to derive the above formula. "The Airy Disk: An Explanation Of What It Is, And Why You Can't Avoid It", Oldham Optical UK. Physical optics Diffraction
Airy disk
[ "Physics", "Chemistry", "Materials_science" ]
3,757
[ "Crystallography", "Diffraction", "Spectroscopy", "Spectrum (physical sciences)" ]