Dataset fields: id (int64, values 39 to 79M), url (string, 32 to 168 characters), text (string, 7 to 145k characters), source (string, 2 to 105 characters), categories (list, 1 to 6 items), token_count (int64, 3 to 32.2k), subcategories (list, 0 to 27 items).
4,342,484
https://en.wikipedia.org/wiki/Torsion%20tensor
In differential geometry, the torsion tensor is a tensor that is associated to any affine connection. The torsion tensor is a bilinear map of two input vectors , that produces an output vector representing the displacement within a tangent space when the tangent space is developed (or "rolled") along an infinitesimal parallelogram whose sides are . It is skew symmetric in its inputs, because developing over the parallelogram in the opposite sense produces the opposite displacement, similarly to how a screw moves in opposite ways when it is twisted in two directions. Torsion is particularly useful in the study of the geometry of geodesics. Given a system of parametrized geodesics, one can specify a class of affine connections having those geodesics, but differing by their torsions. There is a unique connection which absorbs the torsion, generalizing the Levi-Civita connection to other, possibly non-metric situations (such as Finsler geometry). The difference between a connection with torsion, and a corresponding connection without torsion is a tensor, called the contorsion tensor. Absorption of torsion also plays a fundamental role in the study of G-structures and Cartan's equivalence method. Torsion is also useful in the study of unparametrized families of geodesics, via the associated projective connection. In relativity theory, such ideas have been implemented in the form of Einstein–Cartan theory. Definition Let M be a manifold with an affine connection on the tangent bundle (aka covariant derivative) ∇. The torsion tensor (sometimes called the Cartan (torsion) tensor) of ∇ is the vector-valued 2-form defined on vector fields X and Y by where is the Lie bracket of two vector fields. By the Leibniz rule, T(fX, Y) = T(X, fY) = fT(X, Y) for any smooth function f. So T is tensorial, despite being defined in terms of the connection which is a first order differential operator: it gives a 2-form on tangent vectors, while the covariant derivative is only defined for vector fields. Components of the torsion tensor The components of the torsion tensor in terms of a local basis of sections of the tangent bundle can be derived by setting , and by introducing the commutator coefficients . The components of the torsion are then Here are the connection coefficients defining the connection. If the basis is holonomic then the Lie brackets vanish, . So . In particular (see below), while the geodesic equations determine the symmetric part of the connection, the torsion tensor determines the antisymmetric part. The torsion form The torsion form, an alternative characterization of torsion, applies to the frame bundle FM of the manifold M. This principal bundle is equipped with a connection form ω, a gl(n)-valued one-form which maps vertical vectors to the generators of the right action in gl(n) and equivariantly intertwines the right action of GL(n) on the tangent bundle of FM with the adjoint representation on gl(n). The frame bundle also carries a canonical one-form θ, with values in Rn, defined at a frame (regarded as a linear function ) by where is the projection mapping for the principal bundle and is its push-forward. The torsion form is then Equivalently, Θ = Dθ, where D is the exterior covariant derivative determined by the connection. The torsion form is a (horizontal) tensorial form with values in Rn, meaning that under the right action of it transforms equivariantly: where acts on the right-hand side by its canonical action on Rn. 
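The displayed formulas referred to in this passage are missing; a standard reconstruction, under the usual sign and index conventions (which vary between authors), is:

```latex
% Torsion tensor of an affine connection \nabla, on vector fields X and Y:
T(X, Y) := \nabla_X Y - \nabla_Y X - [X, Y]

% Components with respect to a local basis (e_1, \dots, e_n), writing
% \nabla_{e_i} e_j = \Gamma^k{}_{ij} e_k and [e_i, e_j] = \gamma^k{}_{ij} e_k:
T^k{}_{ij} = \Gamma^k{}_{ij} - \Gamma^k{}_{ji} - \gamma^k{}_{ij},
\qquad\text{so in a holonomic basis}\qquad
T^k{}_{ij} = \Gamma^k{}_{ij} - \Gamma^k{}_{ji}

% Torsion form on the frame bundle, with canonical one-form \theta and
% connection form \omega (Cartan's first structure equation):
\Theta = \mathrm{d}\theta + \omega \wedge \theta = D\theta
```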
Torsion form in a frame The torsion form may be expressed in terms of a connection form on the base manifold M, written in a particular frame of the tangent bundle . The connection form expresses the exterior covariant derivative of these basic sections: The solder form for the tangent bundle (relative to this frame) is the dual basis of the ei, so that (the Kronecker delta). Then the torsion 2-form has components In the rightmost expression, are the frame-components of the torsion tensor, as given in the previous definition. It can be easily shown that Θi transforms tensorially in the sense that if a different frame for some invertible matrix-valued function (gji), then In other terms, Θ is a tensor of type (carrying one contravariant and two covariant indices). Alternatively, the solder form can be characterized in a frame-independent fashion as the TM-valued one-form θ on M corresponding to the identity endomorphism of the tangent bundle under the duality isomorphism . Then the torsion 2-form is a section given by where D is the exterior covariant derivative. (See connection form for further details.) Irreducible decomposition The torsion tensor can be decomposed into two irreducible parts: a trace-free part and another part which contains the trace terms. Using the index notation, the trace of T is given by and the trace-free part is where δij is the Kronecker delta. Intrinsically, one has The trace of T, tr T, is an element of T∗M defined as follows. For each vector fixed , T defines an element T(X) of via Then (tr T)(X) is defined as the trace of this endomorphism. That is, The trace-free part of T is then where ι denotes the interior product. Curvature and the Bianchi identities The curvature tensor of ∇ is a mapping defined on vector fields X, Y, and Z by For vectors at a point, this definition is independent of how the vectors are extended to vector fields away from the point (thus it defines a tensor, much like the torsion). The Bianchi identities relate the curvature and torsion as follows. Let denote the cyclic sum over X, Y, and Z. For instance, Then the following identities hold Bianchi's first identity: Bianchi's second identity: The curvature form and Bianchi identities The curvature form is the gl(n)-valued 2-form where, again, D denotes the exterior covariant derivative. In terms of the curvature form and torsion form, the corresponding Bianchi identities are Moreover, one can recover the curvature and torsion tensors from the curvature and torsion forms as follows. At a point u of FxM, one has where again is the function specifying the frame in the fibre, and the choice of lift of the vectors via π−1 is irrelevant since the curvature and torsion forms are horizontal (they vanish on the ambiguous vertical vectors). Characterizations and interpretations The torsion is a manner of characterizing the amount of slipping or twisting that a plane does when rolling along a surface or higher dimensional affine manifold. For example, consider rolling a plane along a small circle drawn on a sphere. If the plane does not slip or twist, then when the plane is rolled all the way along the circle, it will also trace a circle in the plane. It turns out that the plane will have rotated (despite there being no twist whilst rolling it), an effect due to the curvature of the sphere. But the curve traced out will still be a circle, and so in particular a closed curve that begins and ends at the same point. 
On the other hand, if the plane were rolled along the sphere, but allowed to slip or twist in the process, then the path the circle traces on the plane could be a much more general curve that need not even be closed. The torsion is a way to quantify this additional slipping and twisting while rolling a plane along a curve. Thus the torsion tensor can be intuitively understood by taking a small parallelogram circuit with sides given by vectors v and w, in a space and rolling the tangent space along each of the four sides of the parallelogram, marking the point of contact as it goes. When the circuit is completed, the marked curve will have been displaced out of the plane of the parallelogram by a vector, denoted . Thus the torsion tensor is a tensor: a (bilinear) function of two input vectors v and w that produces an output vector . It is skew symmetric in the arguments v and w, a reflection of the fact that traversing the circuit in the opposite sense undoes the original displacement, in much the same way that twisting a screw in opposite directions displaces the screw in opposite ways. The torsion tensor thus is related to, although distinct from, the torsion of a curve, as it appears in the Frenet–Serret formulas: the torsion of a connection measures a dislocation of a developed curve out of its plane, while the torsion of a curve is also a dislocation out of its osculating plane. In the geometry of surfaces, the geodesic torsion describes how a surface twists about a curve on the surface. The companion notion of curvature measures how moving frames roll along a curve without slipping or twisting. Example Consider the (flat) Euclidean space . On it, we put a connection that is flat, but with non-zero torsion, defined on the standard Euclidean frame by the (Euclidean) cross product: Consider now the parallel transport of the vector along the axis, starting at the origin. The parallel vector field thus satisfies , and the differential equation Thus , and the solution is . Now the tip of the vector , as it is transported along the axis, traces out the helix Thus we see that, in the presence of torsion, parallel transport tends to twist a frame around the direction of motion, analogously to the role played by torsion in the classical differential geometry of curves. Development One interpretation of the torsion involves the development of a curve. Suppose that a piecewise smooth closed loop is given, based at the point , where . We assume that is homotopic to zero. The curve can be developed into the tangent space at in the following manner. Let be a parallel coframe along , and let be the coordinates on induced by . A development of is a curve in whose coordinates satisfy the differential equation If the torsion is zero, then the developed curve is also a closed loop (so that ). On the other hand, if the torsion is non-zero, then the developed curve may not be closed, so that . Thus the development of a loop in the presence of torsion can become dislocated, analogously to a screw dislocation. The foregoing considerations can be made more quantitative by considering a small parallelogram, originating at the point , with sides . Then the tangent bivector to the parallelogram is . The development of this parallelogram, using the connection, is no longer closed in general, and the displacement in going around the loop is translation by the vector , where is the torsion tensor, up to higher order terms in . 
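The equations of the example and of the development argument are likewise missing; a plausible reconstruction under the usual conventions (the signs depend on the chosen orientation) is:

```latex
% Flat connection with torsion on \mathbb{R}^3, defined on the standard
% frame (e_1, e_2, e_3) by the Euclidean cross product:
\nabla_{e_i} e_j := e_i \times e_j

% Parallel transport of a field X(x_1) along the x_1-axis requires
\frac{\mathrm{d}X}{\mathrm{d}x_1} + e_1 \times X = 0,
\qquad X(0) = e_2
\quad\Longrightarrow\quad
X(x_1) = \cos(x_1)\, e_2 - \sin(x_1)\, e_3
% so the tip of the transported vector traces a helix about the x_1-axis.

% For the development of a small parallelogram with sides \varepsilon v and
% \varepsilon w, the failure of the developed loop to close is, to leading
% order, the translation
\varepsilon^2\, T(v, w) + O(\varepsilon^3)
```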
This displacement is directly analogous to the Burgers vector of crystallography. More generally, one can also transport a moving frame along the curve . The linear transformation that the frame undergoes between is then determined by the curvature of the connection. Together, the linear transformation of the frame and the translation of the starting point from to comprise the holonomy of the connection. The torsion of a filament In materials science, and especially elasticity theory, ideas of torsion also play an important role. One problem models the growth of vines, focusing on the question of how vines manage to twist around objects. The vine itself is modeled as a pair of elastic filaments twisted around one another. In its energy-minimizing state, the vine naturally grows in the shape of a helix. But the vine may also be stretched out to maximize its extent (or length). In this case, the torsion of the vine is related to the torsion of the pair of filaments (or equivalently the surface torsion of the ribbon connecting the filaments), and it reflects the difference between the length-maximizing (geodesic) configuration of the vine and its energy-minimizing configuration. Torsion and vorticity In fluid dynamics, torsion is naturally associated to vortex lines. Suppose that a connection is given in three dimensions, with curvature 2-form and torsion 2-form . Let be the skew-symmetric Levi-Civita tensor, and Then the Bianchi identities The Bianchi identities are imply that and These are the equations satisfied by an equilibrium continuous medium with moment density . Geodesics and the absorption of torsion Suppose that γ(t) is a curve on M. Then γ is an affinely parametrized geodesic provided that for all time t in the domain of γ. (Here the dot denotes differentiation with respect to t, which associates with γ the tangent vector pointing along it.) Each geodesic is uniquely determined by its initial tangent vector at time , . One application of the torsion of a connection involves the geodesic spray of the connection: roughly the family of all affinely parametrized geodesics. Torsion is the ambiguity of classifying connections in terms of their geodesic sprays: Two connections ∇ and ∇′ which have the same affinely parametrized geodesics (i.e., the same geodesic spray) differ only by torsion. More precisely, if X and Y are a pair of tangent vectors at , then let be the difference of the two connections, calculated in terms of arbitrary extensions of X and Y away from p. By the Leibniz product rule, one sees that Δ does not actually depend on how X and Y are extended (so it defines a tensor on M). Let S and A be the symmetric and alternating parts of Δ: Then is the difference of the torsion tensors. ∇ and ∇′ define the same families of affinely parametrized geodesics if and only if . In other words, the symmetric part of the difference of two connections determines whether they have the same parametrized geodesics, whereas the skew part of the difference is determined by the relative torsions of the two connections. Another consequence is: Given any affine connection ∇, there is a unique torsion-free connection ∇′ with the same family of affinely parametrized geodesics. The difference between these two connections is in fact a tensor, the contorsion tensor. This is a generalization of the fundamental theorem of Riemannian geometry to general affine (possibly non-metric) connections. 
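A standard restatement of the formulas referred to in the discussion of geodesics and torsion absorption, using the usual conventions, is:

```latex
% Difference tensor of two connections \nabla and \nabla':
\Delta(X, Y) := \nabla_X Y - \nabla'_X Y

% Symmetric and alternating (skew) parts:
S(X, Y) = \tfrac{1}{2}\bigl(\Delta(X, Y) + \Delta(Y, X)\bigr), \qquad
A(X, Y) = \tfrac{1}{2}\bigl(\Delta(X, Y) - \Delta(Y, X)\bigr)

% The skew part is half the difference of the torsion tensors,
A(X, Y) = \tfrac{1}{2}\bigl(T(X, Y) - T'(X, Y)\bigr),
% and \nabla, \nabla' share affinely parametrized geodesics iff S = 0.

% The unique torsion-free connection with the same geodesics as \nabla:
\nabla'_X Y = \nabla_X Y - \tfrac{1}{2}\, T(X, Y)
```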
Picking out the unique torsion-free connection subordinate to a family of parametrized geodesics is known as absorption of torsion, and it is one of the stages of Cartan's equivalence method. See also Contorsion tensor Curtright field Curvature tensor Levi-Civita connection Torsion coefficient Torsion of curves Notes References , 393. , 212. External links Bill Thurston (2011) Rolling without slipping interpretation of torsion, URL (version: 2011-01-27). Differential geometry Connection (mathematics) Curvature (mathematics) Tensors
Torsion tensor
[ "Physics", "Engineering" ]
3,169
[ "Geometric measurement", "Tensors", "Physical quantities", "Curvature (mathematics)" ]
4,342,970
https://en.wikipedia.org/wiki/Surface%20%28mathematics%29
In mathematics, a surface is a mathematical model of the common concept of a surface. It is a generalization of a plane, but, unlike a plane, it may be curved; this is analogous to a curve generalizing a straight line. There are several more precise definitions, depending on the context and the mathematical tools that are used for the study. The simplest mathematical surfaces are planes and spheres in the Euclidean 3-space. The exact definition of a surface may depend on the context. Typically, in algebraic geometry, a surface may cross itself (and may have other singularities), while, in topology and differential geometry, it may not. A surface is a topological space of dimension two; this means that a moving point on a surface may move in two directions (it has two degrees of freedom). In other words, around almost every point, there is a coordinate patch on which a two-dimensional coordinate system is defined. For example, the surface of the Earth resembles (ideally) a sphere, and latitude and longitude provide two-dimensional coordinates on it (except at the poles and along the 180th meridian). Definitions Often, a surface is defined by equations that are satisfied by the coordinates of its points. This is the case of the graph of a continuous function of two variables. The set of the zeros of a function of three variables is a surface, which is called an implicit surface. If the defining three-variate function is a polynomial, the surface is an algebraic surface. For example, the unit sphere is an algebraic surface, as it may be defined by the implicit equation A surface may also be defined as the image, in some space of dimension at least 3, of a continuous function of two variables (some further conditions are required to ensure that the image is not a curve). In this case, one says that one has a parametric surface, which is parametrized by these two variables, called parameters. For example, the unit sphere may be parametrized by the Euler angles, also called longitude and latitude by Parametric equations of surfaces are often irregular at some points. For example, all but two points of the unit sphere, are the image, by the above parametrization, of exactly one pair of Euler angles (modulo ). For the remaining two points (the north and south poles), one has , and the longitude may take any values. Also, there are surfaces for which there cannot exist a single parametrization that covers the whole surface. Therefore, one often considers surfaces which are parametrized by several parametric equations, whose images cover the surface. This is formalized by the concept of manifold: in the context of manifolds, typically in topology and differential geometry, a surface is a manifold of dimension two; this means that a surface is a topological space such that every point has a neighborhood which is homeomorphic to an open subset of the Euclidean plane (see Surface (topology) and Surface (differential geometry)). This allows defining surfaces in spaces of dimension higher than three, and even abstract surfaces, which are not contained in any other space. On the other hand, this excludes surfaces that have singularities, such as the vertex of a conical surface or points where a surface crosses itself. In classical geometry, a surface is generally defined as a locus of a point or a line. 
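The implicit equation of the unit sphere and its longitude-latitude parametrization referred to above can be written, in one common convention, as:

```latex
% Unit sphere as an implicit (algebraic) surface:
x^2 + y^2 + z^2 - 1 = 0

% Longitude-latitude parametrization (u = longitude, t = latitude):
x = \cos t \,\cos u, \qquad y = \cos t \,\sin u, \qquad z = \sin t,
\qquad -\pi < u \le \pi, \quad -\tfrac{\pi}{2} \le t \le \tfrac{\pi}{2}
% At the poles t = \pm\pi/2 the longitude u may take any value, which is
% why this parametrization is irregular there.
```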
For example, a sphere is the locus of a point which is at a given distance from a fixed point, called the center; a conical surface is the locus of a line passing through a fixed point and crossing a curve; a surface of revolution is the locus of a curve rotating around a line. A ruled surface is the locus of a moving line satisfying some constraints; in modern terminology, a ruled surface is a surface that is a union of lines. Terminology There are several kinds of surfaces that are considered in mathematics. An unambiguous terminology is thus necessary to distinguish them when needed. A topological surface is a surface that is a manifold of dimension two (see ). A differentiable surface is a surface that is a differentiable manifold (see ). Every differentiable surface is a topological surface, but the converse is false. A "surface" is often implicitly supposed to be contained in a Euclidean space of dimension 3, typically . A surface that is contained in a projective space is called a projective surface (see ). A surface that is not supposed to be included in another space is called an abstract surface. Examples The graph of a continuous function of two variables, defined over a connected open subset of is a topological surface. If the function is differentiable, the graph is a differentiable surface. A plane is both an algebraic surface and a differentiable surface. It is also a ruled surface and a surface of revolution. A circular cylinder (that is, the locus of a line crossing a circle and parallel to a given direction) is an algebraic surface and a differentiable surface. A circular cone (locus of a line crossing a circle, and passing through a fixed point, the apex, which is outside the plane of the circle) is an algebraic surface which is not a differentiable surface. If one removes the apex, the remainder of the cone is the union of two differentiable surfaces. The surface of a polyhedron is a topological surface, which is neither a differentiable surface nor an algebraic surface. A hyperbolic paraboloid (the graph of the function ) is a differentiable surface and an algebraic surface. It is also a ruled surface, and, for this reason, is often used in architecture. A two-sheet hyperboloid is an algebraic surface and the union of two non-intersecting differentiable surfaces. Parametric surface A parametric surface is the image of an open subset of the Euclidean plane (typically ) by a continuous function, in a topological space, generally a Euclidean space of dimension at least three. Usually the function is supposed to be continuously differentiable, and this will always be the case in this article. Specifically, a parametric surface in is given by three functions of two variables and , called parameters As the image of such a function may be a curve (for example, if the three functions are constant with respect to ), a further condition is required, generally that, for almost all values of the parameters, the Jacobian matrix has rank two. Here "almost all" means that the values of the parameters where the rank is two contain a dense open subset of the range of the parametrization. For surfaces in a space of higher dimension, the condition is the same, except for the number of columns of the Jacobian matrix. Tangent plane and normal vector A point where the above Jacobian matrix has rank two is called regular, or, more properly, the parametrization is called regular at . 
The tangent plane at a regular point is the unique plane passing through and having a direction parallel to the two row vectors of the Jacobian matrix. The tangent plane is an affine concept, because its definition is independent of the choice of a metric. In other words, any affine transformation maps the tangent plane to the surface at a point to the tangent plane to the image of the surface at the image of the point. The normal line at a point of a surface is the unique line passing through the point and perpendicular to the tangent plane; the normal vector is a vector which is parallel to the normal. For other differential invariants of surfaces, in the neighborhood of a point, see Differential geometry of surfaces. Irregular point and singular point A point of a parametric surface which is not regular is irregular. There are several kinds of irregular points. It may occur that an irregular point becomes regular, if one changes the parametrization. This is the case of the poles in the parametrization of the unit sphere by Euler angles: it suffices to permute the role of the different coordinate axes for changing the poles. On the other hand, consider the circular cone of parametric equation The apex of the cone is the origin , and is obtained for . It is an irregular point that remains irregular, whichever parametrization is chosen (otherwise, there would exist a unique tangent plane). Such an irregular point, where the tangent plane is undefined, is said singular. There is another kind of singular points. There are the self-crossing points, that is the points where the surface crosses itself. In other words, these are the points which are obtained for (at least) two different values of the parameters. Graph of a bivariate function Let be a function of two real variables. This is a parametric surface, parametrized as Every point of this surface is regular, as the two first columns of the Jacobian matrix form the identity matrix of rank two. Rational surface A rational surface is a surface that may be parametrized by rational functions of two variables. That is, if are, for , polynomials in two indeterminates, then the parametric surface, defined by is a rational surface. A rational surface is an algebraic surface, but most algebraic surfaces are not rational. Implicit surface An implicit surface in a Euclidean space (or, more generally, in an affine space) of dimension 3 is the set of the common zeros of a differentiable function of three variables Implicit means that the equation defines implicitly one of the variables as a function of the other variables. This is made more exact by the implicit function theorem: if , and the partial derivative in of is not zero at , then there exists a differentiable function such that in a neighbourhood of . In other words, the implicit surface is the graph of a function near a point of the surface where the partial derivative in is nonzero. An implicit surface has thus, locally, a parametric representation, except at the points of the surface where the three partial derivatives are zero. Regular points and tangent plane A point of the surface where at least one partial derivative of is nonzero is called regular. At such a point , the tangent plane and the direction of the normal are well defined, and may be deduced, with the implicit function theorem from the definition given above, in . 
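Concretely, the normal direction and tangent plane obtained from the implicit function theorem take the following standard form at a regular point (a, b, c) of the surface f(x, y, z) = 0 (restated here since the displayed formulas are missing):

```latex
% Normal direction at a regular point: the gradient of f,
\nabla f(a, b, c)
= \Bigl(\tfrac{\partial f}{\partial x},\ \tfrac{\partial f}{\partial y},\ \tfrac{\partial f}{\partial z}\Bigr)\Big|_{(a, b, c)}

% Implicit equation of the tangent plane at (a, b, c):
\frac{\partial f}{\partial x}(a, b, c)\,(x - a)
+ \frac{\partial f}{\partial y}(a, b, c)\,(y - b)
+ \frac{\partial f}{\partial z}(a, b, c)\,(z - c) = 0
```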
The direction of the normal is the gradient, that is the vector The tangent plane is defined by its implicit equation Singular point A singular point of an implicit surface (in ) is a point of the surface where the implicit equation holds and the three partial derivatives of its defining function are all zero. Therefore, the singular points are the solutions of a system of four equations in three indeterminates. As most such systems have no solution, many surfaces do not have any singular point. A surface with no singular point is called regular or non-singular. The study of surfaces near their singular points and the classification of the singular points is singularity theory. A singular point is isolated if there is no other singular point in a neighborhood of it. Otherwise, the singular points may form a curve. This is in particular the case for self-crossing surfaces. Algebraic surface Originally, an algebraic surface was a surface which may be defined by an implicit equation where is a polynomial in three indeterminates, with real coefficients. The concept has been extended in several directions, by defining surfaces over arbitrary fields, and by considering surfaces in spaces of arbitrary dimension or in projective spaces. Abstract algebraic surfaces, which are not explicitly embedded in another space, are also considered. Surfaces over arbitrary fields Polynomials with coefficients in any field are accepted for defining an algebraic surface. However, the field of coefficients of a polynomial is not well defined, as, for example, a polynomial with rational coefficients may also be considered as a polynomial with real or complex coefficients. Therefore, the concept of point of the surface has been generalized in the following way. Given a polynomial , let be the smallest field containing the coefficients, and be an algebraically closed extension of , of infinite transcendence degree. Then a point of the surface is an element of which is a solution of the equation If the polynomial has real coefficients, the field is the complex field, and a point of the surface that belongs to (a usual point) is called a real point. A point that belongs to is called rational over , or simply a rational point, if is the field of rational numbers. Projective surface A projective surface in a projective space of dimension three is the set of points whose homogeneous coordinates are zeros of a single homogeneous polynomial in four variables. More generally, a projective surface is a subset of a projective space, which is a projective variety of dimension two. Projective surfaces are strongly related to affine surfaces (that is, ordinary algebraic surfaces). One passes from a projective surface to the corresponding affine surface by setting to one some coordinate or indeterminate of the defining polynomials (usually the last one). Conversely, one passes from an affine surface to its associated projective surface (called projective completion) by homogenizing the defining polynomial (in case of surfaces in a space of dimension three), or by homogenizing all polynomials of the defining ideal (for surfaces in a space of higher dimension). In higher dimensional spaces One cannot define the concept of an algebraic surface in a space of dimension higher than three without a general definition of an algebraic variety and of the dimension of an algebraic variety. In fact, an algebraic surface is an algebraic variety of dimension two. 
More precisely, an algebraic surface in a space of dimension is the set of the common zeros of at least polynomials, but these polynomials must satisfy further conditions that may be not immediate to verify. Firstly, the polynomials must not define a variety or an algebraic set of higher dimension, which is typically the case if one of the polynomials is in the ideal generated by the others. Generally, polynomials define an algebraic set of dimension two or higher. If the dimension is two, the algebraic set may have several irreducible components. If there is only one component the polynomials define a surface, which is a complete intersection. If there are several components, then one needs further polynomials for selecting a specific component. Most authors consider as an algebraic surface only algebraic varieties of dimension two, but some also consider as surfaces all algebraic sets whose irreducible components have the dimension two. In the case of surfaces in a space of dimension three, every surface is a complete intersection, and a surface is defined by a single polynomial, which is irreducible or not, depending on whether non-irreducible algebraic sets of dimension two are considered as surfaces or not. Topological surface In topology, a surface is generally defined as a manifold of dimension two. This means that a topological surface is a topological space such that every point has a neighborhood that is homeomorphic to an open subset of a Euclidean plane. Every topological surface is homeomorphic to a polyhedral surface such that all facets are triangles. The combinatorial study of such arrangements of triangles (or, more generally, of higher-dimensional simplexes) is the starting object of algebraic topology. This allows the characterization of the properties of surfaces in terms of purely algebraic invariants, such as the genus and homology groups. The homeomorphism classes of surfaces have been completely described (see Surface (topology)). Differentiable surface Fractal surface In computer graphics See also Area element, the area of a differential element of a surface Coordinate surfaces Hypersurface Perimeter, a two-dimensional equivalent Polyhedral surface Shape Signed distance function Solid figure Surface area Surface patch Surface integral Footnotes Notes Sources Geometry Topology Broad-concept articles
Surface (mathematics)
[ "Physics", "Mathematics" ]
3,145
[ "Spacetime", "Topology", "Space", "Geometry" ]
9,756,354
https://en.wikipedia.org/wiki/ChIP-on-chip
ChIP-on-chip (also known as ChIP-chip) is a technology that combines chromatin immunoprecipitation ('ChIP') with DNA microarray ("chip"). Like regular ChIP, ChIP-on-chip is used to investigate interactions between proteins and DNA in vivo. Specifically, it allows the identification of the cistrome, the sum of binding sites, for DNA-binding proteins on a genome-wide basis. Whole-genome analysis can be performed to determine the locations of binding sites for almost any protein of interest. As the name of the technique suggests, such proteins are generally those operating in the context of chromatin. The most prominent representatives of this class are transcription factors, replication-related proteins, like origin recognition complex protein (ORC), histones, their variants, and histone modifications. The goal of ChIP-on-chip is to locate protein binding sites that may help identify functional elements in the genome. For example, in the case of a transcription factor as a protein of interest, one can determine its transcription factor binding sites throughout the genome. Other proteins allow the identification of promoter regions, enhancers, repressors and silencing elements, insulators, boundary elements, and sequences that control DNA replication. If histones are subject of interest, it is believed that the distribution of modifications and their localizations may offer new insights into the mechanisms of regulation. One of the long-term goals ChIP-on-chip was designed for is to establish a catalogue of (selected) organisms that lists all protein-DNA interactions under various physiological conditions. This knowledge would ultimately help in the understanding of the machinery behind gene regulation, cell proliferation, and disease progression. Hence, ChIP-on-chip offers both potential to complement our knowledge about the orchestration of the genome on the nucleotide level and information on higher levels of information and regulation as it is propagated by research on epigenetics. Technological platforms The technical platforms to conduct ChIP-on-chip experiments are DNA microarrays, or "chips". They can be classified and distinguished according to various characteristics: Probe type: DNA arrays can comprise either mechanically spotted cDNAs or PCR-products, mechanically spotted oligonucleotides, or oligonucleotides that are synthesized in situ. The early versions of microarrays were designed to detect RNAs from expressed genomic regions (open reading frames aka ORFs). Although such arrays are perfectly suited to study gene expression profiles, they have limited importance in ChIP experiments since most "interesting" proteins with respect to this technique bind in intergenic regions. Nowadays, even custom-made arrays can be designed and fine-tuned to match the requirements of an experiment. Also, any sequence of nucleotides can be synthesized to cover genic as well as intergenic regions. Probe size: Early version of cDNA arrays had a probe length of about 200bp. Latest array versions use oligos as short as 70- (Microarrays, Inc.) to 25-mers (Affymetrix). (Feb 2007) Probe composition: There are tiled and non-tiled DNA arrays. Non-tiled arrays use probes selected according to non-spatial criteria, i.e., the DNA sequences used as probes have no fixed distances in the genome. Tiled arrays, however, select a genomic region (or even a whole genome) and divide it into equal chunks. Such a region is called tiled path. 
The average distance between each pair of neighboring chunks (measured from the center of each chunk) gives the resolution of the tiled path. A path can be overlapping, end-to-end or spaced. Array size: The first microarrays used for ChIP-on-Chip contained about 13,000 spotted DNA segments representing all ORFs and intergenic regions from the yeast genome. Nowadays, Affymetrix offers whole-genome tiled yeast arrays with a resolution of 5bp (all in all 3.2 million probes). Tiled arrays for the human genome become more and more powerful, too. Just to name one example, Affymetrix offers a set of seven arrays with about 90 million probes, spanning the complete non-repetitive part of the human genome with about 35bp spacing. (Feb 2007) Besides the actual microarray, other hard- and software equipment is necessary to run ChIP-on-chip experiments. It is generally the case that one company's microarrays can not be analyzed by another company's processing hardware. Hence, buying an array requires also buying the associated workflow equipment. The most important elements are, among others, hybridization ovens, chip scanners, and software packages for subsequent numerical analysis of the raw data. Workflow of a ChIP-on-chip experiment Starting with a biological question, a ChIP-on-chip experiment can be divided into three major steps: The first is to set up and design the experiment by selecting the appropriate array and probe type. Second, the actual experiment is performed in the wet-lab. Last, during the dry-lab portion of the cycle, gathered data are analyzed to either answer the initial question or lead to new questions so that the cycle can start again. Wet-lab portion of the workflow In the first step, the protein of interest (POI) is cross-linked with the DNA site it binds to in an in vitro environment. Usually this is done by a gentle formaldehyde fixation that is reversible with heat. Then, the cells are lysed and the DNA is sheared by sonication or using micrococcal nuclease. This results in double-stranded chunks of DNA fragments, normally 1 kb or less in length. Those that were cross-linked to the POI form a POI-DNA complex. In the next step, only these complexes are filtered out of the set of DNA fragments, using an antibody specific to the POI. The antibodies may be attached to a solid surface, may have a magnetic bead, or some other physical property that allows separation of cross-linked complexes and unbound fragments. This procedure is essentially an immunoprecipitation (IP) of the protein. This can be done either by using a tagged protein with an antibody against the tag (ex. FLAG, HA, c-myc) or with an antibody to the native protein. The cross-linking of POI-DNA complexes is reversed (usually by heating) and the DNA strands are purified. For the rest of the workflow, the POI is no longer necessary. After an amplification and denaturation step, the single-stranded DNA fragments are labeled with a fluorescent tag such as Cy5 or Alexa 647. Finally, the fragments are poured over the surface of the DNA microarray, which is spotted with short, single-stranded sequences that cover the genomic portion of interest. Whenever a labeled fragment "finds" a complementary fragment on the array, they will hybridize and form again a double-stranded DNA fragment. Dry-lab portion of the workflow After a sufficiently large time frame to allow hybridization, the array is illuminated with fluorescent light. 
Those probes on the array that are hybridized to one of the labeled fragments emit a light signal that is captured by a camera. This image contains all raw data for the remaining part of the workflow. This raw data, encoded as false-color image, needs to be converted to numerical values before the actual analysis can be done. The analysis and information extraction of the raw data often remains the most challenging part for ChIP-on-chip experiments. Problems arise throughout this portion of the workflow, ranging from the initial chip read-out, to suitable methods to subtract background noise, and finally to appropriate algorithms that normalize the data and make it available for subsequent statistical analysis, which then hopefully lead to a better understanding of the biological question that the experiment seeks to address. Furthermore, due to the different array platforms and lack of standardization between them, data storage and exchange is a huge problem. Generally speaking, the data analysis can be divided into three major steps: During the first step, the captured fluorescence signals from the array are normalized, using control signals derived from the same or a second chip. Such control signals tell which probes on the array were hybridized correctly and which bound nonspecifically. In the second step, numerical and statistical tests are applied to control data and IP fraction data to identify POI-enriched regions along the genome. The following three methods are used widely: median percentile rank, single-array error, and sliding-window. These methods generally differ in how low-intensity signals are handled, how much background noise is accepted, and which trait for the data is emphasized during the computation. In the recent past, the sliding-window approach seems to be favored and is often described as most powerful. In the third step, these regions are analyzed further. If, for example, the POI was a transcription factor, such regions would represent its binding sites. Subsequent analysis then may want to infer nucleotide motifs and other patterns to allow functional annotation of the genome. Strengths and weaknesses Using tiled arrays, ChIP-on-chip allows for high resolution of genome-wide maps. These maps can determine the binding sites of many DNA-binding proteins like transcription factors and also chromatin modifications. Although ChIP-on-chip can be a powerful technique in the area of genomics, it is very expensive. Most published studies using ChIP-on-chip repeat their experiments at least three times to ensure biologically meaningful maps. The cost of the DNA microarrays is often a limiting factor to whether a laboratory should proceed with a ChIP-on-chip experiment. Another limitation is the size of DNA fragments that can be achieved. Most ChIP-on-chip protocols utilize sonication as a method of breaking up DNA into small pieces. However, sonication is limited to a minimal fragment size of 200 bp. For higher resolution maps, this limitation should be overcome to achieve smaller fragments, preferably to single nucleosome resolution. As mentioned previously, the statistical analysis of the huge amount of data generated from arrays is a challenge and normalization procedures should aim to minimize artifacts and determine what is really biologically significant. So far, application to mammalian genomes has been a major limitation, for example, due to the significant percentage of the genome that is occupied by repeats. 
However, as ChIP-on-chip technology advances, high resolution whole mammalian genome maps should become achievable. Antibodies used for ChIP-on-chip can be an important limiting factor. ChIP-on-chip requires highly specific antibodies that must recognize their epitope in free solution and also under fixed conditions. If an antibody is demonstrated to successfully immunoprecipitate cross-linked chromatin, it is termed "ChIP-grade". Companies that provide ChIP-grade antibodies include Abcam, Cell Signaling Technology, Santa Cruz, and Upstate. To overcome the problem of specificity, the protein of interest can be fused to a tag like FLAG or HA that is recognized by antibodies. An alternative to ChIP-on-chip that does not require antibodies is DamID. Also available are antibodies against a specific histone modification like H3 trimethyl K4. As mentioned before, the combination of these antibodies and ChIP-on-chip has become extremely powerful in genome-wide analysis of histone modification patterns and will contribute tremendously to our understanding of the histone code and epigenetics. A study demonstrating the non-specific nature of DNA binding proteins has been published in PLoS Biology. This indicates that alternate confirmation of functional relevancy is a necessary step in any ChIP-chip experiment. History The first ChIP-on-chip experiment was performed in 1999 to analyze the distribution of cohesin along budding yeast chromosome III. Although the genome was not completely represented, the protocol in this study remains equivalent to those used in later studies. The ChIP-on-chip technique using all of the ORFs of the genome (which nevertheless remains incomplete, missing intergenic regions) was then applied successfully in three papers published in 2000 and 2001. The authors identified binding sites for individual transcription factors in the budding yeast Saccharomyces cerevisiae. In 2002, Richard Young's group determined the genome-wide positions of 106 transcription factors using a c-Myc tagging system in yeast. The first demonstration of the mammalian ChIP-on-chip technique, reporting the isolation of nine chromatin fragments containing weak and strong E2F binding sites, was done by Peggy Farnham's lab in collaboration with Michael Zhang's lab and published in 2001. This study was followed several months later by a collaboration between the Young lab and the laboratory of Brian Dynlacht, which used the ChIP-on-chip technique to show for the first time that E2F targets encode components of the DNA damage checkpoint and repair pathways, as well as factors involved in chromatin assembly/condensation, chromosome segregation, and the mitotic spindle checkpoint. Other applications for ChIP-on-chip include DNA replication, recombination, and chromatin structure. Since then, ChIP-on-chip has become a powerful tool in determining genome-wide maps of histone modifications and many more transcription factors. ChIP-on-chip in mammalian systems has been difficult due to the large and repetitive genomes. Thus, many studies in mammalian cells have focused on select promoter regions that are predicted to bind transcription factors and have not analyzed the entire genome. However, whole mammalian genome arrays have recently become commercially available from companies like NimbleGen. In the future, as ChIP-on-chip arrays become more and more advanced, high resolution whole genome maps of DNA-binding proteins and chromatin components for mammals will be analyzed in more detail. 
Alternatives Introduced in 2007, ChIP sequencing (ChIP-seq) is a technology that uses chromatin immunoprecipitation to crosslink the proteins of interest to the DNA but then instead of using a micro-array, it uses the more accurate, higher throughput method of sequencing to localize interaction points. DamID is an alternative method that does not require antibodies. ChIP-exo uses exonuclease treatment to achieve up to single base pair resolution. CUT&RUN sequencing uses antibody recognition with targeted enzymatic cleavage to address some technical limitations of ChIP. References Further reading External links http://www.genome.gov/10005107 ENCODE project Chip-on-Chip (CoC) Package Information from Amkor Technology Analysis and software CoCAS: a free Analysis software for Agilent ChIP-on-Chip experiments rMAT: R implementation from MAT program to normalize and analyze tiling arrays and ChIP-chip data. Genomics techniques Molecular biology Molecular biology techniques Protein methods Bioinformatics Microarrays
ChIP-on-chip
[ "Chemistry", "Materials_science", "Engineering", "Biology" ]
3,078
[ "Biochemistry methods", "Genetics techniques", "Genomics techniques", "Biological engineering", "Microtechnology", "Microarrays", "Protein methods", "Protein biochemistry", "Bioinformatics", "Molecular biology techniques", "Molecular biology", "Biochemistry" ]
9,758,780
https://en.wikipedia.org/wiki/Drag%20crisis
In fluid dynamics, the drag crisis (also known as the Eiffel paradox) is a phenomenon in which drag coefficient drops off suddenly as Reynolds number increases. This has been well studied for round bodies like spheres and cylinders. The drag coefficient of a sphere will change rapidly from about 0.5 to 0.2 at a Reynolds number in the range of 300000. This corresponds to the point where the flow pattern changes, leaving a narrower turbulent wake. The behavior is highly dependent on small differences in the condition of the surface of the sphere. History The drag crisis was observed in 1905 by Nikolay Zhukovsky, who guessed that this paradox can be explained by the detachment of streamlines at different points of the sphere at different velocities. Later the paradox was independently discovered in experiments by Gustave Eiffel and Charles Maurain. Upon Eiffel's retirement, he built the first wind tunnel in a lab located at the base of the Eiffel Tower, to investigate wind loads on structures and early aircraft. In a series of tests he found that the force loading experienced an abrupt decline at a critical Reynolds number. The paradox was explained from boundary-layer theory by German fluid dynamicist Ludwig Prandtl. Explanation The drag crisis is associated with a transition from laminar to turbulent boundary layer flow adjacent to the object. For cylindrical structures, this transition is associated with a transition from well-organized vortex shedding to randomized shedding behavior for super-critical Reynolds numbers, eventually returning to well-organized shedding at a higher Reynolds number with a return to elevated drag force coefficients. The super-critical behavior can be described semi-empirically using statistical means or by sophisticated computational fluid dynamics software (CFD) that takes into account the fluid-structure interaction for the given fluid conditions using Large Eddy Simulation (LES) that includes the dynamic displacements of the structure (DLES) [11]. These calculations also demonstrate the importance of the blockage ratio present for intrusive fittings in pipe flow and wind-tunnel tests. The critical Reynolds number is a function of turbulence intensity, upstream velocity profile, and wall-effects (velocity gradients). The semi-empirical descriptions of the drag crisis are often described in terms of a Strouhal bandwidth and the vortex shedding is described by broad-band spectral content. References Additional reading Fung, Y.C. (1960). "Fluctuating Lift and Drag Acting on a Cylinder in a Flow at Supercritical Reynolds Numbers," J. Aerospace Sci., 27 (11), pp. 801–814. Roshko, A. (1961). "Experiments on the flow past a circular cylinder at very high Reynolds number," J. Fluid Mech., 10, pp. 345–356. Jones, G.W. (1968). "Aerodynamic Forces on Stationary and Oscillating Circular Cylinder at High Reynolds Numbers," ASME Symposium on Unsteady Flow, Fluids Engineering Div. , pp. 1–30. Jones, G.W., Cincotta, J.J., Walker, R.W. (1969). "Aerodynamic Forces on Stationary and Oscillating Circular Cylinder at High Reynolds Numbers," NASA Report TAR-300, pp. 1–66. Achenbach, E. Heinecke, E. (1981). "On vortex shedding from smooth and rough cylinders in the range of Reynolds numbers 6x103 to 5x106," J. Fluid Mech. 109, pp. 239–251. Schewe, G. (1983). "On the force fluctuations acting on a circular cylinder in crossflow from subcritical up to transcritical Raynolds numbers," J. Fluid Mech., 133, pp. 265–285. 
Kawamura, T., Nakao, T., Takahashi, M., Hayashi, T., Murayama, K., Gotoh, N., (2003). "Synchronized Vibrations of a Circular Cylinder in Cross Flow at Supercritical Reynolds Numbers", ASME J. Press. Vessel Tech., 125, pp. 97–108, DOI:10.1115/1.1526855. Zdravkovich, M.M. (1997). Flow Around Circular Cylinders, Vol.I, Oxford Univ. Press. Reprint 2007, p. 188. Zdravkovich, M.M. (2003). Flow Around Circular Cylinders, Vol. II, Oxford Univ. Press. Reprint 2009, p. 761. Bartran, D. (2015). "Support Flexibility and Natural Frequencies of Pipe Mounted Thermowells," ASME J. Press. Vess. Tech., 137, pp. 1–6, DOI:10.1115/1.4028863 Botterill, N. ( 2010). "Fluid structure interaction modelling of cables used in civil engineering structures," PhD dissertation (http://etheses.nottingham.ac.uk/11657/), University of Nottingham. Bartran, D. (2018). "The Drag Crisis and Thermowell Design", J. Press. Ves. Tech. 140(4), 044501, Paper No: PVT-18-1002. DOI: 10.1115/1.4039882. External links Drag (physics) Fluid dynamics
Drag crisis
[ "Chemistry", "Engineering" ]
1,128
[ "Piping", "Drag (physics)", "Chemical engineering", "Fluid dynamics" ]
1,082,645
https://en.wikipedia.org/wiki/Topology%20optimization
Topology optimization is a mathematical method that optimizes material layout within a given design space, for a given set of loads, boundary conditions and constraints with the goal of maximizing the performance of the system. Topology optimization is different from shape optimization and sizing optimization in the sense that the design can attain any shape within the design space, instead of dealing with predefined configurations. The conventional topology optimization formulation uses a finite element method (FEM) to evaluate the design performance. The design is optimized using either gradient-based mathematical programming techniques such as the optimality criteria algorithm and the method of moving asymptotes or non gradient-based algorithms such as genetic algorithms. Topology optimization has a wide range of applications in aerospace, mechanical, bio-chemical and civil engineering. Currently, engineers mostly use topology optimization at the concept level of a design process. Due to the free forms that naturally occur, the result is often difficult to manufacture. For that reason the result emerging from topology optimization is often fine-tuned for manufacturability. Adding constraints to the formulation in order to increase the manufacturability is an active field of research. In some cases results from topology optimization can be directly manufactured using additive manufacturing; topology optimization is thus a key part of design for additive manufacturing. Problem statement A topology optimization problem can be written in the general form of an optimization problem as: The problem statement includes the following: An objective function . This function represents the quantity that is being minimized for best performance. The most common objective function is compliance, where minimizing compliance leads to maximizing the stiffness of a structure. The material distribution as a problem variable. This is described by the density of the material at each location . Material is either present, indicated by a 1, or absent, indicated by a 0. is a state field that satisfies a linear or nonlinear state equation depending on . The design space . This indicates the allowable volume within which the design can exist. Assembly and packaging requirements, human and tool accessibility are some of the factors that need to be considered in identifying this space . With the definition of the design space, regions or components in the model that cannot be modified during the course of the optimization are considered as non-design regions. constraints a characteristic that the solution must satisfy. Examples are the maximum amount of material to be distributed (volume constraint) or maximum stress values. Evaluating often includes solving a differential equation. This is most commonly done using the finite element method since these equations do not have a known analytical solution. Implementation methodologies There are various implementation methodologies that have been used to solve topology optimization problems. Solving with discrete/binary variables Solving topology optimization problems in a discrete sense is done by discretizing the design domain into finite elements. The material densities inside these elements are then treated as the problem variables. In this case material density of one indicates the presence of material, while zero indicates an absence of material. 
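A hedged sketch of this formulation, written in the discretized element-density form that the following paragraphs use (one common notation, not the only one), is:

```latex
% Generic discretized topology optimization problem over element densities \rho_e:
\begin{aligned}
\min_{\boldsymbol{\rho}} \quad & F\bigl(\mathbf{u}(\boldsymbol{\rho}), \boldsymbol{\rho}\bigr)
    && \text{(objective, e.g.\ compliance)}\\
\text{subject to} \quad
  & \mathbf{K}(\boldsymbol{\rho})\,\mathbf{u} = \mathbf{f}
    && \text{(discretized state equation)}\\
  & \textstyle\sum_e \rho_e\, v_e \le V_{\max}
    && \text{(volume constraint)}\\
  & G_j\bigl(\mathbf{u}(\boldsymbol{\rho}), \boldsymbol{\rho}\bigr) \le 0,
    \quad j = 1, \dots, m
    && \text{(further constraints, e.g.\ stress)}\\
  & \rho_e \in \{0, 1\}
    && \text{(material present/absent in element } e\text{)}
\end{aligned}
% In the continuous relaxation discussed below, \rho_e \in [0, 1] and the
% stiffness is interpolated, e.g.\ SIMP: E(\rho_e) = E_{\min} + \rho_e^{\,p}\,(E_0 - E_{\min}).
```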
The attainable topological complexity of the design depends on the number of elements, so a large number is preferred. Large numbers of finite elements increase the attainable topological complexity, but come at a cost. Firstly, solving the FEM system becomes more expensive. Secondly, algorithms that can handle a large number (several thousand elements is not uncommon) of discrete variables with multiple constraints are unavailable. Moreover, they are impractically sensitive to parameter variations. In the literature, problems with up to 30,000 variables have been reported. Solving the problem with continuous variables The complexities stated earlier in solving topology optimization problems using binary variables have caused the community to search for other options. One is the modelling of the densities with continuous variables. The material densities can now also attain values between zero and one. Gradient-based algorithms that handle large numbers of continuous variables and multiple constraints are available. But the material properties have to be modelled in a continuous setting. This is done through interpolation. One of the most implemented interpolation methodologies is the Solid Isotropic Material with Penalisation method (SIMP). This interpolation is essentially a power law . It interpolates the Young's modulus of the material to the scalar selection field. The value of the penalisation parameter is generally taken between . This has been shown to confirm the micro-structure of the materials. In the SIMP method a lower bound on the Young's modulus is added, , to make sure the derivatives of the objective function are non-zero when the density becomes zero. The higher the penalisation factor, the more SIMP penalises the algorithm in the use of non-binary densities. Unfortunately, the penalisation parameter also introduces non-convexities. Commercial software There are several commercial topology optimization software packages on the market. Most of them use topology optimization as a hint of how the optimal design should look, and manual geometry reconstruction is required. There are a few solutions which produce optimal designs ready for additive manufacturing. Examples Structural compliance A stiff structure is one that has the least possible displacement when given a certain set of boundary conditions. A global measure of the displacements is the strain energy (also called compliance) of the structure under the prescribed boundary conditions. The lower the strain energy, the higher the stiffness of the structure. So, the objective function of the problem is to minimize the strain energy. On a broad level, one can visualize that the more the material, the less the deflection, as there will be more material to resist the loads. So, the optimization requires an opposing constraint, the volume constraint. This is in reality a cost factor, as we would not want to spend a lot of money on the material. To obtain the total material utilized, an integration of the selection field over the volume can be done. Finally, the governing differential equations of elasticity are plugged in so as to obtain the final problem statement. subject to: But a straightforward implementation of such a problem in the finite element framework is still infeasible, owing to issues such as: Mesh dependency—Mesh dependency means that the design obtained on one mesh is not the one that will be obtained on another mesh. The features of the design become more intricate as the mesh gets refined. 
Numerical instabilities—The selection of regions in the form of a checkerboard pattern. Some techniques such as filtering based on image processing are currently being used to alleviate some of these issues. Although this seemed like a purely heuristic approach for a long time, theoretical connections to nonlocal elasticity have been made to support the physical sense of these methods. Multiphysics problems Fluid-structure-interaction Fluid-structure-interaction is a strongly coupled phenomenon and concerns the interaction between a stationary or moving fluid and an elastic structure. Many engineering applications and natural phenomena are subject to fluid-structure-interaction, and taking such effects into consideration is therefore critical in the design of many engineering applications. Topology optimisation for fluid-structure-interaction problems has been studied in several references. Design solutions computed for different Reynolds numbers show that the design solutions depend on the fluid flow, indicating that the coupling between the fluid and the structure is resolved in the design problems. Thermoelectric energy conversion Thermoelectricity is a multi-physics problem which concerns the interaction and coupling between electric and thermal energy in semiconducting materials. Thermoelectric energy conversion can be described by two separately identified effects: the Seebeck effect and the Peltier effect. The Seebeck effect concerns the conversion of thermal energy into electric energy, and the Peltier effect concerns the conversion of electric energy into thermal energy. By spatially distributing two thermoelectric materials in a two-dimensional design space with a topology optimisation methodology, it is possible to exceed the performance of the constitutive thermoelectric materials for thermoelectric coolers and thermoelectric generators. 3F3D Form Follows Force 3D Printing The current proliferation of 3D printer technology has allowed designers and engineers to use topology optimization techniques when designing new products. Topology optimization combined with 3D printing can result in less weight, improved structural performance and a shortened design-to-manufacturing cycle, as the designs, while efficient, might not be realisable with more traditional manufacturing techniques. Internal contact Internal contact can be included in topology optimization by applying the third medium contact method. The third medium contact (TMC) method is an implicit contact formulation that is continuous and differentiable. This makes TMC suitable for use with gradient-based approaches to topology optimization. Monolithic as well as staggered approaches, the latter being more common in topology optimization, have been used to create various designs with internal contact. References Further reading External links Topology optimization animations Mathematical optimization Topology Construction Structural engineering 3D printing
Topology optimization
[ "Physics", "Mathematics", "Engineering" ]
1,782
[ "Structural engineering", "Mathematical analysis", "Construction", "Topology", "Space", "Civil engineering", "Geometry", "Mathematical optimization", "Spacetime" ]
1,083,721
https://en.wikipedia.org/wiki/Cabibbo%E2%80%93Kobayashi%E2%80%93Maskawa%20matrix
In the Standard Model of particle physics, the Cabibbo–Kobayashi–Maskawa matrix, CKM matrix, quark mixing matrix, or KM matrix is a unitary matrix that contains information on the strength of the flavour-changing weak interaction. Technically, it specifies the mismatch of quantum states of quarks when they propagate freely and when they take part in the weak interactions. It is important in the understanding of CP violation. This matrix was introduced for three generations of quarks by Makoto Kobayashi and Toshihide Maskawa, adding one generation to the matrix previously introduced by Nicola Cabibbo. This matrix is also an extension of the GIM mechanism, which only includes two of the three current families of quarks. The matrix Predecessor – the Cabibbo matrix In 1963, Nicola Cabibbo introduced the Cabibbo angle (θ_c) to preserve the universality of the weak interaction. Cabibbo was inspired by previous work by Murray Gell-Mann and Maurice Lévy, on the effectively rotated nonstrange and strange vector and axial weak currents, which he references. In light of current concepts (quarks had not yet been proposed), the Cabibbo angle is related to the relative probability that down and strange quarks decay into up quarks (|V_ud|^2 and |V_us|^2, respectively). In particle physics terminology, the object that couples to the up quark via charged-current weak interaction is a superposition of down-type quarks, here denoted by d′. Mathematically this is: d′ = V_ud d + V_us s, or using the Cabibbo angle: d′ = cos(θ_c) d + sin(θ_c) s. Using the currently accepted values for |V_ud| and |V_us| (see below), the Cabibbo angle can be calculated using tan(θ_c) = |V_us|/|V_ud|. When the charm quark was discovered in 1974, it was noticed that the down and strange quark could transition into either the up or charm quark, leading to two sets of equations: d′ = V_ud d + V_us s and s′ = V_cd d + V_cs s, or using the Cabibbo angle: d′ = cos(θ_c) d + sin(θ_c) s and s′ = −sin(θ_c) d + cos(θ_c) s. This can also be written in matrix notation as: (d′, s′)ᵀ = (V_ud V_us; V_cd V_cs)(d, s)ᵀ, or using the Cabibbo angle (d′, s′)ᵀ = (cos θ_c sin θ_c; −sin θ_c cos θ_c)(d, s)ᵀ, where the various |V_ij|^2 represent the probability that the quark of flavor j decays into a quark of flavor i. This 2×2 rotation matrix is called the "Cabibbo matrix", and was subsequently expanded to the 3×3 CKM matrix. CKM matrix In 1973, observing that CP-violation could not be explained in a four-quark model, Kobayashi and Maskawa generalized the Cabibbo matrix into the Cabibbo–Kobayashi–Maskawa matrix (or CKM matrix) to keep track of the weak decays of three generations of quarks: (d′, s′, b′)ᵀ = V_CKM (d, s, b)ᵀ. On the left are the weak interaction doublet partners of down-type quarks, and on the right is the CKM matrix, along with a vector of mass eigenstates of down-type quarks. The CKM matrix describes the probability of a transition from one flavour-j quark to another flavour-i quark. These transitions are proportional to |V_ij|^2. As of 2023, the best determination of the individual magnitudes of the CKM matrix elements was: Using those values, one can check the unitarity of the CKM matrix. In particular, we find that the first-row matrix elements give a sum |V_ud|^2 + |V_us|^2 + |V_ub|^2 in line with the theoretical value of 1. The choice of usage of down-type quarks in the definition is a convention, and does not represent a physically preferred asymmetry between up-type and down-type quarks. Other conventions are equally valid: the mass eigenstates u, c, and t of the up-type quarks can equivalently define the matrix in terms of their weak interaction partners u′, c′, and t′. Since the CKM matrix is unitary, its inverse is the same as its conjugate transpose, which the alternate choices use; it appears as the same matrix, in a slightly altered form.
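A short numerical sketch of the 2×2 case above may help. The angle value is illustrative, the function names are invented for this example, and the parameter-counting helper anticipates the general-case formulas derived in the next section.

```python
import numpy as np

# Numerical sketch of the 2x2 Cabibbo matrix quoted above.
# The angle (~13 degrees) is illustrative, not a fitted result.
theta_c = np.radians(13.02)
cabibbo = np.array([[ np.cos(theta_c), np.sin(theta_c)],
                    [-np.sin(theta_c), np.cos(theta_c)]])

# Unitarity: V V^dagger must equal the identity matrix.
assert np.allclose(cabibbo @ cabibbo.conj().T, np.eye(2))

# |V_ij|^2 is a transition probability, so each row must sum to 1.
print((np.abs(cabibbo) ** 2).sum(axis=1))   # [1. 1.]

# Parameter counting for N generations (derived in the next section):
def ckm_parameters(n: int) -> tuple[int, int]:
    """Return (mixing angles, CP-violating phases) for n generations."""
    angles = n * (n - 1) // 2
    phases = (n - 1) * (n - 2) // 2
    return angles, phases

print(ckm_parameters(2))  # (1, 0): the Cabibbo angle only
print(ckm_parameters(3))  # (3, 1): three angles and one CP phase
```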
General case construction To generalize the matrix, count the number of physically important parameters in this matrix which appear in experiments. If there are N generations of quarks (2N flavours) then: An N × N unitary matrix (that is, a matrix V such that V†V = I, where V† is the conjugate transpose of V and I is the identity matrix) requires N^2 real parameters to be specified. 2N − 1 of these parameters are not physically significant, because one phase can be absorbed into each quark field (both of the mass eigenstates, and of the weak eigenstates), but the matrix is independent of a common phase. Hence, the total number of free variables independent of the choice of the phases of basis vectors is N^2 − (2N − 1) = (N − 1)^2. Of these, N(N − 1)/2 are rotation angles called quark mixing angles. The remaining (N − 1)(N − 2)/2 are complex phases, which cause CP violation. N = 2 For the case N = 2, there is only one parameter, which is a mixing angle between two generations of quarks. Historically, this was the first version of the CKM matrix when only two generations were known. It is called the Cabibbo angle after its inventor Nicola Cabibbo. N = 3 For the Standard Model case (N = 3), there are three mixing angles and one CP-violating complex phase. Observations and predictions Cabibbo's idea originated from a need to explain two observed phenomena: the transitions u ↔ d, e ↔ ν_e and μ ↔ ν_μ had similar amplitudes; the transitions with change in strangeness had amplitudes equal to 1/4 of those with no change in strangeness. Cabibbo's solution consisted of postulating weak universality (see below) to resolve the first issue, along with a mixing angle θ_c, now called the Cabibbo angle, between the d and s quarks to resolve the second. For two generations of quarks, there can be no CP violating phases, as shown by the counting of the previous section. Since CP violations had already been seen in 1964, in neutral kaon decays, the Standard Model that emerged soon after clearly indicated the existence of a third generation of quarks, as Kobayashi and Maskawa pointed out in 1973. The discovery of the bottom quark at Fermilab (by Leon Lederman's group) in 1976 therefore immediately started off the search for the top quark, the missing third-generation quark. Note, however, that the specific values that the angles take on are not a prediction of the standard model: they are free parameters. At present, there is no generally-accepted theory that explains why the angles should have the values that are measured in experiments. Weak universality The constraints of unitarity of the CKM-matrix on the diagonal terms can be written as the sum over k of |V_ik|^2 equalling 1, separately for each generation i. This implies that the sum of all couplings of any one of the up-type quarks to all the down-type quarks is the same for all generations. This relation is called weak universality and was first pointed out by Nicola Cabibbo in 1967. Theoretically it is a consequence of the fact that all SU(2) doublets couple with the same strength to the vector bosons of weak interactions. It has been subjected to continuing experimental tests. The unitarity triangles The remaining constraints of unitarity of the CKM-matrix can be written in the form: the sum over k of V_ik V*_jk vanishes. For any fixed and different i and j, this is a constraint on three complex numbers, one for each k, which says that these numbers form the sides of a triangle in the complex plane. There are six choices of i and j (three independent), and hence six such triangles, each of which is called a unitarity triangle. Their shapes can be very different, but they all have the same area, which can be related to the CP violating phase.
The area vanishes for the specific parameters in the Standard Model for which there would be no CP violation. The orientation of the triangles depends on the phases of the quark fields. A popular quantity amounting to twice the area of the unitarity triangle is the Jarlskog invariant J (introduced by Cecilia Jarlskog in 1985). For Greek indices denoting up quarks and Latin ones down quarks, the 4-tensor T(α, β; i, j) = Im(V_αi V_βj V*_αj V*_βi) is doubly antisymmetric: T(β, α; i, j) = −T(α, β; i, j) = T(α, β; j, i). Up to antisymmetry, it only has nine non-vanishing components, which, remarkably, from the unitarity of V, can be shown to be all identical in magnitude, so that J may be read off from any one of them. Since the three sides of the triangles are open to direct experiment, as are the three angles, a class of tests of the Standard Model is to check that the triangle closes. This is the purpose of a modern series of experiments under way at the Japanese BELLE and the American BaBar experiments, as well as at LHCb in CERN, Switzerland. Parameterizations Four independent parameters are required to fully define the CKM matrix. Many parameterizations have been proposed, and three of the most common ones are shown below. KM parameters The original parameterization of Kobayashi and Maskawa used three angles (θ_1, θ_2, θ_3) and a CP-violating phase angle (δ). θ_1 is the Cabibbo angle. For brevity, the cosines and sines of the angles are denoted c_k and s_k, for k = 1, 2, 3 respectively. "Standard" parameters A "standard" parameterization of the CKM matrix uses three Euler angles (θ_12, θ_23, θ_13) and one CP-violating phase (δ_13). θ_12 is the Cabibbo angle. Couplings between quark generations i and j vanish if θ_ij = 0. Cosines and sines of the angles are denoted c_ij and s_ij, respectively. The 2008 values for the standard parameters were: θ_12 = 13.04° ± 0.05°, θ_13 = 0.201° ± 0.011°, θ_23 = 2.38° ± 0.06° and δ_13 = 1.20 ± 0.08 radians = 68.8° ± 4.5°. Wolfenstein parameters A third parameterization of the CKM matrix was introduced by Lincoln Wolfenstein with the four real parameters λ, A, ρ, and η, which would all 'vanish' (would be zero) if there were no coupling. The four Wolfenstein parameters have the property that all are of order 1 and are related to the 'standard' parameterization by: s_12 = λ, s_23 = A λ^2, and s_13 e^(−iδ) = A λ^3 (ρ − iη). Although the Wolfenstein parameterization of the CKM matrix can be as exact as desired when carried to high order, it is mainly used for generating convenient approximations to the standard parameterization. The approximation to order λ^3, good to better than 0.3% accuracy, is: V ≈ (1 − λ^2/2, λ, Aλ^3(ρ − iη); −λ, 1 − λ^2/2, Aλ^2; Aλ^3(1 − ρ − iη), −Aλ^2, 1), with rows separated by semicolons. Rates of CP violation correspond to the parameters ρ and η. Using the values of the previous section for the CKM matrix, as of 2008 the best determination of the Wolfenstein parameter values is: λ = 0.22500 ± 0.0067, A = , ρ = 0.159 ± 0.010, and η = 0.348 ± 0.010. Nobel Prize In 2008, Kobayashi and Maskawa shared one half of the Nobel Prize in Physics "for the discovery of the origin of the broken symmetry which predicts the existence of at least three families of quarks in nature". Some physicists were reported to harbor bitter feelings about the fact that the Nobel Prize committee failed to reward the work of Cabibbo, whose prior work was closely related to that of Kobayashi and Maskawa. Asked for a reaction on the prize, Cabibbo preferred to give no comment. See also Formulation of the Standard Model and CP violations Quantum chromodynamics, flavour and strong CP problem Weinberg angle, a similar angle for Z and photon mixing Pontecorvo–Maki–Nakagawa–Sakata matrix, the equivalent mixing matrix for neutrinos Koide formula References Further reading and external links at SLAC, California, and at KEK, Japan. Standard Model Electroweak theory Matrices
Cabibbo–Kobayashi–Maskawa matrix
[ "Physics", "Mathematics" ]
2,354
[ "Standard Model", "Physical phenomena", "Mathematical objects", "Electroweak theory", "Matrices (mathematics)", "Fundamental interactions", "Particle physics" ]
1,083,982
https://en.wikipedia.org/wiki/Laser%20Doppler%20velocimetry
Laser Doppler velocimetry, also known as laser Doppler anemometry, is the technique of using the Doppler shift in a laser beam to measure the velocity in transparent or semi-transparent fluid flows or the linear or vibratory motion of opaque, reflecting surfaces. The measurement with laser Doppler anemometry is absolute and linear with velocity and requires no pre-calibration. Technology origin The development of the helium–neon laser (He-Ne) in 1962 at the Bell Telephone Laboratories provided the optics community with a continuous wave electromagnetic radiation source that was highly concentrated at a wavelength of 632.8 nanometers (nm) in the red portion of the visible spectrum. It was discovered that fluid flow measurements could be made using the Doppler effect on a He-Ne beam scattered by small polystyrene spheres in the fluid. At the Research Laboratories of Brown Engineering Company (later Teledyne Brown Engineering), this phenomenon was used to develop the first laser Doppler flowmeter using heterodyne signal processing. This instrument became known as the laser Doppler velocimeter and the technique was called laser Doppler velocimetry. It is also referred to as laser Doppler anemometry. Early laser Doppler velocimetry applications included measuring and mapping the exhaust from rocket engines with speeds up to 1000 m/s, as well as determining flow in a near-surface blood artery. Similar instruments were also developed for solid surface monitoring, with applications ranging from measuring product speeds in production lines of paper and steel mills to measuring vibration frequency and amplitude of surfaces. Operating principles In its simplest and most presently used form, laser Doppler velocimetry crosses two beams of collimated, monochromatic, and coherent laser light in the flow of the fluid being measured. The two beams are usually obtained by splitting a single beam, thus ensuring coherence between the two. Lasers with wavelengths in the visible spectrum (390–750 nm) are commonly used; these are typically He-Ne, Argon ion, or laser diode, allowing the beam path to be observed. A transmitting optics system focuses the beams to intersect at their waists (the focal point of a laser beam), where they interfere and generate a set of straight fringes. As particles (either naturally occurring or induced) entrained in the fluid pass through the fringes, they scatter light that is then collected by a receiving optics and focused on a photodetector (typically an avalanche photodiode). The scattered light fluctuates in intensity, the frequency of which is equivalent to the Doppler shift between the incident and scattered light, and is thus proportional to the component of particle velocity which lies in the plane of two laser beams. If the sensor is aligned to the flow such that the fringes are perpendicular to the flow direction, the electrical signal from the photodetector will then be proportional to the full particle velocity. By combining three devices (e.g., He-Ne, Argon ion, and laser diode) with different wavelengths, all three flow velocity components can be simultaneously measured. Another form of laser Doppler velocimetry, particularly used in early device developments, has a completely different approach akin to an interferometer. The sensor also splits the laser beam into two parts; one (the measurement beam) is focused into the flow and the second (the reference beam) passes outside the flow. 
A receiving optics provides a path that intersects the measurement beam, forming a small volume. Particles passing through this volume will scatter light from the measurement beam with a Doppler shift; a portion of this light is collected by the receiving optics and transferred to the photodetector. The reference beam is also sent to the photodetector where optical heterodyne detection produces an electrical signal proportional to the Doppler shift, by which the particle velocity component perpendicular to the plane of the beams can be determined. The signal detection scheme of the instrument uses the principle of optical heterodyne detection. This principle is similar to that of other laser Doppler-based instruments, such as the laser Doppler vibrometer or the laser surface velocimeter. It is possible to apply digital techniques to the signal to obtain the velocity as a measured fraction of the speed of light, and therefore in one sense laser Doppler velocimetry is a particularly fundamental measurement traceable to the S.I. system of measurement. Applications In the decades since laser Doppler velocimetry was first introduced, a wide variety of laser Doppler sensors have been developed and applied. Flow research Laser Doppler velocimetry is often chosen over other forms of flow measurement because the equipment can be outside of the flow being measured and therefore has no effect on the flow. Some typical applications include the following: Wind tunnel velocity experiments for testing aerodynamics of aircraft, missiles, cars, trucks, trains, and buildings and other structures Velocity measurements in water flows (research in general hydrodynamics, ship hull design, rotating machinery, pipe flows, channel flow, etc.) Fuel injection and spray research where there is a need to measure velocities inside engines or through nozzles Environmental research (combustion research, wave dynamics, coastal engineering, tidal modeling, river hydrology, etc.). One disadvantage has been that laser Doppler velocimetry sensors are range-dependent; they have to be calibrated minutely and the distances at which they measure have to be precisely defined. This distance restriction has recently been at least partially overcome with a new sensor that is range independent. Automation Laser Doppler velocimetry can be useful in automation, which includes the flow examples above. It can also be used to measure the speed of solid objects, like conveyor belts. This can be useful in situations where attaching a rotary encoder (or a different mechanical speed measurement device) to the conveyor belt is impossible or impractical. Medical applications Laser Doppler velocimetry is used in hemodynamics research as a technique to partially quantify blood flow in human tissues such as skin or the eye fundus. Within the clinical environment, the technology is often referred to as laser Doppler flowmetry; when images are made, it is referred to as laser Doppler imaging. The beam from a low-power laser (usually a laser diode) penetrates the skin sufficiently to be scattered with a Doppler shift by the red blood cells and return to be concentrated on a detector. These measurements are useful to monitor the effect of exercise, drug treatments, environmental, or physical manipulations on targeted micro-sized vascular areas.
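Stepping back to the dual-beam arrangement described under Operating principles, the quantitative link between the measured Doppler frequency and the flow velocity follows from the fringe spacing of the crossed beams. The sketch below assumes the standard fringe-model relation d = λ/(2 sin(θ/2)); the wavelength, crossing angle, and function name are illustrative choices, not the parameters of any particular instrument.

```python
import numpy as np

# Fringe-model check: two beams crossing at full angle theta produce
# fringes of spacing d = lambda / (2 sin(theta/2)); a particle crossing
# them scatters light modulated at f_D = v / d.

wavelength = 632.8e-9          # He-Ne wavelength, metres
full_angle = np.radians(10.0)  # angle between the two crossing beams

fringe_spacing = wavelength / (2 * np.sin(full_angle / 2))

def velocity_from_doppler(f_doppler_hz: float) -> float:
    """Velocity component perpendicular to the fringes (m/s)."""
    return f_doppler_hz * fringe_spacing

print(f"fringe spacing: {fringe_spacing * 1e6:.2f} um")   # ~3.6 um
print(f"v at f_D = 1 MHz: {velocity_from_doppler(1e6):.3f} m/s")
```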
The laser Doppler vibrometer is being used in clinical otology for the measurement of tympanic membrane (eardrum), malleus (hammer), and prosthesis head displacement in response to sound inputs of 80- to 100-dB sound-pressure level. It also has potential use in the operating room to perform measurements of prosthesis and stapes (stirrup) displacement. Navigation The Autonomous Landing Hazard Avoidance Technology used in NASA's Project Morpheus lunar lander to automatically find a safe landing place contains a lidar Doppler velocimeter that measures the vehicle's altitude and velocity. The AGM-129 ACM cruise missile uses a laser Doppler velocimeter for precise terminal guidance. Calibration and measurement Laser Doppler velocimetry is used in the analysis of vibration of MEMS devices, often to compare the performance of devices such as accelerometers-on-a-chip with their theoretical (calculated) modes of vibration. As a specific example in which the unique features of laser Doppler velocimetry are important, the measurement of velocity of a MEMS watt balance device has allowed greater accuracy in the measurement of small forces than previously possible, through directly measuring the ratio of this velocity to the speed of light. This is a fundamental, traceable measurement that now allows traceability of small forces to the S.I. system. See also Hot-wire anemometry Laser Doppler imaging Laser Doppler vibrometer Laser surface velocimeter Molecular tagging velocimetry Particle image velocimetry Particle tracking velocimetry Photon Doppler velocimetry Velocity interferometer system for any reflector (VISAR) References External links Laser applications Doppler effects Measurement Transport phenomena
Laser Doppler velocimetry
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
1,762
[ "Transport phenomena", "Physical phenomena", "Physical quantities", "Chemical engineering", "Quantity", "Astrophysics", "Size", "Measurement", "Doppler effects" ]
1,084,219
https://en.wikipedia.org/wiki/Gravimetry
Gravimetry is the measurement of the strength of a gravitational field. Gravimetry may be used when either the magnitude of a gravitational field or the properties of matter responsible for its creation are of interest. The study of gravity changes belongs to geodynamics. Units of measurement Gravity is usually measured in units of acceleration. In the SI system of units, the standard unit of acceleration is metres per second squared (m/s2). Other units include the cgs gal (sometimes known as a galileo, in either case with symbol Gal), which equals 1 centimetre per second squared, and the g (gn), equal to 9.80665 m/s2. The value of gn is defined as approximately equal to the acceleration due to gravity at the Earth's surface, although the actual acceleration varies slightly by location. Gravimeters A gravimeter is an instrument used to measure gravitational acceleration. Every mass has an associated gravitational potential. The gradient of this potential is a force. A gravimeter measures this gravitational force. For a small body, general relativity predicts gravitational effects indistinguishable from the effects of acceleration by the equivalence principle. Thus, gravimeters can be regarded as special-purpose accelerometers. Many weighing scales may be regarded as simple gravimeters. In one common form, a spring is used to counteract the force of gravity pulling on an object. The change in length of the spring may be calibrated to the force required to balance the gravitational pull. The resulting measurement may be made in units of force (such as the newton); however, gravimeters display their measurements in units of gals (cm/s2), and parts per million, parts per billion, or parts per trillion of the average vertical acceleration with respect to the Earth. Though similar in design to other accelerometers, gravimeters are typically designed to be much more sensitive. Their first uses were to measure the changes in gravity from the varying densities and distribution of masses inside the Earth, and from temporal tidal variations in the shape and distribution of mass in the oceans, atmosphere and Earth. The resolution of gravimeters can be increased by averaging samples over longer periods. Fundamental characteristics of gravimeters are the accuracy of a single measurement (a single sample) and the sampling rate. Besides precision, stability is also an important property for a gravimeter, as it allows the monitoring of gravity changes. These changes can be the result of mass displacements inside the Earth, or of vertical movements of the Earth's crust on which measurements are being made. The first gravimeters were vertical accelerometers, specialized for measuring the constant downward acceleration of gravity on the Earth's surface. The Earth's vertical gravity varies from place to place over its surface by about ±0.5%. It also varies at any location, by an amount expressible in nanometers per second squared, because of the changing positions of the Sun and Moon relative to the Earth. The majority of modern gravimeters use specially designed metal or quartz zero-length springs to support the test mass. The special property of these springs is that the natural resonant period of oscillation of the spring–mass system can be made very long, approaching a thousand seconds. This detunes the test mass from most local vibration and mechanical noise, increasing the sensitivity and utility of the gravimeter.
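The value of a long natural period can be made concrete with a small calculation: for a spring–mass sensor with angular frequency ω = 2π/T, a change δg in gravity shifts the equilibrium position by δx = δg/ω², so slower oscillators convert the same gravity change into a much larger, easier-to-read displacement. The numbers below are illustrative only.

```python
import numpy as np

# Equilibrium shift of a spring-mass sensor for a small gravity change.
# dx = dg / omega**2, with omega = 2*pi / T.

def displacement_per_gravity_change(period_s: float, dg: float) -> float:
    """Equilibrium shift (m) for a gravity change dg (m/s^2)."""
    omega = 2 * np.pi / period_s
    return dg / omega**2

dg = 1e-8  # one microgal, i.e. 1e-8 m/s^2
for T in (1.0, 20.0, 1000.0):
    dx = displacement_per_gravity_change(T, dg)
    print(f"T = {T:7.1f} s  ->  dx = {dx:.3e} m")
# A 1000 s period yields a shift ~1e6 times larger than a 1 s period.
```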
Quartz and metal springs are chosen for different reasons; quartz springs are less affected by magnetic and electric fields, while metal springs have a much lower drift due to elongation over time. The test mass is sealed in an air-tight container so that tiny changes of barometric pressure from blowing wind and other weather do not change the buoyancy of the test mass in air. Spring gravimeters are, in practice, relative instruments that measure the difference in gravity between different locations. A relative instrument also requires calibration by comparing instrument readings taken at locations with known absolute values of gravity. Absolute gravimeters provide such measurements by determining the gravitational acceleration of a test mass in a vacuum. A test mass is allowed to fall freely inside a vacuum chamber and its position is measured with a laser interferometer and timed with an atomic clock. The laser wavelength is known to ±0.025 ppb and the clock is stable to ±0.03 ppb. Care must be taken to minimize the effects of perturbing forces such as residual air resistance (even in a vacuum), vibration, and magnetic forces. Such instruments are capable of an accuracy of about 2 ppb or 0.002 mGal and reference their measurement to atomic standards of length and time. Their primary use is for calibrating relative instruments, monitoring crustal deformation, and in geophysical studies requiring high accuracy and stability. However, absolute instruments are somewhat larger and significantly more expensive than relative spring gravimeters, and are thus relatively rare. Relative gravimeters usually provide comparisons of gravity from one place to another. They are designed to subtract the average vertical gravity automatically. They can be calibrated at a location where the gravity is known accurately and then transported to where gravity is to be measured. Or they can be calibrated in absolute units at their operating location. Applications Researchers use more sophisticated gravimeters when precise measurements are needed. When measuring Earth's gravitational field, measurements are made to the precision of microgals to find density variations in the rocks making up the Earth. Several types of gravimeters exist for making these measurements, including some that are essentially refined versions of the spring scale described above. These measurements are used to quantify gravity anomalies. Gravimeters can detect vibrations and gravity changes from human activities. Depending on the interests of the researcher or operator, this might be counteracted by integral vibration isolation and signal processing. Gravimeters have been designed to mount in vehicles, including aircraft (note the field of aerogravity), ships and submarines. These special gravimeters isolate acceleration from the vehicle's movement and subtract it from measurements. The acceleration of the vehicles is often hundreds or thousands of times stronger than the changes in gravity being measured. The Lunar Surface Gravimeter was deployed on the surface of the Moon during the 1972 Apollo 17 mission but did not work due to a design error. A second device carried on the same mission, the Lunar Traverse Gravimeter, functioned as anticipated. Gravimeters are used for petroleum and mineral prospecting, seismology, geodesy, geophysical surveys and other geophysical research, and for metrology. Their fundamental purpose is to map the gravity field in space and time.
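The free-fall measurement described above amounts to fitting a parabola to interferometric position–time samples. The following sketch, with synthetic data and an invented noise level, shows the idea; it is not a model of any actual instrument's processing chain.

```python
import numpy as np

# Recovering g from a free-fall record: position samples z(t) of the
# falling test mass are fit with z = z0 + v0*t + 0.5*g*t**2, and the
# quadratic coefficient gives the acceleration.

rng = np.random.default_rng(0)
g_true = 9.81234567
t = np.linspace(0.0, 0.2, 200)                         # a ~20 cm drop
z = 0.5 * g_true * t**2 + rng.normal(0, 1e-9, t.size)  # nm-level noise

coeffs = np.polyfit(t, z, 2)   # highest power first: [0.5*g, v0, z0]
g_fit = 2 * coeffs[0]
print(f"recovered g = {g_fit:.7f} m/s^2")   # close to g_true
```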
Most current work is Earth-based, with a few satellites around Earth, but gravimeters are also applicable to the Moon, Sun, planets, asteroids, stars, galaxies and other bodies. Gravitational wave experiments monitor the changes with time in the gravitational potential itself, rather than the gradient of the potential that the gravimeter is tracking. This distinction is somewhat arbitrary. The subsystems of the gravitational radiation experiments are very sensitive to changes in the gradient of the potential. The local gravity signals on Earth that interfere with gravitational wave experiments are disparagingly referred to as "Newtonian noise", since Newtonian gravity calculations are sufficient to characterize many of the local (earth-based) signals. Commercial absolute gravimeters Gravimeters for measuring the Earth's gravity as precisely as possible are getting smaller and more portable. A common type measures the acceleration of small masses free falling in a vacuum, when the accelerometer is firmly attached to the ground. The mass includes a retroreflector and terminates one arm of a Michelson interferometer. By counting and timing the interference fringes, the acceleration of the mass can be measured. A more recent development is a "rise and fall" version that tosses the mass upward and measures both upward and downward motion. This allows cancellation of some measurement errors; however, "rise and fall" gravimeters are not yet in common use. Absolute gravimeters are used in the calibration of relative gravimeters, surveying for gravity anomalies (voids), and for establishing the vertical control network. Atom interferometric and atomic fountain methods are used for precise measurement of the Earth's gravity, and atomic clocks and purpose-built instruments can use time dilation (also called general relativistic) measurements to track changes in the gravitational potential and gravitational acceleration on the Earth. The term "absolute" does not convey the instrument's stability, sensitivity, accuracy, ease of use, and bandwidth. The words "absolute" and "relative" should not be used when more specific characteristics can be given. Relative gravimeters The most common gravimeters are spring-based. They are used in gravity surveys over large areas for establishing the figure of the geoid over those areas. They are basically a weight on a spring, and by measuring the amount by which the weight stretches the spring, local gravity can be measured. However, the strength of the spring must be calibrated by placing the instrument in a location with a known gravitational acceleration. The current standard for sensitive gravimeters are the superconducting gravimeters, which operate by suspending a superconducting niobium sphere in an extremely stable magnetic field; the current required to generate the magnetic field that suspends the niobium sphere is proportional to the strength of the Earth's gravitational acceleration. The superconducting gravimeter achieves sensitivities of one nanogal, approximately one trillionth (10^-12) of the Earth's surface gravity. In a demonstration of the sensitivity of the superconducting gravimeter, Virtanen (2006) describes how an instrument at Metsähovi, Finland, detected the gradual increase in surface gravity as workmen cleared snow from its laboratory roof. The largest component of the signal recorded by a superconducting gravimeter is the tidal gravity of the Sun and Moon acting at the station.
This amounts to a signal measured in nanometers per second squared at most locations. "SGs", as they are called, can detect and characterize Earth tides, changes in the density of the atmosphere, the effect of changes in the shape of the surface of the ocean, the effect of the atmosphere's pressure on the Earth, changes in the rate of rotation of the Earth, oscillations of the Earth's core, distant and nearby seismic events, and more. Many broadband three-axis seismometers in common use are sensitive enough to track the Sun and Moon. When operated to report acceleration, they are useful gravimeters. Because they have three axes, it is possible to solve for their position and orientation, by either tracking the arrival time and pattern of seismic waves from earthquakes, or by referencing them to the Sun and Moon tidal gravity. Recently, the SGs, and broadband three-axis seismometers operated in gravimeter mode, have begun to detect and characterize the small gravity signals from earthquakes. These signals arrive at the gravimeter at the speed of light, so have the potential to improve earthquake early warning methods. There is some activity to design purpose-built gravimeters of sufficient sensitivity and bandwidth to detect these prompt gravity signals from earthquakes, not just the magnitude 7+ events but also the smaller, much more frequent events. Newer MEMS gravimeters and atom gravimeters: MEMS gravimeters offer the potential for low-cost arrays of sensors. MEMS gravimeters are currently variations on spring-type accelerometers where the motions of a tiny cantilever or mass are tracked to report acceleration. Much of the research is focused on different methods of detecting the position and movements of these small masses. In atom gravimeters, the mass is a collection of atoms. For a given restoring force, the central frequency of the instrument is often given by (in radians per second) ω = sqrt(k/m). The term for the "force constant" changes if the restoring force is electrostatic, magnetostatic, electromagnetic, optical, microwave, acoustic, or any of dozens of different ways to keep the mass stationary. The "force constant" is just the coefficient of the displacement term in the equation of motion: m a + b v + k x (+ higher derivatives of the restoring force) = F(x, t), where m is the mass, a the acceleration, b the viscosity, v the velocity, k the force constant, and x the displacement, and F is the external force as a function of location/position and time. F is the force being measured, and F/m is the acceleration. Precise GPS stations can be operated as gravimeters since they are increasingly measuring three-axis positions over time, which, when differentiated twice, give an acceleration signal. The satellite-borne gravimeters GOCE and GRACE mostly operated in gravity gradiometer mode. They yielded detailed information about the Earth's time-varying gravity field. The spherical harmonic gravitational potential models are slowly improving in both spatial and temporal resolution. Taking the gradient of the potentials gives an estimate of local acceleration, which is what is measured by the gravimeter arrays. The superconducting gravimeter network has been used to ground-truth the satellite potentials. This should eventually improve both the satellite and Earth-based methods and intercomparisons. Transportable relative gravimeters also exist; they employ an extremely stable inertial platform to compensate for the masking effects of motion and vibration, a difficult engineering feat.
The first transportable relative gravimeters were, reportedly, a secret military technology developed in the 1950s–1960s as a navigational aid for nuclear submarines. Subsequently, in the 1980s, transportable relative gravimeters were reverse-engineered by the civilian sector for use on ships, then in airborne and finally satellite-borne gravity surveys. Microgravimetry Microgravimetry is an important branch developed on the foundation of classical gravimetry. Microgravity investigations are carried out in order to solve various problems of engineering geology, mainly the location of voids and their monitoring. Very detailed measurements of high accuracy can indicate voids of any origin, provided the size and depth are large enough to produce a gravity effect stronger than the level of confidence of the relevant gravity signal. History The modern gravimeter was developed by Lucien LaCoste and Arnold Romberg in 1936. They also invented most subsequent refinements, including the ship-mounted gravimeter in 1965, temperature-resistant instruments for deep boreholes, and lightweight hand-carried instruments. Most of their designs remain in use, with refinements in data collection and data processing. Satellite gravimetry Currently, the static and time-variable Earth's gravity field parameters are determined using modern satellite missions, such as GOCE, CHAMP, Swarm, GRACE and GRACE-FO. The lowest-degree parameters, including the Earth's oblateness and geocenter motion, are best determined from satellite laser ranging. Large-scale gravity anomalies can be detected from space as a by-product of satellite gravity missions, e.g., GOCE. These satellite missions aim at the recovery of a detailed gravity field model of the Earth, typically presented in the form of a spherical-harmonic expansion of the Earth's gravitational potential, but alternative presentations, such as maps of geoid undulations or gravity anomalies, are also produced. The Gravity Recovery and Climate Experiment (GRACE) consisted of two satellites that detected gravitational changes across the Earth. These changes could also be presented as temporal variations of gravity anomalies. The Gravity Recovery and Interior Laboratory (GRAIL) also consisted of two spacecraft orbiting the Moon, which orbited for about a year before being deliberately deorbited in December 2012. See also Gravity measurement with pendulums Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) A modern satellite-borne gradiometer containing pairs of gravimeters (accelerometers), launched March 2009 Gravity map GRACE and GRACE-FO, spacecraft launched March 2002 Notes References Geodesy
Gravimetry
[ "Mathematics" ]
3,314
[ "Applied mathematics", "Geodesy" ]
1,085,343
https://en.wikipedia.org/wiki/Semi-empirical%20mass%20formula
In nuclear physics, the semi-empirical mass formula (SEMF) (sometimes also called the Weizsäcker formula, Bethe–Weizsäcker formula, or Bethe–Weizsäcker mass formula to distinguish it from the Bethe–Weizsäcker process) is used to approximate the mass of an atomic nucleus from its number of protons and neutrons. As the name suggests, it is based partly on theory and partly on empirical measurements. The formula represents the liquid-drop model proposed by George Gamow, which can account for most of the terms in the formula and gives rough estimates for the values of the coefficients. It was first formulated in 1935 by German physicist Carl Friedrich von Weizsäcker, and although refinements have been made to the coefficients over the years, the structure of the formula remains the same today. The formula gives a good approximation for atomic masses and thereby other effects. However, it fails to explain the existence of lines of greater binding energy at certain numbers of protons and neutrons. These numbers, known as magic numbers, are the foundation of the nuclear shell model. Liquid-drop model The liquid-drop model was first proposed by George Gamow and further developed by Niels Bohr, John Archibald Wheeler and Lise Meitner. It treats the nucleus as a drop of incompressible fluid of very high density, held together by the nuclear force (a residual effect of the strong force); there is a similarity to the structure of a spherical liquid drop. While a crude model, the liquid-drop model accounts for the spherical shape of most nuclei and makes a rough prediction of binding energy. The corresponding mass formula is defined purely in terms of the numbers of protons and neutrons it contains. The original Weizsäcker formula defines five terms: Volume energy, when an assembly of nucleons of the same size is packed together into the smallest volume, each interior nucleon has a certain number of other nucleons in contact with it. So, this nuclear energy is proportional to the volume. Surface energy corrects for the previous assumption made that every nucleon interacts with the same number of other nucleons. This term is negative and proportional to the surface area, and is therefore roughly equivalent to liquid surface tension. Coulomb energy, the potential energy from each pair of protons. As this is a repelling force, the binding energy is reduced. Asymmetry energy (also called Pauli energy), which accounts for the Pauli exclusion principle. Unequal numbers of neutrons and protons imply filling higher energy levels for one type of particle, while leaving lower energy levels vacant for the other type. Pairing energy, which accounts for the tendency of proton pairs and neutron pairs to occur. An even number of particles is more stable than an odd number due to spin coupling. Formula The mass of an atomic nucleus, for N neutrons, Z protons, and therefore A = N + Z nucleons, is given by m = Z m_p + N m_n − E_B(N, Z)/c^2, where m_n and m_p are the rest mass of a neutron and a proton respectively, and E_B is the binding energy of the nucleus. The semi-empirical mass formula states the binding energy is E_B = a_V A − a_S A^(2/3) − a_C Z(Z − 1)/A^(1/3) − a_A (N − Z)^2/A + δ(N, Z). The term δ(N, Z) is either zero or ±δ_0, depending on the parity of N and Z, where δ_0 = a_P A^(k_P) for some exponent k_P. Note that as A = N + Z, the numerator of the asymmetry term can be rewritten as (A − 2Z)^2. Each of the terms in this formula has a theoretical basis. The coefficients a_V, a_S, a_C, a_A, and a_P are determined empirically; while they may be derived from experiment, they are typically derived from a least-squares fit to contemporary data.
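The formula is easy to evaluate numerically. The sketch below uses one commonly quoted set of fitted coefficients (values in MeV, with a pairing exponent of −1/2); published fits differ slightly, so the numbers should be read as illustrative.

```python
# Semi-empirical binding energy, as written out above.
# Coefficients (MeV) from one common least-squares fit; illustrative only.
A_V, A_S, A_C, A_A, A_P = 15.75, 17.8, 0.711, 23.7, 11.18

def binding_energy(Z: int, N: int) -> float:
    """Semi-empirical binding energy E_B in MeV."""
    A = Z + N
    pairing = 0.0
    if Z % 2 == 0 and N % 2 == 0:
        pairing = +A_P * A**-0.5      # even-even: extra binding
    elif Z % 2 == 1 and N % 2 == 1:
        pairing = -A_P * A**-0.5      # odd-odd: reduced binding
    return (A_V * A
            - A_S * A**(2/3)
            - A_C * Z * (Z - 1) / A**(1/3)
            - A_A * (N - Z)**2 / A
            + pairing)

# Iron-56: the formula gives ~495 MeV; the measured value is about 492 MeV
# (8.79 MeV per nucleon), so the fit is good to better than one percent here.
print(binding_energy(26, 30))
```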
While typically expressed by its basic five terms, further terms exist to explain additional phenomena. Akin to how changing a polynomial fit will change its coefficients, the interplay between these coefficients as new phenomena are introduced is complex; some terms influence each other, whereas the pairing term is largely independent. Volume term The term a_V A is known as the volume term. The volume of the nucleus is proportional to A, so this term is proportional to the volume, hence the name. The basis for this term is the strong nuclear force. The strong force affects both protons and neutrons, and as expected, this term is independent of Z. Because the number of pairs that can be taken from A particles is A(A − 1)/2, one might expect a term proportional to A^2. However, the strong force has a very limited range, and a given nucleon may only interact strongly with its nearest neighbors and next nearest neighbors. Therefore, the number of pairs of particles that actually interact is roughly proportional to A, giving the volume term its form. The coefficient a_V is smaller than the binding energy possessed by the nucleons with respect to their neighbors, which is of order of 40 MeV. This is because the larger the number of nucleons in the nucleus, the larger their kinetic energy is, due to the Pauli exclusion principle. If one treats the nucleus as a Fermi ball of A nucleons, with equal numbers of protons and neutrons, then the total kinetic energy is (3/5) A ε_F, with ε_F the Fermi energy, which is estimated as 38 MeV. Thus the expected value of a_V in this model is not far from the measured value. Surface term The term a_S A^(2/3) is known as the surface term. This term, also based on the strong force, is a correction to the volume term. The volume term suggests that each nucleon interacts with a constant number of nucleons, independent of A. While this is very nearly true for nucleons deep within the nucleus, those nucleons on the surface of the nucleus have fewer nearest neighbors, justifying this correction. This can also be thought of as a surface-tension term, and indeed a similar mechanism creates surface tension in liquids. If the volume of the nucleus is proportional to A, then the radius should be proportional to A^(1/3) and the surface area to A^(2/3). This explains why the surface term is proportional to A^(2/3). It can also be deduced that a_S should have a similar order of magnitude to a_V. Coulomb term The term a_C Z(Z − 1)/A^(1/3) or a_C Z^2/A^(1/3) is known as the Coulomb or electrostatic term. The basis for this term is the electrostatic repulsion between protons. To a very rough approximation, the nucleus can be considered a sphere of uniform charge density. The potential energy of such a charge distribution can be shown to be E = (3/5) (1/(4πε_0)) (Q^2/R), where Q is the total charge, and R is the radius of the sphere. The value of a_C can be approximately calculated by using this equation to calculate the potential energy, using an empirical nuclear radius of R ≈ r_0 A^(1/3) and Q = Ze. However, because electrostatic repulsion will only exist for more than one proton, Z^2 becomes Z(Z − 1): a_C = (3/5) e^2/(4πε_0 r_0), where now the electrostatic Coulomb constant is k = 1/(4πε_0). Using the fine-structure constant, we can rewrite the value of a_C as a_C = (3/5) α ħc/r_0, where α is the fine-structure constant, and r_0 A^(1/3) is the radius of a nucleus, giving r_0 to be approximately 1.25 femtometers. Equivalently, writing ħc in terms of the proton reduced Compton wavelength ƛ_p = ħ/(m_p c), where m_p is the proton mass, gives a_C = (3/5) α (m_p c^2) ƛ_p/r_0. This gives an approximate theoretical value of 0.691 MeV, not far from the measured value. Asymmetry term The term a_A (N − Z)^2/A is known as the asymmetry term (or Pauli term). The theoretical justification for this term is more complex.
The Pauli exclusion principle states that no two identical fermions can occupy exactly the same quantum state in an atom. At a given energy level, there are only finitely many quantum states available for particles. What this means in the nucleus is that as more particles are "added", these particles must occupy higher energy levels, increasing the total energy of the nucleus (and decreasing the binding energy). Note that this effect is not based on any of the fundamental forces (gravitational, electromagnetic, etc.), only the Pauli exclusion principle. Protons and neutrons, being distinct types of particles, occupy different quantum states. One can think of two different "pools" of states, one for protons and one for neutrons. Now, for example, if there are significantly more neutrons than protons in a nucleus, some of the neutrons will be higher in energy than the available states in the proton pool. If we could move some particles from the neutron pool to the proton pool, in other words, change some neutrons into protons, we would significantly decrease the energy. The imbalance between the number of protons and neutrons causes the energy to be higher than it needs to be, for a given number of nucleons. This is the basis for the asymmetry term. The actual form of the asymmetry term can again be derived by modeling the nucleus as a Fermi ball of Z protons and N neutrons. Its total kinetic energy is E_k = (3/5)(Z ε_F,p + N ε_F,n), where ε_F,p and ε_F,n are the Fermi energies of the protons and neutrons. Since these are proportional to Z^(2/3) and N^(2/3) respectively, one gets E_k = C (Z^(5/3) + N^(5/3)) for some constant C. The leading terms in the expansion in the difference N − Z are then E_k = (C/2^(2/3)) (A^(5/3) + (5/9)(N − Z)^2/A^(1/3)) + O((N − Z)^4). At the zeroth order in the expansion the kinetic energy is just the overall Fermi energy ε_F multiplied by (3/5)A. Thus we get E_k ≈ (3/5) ε_F A + (1/3) ε_F (N − Z)^2/A. The first term contributes to the volume term in the semi-empirical mass formula, and the second term is minus the asymmetry term (remember, the kinetic energy contributes to the total binding energy with a negative sign). ε_F is 38 MeV, so calculating a_A from the equation above, we get only half the measured value. The discrepancy is explained by our model not being accurate: nucleons in fact interact with each other and are not spread evenly across the nucleus. For example, in the shell model, a proton and a neutron with overlapping wavefunctions will have a greater strong interaction between them and stronger binding energy. This makes it energetically favourable (i.e. having lower energy) for protons and neutrons to have the same quantum numbers (other than isospin), and thus increases the energy cost of asymmetry between them. One can also understand the asymmetry term intuitively as follows. It should be dependent on the absolute difference |N − Z|, and the form (N − Z)^2/A is simple and differentiable, which is important for certain applications of the formula. In addition, small differences between Z and N do not have a high energy cost. The A in the denominator reflects the fact that a given difference N − Z is less significant for larger values of A. Pairing term The term δ(N, Z) is known as the pairing term (possibly also known as the pairwise interaction). This term captures the effect of spin coupling. It is given by δ(N, Z) = +δ_0 for even Z and N, −δ_0 for odd Z and N, and zero otherwise, where δ_0 is found empirically to have a value of about 1000 keV, slowly decreasing with mass number A. The binding energy may be increased by converting one of the odd protons or neutrons into a neutron or proton, so the odd nucleon can form a pair with its odd neighbour, forming an even Z, N.
The pairs have overlapping wave functions and sit very close together, with a bond stronger than any other configuration. When the pairing term is substituted into the binding energy equation, for even Z, N the pairing term adds binding energy, and for odd Z, N the pairing term removes binding energy. The dependence on mass number is commonly parametrized as δ_0 = a_P A^(k_P). The value of the exponent k_P is determined from experimental binding-energy data. In the past its value was often assumed to be −3/4, but modern experimental data indicate that a value of −1/2 is nearer the mark: δ_0 = a_P A^(−1/2) or δ_0 = a_P A^(−3/4). Due to the Pauli exclusion principle the nucleus would have a lower energy if the number of protons with spin up were equal to the number of protons with spin down. This is also true for neutrons. Only if both Z and N are even can both protons and neutrons have equal numbers of spin-up and spin-down particles. This is a similar effect to the asymmetry term. The factor A^(k_P) is not easily explained theoretically. The Fermi-ball calculation we have used above, based on the liquid-drop model but neglecting interactions, will give an A^(−1) dependence, as in the asymmetry term. This means that the actual effect for large nuclei will be larger than expected by that model. This should be explained by the interactions between nucleons. For example, in the shell model, two protons with the same quantum numbers (other than spin) will have completely overlapping wavefunctions and will thus have greater strong interaction between them and stronger binding energy. This makes it energetically favourable (i.e. having lower energy) for protons to form pairs of opposite spin. The same is true for neutrons. Calculating coefficients The coefficients are calculated by fitting to experimentally measured masses of nuclei. Their values can vary depending on how they are fitted to the data and which unit is used to express the mass. Several examples are shown below. The formula does not consider the internal shell structure of the nucleus. The semi-empirical mass formula therefore provides a good fit to heavier nuclei, and a poor fit to very light nuclei, especially 4He. For light nuclei, it is usually better to use a model that takes this shell structure into account. Examples of consequences of the formula By maximizing E_B(A, Z) with respect to Z, one would find the best neutron–proton ratio N/Z for a given atomic weight A. We get N/Z ≈ 1 + (a_C/(2 a_A)) A^(2/3). This is roughly 1 for light nuclei, but for heavy nuclei the ratio grows in good agreement with experiment. By substituting the above value of Z back into E_B, one obtains the binding energy as a function of the atomic weight, E_B(A). Maximizing E_B(A)/A with respect to A gives the nucleus which is most strongly bound, i.e. most stable. The value we get is A = 63 (copper), close to the measured values of A = 62 (nickel) and A = 58 (iron). The liquid-drop model also allows the computation of fission barriers for nuclei, which determine the stability of a nucleus against spontaneous fission. It was originally speculated that elements beyond atomic number 104 could not exist, as they would undergo fission with very short half-lives, though this formula did not consider stabilizing effects of closed nuclear shells. A modified formula considering shell effects reproduces known data and the predicted island of stability (in which fission barriers and half-lives are expected to increase, reaching a maximum at the shell closures), though it also suggests a possible limit to the existence of superheavy nuclei beyond Z = 120 and N = 184.
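Both consequences can be checked numerically with the same illustrative coefficients used in the earlier sketch; the helper names are invented for this example, and the pairing term is omitted for simplicity.

```python
import numpy as np

# Numerical check of the two consequences above (coefficients in MeV).
A_V, A_S, A_C, A_A = 15.75, 17.8, 0.711, 23.7

def eb(Z, A):
    """Semi-empirical binding energy without the pairing term (MeV)."""
    return (A_V * A - A_S * A**(2/3)
            - A_C * Z * (Z - 1) / A**(1/3)
            - A_A * (A - 2 * Z)**2 / A)

def best_z(A):
    """Proton number that maximizes the binding energy for fixed A."""
    zs = np.arange(1, A)
    return int(zs[np.argmax([eb(z, A) for z in zs])])

z = best_z(200)
print(z, (200 - z) / z)   # Z near 80, so N/Z is roughly 1.5

# Most strongly bound nucleus: maximize binding energy per nucleon.
ba, a_best = max((eb(best_z(A), A) / A, A) for A in range(10, 150))
print(a_best, ba)         # peak in the A ~ 55-65 region
```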
References Sources External links Nuclear liquid drop model in the hyperphysics online reference at Georgia State University. Liquid drop model with parameter fit from First Observations of Excited States in the Neutron Deficient Nuclei 160,161W and 159Ta, Alex Keenan, PhD thesis, University of Liverpool, 1999 (HTML version). Nuclear physics Nuclear chemistry Radiochemistry Mass Gamma rays
Semi-empirical mass formula
[ "Physics", "Chemistry", "Mathematics" ]
2,976
[ "Scalar physical quantities", "Physical quantities", "Spectrum (physical sciences)", "Nuclear chemistry", "Quantity", "Mass", "Electromagnetic spectrum", "Size", "Gamma rays", "Radioactivity", "Nuclear physics", "Radiochemistry", "nan", "Wikipedia categories named after physical quantities...
1,085,417
https://en.wikipedia.org/wiki/Molisch%27s%20test
Molisch's test is a sensitive chemical test, named after Austrian botanist Hans Molisch, for the presence of carbohydrates, based on the dehydration of the carbohydrate by sulfuric acid or hydrochloric acid to produce an aldehyde, which condenses with two molecules of a phenol (usually α-naphthol, though other phenols such as resorcinol and thymol also give colored products), resulting in a violet ring. Procedure The test solution is combined with a small amount of Molisch's reagent (α-naphthol dissolved in ethanol) in a test tube. After mixing, a small amount of concentrated sulfuric acid is slowly added down the sides of the sloping test-tube, without mixing, to form a layer. A positive reaction is indicated by appearance of a purple red ring at the interface between the acid and test layers. Reaction All carbohydrates – monosaccharides, disaccharides, and polysaccharides (except trioses and tetroses)– should give a positive reaction, and nucleic acids and glycoproteins also give a positive reaction, as all these compounds are eventually hydrolyzed to monosaccharides by strong mineral acids. Pentoses are then dehydrated to furfural, while hexoses are dehydrated to 5-hydroxymethylfurfural. Either of these aldehydes, if present, will condense with two molecules of α-naphthol to form a purple-colored product, as illustrated below by the example of glucose: See also Rapid furfural test References Biochemistry detection methods Carbohydrate methods
Molisch's test
[ "Chemistry", "Biology" ]
369
[ "Biochemistry methods", "Biochemistry detection methods", "Chemical tests", "Carbohydrate chemistry", "Carbohydrate methods" ]
1,085,606
https://en.wikipedia.org/wiki/Hypervalent%20molecule
In chemistry, a hypervalent molecule (the phenomenon is sometimes colloquially known as expanded octet) is a molecule that contains one or more main group elements apparently bearing more than eight electrons in their valence shells. Phosphorus pentachloride (PCl5), sulfur hexafluoride (SF6), chlorine trifluoride (ClF3), the chlorite (ClO2−) ion in chlorous acid and the triiodide (I3−) ion are examples of hypervalent molecules. Definitions and nomenclature Hypervalent molecules were first formally defined by Jeremy I. Musher in 1969 as molecules having central atoms of group 15–18 in any valence other than the lowest (i.e. 3, 2, 1, 0 for groups 15, 16, 17, 18 respectively, based on the octet rule). Several specific classes of hypervalent molecules exist: Hypervalent iodine compounds are useful reagents in organic chemistry (e.g. Dess–Martin periodinane) Tetra-, penta- and hexavalent phosphorus, silicon, and sulfur compounds (e.g. PCl5, PF5, SF6, sulfuranes and persulfuranes) Noble gas compounds (e.g. xenon tetrafluoride, XeF4) Halogen polyfluorides (e.g. chlorine pentafluoride, ClF5) N-X-L notation N-X-L nomenclature, introduced collaboratively by the research groups of Martin, Arduengo, and Kochi in 1980, is often used to classify hypervalent compounds of main group elements, where: N represents the number of valence electrons X is the chemical symbol of the central atom L the number of ligands to the central atom Examples of N-X-L nomenclature include: XeF2, 10-Xe-2 PCl5, 10-P-5 SF6, 12-S-6 IF7, 14-I-7 History and controversy The debate over the nature and classification of hypervalent molecules goes back to Gilbert N. Lewis and Irving Langmuir and the debate over the nature of the chemical bond in the 1920s. Lewis maintained the importance of the two-center two-electron (2c-2e) bond in describing hypervalence, thus using expanded octets to account for such molecules. Using the language of orbital hybridization, the bonds of molecules like PF5 and SF6 were said to be constructed from sp3dn orbitals on the central atom. Langmuir, on the other hand, upheld the dominance of the octet rule and preferred the use of ionic bonds to account for hypervalence without violating the rule (e.g. "SF42+ 2F−" for SF6). In the late 1920s and 1930s, Sugden argued for the existence of a two-center one-electron (2c-1e) bond and thus rationalized bonding in hypervalent molecules without the need for expanded octets or ionic bond character; this was poorly accepted at the time. In the 1940s and 1950s, Rundle and Pimentel popularized the idea of the three-center four-electron bond, which is essentially the same concept which Sugden attempted to advance decades earlier; the three-center four-electron bond can be alternatively viewed as consisting of two collinear two-center one-electron bonds, with the remaining two nonbonding electrons localized to the ligands. The attempt to actually prepare hypervalent organic molecules began with Hermann Staudinger and Georg Wittig in the first half of the twentieth century, who sought to challenge the extant valence theory and successfully prepare nitrogen- and phosphorus-centered hypervalent molecules. The theoretical basis for hypervalency was not delineated until J.I. Musher's work in 1969. In 1990, Magnusson published a seminal work definitively excluding the significance of d-orbital hybridization in the bonding of hypervalent compounds of second-row elements. This had long been a point of contention and confusion in describing these molecules using molecular orbital theory.
Part of the confusion here originates from the fact that one must include d-functions in the basis sets used to describe these compounds (or else unreasonably high energies and distorted geometries result), and the contribution of the d-function to the molecular wavefunction is large. These facts were historically interpreted to mean that d-orbitals must be involved in bonding. However, Magnusson concludes in his work that d-orbital involvement is not implicated in hypervalency. Nevertheless, a 2013 study showed that although the Pimentel ionic model best accounts for the bonding of hypervalent species, the energetic contribution of an expanded octet structure is also not null. In this modern valence bond theory study of the bonding of xenon difluoride, it was found that ionic structures account for about 81% of the overall wavefunction, of which 70% arises from ionic structures employing only the p orbital on xenon while 11% arises from ionic structures employing an hybrid on xenon. The contribution of a formally hypervalent structure employing an orbital of sp3d hybridization on xenon accounts for 11% of the wavefunction, with a diradical contribution making up the remaining 8%. The 11% sp3d contribution results in a net stabilization of the molecule by mol−1, a minor but significant fraction of the total energy of the total bond energy ( mol−1). Other studies have similarly found minor but non-negligible energetic contributions from expanded octet structures in SF6 (17%) and XeF6 (14%). Despite the lack of chemical realism, the IUPAC recommends the drawing of expanded octet structures for functional groups like sulfones and phosphoranes, in order to avoid the drawing of a large number of formal charges or partial single bonds. Hypervalent hydrides A special type of hypervalent molecules is hypervalent hydrides. Most known hypervalent molecules contain substituents more electronegative than their central atoms. Hypervalent hydrides are of special interest because hydrogen is usually less electronegative than the central atom. A number of computational studies have been performed on chalcogen hydrides and pnictogen hydrides. Recently, a new computational study has showed that most hypervalent halogen hydrides XHn can exist. It is suggested that IH3 and IH5 are stable enough to be observable or, possibly, even isolable. Criticism Both the term and concept of hypervalency still fall under criticism. In 1984, in response to this general controversy, Paul von Ragué Schleyer proposed the replacement of 'hypervalency' with use of the term hypercoordination because this term does not imply any mode of chemical bonding and the question could thus be avoided altogether. The concept itself has been criticized by Ronald Gillespie who, based on an analysis of electron localization functions, wrote in 2002 that "as there is no fundamental difference between the bonds in hypervalent and non-hypervalent (Lewis octet) molecules there is no reason to continue to use the term hypervalent." For hypercoordinated molecules with electronegative ligands such as PF5, it has been demonstrated that the ligands can pull away enough electron density from the central atom so that its net content is again 8 electrons or fewer. Consistent with this alternative view is the finding that hypercoordinated molecules based on fluorine ligands, for example PF5 do not have hydride counterparts, e.g. phosphorane (PH5) which is unknown. The ionic model holds up well in thermochemical calculations. 
It predicts favorable exothermic formation of from phosphorus trifluoride PF3 and fluorine F2 whereas a similar reaction forming is not favorable. Alternative definition Durrant has proposed an alternative definition of hypervalency, based on the analysis of atomic charge maps obtained from atoms in molecules theory. This approach defines a parameter called the valence electron equivalent, γ, as “the formal shared electron count at a given atom, obtained by any combination of valid ionic and covalent resonance forms that reproduces the observed charge distribution”. For any particular atom X, if the value of γ(X) is greater than 8, that atom is hypervalent. Using this alternative definition, many species such as PCl5, , and XeF4, that are hypervalent by Musher's definition, are reclassified as hypercoordinate but not hypervalent, due to strongly ionic bonding that draws electrons away from the central atom. On the other hand, some compounds that are normally written with ionic bonds in order to conform to the octet rule, such as ozone O3, nitrous oxide NNO, and trimethylamine N-oxide , are found to be genuinely hypervalent. Examples of γ calculations for phosphate (γ(P) = 2.6, non-hypervalent) and orthonitrate (γ(N) = 8.5, hypervalent) are shown below. Bonding in hypervalent molecules Early considerations of the geometry of hypervalent molecules returned familiar arrangements that were well explained by the VSEPR model for atomic bonding. Accordingly, AB5 and AB6 type molecules would possess a trigonal bi-pyramidal and octahedral geometry, respectively. However, in order to account for the observed bond angles, bond lengths and apparent violation of the Lewis octet rule, several alternative models have been proposed. In the 1950s an expanded valence shell treatment of hypervalent bonding was adduced to explain the molecular architecture, where the central atom of penta- and hexacoordinated molecules would utilize d AOs in addition to s and p AOs. However, advances in the study of ab initio calculations have revealed that the contribution of d-orbitals to hypervalent bonding is too small to describe the bonding properties, and this description is now regarded as much less important. It was shown that in the case of hexacoordinated SF6, d-orbitals are not involved in S-F bond formation, but charge transfer between the sulfur and fluorine atoms and the apposite resonance structures were able to account for the hypervalency (See below). Additional modifications to the octet rule have been attempted to involve ionic characteristics in hypervalent bonding. As one of these modifications, in 1951, the concept of the 3-center 4-electron (3c-4e) bond, which described hypervalent bonding with a qualitative molecular orbital, was proposed. The 3c-4e bond is described as three molecular orbitals given by the combination of a p atomic orbital on the central atom and an atomic orbital from each of the two ligands on opposite sides of the central atom. Only one of the two pairs of electrons is occupying a molecular orbital that involves bonding to the central atom, the second pair being non-bonding and occupying a molecular orbital composed of only atomic orbitals from the two ligands. This model in which the octet rule is preserved was also advocated by Musher. Molecular orbital theory A complete description of hypervalent molecules arises from consideration of molecular orbital theory through quantum mechanical methods. 
An LCAO in, for example, sulfur hexafluoride, taking a basis set of the one sulfur 3s-orbital, the three sulfur 3p-orbitals, and six octahedral geometry symmetry-adapted linear combinations (SALCs) of fluorine orbitals, a total of ten molecular orbitals are obtained (four fully occupied bonding MOs of the lowest energy, two fully occupied intermediate energy non-bonding MOs and four vacant antibonding MOs with the highest energy) providing room for all 12 valence electrons. This is a stable configuration only for SX6 molecules containing electronegative ligand atoms like fluorine, which explains why SH6 is not a stable molecule. In the bonding model, the two non-bonding MOs (1eg) are localized equally on all six fluorine atoms. Valence bond theory For hypervalent compounds in which the ligands are more electronegative than the central, hypervalent atom, resonance structures can be drawn with no more than four covalent electron pair bonds and completed with ionic bonds to obey the octet rule. For example, in phosphorus pentafluoride (PF5), 5 resonance structures can be generated each with four covalent bonds and one ionic bond with greater weight in the structures placing ionic character in the axial bonds, thus satisfying the octet rule and explaining both the observed trigonal bipyramidal molecular geometry and the fact that the axial bond length (158 pm) is longer than the equatorial (154 pm). For a hexacoordinate molecule such as sulfur hexafluoride, each of the six bonds is the same length. The rationalization described above can be applied to generate 15 resonance structures each with four covalent bonds and two ionic bonds, such that the ionic character is distributed equally across each of the sulfur-fluorine bonds. Spin-coupled valence bond theory has been applied to diazomethane and the resulting orbital analysis was interpreted in terms of a chemical structure in which the central nitrogen has five covalent bonds; This led the authors to the interesting conclusion that "Contrary to what we were all taught as undergraduates, the nitrogen atom does indeed form five covalent linkages and the availability or otherwise of d-orbitals has nothing to do with this state of affairs." Structure, reactivity, and kinetics Structure Hexacoordinated phosphorus Hexacoordinate phosphorus molecules involving nitrogen, oxygen, or sulfur ligands provide examples of Lewis acid-Lewis base hexacoordination. For the two similar complexes shown below, the length of the C–P bond increases with decreasing length of the N–P bond; the strength of the C–P bond decreases with increasing strength of the N–P Lewis acid–Lewis base interaction. Pentacoordinated silicon This trend is also generally true of pentacoordinated main-group elements with one or more lone-pair-containing ligand, including the oxygen-pentacoordinated silicon examples shown below. The Si-halogen bonds range from close to the expected van der Waals value in A (a weak bond) almost to the expected covalent single bond value in C (a strong bond). Reactivity Silicon Corriu and coworkers performed early work characterizing reactions thought to proceed through a hypervalent transition state. Measurements of the reaction rates of hydrolysis of tetravalent chlorosilanes incubated with catalytic amounts of water returned a rate that is first order in chlorosilane and second order in water. This indicated that two water molecules interacted with the silane during hydrolysis and from this a binucleophilic reaction mechanism was proposed. 
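The rate law implied by those observations can be written as rate = k_obs[R3SiCl][H2O]^2. The short Python sketch below (with purely illustrative numbers, not values from the Corriu studies) shows how the two reaction orders show up when concentrations are varied:

```python
# Empirical rate law implied by the hydrolysis kinetics described above:
# first order in chlorosilane, second order in water. Numbers are illustrative only.

def hydrolysis_rate(k_obs, chlorosilane, water):
    return k_obs * chlorosilane * water ** 2

k_obs = 1.0e-3                                      # arbitrary units
base = hydrolysis_rate(k_obs, chlorosilane=0.10, water=0.05)
print(hydrolysis_rate(k_obs, 0.20, 0.05) / base)    # doubling [chlorosilane] doubles the rate (2.0)
print(hydrolysis_rate(k_obs, 0.10, 0.10) / base)    # doubling [water] quadruples the rate (4.0)
```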
Corriu and coworkers then measured the rates of hydrolysis in the presence of a nucleophilic catalyst (HMPT, DMSO, or DMF). It was shown that the rate of hydrolysis was again first order in chlorosilane, first order in catalyst and now first order in water. Appropriately, the rates of hydrolysis also exhibited a dependence on the magnitude of charge on the oxygen of the nucleophile. Taken together, this led the group to propose a reaction mechanism in which there is a pre-rate-determining nucleophilic attack of the tetracoordinated silane by the nucleophile (or water) in which a hypervalent pentacoordinated silane is formed. This is followed by a nucleophilic attack of the intermediate by water in a rate-determining step leading to a hexacoordinated species that quickly decomposes giving the hydroxysilane. Silane hydrolysis was further investigated by Holmes and coworkers in which tetracoordinated (Mes = mesityl) and pentacoordinated were reacted with two equivalents of water. After twenty-four hours, almost no hydrolysis of the tetracoordinated silane was observed, while the pentacoordinated silane was completely hydrolyzed after fifteen minutes. Additionally, X-ray diffraction data collected for the tetraethylammonium salts of the fluorosilanes showed the formation of a hydrogen bisilonate lattice supporting a hexacoordinated intermediate from which the leaving group is quickly displaced, leading to the hydroxylated product. This reaction and crystallographic data support the mechanism proposed by Corriu et al. The apparent increased reactivity of hypervalent molecules, contrasted with tetravalent analogues, has also been observed for Grignard reactions. The Corriu group measured Grignard reaction half-times by NMR for related 18-crown-6 potassium salts of a variety of tetra- and pentacoordinated fluorosilanes in the presence of catalytic amounts of nucleophile. Though the half-reaction method is imprecise, the order-of-magnitude differences in reaction rates allowed for a proposed reaction scheme wherein a pre-rate-determining attack of the tetravalent silane by the nucleophile results in an equilibrium between the neutral tetracoordinated species and the anionic pentavalent compound. This is followed by nucleophilic coordination by two Grignard reagents as normally seen, forming a hexacoordinated transition state and yielding the expected product. The mechanistic implications of this are extended to a hexacoordinated silicon species that is thought to be active as a transition state in some reactions. The reaction of allyl- or crotyl-trifluorosilanes with aldehydes and ketones only proceeds with fluoride activation to give a pentacoordinated silicon. This intermediate then acts as a Lewis acid to coordinate with the carbonyl oxygen atom. The further weakening of the silicon–carbon bond as the silicon becomes hexacoordinate helps drive this reaction. Phosphorus Similar reactivity has also been observed for other hypervalent structures such as the miscellany of phosphorus compounds, for which hexacoordinated transition states have been proposed. Hydrolysis of phosphoranes and oxyphosphoranes has been studied and shown to be second order in water. Bel'skii et al. have proposed a pre-rate-determining nucleophilic attack by water resulting in an equilibrium between the penta- and hexacoordinated phosphorus species, which is followed by a proton transfer involving the second water molecule in a rate-determining ring-opening step, leading to the hydroxylated product. 
Alcoholysis of pentacoordinated phosphorus compounds, such as trimethoxyphospholene with benzyl alcohol, have also been postulated to occur through a similar octahedral transition state, as in hydrolysis, however without ring opening. It can be understood from these experiments that the increased reactivity observed for hypervalent molecules, contrasted with analogous nonhypervalent compounds, can be attributed to the congruence of these species to the hypercoordinated activated states normally formed during the course of the reaction. Ab initio calculations The enhanced reactivity at pentacoordinated silicon is not fully understood. Corriu and coworkers suggested that greater electropositive character at the pentavalent silicon atom may be responsible for its increased reactivity. Preliminary ab initio calculations supported this hypothesis to some degree, but used a small basis set. A software program for ab initio calculations, Gaussian 86, was used by Dieters and coworkers to compare tetracoordinated silicon and phosphorus to their pentacoordinate analogues. This ab initio approach is used as a supplement to determine why reactivity improves in nucleophilic reactions with pentacoordinated compounds. For silicon, the 6-31+G* basis set was used because of its pentacoordinated anionic character and for phosphorus, the 6-31G* basis set was used. Pentacoordinated compounds should theoretically be less electrophilic than tetracoordinated analogues due to steric hindrance and greater electron density from the ligands, yet experimentally show greater reactivity with nucleophiles than their tetracoordinated analogues. Advanced ab initio calculations were performed on series of tetracoordinated and pentacoordinated species to further understand this reactivity phenomenon. Each series varied by degree of fluorination. Bond lengths and charge densities are shown as functions of how many hydride ligands are on the central atoms. For every new hydride, there is one less fluoride. For silicon and phosphorus bond lengths, charge densities, and Mulliken bond overlap, populations were calculated for tetra and pentacoordinated species by this ab initio approach. Addition of a fluoride ion to tetracoordinated silicon shows an overall average increase of 0.1 electron charge, which is considered insignificant. In general, bond lengths in trigonal bipyramidal pentacoordinate species are longer than those in tetracoordinate analogues. Si-F bonds and Si-H bonds both increase in length upon pentacoordination and related effects are seen in phosphorus species, but to a lesser degree. The reason for the greater magnitude in bond length change for silicon species over phosphorus species is the increased effective nuclear charge at phosphorus. Therefore, silicon is concluded to be more loosely bound to its ligands. In addition Dieters and coworkers show an inverse correlation between bond length and bond overlap for all series. Pentacoordinated species are concluded to be more reactive because of their looser bonds as trigonal-bipyramidal structures. By calculating the energies for the addition and removal of a fluoride ion in various silicon and phosphorus species, several trends were found. In particular, the tetracoordinated species have much higher energy requirements for ligand removal than do pentacoordinated species. Further, silicon species have lower energy requirements for ligand removal than do phosphorus species, which is an indication of weaker bonds in silicon. 
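The published comparisons were carried out with Gaussian 86; purely as an illustration of how a tetra- versus pentacoordinate comparison of this kind can be set up today (not a reproduction of the original calculations), the sketch below uses the open-source PySCF package with the 6-31+G* basis mentioned above. The geometries are rough, idealized guesses rather than optimized structures.

```python
from pyscf import gto, scf

# Idealized, non-optimized geometries in Angstrom -- purely illustrative.
sih4 = gto.M(
    atom="""Si 0 0 0
            H  0.855  0.855  0.855
            H  0.855 -0.855 -0.855
            H -0.855  0.855 -0.855
            H -0.855 -0.855  0.855""",
    basis="6-31+g*",
)

sih4f = gto.M(
    atom="""Si 0 0 0
            F  0 0 1.70
            H  0 0 -1.55
            H  1.50 0 0
            H -0.75  1.30 0
            H -0.75 -1.30 0""",
    basis="6-31+g*",
    charge=-1,
)

for label, mol in (("SiH4", sih4), ("[SiH4F]-", sih4f)):
    mf = scf.RHF(mol).run()
    pop, chg = mf.mulliken_pop()   # Mulliken populations and atomic charges
    print(f"{label}: E = {mf.e_tot:.4f} Ha, Mulliken charge on Si = {chg[0]:+.2f}")
```

A fuller treatment along the lines described above would optimize each geometry and track fluoride-addition energies, bond lengths, and overlap populations across the fluorination series.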
See also Charge-shift bond References External links Chemical bonding Molecular geometry
Hypervalent molecule
[ "Physics", "Chemistry", "Materials_science" ]
4,634
[ "Molecular geometry", "Molecules", "Stereochemistry", "Hypervalent molecules", "Condensed matter physics", "nan", "Chemical bonding", "Matter" ]
1,085,916
https://en.wikipedia.org/wiki/Clomifene
Clomifene, also known as clomiphene, is a medication used to treat infertility in women who do not ovulate, including those with polycystic ovary syndrome. It is taken by mouth. Common side effects include pelvic pain and hot flashes. Other side effects can include changes in vision, vomiting, trouble sleeping, ovarian cancer, and seizures. It is not recommended in people with liver disease or abnormal vaginal bleeding of unknown cause or who are pregnant. Clomifene is in the selective estrogen receptor modulator (SERM) family of medication and is a nonsteroidal medication. It works by causing the release of GnRH by the hypothalamus, and subsequently gonadotropin from the anterior pituitary. Clomifene was approved for medical use in the United States in 1967. It is on the World Health Organization's List of Essential Medicines. Its introduction began the era of assisted reproductive technology. Clomifene (particularly the purified enclomiphene isomer) has also been found to have a powerful ability to boost or restore testosterone levels in hypogonadal men. It can be used to enhance performance in sports and is banned by the World Anti-Doping Agency. Medical uses Reproductive medicine Clomifene is one of several alternatives for inducing ovulation in those who are infertile due to anovulation or oligoovulation. Evidence is lacking for the use of clomifene in those who are infertile without a known reason. In such cases, studies have observed a clinical pregnancy rate 5.6% per cycle with clomifene treatment vs. 1.3%–4.2% per cycle without treatment. Clomifene has also been used with other assisted reproductive technology to increase success rates of these other modalities. Clomifene has been effectively used to restore spermatogenesis in trans women looking to have biological children. The effect of feminizing hormone therapy on fertility is not clear, but it is known that it can prevent sperm production. Testosterone replacement therapy Clomifene is sometimes used in the treatment of male hypogonadism as an alternative to testosterone replacement therapy. It has been found to increase testosterone levels by 2- to 2.5-times in hypogonadal men at such dosages. Despite the use of questionnaires in testosterone replacement comparator trials being called into question, clomifene's lower cost, therapeutic benefits, and greater value towards hypogonadism improvement have been noted. Clomifene consists of two stereoisomers in equal proportion: enclomifene and zuclomifene. Zuclomifene has pro-estrogenic properties, whereas enclomifene is pro-androgenic, i.e. it promotes testosterone production through stimulation of the HPG axis. For this reason, purified enclomifene isomer has been found to be twice as effective in boosting testosterone compared to the standard mix of both isomers. Additionally, enclomifene has a half-life of just 10 hours, but zuclomifene has a half-life on the order of several days to a week, so if the goal is to boost testosterone, taking regular clomifene may produce far longer-lasting pro-estrogenic effects than pro-androgenic effects. Gynecomastia Clomifene has been used in the treatment of gynecomastia. It has been found to be useful in the treatment of some cases of gynecomastia but it is not as effective as tamoxifen or raloxifene for this indication. It has shown variable results for gynecomastia (probably because the zuclomifene isomer is estrogenic), and hence is not recommended for treatment of the condition. 
Pure enclomifene isomer is likely to be more effective than clomifene at treating gynecomastia, because of the lack of the zuclomifene isomer (as noted above). Due to its long half-life, zuclomifene can be detected in urine for at least 261 days after discontinuation (261 days after discontinuation with a half-life of 30 days, there is still 0.24% of the peak level of zuclomifene being excreted, whereas with a half-life of 10 hours, enclomifene reaches the same 0.24% level in less than 4 days). Prohibited use in sports The World Anti-Doping Agency (WADA) prohibits clomifene under category S4 of hormone and metabolic modulators. It can be present as an undeclared ingredient in black market products available online to enhance athletic performance. Like other substances with anabolic properties, clomifene leads to increased muscle mass in males. Because clomifene can enhance egg production in hens, athletes may inadvertently consume the substance through contaminated food. A WADA study found that clomifene given to laying hens migrates into their eggs but was able to develop a method of distinguishing egg ingestion from doping. Contraindications Contraindications include an allergy to the medication, pregnancy, prior liver problems, abnormal vaginal bleeding of unclear cause, ovarian cysts other than those due to polycystic ovarian syndrome, unmanaged adrenal or thyroid problems, and pituitary tumors. Side effects The most common adverse drug reaction associated with the use of clomifene (>10% of people) is reversible ovarian enlargement. Less common effects (1–10% of people) include visual symptoms (blurred vision, double vision, floaters, eye sensitivity to light, scotomata), headaches, vasomotor flushes (or hot flashes), light sensitivity and pupil constriction, abnormal uterine bleeding and/or abdominal discomfort. Rare adverse events (<1% of people) include: high blood level of triglycerides, liver inflammation, reversible baldness and/or ovarian hyperstimulation syndrome. Rates of birth defects and miscarriages do not appear to change with the use of clomifene for fertility. Clomifene has been associated with liver abnormalities and a couple of cases of hepatotoxicity. Cancer risk Some studies have suggested that clomifene if used for more than a year may increase the risk of ovarian cancer. This may only be the case in those who have never been and do not become pregnant. Subsequent studies have failed to support those findings. Clomifene has been shown to be associated with an increased risk of malignant melanomas and thyroid cancer. Thyroid cancer risk was not associated with the number of pregnancies carried to viability. Pharmacology Pharmacodynamics Selective estrogen receptor modulator activity Clomifene is a nonsteroidal triphenylethylene derivative that acts as a selective estrogen receptor modulator (SERM). It consists of a non-racemic mixture of zuclomifene (~38%) and enclomifene (~62%), each of which has unique pharmacologic properties. It is a mixed agonist and antagonist of the estrogen receptor (ER). Clomifene activates the ERα in the setting of low baseline estrogen levels and partially blocks the receptor in the context of high baseline estrogen levels. Conversely, it is an antagonist of the ERβ. Clomifene has antiestrogenic effects in the uterus. There is little clinical research on the influence of clomifene in many target tissues, such as lipids, the cardiovascular system, and the breasts. Positive effects of clomifene on bone have been observed. 
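The detection-window comparison made for the two isomers above (in the gynecomastia discussion) is a straightforward exponential-decay calculation; a quick check, using the 30-day and 10-hour half-lives quoted there:

```python
from math import log

def fraction_remaining(t, half_life):
    return 0.5 ** (t / half_life)

# Zuclomifene: level 261 days after the last dose, assuming a 30-day half-life
print(fraction_remaining(261, 30))    # ~0.0024, i.e. about 0.24% of the peak level

# Enclomifene: time needed to fall to that same 0.24% with a 10-hour half-life
target = fraction_remaining(261, 30)
t_hours = 10 * log(target) / log(0.5)
print(t_hours / 24)                   # ~3.6 days, i.e. "less than 4 days"
```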
Clomifene has been found to decrease insulin-like growth factor 1 (IGF-1) levels in women. Clomifene is a long-acting ER ligand, with a nuclear retention of greater than 48 hours. Clomifene is a prodrug being activated via similar metabolic pathways as the related triphenylethylene SERMs tamoxifen and toremifene. The affinity of clomifene for the ER relative to estradiol ranges from 0.1 to 12% in different studies, which is similar to the range for tamoxifen (0.06–16%). 4-Hydroxyclomifene, a major active metabolite of clomifene, and afimoxifene (4-hydroxytamoxifen), a major active metabolite of tamoxifen, show 89–251% and 41–246% of the affinity of estradiol for the ER in human MCF-7 breast cancer cells, respectively. The ER affinities of the isomers of 4-hydroxyclomifene were 285% for (E)-4-hydroxyclomifene and 16% for (Z)-4-hydroxyclomifene relative to estradiol. 4-Hydroxy-N-desethylclomiphene has similar affinity to 4-hydroxyclomifene for the ER. In one study, the affinities of clomifene and its metabolites for the ERα were ~100 nM for clomifene, ~2.4 nM for 4-hydroxyclomifene, ~125 nM for N-desethylclomiphene, and ~1.4 nM for 4-hydroxy-N-desethylclomiphene. Even though clomifene has some estrogenic effect, the antiestrogenic property is believed to be the primary source for stimulating ovulation. Clomifene appears to act mostly in the hypothalamus where it depletes hypothalamic ERs and blocks the negative feedback effect of circulating endogenous estradiol, which in turn results in an increase in hypothalamic gonadotropin-releasing hormone (GnRH) pulse frequency and circulating concentrations of follicle-stimulating hormone (FSH) and luteinizing hormone (LH). In normal physiologic female hormonal cycling, at seven days past ovulation, high levels of estrogen and progesterone produced from the corpus luteum inhibit GnRH, FSH, and LH at the hypothalamus and anterior pituitary. If fertilization does not occur in the post-ovulation period the corpus luteum disintegrates due to a lack of human chorionic gonadotropin (hCG). This would normally be produced by the embryo in the effort of maintaining progesterone and estrogen levels during pregnancy. Therapeutically, clomifene is given early in the menstrual cycle to produce follicles. Follicles, in turn, produce the estrogen, which circulates in serum. In the presence of clomifene, the body perceives a low level of estrogen, similar to day 22 in the previous cycle. Since estrogen can no longer effectively exert negative feedback on the hypothalamus, GnRH secretion becomes more rapidly pulsatile, which results in increased pituitary gonadotropin release. (More rapid, lower amplitude pulses of GnRH lead to increased LH and FSH secretion, while more irregular, larger amplitude pulses of GnRH leads to a decrease in the ratio of LH to FSH.) Increased FSH levels cause the growth of more ovarian follicles, and subsequently rupture of follicles resulting in ovulation. Ovulation occurs most often 6 to 7 days after a course of clomifene. In normal men, 50 mg/day clomifene for 8 months has been found to increase testosterone levels by around 870 ng/dL in younger men and by around 490 ng/dL in elderly men. Estradiol levels increased by 62 pg/mL in younger men and by 40 pg/mL in elderly men. These findings suggest that the progonadotropic effects of clomifene are stronger in younger men than in older men. In men with hypogonadism, clomifene has been found to increase testosterone levels by 293 to 362 ng/dL and estradiol levels by 5.5 to 13 pg/mL. 
In a large clinical study of men with low testosterone levels (<400 ng/dL), 25 mg/day clomifene increased testosterone levels from 309 ng/dL to 642 ng/dL after 3 months of therapy. No significant changes in HDL cholesterol, triglycerides, fasting glucose, or prolactin levels were observed, although total cholesterol levels decreased significantly. Other activities Clomifene is an inhibitor of the conversion of desmosterol into cholesterol by the enzyme 24-dehydrocholesterol reductase. Concerns about possible induction of desmosterolosis and associated symptoms such as cataracts and ichthyosis with extended exposure precluded the use of clomifene in the treatment of breast cancer. Continuous use of clomifene has been found to increase desmosterol levels by 10% and continuous high doses of clomifene (200 mg/day) have been reported to produce visual disturbances. Pharmacokinetics Clomifene produces N-desethylclomiphene, clomifenoxide (clomifene N-oxide), 4-hydroxyclomifene, and 4-hydroxy-N-desethylclomiphene as metabolites. Clomifene is a prodrug most importantly of 4-hydroxyclomifene and 4-hydroxy-N-desethylclomiphene, which are the most active of its metabolites. In one study, the peak levels after a single 50 mg dose of clomifene were 20.37 nmol/L for clomifene, 0.95 nmol/L for 4-hydroxyclomifene, and 1.15 nmol/L for 4-hydroxy-N-desethylclomiphene. Clomifene has an onset of action of 5 to 10 days following course of treatment and an elimination half-life about 4 - 7days. In one study, after a single 50 mg dose of clomifene, the half-life of clomifene was 128 hours (5.3 days), of 4-hydroxyclomifene was 13 hours, and of 4-hydroxy-N-desethylclomiphenewas 15 hours. Individuals with the CYP2D6*10 allele showed longer half-lives for 4-hydroxyclomifene and 4-hydroxy-N-desethylclomiphene. Primarily due to differences in CYP2D6 genetics, steady state concentrations and individual response to clomifene are highly variable. Most clomifene metabolism occurs in the liver, where it undergoes enterohepatic recirculation. Clomifene and its metabolites are excreted primarily through feces (42%), and excretion can occur up to 6 weeks after discontinuation. Chemistry Clomifene is a triphenylethylene derivative. It is a mixture of two geometric isomers, the cis enclomifene ((E)-clomifene) form and trans zuclomifene ((Z)-clomifene) form. These two isomers contribute to the mixed estrogenic and antiestrogenic properties of clomifene. The typical ratio of these isomers after synthesis is 38% zuclomiphene and 62% enclomiphene. The United States Pharmacopeia specifies that clomifene preparations must contain between 30% and 50% zuclomiphene. History A team at William S. Merrell Chemical Company led by Frank Palopoli synthesized clomifene in 1956; after its biological activity was confirmed a patent was filed and issued in November 1959. Scientists at Merrell had previously synthesized chlorotrianisene and ethamoxytriphetol. Clomifene was studied in the treatment of advanced breast cancer during the period of 1964 to 1974 and was found to be effective but was abandoned due to concerns about desmosterolosis with extended use. Short-term use (e.g. days to months) did not raise the same concerns and clomifene continued to be studied for other indications. 
Clinical studies were conducted under an Investigational New Drug Application; clomifene was third drug for which an IND had been filed under the 1962 Kefauver Harris Amendment to the Federal Food, Drug, and Cosmetic Act that had been passed in response to the thalidomide tragedy. It was approved for marketing in 1967 under the brand name Clomid. It was first used to treat cases of oligomenorrhea but was expanded to include treatment of anovulation when women undergoing treatment had higher than expected rates of pregnancy. The drug is widely considered to have been a revolution in the treatment of female infertility, the beginning of the modern era of assisted reproductive technology, and the beginning of what in the words of Eli Y. Adashi, was "the onset of the US multiple births epidemic". The company was acquired by Dow Chemical in 1980, and in 1989 Dow Chemical acquired 67 percent interest of Marion Laboratories, which was renamed Marion Merrell Dow. In 1995, Hoechst AG acquired the pharmaceutical business of Marion Merrell Dow. Hoechst in turn became part of Aventis in 1999, and subsequently a part of Sanofi. It became the most widely prescribed drug for ovulation induction to reverse anovulation or oligoovulation. Society and culture Brand names Clomifene is marketed under many brand names worldwide, including Beclom, Bemot, Biogen, Blesifen, Chloramiphene, Clofert, Clomene, ClomHEXAL, Clomi, Clomid, Clomidac, Clomifen, Clomifencitrat, Clomifene, Clomifène, Clomifene citrate, Clomifeni citras, Clomifeno, Clomifert, Clomihexal, Clomiphen, Clomiphene, Clomiphene Citrate, Cloninn, Clostil, Clostilbegyt, Clovertil, Clovul, Dipthen, Dufine, Duinum, Fensipros, Fertab, Fertec, Fertex, Ferticlo, Fertil, Fertilan, Fertilphen, Fertin, Fertomid, Ferton, Fertotab, Fertyl, Fetrop, Folistim, Genoclom, Genozym, Hete, I-Clom, Ikaclomin, Klofit, Klomen, Klomifen, Lomifen, MER 41, Milophene, Ofertil, Omifin, Ova-mit, Ovamit, Ovinum, Ovipreg, Ovofar, Ovuclon, Ovulet, Pergotime, Pinfetil, Profertil, Prolifen, Provula, Reomen, Serofene, Serophene, Serpafar, Serpafar, Surole, Tocofeno, and Zimaquin. Regulation Clomifene is included on the World Anti-Doping Agency list of illegal doping agents in sport. It is listed because it is an "anti-estrogenic substance". Research Clomifene has been used almost exclusively for ovulation induction in premenopausal women, and has been studied very limitedly in postmenopausal women. Clomifene was studied for treatment and prevention of breast cancer, but issues with toxicity led to abandonment of this indication, as did the discovery of tamoxifen. Like the structurally related drug triparanol, clomifene is known to inhibit the enzyme 24-dehydrocholesterol reductase and increase circulating desmosterol levels, making it unfavorable for extended use in breast cancer due to risk of side effects like irreversible cataracts. References 2-Phenoxyethanamines 24-Dehydrocholesterol reductase inhibitors Diethylamino compounds Drugs developed by Merck Hepatotoxins Organochlorides Prodrugs Progonadotropins Selective estrogen receptor modulators Triphenylethylenes Wikipedia medicine articles ready to translate World Health Organization essential medicines
Clomifene
[ "Chemistry" ]
4,418
[ "Chemicals in medicine", "Prodrugs" ]
1,086,081
https://en.wikipedia.org/wiki/Araldite
Araldite is a registered trademark of Huntsman Advanced Materials (previously part of Ciba-Geigy) referring to their range of engineering and structural epoxy, acrylic, and polyurethane adhesives. Swiss manufacturers originally launched Araldite DIY adhesive products in 1946. The first batches of Araldite epoxy resins, for which the brand is best known, were made in Duxford, England in 1950. Araldite adhesive sets by the interaction of an epoxy resin with a hardener. Mixing an epoxy resin and hardener together starts a chemical reaction that produces heat – an exothermic reaction. It is claimed that after curing the bond is impervious to boiling water and to all common organic solvents. History Aero Research Limited (ARL), founded in the UK in 1934, developed a new synthetic-resin adhesive for bonding metals, glass, porcelain, china and other materials. The name "Araldite" recalls the ARL brand: ARaLdite. De Trey Frères SA of Switzerland carried out the first production of epoxy resins. They licensed the process to Ciba AG in the early 1940s and Ciba first demonstrated a product under the tradename "Araldite" at the Swiss Industries Fair in 1945. Ciba went on to become one of the three major epoxy-resin producers worldwide. Ciba's epoxy business was spun off and later sold in the late 1990s and became the advanced materials business unit of Huntsman Corporation of the US. Notable applications Despite a widespread myth, Araldite was not used in the production of the De Havilland Mosquito aircraft in the 1940s. Another Aero Research Limited glue was used, called Aerolite, which was not an epoxy resin, but a gap-filling urea-formaldehyde adhesive. Araldite adhesive is used to join together the two sections of carbon composite which make up the monocoque of the Lamborghini Aventador. The use of Araldite adhesive in architecture to bond thin joints of pre-cast concrete units was pioneered by Ove Arup in Coventry cathedral and the Sydney Opera House. At Coventry cathedral, Araldite adhesive was used to bond its columns and fins, while at Sydney Opera House, Araldite adhesive was used to bond the rib sections of the shells, since a traditional concrete joint would have slowed construction, as it would need 24 hours to cure before stressing. Highmark Manufacturing uses Araldite epoxy resin in the manufacture of advanced ballistic protection body armour. Schlösser Metallbau, a manufacturer of metal parts for railway carriages, uses Araldite epoxy resin to bond aluminium profiles of cab doorframes on the DBAG Class 423 Siemens Bombardier train. Fischer Composite Technology GmbH uses the Araldite RTM System to produce carbon composite side blades for the Audi R8. Araldite epoxy resin is commonly used as an embedding medium for electron microscopy. Some Flamenco guitarists (e.g. Paco Peña) use it to reinforce their fingernails. Brian May used it to seal some of the pickups in his homemade Red Special guitar to reduce microphonic feedback. Advertising In 1983, British advertising agency FCO Univas set up a visual stunt presentation of the strength of Araldite adhesive by gluing a yellow Ford Cortina to a billboard on Cromwell Road, London, with the tagline "It also sticks handles to teapots". Later, further to advertise the strength of Araldite, a red Cortina was placed on top of the yellow Cortina, with the tagline "The tension mounts". Finally, the cars were removed, leaving a hole on the billboard and a tagline "How did we pull it off?". 
See also Aerolite J-B Weld Loctite Redux References External links Specifications for 'Araldite Super Strength' Adhesives Aerospace engineering Thermosetting plastics Brand name materials
Araldite
[ "Engineering" ]
839
[ "Aerospace engineering" ]
1,086,083
https://en.wikipedia.org/wiki/X-ray%20absorption%20fine%20structure
X-ray absorption fine structure (XAFS) is a specific structure observed in X-ray absorption spectroscopy (XAS). By analyzing the XAFS, information can be acquired on the local structure and on the unoccupied local electronic states. Atomic spectra The atomic X-ray absorption spectrum (XAS) of a core-level in an absorbing atom is separated into states in the discrete part of the spectrum called "bounds final states" or "Rydberg states" below the ionization potential (IP) and "states in the continuum" part of the spectrum above the ionization potential due to excitations of the photoelectron in the vacuum. Above the IP the absorption cross section attenuates gradually with the X-ray energy. Following early experimental and theoretical works in the thirties, in the sixties using synchrotron radiation at the National Bureau of Standards it was established that the broad asymmetric absorption peaks are due to Fano resonances above the atomic ionization potential where the final states are many body quasi-bound states (i.e., a doubly excited atom) degenerate with the continuum. Spectra of molecules and condensed matter The XAS spectra of condensed matter are usually divided in three energy regions: Edge region The edge region usually extends in a range of few eV around the absorption edge. The spectral features in the edge region i) in good metals are excitations to final delocalized states above the Fermi level; ii) in insulators are core excitons below the ionization potential; iii) in molecules are electronic transitions to the first unoccupied molecular levels above the chemical potential in the initial states which are shifted into the discrete part of the core absorption spectrum by the Coulomb interaction with the core hole. Multi-electron excitations and configuration interaction between many body final states dominate the edge region in strongly correlated metals and insulators. For many years the edge region was referred to as the “Kossel structure” but now it is known as "absorption edge region" since the Kossel structure refers only to unoccupied molecular final states which is a correct description only for few particular cases: molecules and strongly disordered systems. X-ray Absorption Near Edge Structure The XANES energy region extends between the edge region and the EXAFS region over a 50-100 eV energy range around the core level x-ray absorption threshold. Before 1980 the XANES region was wrongly assigned to different final states: a) unoccupied total density of states, or b) unoccupied molecular orbitals (kossel structure) or c) unoccupied atomic orbitals or d) low energy EXAFS oscillations. In the seventies, using synchrotron radiation in Frascati and Stanford synchrotron sources, it was experimentally shown that the features in this energy region are due to multiple scattering resonances of the photoelectron in a nanocluster of variable size. Antonio Bianconi in 1980 invented the acronym XANES to indicate the spectral region dominated by multiple scattering resonances of the photoelectron in the soft x-ray range and in the hard X-ray range. In the XANES energy range the kinetic energy of the photoelectron in the final state is between few eV and 50-100 eV. In this regime the photoelectron has a strong scattering amplitude by neighboring atoms in molecules and condensed matter, its wavelength is larger than interatomic distances, its mean free path could be smaller than one nanometer and finally the lifetime of the excited state is in the order of femtoseconds. 
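The statement that the photoelectron wavelength exceeds interatomic distances at XANES energies follows from the de Broglie relation λ = h/√(2mₑE); a short check over a few representative kinetic energies:

```python
import math

h, m_e, eV = 6.626e-34, 9.109e-31, 1.602e-19   # SI units

for E in (5, 50, 100, 500):                    # photoelectron kinetic energy in eV
    lam = h / math.sqrt(2 * m_e * E * eV)      # de Broglie wavelength in metres
    print(f"{E:4d} eV -> {lam * 1e10:.2f} Angstrom")

# At a few eV the wavelength (several Angstrom) exceeds typical interatomic
# distances; above roughly 100 eV it falls below them, the single-scattering
# regime exploited by EXAFS.
```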
The XANES spectral features are described by full multiple scattering theory proposed in the early seventies. Therefore, the key step for XANES interpretation is the determination of the size of the atomic cluster of neighbor atoms, where the final states are confined, which could range from 0.2 nm to 2 nm in different systems. This energy region has been called later (in 1982) also near-edge X-ray absorption fine structure (NEXAFS), which is synonymous with XANES. During more than 20 years the XANES interpretation has been object of discussion but recently there is agreement that the final states are "multiple scattering resonances" and many body final states play an important role. Intermediate region There is an intermediate region between the XANES and EXAFS regions where low n-body distribution functions play a key role. Extended X-ray absorption fine structure The oscillatory structure extending for hundreds of electron volts past the edges was called the “Kronig structure” after the scientist, Ralph Kronig, who assigned this structure in the high energy range ( i.e., for a kinetic energy range - larger than 100 eV - of the photoelectron in the weak scattering regime) to the single scattering of the excited photoelectron by neighbouring atoms in molecules and condensed matter. This regime was called EXAFS in 1971 by Sayers, Stern and Lytle. and it developed only after the use of intense synchrotron radiation sources. Applications of x-ray absorption spectroscopy X-ray absorption edge spectroscopy corresponds to the transition from a core-level to an unoccupied orbital or band and mainly reflects the electronic unoccupied states. EXAFS, resulting from the interference in the single scattering process of the photoelectron scattered by surrounding atoms, provides information on the local structure. Information on the geometry of the local structure is provided by the analysis of the multiple scattering peaks in the XANES spectra. The XAFS acronym has been later introduced to indicate the sum of the XANES and EXAFS spectra. See also SEXAFS EXAFS XANES References External links M. Newville, Fundamentals of XAFS S. Bare, XANES measurements and interpretation B. Ravel, A practical introduction to multiple scattering X-ray absorption spectroscopy fr:Spectrométrie d'absorption it:EXAFS
X-ray absorption fine structure
[ "Chemistry", "Materials_science", "Engineering" ]
1,248
[ "X-ray absorption spectroscopy", "Materials science", "Laboratory techniques in condensed matter physics" ]
1,086,718
https://en.wikipedia.org/wiki/Helix-turn-helix
Helix-turn-helix is a DNA-binding domain (DBD). The helix-turn-helix (HTH) is a major structural motif capable of binding DNA. Each monomer incorporates two α helices, joined by a short strand of amino acids, that bind to the major groove of DNA. The HTH motif occurs in many proteins that regulate gene expression. It should not be confused with the helix–loop–helix motif. Discovery The discovery of the helix-turn-helix motif was based on similarities between several genes encoding transcription regulatory proteins from bacteriophage lambda and Escherichia coli: Cro, CAP, and λ repressor, which were found to share a common 20–25 amino acid sequence that facilitates DNA recognition. Function The helix-turn-helix motif is a DNA-binding motif. The recognition and binding to DNA by helix-turn-helix proteins is done by the two α helices, one occupying the N-terminal end of the motif, the other at the C-terminus. In most cases, such as in the Cro repressor, the second helix contributes most to DNA recognition, and hence it is often called the "recognition helix". It binds to the major groove of DNA through a series of hydrogen bonds and various Van der Waals interactions with exposed bases. The other α helix stabilizes the interaction between protein and DNA, but does not play a particularly strong role in its recognition. The recognition helix and its preceding helix always have the same relative orientation. Classification of helix-turn-helix motifs Several attempts have been made to classify the helix-turn-helix motifs based on their structure and the spatial arrangement of their helices. Some of the main types are described below. Di-helical The di-helical helix-turn-helix motif is the simplest helix-turn-helix motif. A fragment of Engrailed homeodomain encompassing only the two helices and the turn was found to be an ultrafast independently folding protein domain. Tri-helical An example of this motif is found in the transcriptional activator Myb. Tetra-helical The tetra-helical helix-turn-helix motif has an additional C-terminal helix compared to the tri-helical motifs. These include the LuxR-type DNA-binding HTH domain found in bacterial transcription factors and the helix-turn-helix motif found in the TetR repressors. Multihelical versions with additional helices also occur. Winged helix-turn-helix The winged helix-turn-helix (wHTH) motif is formed by a 3-helical bundle and a 3- or 4-strand beta-sheet (wing). The topology of helices and strands in the wHTH motifs may vary. In the transcription factor ETS wHTH folds into a helix-turn-helix motif on a four-stranded anti-parallel beta-sheet scaffold arranged in the order α1-β1-β2-α2-α3-β3-β4 where the third helix is the DNA recognition helix. Other modified helix-turn-helix motifs Other derivatives of the helix-turn-helix motif include the DNA-binding domain found in MarR, a regulator of multiple antibiotic resistance, which forms a winged helix-turn-helix with an additional C-terminal alpha helix. See also DNA-binding domain DNA-binding protein Secondary structure Zinc finger References Further reading External links Helix-turn-helix motif, lambda-like repressor, from EMBL Full PDB entry for PDB ID 1LMB Cro/C1-type HTH domain, more HTHs in PROSITE Protein structural motifs Transcription factors DNA-binding substances Protein domains Protein superfamilies
Helix-turn-helix
[ "Chemistry", "Biology" ]
779
[ "Genetics techniques", "Transcription factors", "Gene expression", "Protein classification", "Signal transduction", "Protein structural motifs", "DNA-binding substances", "Protein domains", "Protein superfamilies", "Induced stem cells" ]
3,185,688
https://en.wikipedia.org/wiki/Nuclear%20reactor%20physics
Nuclear reactor physics is the field of physics that deals with the study and engineering application of chain reactions used to induce a controlled rate of fission in a nuclear reactor for the production of energy. Most nuclear reactors use a chain reaction to induce a controlled rate of nuclear fission in fissile material, releasing both energy and free neutrons. A reactor consists of an assembly of nuclear fuel (a reactor core), usually surrounded by a neutron moderator such as regular water, heavy water, graphite, or zirconium hydride, and fitted with mechanisms such as control rods which control the rate of the reaction. The physics of nuclear fission has several quirks that affect the design and behavior of nuclear reactors. This article presents a general overview of the physics of nuclear reactors and their behavior. Criticality In a nuclear reactor, the neutron population at any instant is a function of the rate of neutron production (due to fission processes) and the rate of neutron losses (due to non-fission absorption mechanisms and leakage from the system). When a reactor's neutron population remains steady from one generation to the next (creating as many new neutrons as are lost), the fission chain reaction is self-sustaining and the reactor's condition is referred to as "critical". When the reactor's neutron production exceeds losses, characterized by increasing power level, it is considered "supercritical", and when losses dominate, it is considered "subcritical" and exhibits decreasing power. The "Six-factor formula" is the neutron life-cycle balance equation, which includes six separate factors, the product of which is equal to the ratio of the number of neutrons in any generation to that of the previous one; this parameter is called the effective multiplication factor k, also denoted by Keff: k = ε Lf ρ Lth f η, where ε = "fast-fission factor", Lf = "fast non-leakage factor", ρ = "resonance escape probability", Lth = "thermal non-leakage factor", f = "thermal fuel utilization factor", and η = "reproduction factor". This equation's factors are roughly in the order in which a fission-born neutron encounters them during critical operation. As mentioned above, k = (neutrons produced in one generation)/(neutrons produced in the previous generation). In other words, when the reactor is critical, k = 1; when the reactor is subcritical, k < 1; and when the reactor is supercritical, k > 1. Reactivity is an expression of the departure from criticality: δk = (k − 1)/k. When the reactor is critical, δk = 0. When the reactor is subcritical, δk < 0. When the reactor is supercritical, δk > 0. Reactivity is also represented by the lowercase Greek letter rho (ρ). Reactivity is commonly expressed in decimals, percentages, or pcm (per cent mille) of Δk/k. When reactivity ρ is expressed in units of the delayed neutron fraction β, the unit is called the dollar. If we write N for the number of free neutrons in a reactor core and τ for the average lifetime of each neutron (before it either escapes from the core or is absorbed by a nucleus), then the reactor will follow the differential equation (evolution equation) dN/dt = (k − 1)N/τ, where the constant of proportionality is k − 1 and dN/dt is the rate of change of the neutron count in the core. 
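Before turning to the time behaviour, the multiplication-factor and reactivity bookkeeping above can be illustrated with a short Python sketch; the six factor values below are invented for illustration and do not describe any particular reactor:

```python
# Illustrative six-factor values (not taken from any real reactor design).
epsilon, L_f, rho_esc, L_th, f, eta = 1.03, 0.97, 0.75, 0.99, 0.667, 2.02

k = epsilon * L_f * rho_esc * L_th * f * eta   # effective multiplication factor
reactivity = (k - 1) / k                       # delta-k / k
beta = 0.0065                                  # delayed neutron fraction for 235U

print(f"k          = {k:.4f}")
print(f"reactivity = {reactivity:+.5f} ({reactivity * 1e5:+.0f} pcm, {reactivity / beta:+.2f} dollars)")
```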
This type of differential equation describes exponential growth or exponential decay, depending on the sign of the constant k − 1, where k is just the expected number of neutrons present after one average neutron lifetime has elapsed, per neutron present at the start of it: k = Pimpact Pfission ν. Here, Pimpact is the probability that a particular neutron will strike a fuel nucleus, Pfission is the probability that the neutron, having struck the fuel, will cause that nucleus to undergo fission, Pabsorb is the probability that it will be absorbed by something other than fuel, and Pescape is the probability that it will "escape" by leaving the core altogether (every neutron meets one of these three fates, so Pimpact + Pabsorb + Pescape = 1). ν is the number of neutrons produced, on average, by a fission event—it is between 2 and 3 for both 235U and 239Pu (e.g., for thermal neutrons in 235U, ν = 2.4355 ± 0.0023). If k − 1 is positive, then the core is supercritical and the rate of neutron production will grow exponentially until some other effect stops the growth. If k − 1 is negative, then the core is "subcritical" and the number of free neutrons in the core will shrink exponentially until it reaches an equilibrium at zero (or the background level from spontaneous fission). If k − 1 is exactly zero, then the reactor is critical and its output does not vary in time (dN/dt = 0, from above). Nuclear reactors are engineered to reduce Pescape and Pabsorb. Small, compact structures reduce the probability of direct escape by minimizing the surface area of the core, and some materials (such as graphite) can reflect some neutrons back into the core, further reducing Pescape. The probability of fission, Pfission, depends on the nuclear physics of the fuel, and is often expressed as a cross section. Reactors are usually controlled by adjusting Pabsorb. Control rods made of a strongly neutron-absorbent material such as cadmium or boron can be inserted into the core: any neutron that happens to impact the control rod is lost from the chain reaction, reducing Pabsorb. Pabsorb is also controlled by the recent history of the reactor core itself (see below). Starter sources The mere fact that an assembly is supercritical does not guarantee that it contains any free neutrons at all. At least one neutron is required to "strike" a chain reaction, and if the spontaneous fission rate is sufficiently low it may take a long time (in 235U reactors, as long as many minutes) before a chance neutron encounter starts a chain reaction even if the reactor is supercritical. Most nuclear reactors include a "starter" neutron source that ensures there are always a few free neutrons in the reactor core, so that a chain reaction will begin immediately when the core is made critical. A common type of startup neutron source is a mixture of an alpha particle emitter such as 241Am (americium-241) with a lightweight isotope such as 9Be (beryllium-9). The primary sources described above have to be used with fresh reactor cores. For operational reactors, secondary sources are used; most often a combination of antimony with beryllium. Antimony becomes activated in the reactor and produces high-energy gamma photons, which produce photoneutrons from beryllium. Uranium-235 undergoes a small rate of natural spontaneous fission, so there are always some neutrons being produced even in a fully shut-down reactor. When the control rods are withdrawn and criticality is approached, the neutron count increases because the absorption of neutrons is being progressively reduced, until at criticality the chain reaction becomes self-sustaining. 
Note that while a neutron source is provided in the reactor, it is not essential to start the chain reaction; its main purpose is to give a shutdown neutron population which is detectable by instruments and so make the approach to criticality more observable. The reactor will go critical at the same control rod position whether a source is loaded or not. Once the chain reaction has begun, the primary starter source may be removed from the core to prevent damage from the high neutron flux in the operating reactor core; the secondary sources usually remain in situ to provide a background reference level for control of criticality. Subcritical multiplication Even in a subcritical assembly such as a shut-down reactor core, any stray neutron that happens to be present in the core (for example from spontaneous fission of the fuel, from radioactive decay of fission products, or from a neutron source) will trigger an exponentially decaying chain reaction. Although the chain reaction is not self-sustaining, it acts as a multiplier that increases the equilibrium number of neutrons in the core. This subcritical multiplication effect can be used in two ways: as a probe of how close a core is to criticality, and as a way to generate fission power without the risks associated with a critical mass. If k is the neutron multiplication factor of a subcritical core and S is the number of neutrons entering the reactor per generation from an external source, then at the instant when the neutron source is switched on, the number of neutrons in the core will be S. After 1 generation, these S neutrons will produce kS neutrons, and the reactor will contain a total of S + kS neutrons once the newly injected neutrons are counted. Similarly, after 2 generations the number of neutrons in the reactor will be S + kS + k²S, and so on. This process will continue, and after a long enough time the number of neutrons in the reactor will be N = S + kS + k²S + k³S + ... This series converges because, for the subcritical core, k < 1. So the number of neutrons in the reactor will be simply N = S/(1 − k). The fraction 1/(1 − k) is called the subcritical multiplication factor (α). As a measurement technique, subcritical multiplication was used during the Manhattan Project in early experiments to determine the minimum critical masses of 235U and of 239Pu. It is still used today to calibrate the controls for nuclear reactors during startup, as many effects (discussed in the following sections) can change the required control settings to achieve criticality in a reactor. As a power-generating technique, subcritical multiplication allows generation of nuclear power from fission where a critical assembly is undesirable for safety or other reasons. A subcritical assembly together with a neutron source can serve as a steady source of heat to generate power from fission. Including the effect of an external neutron source ("external" to the fission process, not physically external to the core), one can write a modified evolution equation dN/dt = (k − 1)N/τ + Sext, where Sext is the rate at which the external source injects neutrons into the core (neutrons per unit time). In equilibrium, the core is not changing and dN/dt is zero, so the equilibrium number of neutrons is given by N = Sext τ/(1 − k). If the core is subcritical, then k − 1 is negative, so there is an equilibrium with a positive number of neutrons. If the core is close to criticality, then 1 − k is very small and thus the final number of neutrons can be made arbitrarily large. 
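A quick numerical check of the geometric-series result above, with illustrative values of k and S:

```python
# Subcritical multiplication as a geometric series (illustrative values).
k, S = 0.95, 100.0            # multiplication factor k < 1, source neutrons per generation

total, term = 0.0, S
for _ in range(500):          # sum many generations
    total += term
    term *= k                 # each generation is k times the previous one

print(total)                  # ~2000, the numerically summed series
print(S / (1 - k))            # closed form S/(1 - k) = 2000
print(S / (1 - 0.999))        # closer to criticality the total grows without bound (100000)
```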
Neutron moderators To improve and enable a chain reaction, natural or low enrichment uranium-fueled reactors must include a neutron moderator that interacts with newly produced fast neutrons from fission events to reduce their kinetic energy from several MeV to thermal energies of less than one eV, making them more likely to induce fission. This is because 235U has a larger cross section for slow neutrons, and also because 238U is much less likely to absorb a thermal neutron than a freshly produced neutron from fission. Neutron moderators are thus materials that slow down neutrons. Neutrons are most effectively slowed by colliding with the nucleus of a light atom, hydrogen being the lightest of all. To be effective, moderator materials must thus contain light elements with atomic nuclei that tend to scatter neutrons on impact rather than absorb them. In addition to hydrogen, beryllium and carbon atoms are also suited to the job of moderating or slowing down neutrons. Hydrogen moderators include water (H2O), heavy water (D2O), and zirconium hydride (ZrH2), all of which work because a hydrogen nucleus has nearly the same mass as a free neutron: neutron-H2O or neutron-ZrH2 impacts excite rotational modes of the molecules (spinning them around). Deuterium nuclei (in heavy water) absorb kinetic energy less well than do light hydrogen nuclei, but they are much less likely to absorb the impacting neutron. Water or heavy water have the advantage of being transparent liquids, so that, in addition to shielding and moderating a reactor core, they permit direct viewing of the core in operation and can also serve as a working fluid for heat transfer. Carbon in the form of graphite has been widely used as a moderator. It was used in Chicago Pile-1, the world's first man-made critical assembly, and was commonplace in early reactor designs including the Soviet RBMK nuclear power plants such as the Chernobyl plant. Moderators and reactor design The amount and nature of neutron moderation affects reactor controllability and hence safety. Because moderators both slow and absorb neutrons, there is an optimum amount of moderator to include in a given geometry of reactor core. Less moderation reduces the effectiveness by reducing the term in the evolution equation, and more moderation reduces the effectiveness by increasing the term. Most moderators become less effective with increasing temperature, so under-moderated reactors are stable against changes in temperature in the reactor core: if the core overheats, then the quality of the moderator is reduced and the reaction tends to slow down (there is a "negative temperature coefficient" in the reactivity of the core). Water is an extreme case: in extreme heat, it can boil, producing effective voids in the reactor core without destroying the physical structure of the core; this tends to shut down the reaction and reduce the possibility of a fuel meltdown. Over-moderated reactors are unstable against changes in temperature (there is a "positive temperature coefficient" in the reactivity of the core), and so are less inherently safe than under-moderated cores. Some reactors use a combination of moderator materials. For example, TRIGA type research reactors use ZrH2 moderator mixed with the 235U fuel, an H2O-filled core, and C (graphite) moderator and reflector blocks around the periphery of the core. 
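As a rough quantitative illustration of why light nuclei moderate best, the sketch below uses the average logarithmic energy decrement per elastic collision (a standard textbook relation, not a formula from this article) to estimate how many collisions are needed to slow a fission neutron from about 2 MeV to thermal energy near 0.025 eV on hydrogen, deuterium, and carbon.

```python
import numpy as np

def log_energy_decrement(A):
    """Average logarithmic energy loss per elastic collision on a nucleus of mass number A."""
    if A == 1:
        return 1.0                          # limiting value for hydrogen
    alpha = ((A - 1) / (A + 1)) ** 2
    return 1.0 + alpha * np.log(alpha) / (1.0 - alpha)

E_fission, E_thermal = 2.0e6, 0.025         # eV, typical illustrative values
for name, A in (("hydrogen", 1), ("deuterium", 2), ("carbon", 12)):
    xi = log_energy_decrement(A)
    n_collisions = np.log(E_fission / E_thermal) / xi
    print(f"{name:9s}: xi = {xi:.3f}, ~{n_collisions:.0f} collisions to thermalize")
```

The output (roughly 18 collisions on hydrogen, 25 on deuterium, 115 on carbon) matches the qualitative point above: the lighter the nucleus, the fewer collisions are needed, which is why hydrogenous materials, heavy water, and graphite are the usual moderators.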
Delayed neutrons and controllability Fission reactions and subsequent neutron escape happen very quickly; this is important for nuclear weapons, where the objective is to make a nuclear pit release as much energy as possible before it physically explodes. Most neutrons emitted by fission events are prompt: they are emitted effectively instantaneously. Once emitted, the average neutron lifetime in a typical core is on the order of a millisecond, so if the exponential factor is as small as 0.01, then in one second the reactor power will vary by a factor of (1 + 0.01)^1000, or more than ten thousand. Nuclear weapons are engineered to maximize the power growth rate, with lifetimes well under a millisecond and exponential factors close to 2; but such rapid variation would render it practically impossible to control the reaction rates in a nuclear reactor. Fortunately, the effective neutron lifetime is much longer than the average lifetime of a single neutron in the core. About 0.65% of the neutrons produced by 235U fission, and about 0.20% of the neutrons produced by 239Pu fission, are not produced immediately, but rather are emitted from an excited nucleus after a further decay step. In this step, further radioactive decay of some of the fission products (almost always negative beta decay) is followed by immediate neutron emission from the excited daughter product, with an average lifetime of the beta decay (and thus the neutron emission) of about 15 seconds. These so-called delayed neutrons increase the effective average lifetime of neutrons in the core to nearly 0.1 seconds, so that a core with an exponential factor of 0.01 would increase in one second by only a factor of (1 + 0.01)^10, or about 1.1: a 10% increase. This is a controllable rate of change. Most nuclear reactors are hence operated in a prompt subcritical, delayed critical condition: the prompt neutrons alone are not sufficient to sustain a chain reaction, but the delayed neutrons make up the small difference required to keep the reaction going. This has effects on how reactors are controlled: when a small amount of control rod is slid into or out of the reactor core, the power level changes at first very rapidly due to prompt subcritical multiplication and then more gradually, following the exponential growth or decay curve of the delayed critical reaction. Furthermore, increases in reactor power can be performed at any desired rate simply by pulling out a sufficient length of control rod. However, without addition of a neutron poison or active neutron-absorber, decreases in fission rate are limited in speed, because even if the reactor is taken deeply subcritical to stop prompt fission neutron production, delayed neutrons are produced after ordinary beta decay of fission products already in place, and this decay-production of neutrons cannot be changed. The rate of change of reactor power is determined by the reactor period, which is related to the reactivity through the Inhour equation. Kinetics The kinetics of the reactor is described by the balance equations of neutrons and nuclei (fissile, fission products). Reactor poisons Any nuclide that strongly absorbs neutrons is called a reactor poison, because it tends to shut down (poison) an ongoing fission chain reaction. Some reactor poisons are deliberately inserted into fission reactor cores to control the reaction; boron or cadmium control rods are the best example.
Many reactor poisons are produced by the fission process itself, and buildup of neutron-absorbing fission products affects both the fuel economics and the controllability of nuclear reactors. Long-lived poisons and fuel reprocessing In practice, buildup of reactor poisons in nuclear fuel is what determines the lifetime of nuclear fuel in a reactor: long before all possible fissions have taken place, buildup of long-lived neutron absorbing fission products damps out the chain reaction. This is the reason that nuclear reprocessing is a useful activity: spent nuclear fuel contains about 96% of the original fissionable material present in newly manufactured nuclear fuel. Chemical separation of the fission products restores the nuclear fuel so that it can be used again. Nuclear reprocessing is useful economically because chemical separation is much simpler to accomplish than the difficult isotope separation required to prepare nuclear fuel from natural uranium ore, so that in principle chemical separation yields more generated energy for less effort than mining, purifying, and isotopically separating new uranium ore. In practice, both the difficulty of handling the highly radioactive fission products and other political concerns make fuel reprocessing a contentious subject. One such concern is the fact that spent uranium nuclear fuel contains significant quantities of 239Pu, a prime ingredient in nuclear weapons (see breeder reactor). Short-lived poisons and controllability Short-lived reactor poisons in fission products strongly affect how nuclear reactors can operate. Unstable fission product nuclei transmute into many different elements (secondary fission products) as they undergo a decay chain to a stable isotope. The most important such element is xenon, because the isotope 135Xe, a secondary fission product with a half-life of about 9 hours, is an extremely strong neutron absorber. In an operating reactor, each nucleus of 135Xe becomes 136Xe (which may later sustain beta decay) by neutron capture almost as soon as it is created, so that there is no buildup in the core. However, when a reactor shuts down, the level of 135Xe builds up in the core for about 9 hours before beginning to decay. The result is that, about 6–8 hours after a reactor is shut down, it can become physically impossible to restart the chain reaction until the 135Xe has had a chance to decay over the next several hours. This temporary state, which may last several days and prevent restart, is called the iodine pit or xenon-poisoning. It is one reason why nuclear power reactors are usually operated at an even power level around the clock. 135Xe buildup in a reactor core makes it extremely dangerous to operate the reactor a few hours after it has been shut down. Because the 135Xe absorbs neutrons strongly, starting a reactor in a high-Xe condition requires pulling the control rods out of the core much farther than normal. However, if the reactor does achieve criticality, then the neutron flux in the core becomes high and 135Xe is destroyed rapidly—this has the same effect as very rapidly removing a great length of control rod from the core, and can cause the reaction to grow too rapidly or even become prompt critical. 135Xe played a large part in the Chernobyl accident: about eight hours after a scheduled maintenance shutdown, workers tried to bring the reactor to a zero power critical condition to test a control circuit. 
Since the core was loaded with 135Xe from the previous day's power generation, it was necessary to withdraw more control rods to achieve this. As a result, the overdriven reaction grew rapidly and uncontrollably, leading to a steam explosion in the core and the violent destruction of the facility. Uranium enrichment While many fissionable isotopes exist in nature, the only fissile isotope found in nature in viable quantities is 235U. About 0.7% of the uranium in most ores is the 235 isotope, and about 99.3% is the non-fissile 238 isotope. For most uses as a nuclear fuel, uranium must be enriched, that is, purified so that it contains a higher percentage of 235U. Because 238U absorbs fast neutrons, the critical mass needed to sustain a chain reaction increases as the 238U content increases, reaching infinity at 94% 238U (6% 235U). Concentrations lower than 6% 235U cannot go fast critical, though they are usable in a nuclear reactor with a neutron moderator. A nuclear weapon primary stage using uranium uses HEU enriched to ~90% 235U, though the secondary stage often uses lower enrichments. Nuclear reactors with a water moderator require at least some enrichment of 235U. Nuclear reactors with heavy water or graphite moderation can operate with natural uranium, eliminating altogether the need for enrichment and preventing the fuel from being useful for nuclear weapons; the CANDU power reactors used in Canadian power plants are an example of this type. Other candidates for future reactor fuels include americium, but obtaining it is even more difficult than enriching uranium. Enrichment itself is difficult because the chemical properties of 235U and 238U are identical, so physical processes such as gaseous diffusion, gas centrifuges, lasers, or mass spectrometry must be used for isotopic separation based on small differences in mass. Because enrichment is the main technical hurdle to production of nuclear fuel and simple nuclear weapons, enrichment technology is politically sensitive. Oklo: a natural nuclear reactor Modern deposits of uranium contain only up to ~0.7% 235U (and ~99.3% 238U), which is not enough to sustain a chain reaction moderated by ordinary water. But 235U has a much shorter half-life (700 million years) than 238U (4.5 billion years), so in the distant past the percentage of 235U was much higher. About two billion years ago, a water-saturated uranium deposit (in what is now the Oklo mine in Gabon, West Africa) underwent a naturally occurring chain reaction that was moderated by groundwater and, presumably, controlled by the negative void coefficient as the water boiled from the heat of the reaction. Uranium from the Oklo mine is about 50% depleted compared to other locations: it is only about 0.3% to 0.7% 235U; and the ore contains traces of stable daughters of long-decayed fission products. See also Critical mass List of nuclear reactors Nuclear physics Nuclear fission Nuclear fusion Void coefficient References External links Fermi age theory Notes on nuclear diffusion by Dr. Abdelhamid Dokhane Nuclear technology Pressure vessels Nuclear physics
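A quick back-of-the-envelope check of the Oklo argument, using the half-lives quoted above; the present-day atom fraction of 0.72% 235U is an assumed standard value, not taken from this article.

```python
import numpy as np

# Decay constants from the half-lives quoted in the text (per year).
lam_235 = np.log(2) / 7.0e8
lam_238 = np.log(2) / 4.5e9

t = 2.0e9                                  # two billion years ago
f235_now = 0.0072                          # assumed present-day atom fraction of 235U

# Run both isotopes backwards in time and recompute the 235U fraction.
n235 = f235_now * np.exp(lam_235 * t)
n238 = (1.0 - f235_now) * np.exp(lam_238 * t)
print(f"235U fraction ~2 Gyr ago: {100 * n235 / (n235 + n238):.1f}%")
```

The result, roughly 3.7% 235U, is comparable to the enrichment of modern light-water reactor fuel, which is why groundwater alone could moderate the Oklo deposit two billion years ago but cannot do so today.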
Nuclear reactor physics
[ "Physics", "Chemistry", "Engineering" ]
4,825
[ "Structural engineering", "Chemical equipment", "Nuclear technology", "Physical systems", "Hydraulics", "Nuclear physics", "Pressure vessels" ]
3,185,844
https://en.wikipedia.org/wiki/Ethyl%20cyanoacrylate
Ethyl cyanoacrylate (ECA), a cyanoacrylate ester, is an ethyl ester of 2-cyano-acrylic acid. It is a colorless liquid with low viscosity and a faint sweet smell in pure form. It is the main component of cyanoacrylate glues and can be encountered under many trade names. It is soluble in acetone, methyl ethyl ketone, nitromethane, and methylene chloride. ECA polymerizes rapidly in the presence of moisture. Production Ethyl cyanoacrylate is prepared by the condensation of formaldehyde with ethyl cyanoacetate: This exothermic reaction affords the polymer, which is subsequently sintered and then thermally "cracked" to give the monomer. Alternatively, it can be prepared by the ethoxycarbonylation of cyanoacetylene. Applications Ethyl cyanoacrylate is used for gluing. In forensics, cyanoacrylate ester has excellent non-destructive impressioning abilities, which are especially important when lifting fingerprints from delicate evidence items, or when the prints cannot be lifted using traditional means such as fingerprinting powder. The procedure involves heating the acrylate in a sealed chamber. Its fumes then react with the deposited proteins, forming white, stable, and clear print outlines. The resulting prints can be used 'as is' or enhanced further by staining them with darker pigments. Liquid bandage systems use the less toxic n-butyl and octyl cyanoacrylates. Safety In the U.S., the threshold limit value for ECA is 0.2 ppm. It is a strong irritant to the lungs and eyes. See also Cyanoacrylate Methyl cyanoacrylate Butyl cyanoacrylate Octyl cyanoacrylate References Ethyl esters Monomers Cyanoacrylate esters Lachrymatory agents Sweet-smelling chemicals
Ethyl cyanoacrylate
[ "Chemistry", "Materials_science" ]
417
[ "Monomers", "Lachrymatory agents", "Polymer chemistry", "Chemical weapons" ]
3,186,107
https://en.wikipedia.org/wiki/Race%3A%20The%20Reality%20of%20Human%20Difference
Race: The Reality of Human Differences is an anthropology book, in which authors Vincent M. Sarich, Emeritus Professor of Anthropology at the University of California, Berkeley, and Frank Miele, senior editor of Skeptic Magazine, argue for the reality of race. The book was published by Basic Books in 2004. It disputes the statements of the PBS documentary Race: The Power of an Illusion aired in 2003. After arguing that human races exist, the authors put forth three different political systems that take race into account in the final chapter, "Learning to Live with Race." These are "Meritocracy in the Global Marketplace", "Affirmative Action and Race Norming", and "Resegregation and the Emergence of Ethno-States." Sarich and Miele list the advantages and disadvantages of each system and advocate Global Meritocracy as the best of the three options. The authors then discuss "the horrific prospect of ethnically targeted weapons," which they view as technically feasible but not very likely to be used. References External links Website of the video Race: The Power of an Illusion Books about race and ethnicity Biology books Race and intelligence controversy
Race: The Reality of Human Difference
[ "Biology" ]
239
[ "Biology theories", "Obsolete biology theories", "Scientific racism" ]
3,188,361
https://en.wikipedia.org/wiki/Gobe%20Software
Gobe Software, Inc was a software company founded in 1997 by members of the ClarisWorks development team that developed and published an integrated desktop software suite for BeOS. In later years, it was the distributor of BeOS itself. History Gobe was founded in 1997 by members of the ClarisWorks development team and some of the authors of the original Styleware application for the Apple II. After leaving StyleWare and creating the product later known as ClarisWorks and AppleWorks, Bob Hearn, Scott Holdaway joined Tom Hoke, Scott Lindsey, Bruce Q. Hammond, and Carl Grice who also worked at Apple Computer's Claris subsidiary and formed Gobe Software, Inc with the notion to create a next-generation integrated office suite similar to ClarisWorks, but for the BeOS platform. It released Gobe Productive in 1998. When Be Inc. outsourced publication of BeOS in 2000, Gobe became the publisher of BeOS in North America, Australia, and sections of Asia. Only weeks after signing up other publishers around the globe, Be, Inc. halted development for the BeOS platform and publicly announced that all of its corporate focus would be on "Internet Appliances" and made public announcements that hampered forward momentum of the BeOS platform. In addition, the publishers in general and Gobe in particular did not have source code access to the BeOS and were not able to continue its development or add drivers that the platform needed to be a viable alternative to Windows or Linux. Gobe also published Hicom Entertainment/Next Generation Entertainments "Corum III" role-playing game for BeOS during this period. The failure of Be, Inc and BeOS meant ports had to be undertaken, and Windows and Linux variants were developed. Although the company shipped a Windows version of its software in December 2001, it was unable to obtain sufficient operating capital after the 2000 stock market crash and suspended operations 2002. In 2008 Gobe management began to work with distribution and development teams in Greater Asia and had plans to ship a new version of the product for the India market early 2010. Later in August 2010, Gobe Productive's website was disabled and then sold to an Indian movie producer called ErosNow. Gobe Productive The main product, Gobe Productive, was by far the most polished of the word processors, spreadsheet and vector graphics applications for BeOS, but as an integrated package a la ClarisWorks and Microsoft Works. Gobe Productive v1.0 for BeOS was released in August 1998, v2.0 in August 1999, and v2.0.1 on 29 February 2000. After the failure of Be, Inc a Windows and Linux variants were developed. The company shipped a Windows version of Gobe Productive 3 in December 2001. Other Gobe employees Dave Johnson Ben Chang Joël Spaltenstein Kurt von Finck Daniel Maia Alves Cheyenne Tuller Tomy Hudson See also Comparison of office suites Notes References Further reading External links Be, Inc. article (archived) BeOS Discontinued software Defunct software companies of the United States
Gobe Software
[ "Technology" ]
621
[ "BeOS", "Computing platforms" ]
3,188,555
https://en.wikipedia.org/wiki/Ring%20size
Ring size is a measurement used to denote the circumference (or sometimes the diameter) of jewellery rings and smart rings. Measuring tools Ring sizes can be measured physically by a paper, plastic, or metal ring sizer (as a gauge) or by measuring the inner diameter of a ring that already fits. Ring sticks are tools used to measure the inner size of a ring, and are typically made from plastic, delrin, wood, aluminium, or of multiple materials. Digital ring sticks can be used for highly accurate measurements. Measurement systems International standard ISO 8653:2016 defines standard ring sizes in terms of the inner circumference of the ring measured in millimetres. ISO sizes are used in Austria, France, Belgium, Scandinavia (Norway, Sweden, Denmark, Finland, Iceland), and other countries in Continental Europe. Other traditional and regional systems Other ring size measurement systems are used in areas that do not use ISO 8653:2016. North America In the United States, Canada, and Mexico, ring sizes are specified using a numerical scale with steps, where whole sizes differ by of internal diameter, equivalent to of internal circumference. The relationship of this size () to ISO 8653:2016 circumference () is , while the relationship to ISO 8653:2016 diameter () is . The Circular of the Bureau of Standards summarizes the situation with this system: "While there apparently is only one standard in use in the United States, in reality, because of the lack of specific dimensions and because of the errors introduced by the adoption of a common commercial article as a pattern, there are many, although similar, standards." The standards are generally consistent and remain so. There does not appear to have been any improvement in the standard since then. Ireland, United Kingdom, Australia In Ireland, the United Kingdom and Australia, ring sizes are specified using an alphabetical scale with half sizes. Originally in 1945, the divisions were based on the ring inside diameter in steps of . However, in 1987 BSI updated the standard to the metric system so that one alphabetical size division equals 1.25 mm of circumferential length. For a baseline, ring size C has a circumference of 40 mm. India, Japan, China In India, Japan and China, ring sizes are specified using a numerical scale with whole sizes that do not have a linear correlation with diameter or circumference. Germany and Netherlands Netherlands, Germany, and sometimes Argentina use a standard (referred to as the German System) where ring sizes are defined by the diameter of the ring, measured in mm. This system may also be used at times in Russia. Italy, Spain, Switzerland In Italy, Spain, and Switzerland, ring sizes are specified as the circumference minus 40 mm: for example, size 10 in this system is equivalent to ISO 8653:2016 size 50. This may also be referred to as the Swiss Ring Size System. Russia In Russia, ring sizes are equal to the inner diameter rounded to whole and half numbers, sometimes to quarters, for example diameter 16.92 mm is equal to size 17, 16.1 mm is equal to size 16. Equivalency table Resizing Most rings can be resized; the method of doing so depends on the complexity of the ring and its material. Rings of soft material may be enlarged using mechanical stretching. For example, the ring may be enlarged using a rolling mill, a steel ring mandrel, or a Schwann Ring Stretcher. Adding Material In some cases, the ring may need to be cut open and material either added or removed before fusing the ring together again. 
The ring may be slightly heated to reveal any solder line so the jeweler can open the ring on the same seam so as to minimize the total number of solder joins on the ring. Sizing beads Small metal beads called sizing beads can be added to the inner circumference of a ring to: Decrease the effective inner diameter of a ring that is too big, to aid in holding the ring in place against the finger Counterbalance top-heavy rings Keep a ring from spinning for wearers whose knuckles are much larger than their finger base Sizing beads are typically made of the same metal as the rest of the ring since it is easier to solder two similar metals. References Rings (jewellery) Sizes in clothing
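The regional definitions in the measurement-systems section above that are stated completely in this article (ISO size as the inner circumference in mm, the German diameter-based size, the Italian/Spanish/Swiss "circumference minus 40 mm" size, and the UK letter scale with 1.25 mm per division and size C = 40 mm) can be turned into a small converter. This is only a sketch based on those stated relations; it ignores half sizes and regional rounding practice, and omits the US scale because its defining constants were lost from the text above.

```python
import math
import string

def iso_from_german(diameter_mm):
    """German system: size is the inner diameter in mm; ISO size is the circumference."""
    return math.pi * diameter_mm

def iso_from_swiss(swiss_size):
    """Italian/Spanish/Swiss system: size = circumference - 40 mm."""
    return swiss_size + 40.0

def iso_from_uk(letter):
    """UK/Irish/Australian letters: size C = 40 mm, each whole letter adds 1.25 mm."""
    steps = string.ascii_uppercase.index(letter.upper()) - string.ascii_uppercase.index("C")
    return 40.0 + 1.25 * steps

print(f"German 17.0 -> ISO {iso_from_german(17.0):.1f} mm")   # ~53.4 mm
print(f"Swiss  10   -> ISO {iso_from_swiss(10):.1f} mm")      # 50.0 mm
print(f"UK     L    -> ISO {iso_from_uk('L'):.2f} mm")        # 51.25 mm
```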
Ring size
[ "Physics", "Mathematics" ]
892
[ "Sizes in clothing", "Quantity", "Physical quantities", "Size" ]
3,190,609
https://en.wikipedia.org/wiki/Endophysics
The term endophysics (lit. “physics from within”) was coined by the American physicist David Finkelstein in a letter to the German biochemist Otto E. Rössler, who originally came up with the concept. It refers to the study of how observations are affected and limited by the observer being within the universe. This is in contrast with "exophysics," which assumes a system observed from the “outside”. See also Physics Internal measurement (This notion is very similar to endophysics.) References R. J. Boskovich, De spacio et tempore, ut a nobis cognoscuntur, partial English translation in: J. M. Child (Ed.), A Theory of Natural Philosophy, Open Court (1922) and MIT Press, Cambridge, MA, 1966, pp. 203–205. T. Toffoli, The role of the observer in uniform systems, in: G. J. Klir (Ed.), Applied General Systems Research, Recent Developments and Trends, Plenum Press, New York, London, 1978, pp. 395–400. K. Svozil, Connections between deviations from Lorentz transformation and relativistic energy-momentum relation, Europhysics Letters 2 (1986) 83–85. O. E. Rössler, Endophysics, in: J. L. Casti, A. Karlquist (Eds.), Real Brains, Artificial Minds, North-Holland, New York, 1987, p. 25. O. E. Rössler, Endophysics. Die Welt des inneren Beobachters, Merwe Verlag, Berlin, 1992, with a foreword by Peter Weibel. K. Svozil, Extrinsic-intrinsic concept and complementarity, in: H. Atmanspacker, G. J. Dalenoort (Eds.), Inside versus Outside, Springer-Verlag, Heidelberg, 1994, pp. 273–288. Further reading External links Karl Svozil (2005). Computational universes. Interview with O. E. Rössler (in German) Vom Chaos, der Virtuellen Realität und der Endophysik Concepts in the philosophy of mind Quantum mind
Endophysics
[ "Physics" ]
473
[ "Quantum mind", "Quantum mechanics" ]
3,191,150
https://en.wikipedia.org/wiki/Galbanum
Galbanum is an aromatic gum resin and a product of certain umbelliferous Persian plant species in the genus Ferula, chiefly Ferula gummosa (synonym F. galbaniflua) and Ferula rubricaulis. Galbanum-yielding plants grow plentifully on the slopes of the mountain ranges of northern Iran. It occurs usually in hard or soft, irregular, more or less translucent and shining lumps, or occasionally in separate tears, of a light-brown, yellowish or greenish-yellow colour. Galbanum has a disagreeable, bitter taste, a peculiar, a somewhat musky odour, and an intense green scent. With a specific gravity of 1.212, it contains about 8% terpenes; about 65% of a resin which contains sulfur; about 20% gum; and a very small quantity of the colorless crystalline substance umbelliferone. It also contains α-pinene, β-pinene, limonene, cadinene, 3-carene, and ocimene. Uses Biblical use In the Book of Exodus 30:34, it is mentioned as being used in the making of the Ketoret which is used when referring to the consecrated incense described in the Hebrew Bible and Talmud. It was offered on the specialized incense altar in the time when the Tabernacle was located in the First and Second Jerusalem Temples. The ketoret was an important component of the Temple service in Jerusalem. Rashi (1040-1105) comments on this passage that galbanum is bitter and was included in the incense as a reminder of deliberate and unrepentant sinners. The incense formula was apparently ground small or into a powder. This would be possible because Galbanum, which is a sticky tar-like resin, can be made into a powder by drying, low boiling, or adding a diluent. Perfumes and scents Galbanum was highly treasured as a sacred substance by the ancient Egyptians. The "green" incense of Egyptian antiquity is believed to have been galbanum. Galbanum resin has a very intense green scent accompanied by a turpentine odor. The initial notes are a very bitter, acrid, and peculiar scent followed by a complex green, spicy, woody, balsamlike fragrance. When diluted the scent of galbanum has variously been described as reminiscent of pine (due to the pinene and limonene content), evergreen, green bamboo, parsley, green apples, musk, or simply intense green. The oil has a pine like topnote which is less pronounced in the odor of the resinoid. The latter, in turn, has a more woody balsamic, conifer resinous character. Galbanum is frequently adulterated with pine oil. It is occasionally used in the making of modern perfume, and is the ingredient which gives the distinctive smell to the fragrances "Must" by Cartier, "Vent Vert" by Balmain, "Chanel No. 19", "Vol De Nuit" by Guerlain, as well as Silver Mountain Water by Creed, the esteemed scent of James Gandolfini used during the filming of the sixth season of The Sopranos. The debut of galbanum in fine modern perfumery is generally thought to be the origin of the "Green" family of scents, exemplified by the scent "Vent Vert" first launched by Balmain in 1945. Galbanum absolute is a brown viscous liquid which will easily resinify over time even with minimal exposure to air obtained by solvent-extraction from the gum oleo-resin of the plant. Its odour profile is described as ambery-green, sweet, balsamic, resinous with hints of freshness, "similar to how galbanum oil would smell when mixed with labdanum". It acts as a base note in perfume compositions - one of a handful of green base notes of natural origin. 
Because it is perceived as simultaneously 'green' and sweet, it finds a more specific role to create a special effect in 'Chypre green', 'floral green', 'Chypre coniferous', 'Woody Fougères' and 'Aquatic Fougères'. Medicinal use Hippocrates employed it in medicine, and Pliny (Nat. Hist. xxiv. 13) ascribes to it extraordinary curative powers, concluding his account of it with the assertion that "the very touch of it mixed with oil of spondylium is sufficient to kill a serpent." The drug was occasionally given in more contemporary medicine, in doses of from five to fifteen grains. It has the actions "common to substances containing a resin and a volatile oil". Its use is now obsolescent. Other uses The Latin name ferula derives in part from Ferule which is a schoolmaster's rod, such as a cane, stick, or flat piece of wood, used in punishing children. A ferula called narthex (or Giant fennel), which shares the galbanum-like scent, has long, straight and sturdy hollow stalks, which are segmented like bamboo. They were used as torches in antiquity and it is with such a torch that, according to Greek mythology, Prometheus, who deceived his father stealing some of his fire, brought fire to humanity. Bacchae were described using the bamboo-like stalks as weapons. Such rods were also used for walking sticks, splints, for stirring boiling liquids, and for corporal punishment. Some of the mythology may have transferred to the related galbanum which was referred to as the sacred "mother resin." In 1858, Lola Montez recommended using a mixture of galbanum (which she spelled "gaulbanum") and pitch plaster attached to a leather strip as a tool for removing hair from body parts where more visible hair might be unwanted, similar to modern day 'waxing'. References Ferula Resins Incense material Flora of Kurdistan
Galbanum
[ "Physics" ]
1,250
[ "Resins", "Unsolved problems in physics", "Incense material", "Materials", "Amorphous solids", "Matter" ]
16,553,854
https://en.wikipedia.org/wiki/Cavity%20quantum%20electrodynamics
Cavity quantum electrodynamics (cavity QED) is the study of the interaction between light confined in a reflective cavity and atoms or other particles, under conditions where the quantum nature of photons is significant. It could in principle be used to construct a quantum computer. The case of a single 2-level atom in the cavity is mathematically described by the Jaynes–Cummings model, and undergoes vacuum Rabi oscillations , that is between an excited atom and photons, and a ground state atom and photons. If the cavity is in resonance with the atomic transition, a half-cycle of oscillation starting with no photons coherently swaps the atom qubit's state onto the cavity field's, , and can be repeated to swap it back again; this could be used as a single photon source (starting with an excited atom), or as an interface between an atom or trapped ion quantum computer and optical quantum communication. Other interaction durations create entanglement between the atom and cavity field; for example, a quarter-cycle on resonance starting from gives the maximally entangled state (a Bell state) . This can in principle be used as a quantum computer, mathematically equivalent to a trapped ion quantum computer with cavity photons replacing phonons. Nobel Prize in Physics The 2012 Nobel Prize for Physics was awarded to Serge Haroche and David Wineland for their work on controlling quantum systems. Haroche shares half of the prize for developing a new field called cavity quantum electrodynamics (CQED) – whereby the properties of an atom are controlled by placing it in an optical or microwave cavity. Haroche focused on microwave experiments and turned the technique on its head – using CQED to control the properties of individual photons. In a series of ground-breaking experiments, Haroche used CQED to realize Schrödinger's famous cat experiment in which a system is in a superposition of two very different quantum states until a measurement is made on the system. Such states are extremely fragile, and the techniques developed to create and measure CQED states are now being applied to the development of quantum computers. See also Circuit quantum electrodynamics Superconducting radio frequency Dicke model References Microwave wavelengths, atoms passing through cavity Optical wavelengths, atoms trapped Quantum optics Quantum information science
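The ket expressions in this article were lost in extraction. The sketch below is a minimal, self-contained illustration of the resonant Jaynes–Cummings dynamics described above, restricted to the two-state subspace spanned by |e, 0⟩ (excited atom, no photon) and |g, 1⟩ (ground-state atom, one photon); g denotes an assumed atom–cavity coupling rate. It only demonstrates the half-cycle swap and the quarter-cycle entangling operation, not any particular experiment.

```python
import numpy as np

# Basis: index 0 -> |e, 0>, index 1 -> |g, 1>. On resonance the Jaynes-Cummings
# coupling acts like g * sigma_x in this subspace, so the propagator is
# U(t) = cos(g t) * I - i sin(g t) * sigma_x.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

def propagator(g, t):
    return np.cos(g * t) * np.eye(2) - 1j * np.sin(g * t) * sigma_x

g = 1.0
psi0 = np.array([1.0, 0.0], dtype=complex)               # start in |e, 0>

half_cycle = propagator(g, np.pi / (2 * g)) @ psi0        # full population transfer
quarter_cycle = propagator(g, np.pi / (4 * g)) @ psi0     # equal superposition (Bell-type)

print("after half cycle:    |<g,1|psi>|^2 =", abs(half_cycle[1]) ** 2)          # ~1.0
print("after quarter cycle: populations   =", np.round(np.abs(quarter_cycle) ** 2, 3))  # [0.5 0.5]
```

The half-cycle result is the atom-to-field state swap mentioned above, and the quarter-cycle result is the maximally entangled atom–photon state used as the Bell-state example.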
Cavity quantum electrodynamics
[ "Physics" ]
480
[ "Quantum optics", "Quantum mechanics" ]
16,568,196
https://en.wikipedia.org/wiki/Rubitecan
Rubitecan (INN, marketing name Orathecin) is an oral topoisomerase inhibitor, developed by SuperGen (now Astex Pharmaceuticals, Inc.; a member of the Otsuka Group). History On January 27, 2004, SuperGen announced that it has completed the submission of an NDA for rubitecan to the US FDA, and was accepted for filing in March 2004. In January 2005, and under the direction of then-CEO James Manuso, SuperGen withdrew the NDA for rubitecan, based on feedback indicating that the current data package would not be sufficient to gain US approval, and in January 2006, the Marketing Authorization Application (MAA) filed with the European Medicines Agency (EMA) was also withdrawn. The name Rubitecan is a portmanteau of SuperGen's founder, Dr. Joseph Rubinfeld, and the chemical name 9-Nitrocamptothecin. Synthesis Large scale production of Rubitecan has encountered problems. The direct nitration of camptothecin results in regioselectivity problems. One way that has been used to synthesize Rubitecan is to nitrate 10-hydroxycamptothecin then remove the hydroxyl functional group. Use as Anti-Cancer Drug Rubitecan is a compound used extensively in cancer research. Rubitecan is an effective drug against pancreatic cancer and other solid tumors. One major problem is the lack of oral bioavailability due to low permeability and poor water solubility. One study shows 9-NC-SD through Soluplus1-based solid dispersion system is a much more effective delivery method than free 9-NC. References Abandoned drugs Topoisomerase inhibitors
Rubitecan
[ "Chemistry" ]
358
[ "Drug safety", "Abandoned drugs" ]
16,568,500
https://en.wikipedia.org/wiki/Overall%20labor%20effectiveness
Overall labor effectiveness (OLE) is a key performance indicator (KPI) that measures the utilization, performance, and quality of the workforce and its impact on productivity. Similar to overall equipment effectiveness (OEE), OLE measures availability, performance, and quality. Availability – the percentage of time employees spend making effective contributions Performance – the amount of product delivered Quality – the percentage of perfect or saleable product produced OLE allows manufacturers to make operational decisions by giving them the ability to analyze the cumulative effect of these three workforce factors on productive output, while considering the impact of both direct and indirect labor. OLE supports Lean and Six Sigma methodologies and applies them to workforce processes, allowing manufacturers to make labor-related activities more efficient, repeatable and impactful. Measuring availability There are many factors that influence workforce availability and therefore the potential output of equipment and the manufacturing plant. OLE can help manufacturers be sure that they have the person with the right skills available at the right time by enabling manufacturers to locate areas where providing and scheduling the right mix of employees can increase the number of productive hours. OLE also accounts for labor utilization. Understanding where downtime losses are coming from and the impact they have on production can reveal root causes—which can include machine downtime, material delays, or absenteeism—that delay a line startup. Calculation: Availability = Time operators are working productively / Time scheduled Example: Two employees (workforce) are scheduled to work 8 hour (480 minutes) shifts. The normal shift includes a scheduled 30 minute break. The employees experience 60 minutes of unscheduled downtime. Scheduled Time = 960 min − 60 min break = 900 Min Available Time = 900 min Scheduled − 120 min Unscheduled Downtime = 780 Min Availability = 780 Avail Min / 900 Scheduled Min = 86.67% Measuring performance When employees cannot perform their work within standard times, performance can suffer. Effective training can increase performance by improving the skills that directly impact the quality of output. A skilled operator knows how to measure work, understands the impacts of variability, and knows to stop production for corrective actions when quality falls below specified limits. Accurately measuring this metric with OLE can pinpoint performance improvement opportunities down to the individual level. Calculation: Performance = Actual output of the operators / the expected output (or labor standard) Example: Two employees (workforce) are scheduled to work an 8-hour (480 minute) shift with a 30-minute scheduled break. Available Time = 960 min − 60 min break − 120 min Unscheduled Downtime = 780 Min The Standard Rate for the part being produced is 60 Units/Hour or 1 Minute/Unit The Workforce produces 700 Total Units during the shift. Time to Produce Parts = 700 Units * 1 Minutes/Unit = 700 Minutes Performance = 700 minutes / 780 minutes = 89.74 % Measuring quality A number of drivers contribute to quality, but the effort to improve quality can result in a lowering of labor performance. When making the correlation between the workforce and quality it is important to consider factors such as the training and skills of employees, whether they have access to the right tools to follow procedures, and their understanding of how their roles drive and impact quality. 
OLE can help manufacturers analyze productivity down to the single-shift level, determine which individual workers are most productive, and then identify corrective actions to bring operations up to standards. Calculation: Quality = Saleable parts / Total parts produced Example: Two employees (workforce) produce 670 Good Units during a shift. 700 Units were started in order to produce the 670 Good Units. Quality = 670 Good Units / 700 Units Started = 95.71% Calculation Effective use of OLE uncovers the data that fuels root-cause analysis and points to corrective actions. Likewise, OLE exposes trends that can be used to diagnose more subtle problems. It also helps managers understand whether corrective actions did, in fact, solve problems and improve overall productivity. Calculation: OLE = Availability x Performance x Quality Example: A workforce experiences... Availability of 86.67% Work Center Performance of 89.74% Work Center Quality of 95.71% OLE = 86.67% Availability x 89.74% Performance x 95.71% Quality = 74.44% Labor information tracked The following table provides examples of the labor information tracked by overall labor effectiveness, organized by its major categories. Using this labor information, manufacturers can make operational decisions to improve the cumulative effect of labor availability, performance, and quality. See also Lean Six Sigma Process-centered design Total productive maintenance References External links What is OEE? Lean Labor Book OEE Key Points Lean manufacturing
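A compact restatement of the worked example above; the numbers are those given in the text, and the function name is illustrative only.

```python
def ole(availability, performance, quality):
    """Overall labor effectiveness as the product of the three factors."""
    return availability * performance * quality

# Worked example from the text: two employees, 8-hour shifts, 30-minute breaks each.
scheduled = 2 * 480 - 2 * 30          # 900 minutes scheduled
available = scheduled - 120           # 780 minutes after unscheduled downtime
availability = available / scheduled  # 0.8667

performance = (700 * 1) / available   # 700 units at 1 minute per unit -> 0.8974
quality = 670 / 700                   # 0.9571

print(f"OLE = {ole(availability, performance, quality):.2%}")   # ~74.44%
```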
Overall labor effectiveness
[ "Engineering" ]
932
[ "Lean manufacturing" ]
16,568,967
https://en.wikipedia.org/wiki/Landau%E2%80%93Lifshitz%E2%80%93Gilbert%20equation
In physics, the Landau–Lifshitz–Gilbert equation (usually abbreviated as LLG equation), named for Lev Landau, Evgeny Lifshitz, and T. L. Gilbert, is a name used for a differential equation describing the dynamics (typically the precessional motion) of magnetization in a solid. It is a modified version by Gilbert of the original equation of Landau and Lifshitz. The LLG equation is similar to the Bloch equation, but they differ in the form of the damping term. The LLG equation describes a more general scenario of magnetization dynamics beyond the simple Larmor precession. In particular, the effective field driving the precessional motion of is not restricted to real magnetic fields; it incorporates a wide range of mechanisms including magnetic anisotropy, exchange interaction, and so on. The various forms of the LLG equation are commonly used in micromagnetics to model the effects of a magnetic field and other magnetic interactions on ferromagnetic materials. It provides a practical way to model the time-domain behavior of magnetic elements. Recent developments generalizes the LLG equation to include the influence of spin-polarized currents in the form of spin-transfer torque. Landau–Lifshitz equation In a ferromagnet, the magnitude of the magnetization at each spacetime point is approximated by the saturation magnetization (although it can be smaller when averaged over a chunk of volume). The LLG equation describes the rotation of the magnetization in response to the effective field and accounts for not only a real magnetic field but also internal magnetic interactions such as exchange and anisotropy. An earlier, but equivalent, equation (the Landau–Lifshitz equation) was introduced by : where is the electron gyromagnetic ratio and is a phenomenological damping parameter, often replaced by where is a dimensionless constant called the damping factor. The effective field is a combination of the external magnetic field, the demagnetizing field, and various internal magnetic interactions involving quantum mechanical effects, which is typically defined as the functional derivative of the magnetic free energy with respect to the local magnetization . To solve this equation, additional conditions for the demagnetizing field must be included to accommodate the geometry of the material. Landau–Lifshitz–Gilbert equation In 1955 Gilbert replaced the damping term in the Landau–Lifshitz (LL) equation by one that depends on the time derivative of the magnetization: This is the Landau–Lifshitz–Gilbert (LLG) equation, where is the damping parameter, which is characteristic of the material. It can be transformed into the Landau–Lifshitz equation: where In this form of the LL equation, the precessional term depends on the damping term. This better represents the behavior of real ferromagnets when the damping is large. Landau–Lifshitz–Gilbert–Slonczewski equation In 1996 John Slonczewski expanded the model to account for the spin-transfer torque, i.e. the torque induced upon the magnetization by spin-polarized current flowing through the ferromagnet. This is commonly written in terms of the unit moment defined by : where is the dimensionless damping parameter, and are driving torques, and is the unit vector along the polarization of the current. References and footnotes Further reading This is only an abstract; the full report is "Armor Research Foundation Project No. A059, Supplementary Report, May 1, 1956", but was never published. 
A description of the work is given in External links Magnetization dynamics applet Eponymous equations of physics Magnetic ordering Partial differential equations Lev Landau
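The equation displays in this article were lost in extraction. As a hedged reconstruction (standard forms in one common sign convention, with γ > 0 the gyromagnetic ratio, M_s the saturation magnetization, α the dimensionless Gilbert damping, and λ the Landau–Lifshitz damping parameter), rather than the article's own typography:

```latex
% Landau--Lifshitz form
\frac{\partial \mathbf{M}}{\partial t}
  = -\gamma\, \mathbf{M} \times \mathbf{H}_{\mathrm{eff}}
    - \frac{\lambda}{M_s}\, \mathbf{M} \times \left(\mathbf{M} \times \mathbf{H}_{\mathrm{eff}}\right)

% Gilbert form
\frac{\partial \mathbf{M}}{\partial t}
  = -\gamma\, \mathbf{M} \times \mathbf{H}_{\mathrm{eff}}
    + \frac{\alpha}{M_s}\, \mathbf{M} \times \frac{\partial \mathbf{M}}{\partial t}

% The two forms are equivalent with
\gamma_{\mathrm{LL}} = \frac{\gamma}{1 + \alpha^2}, \qquad
\lambda = \frac{\gamma \alpha}{1 + \alpha^2}

% One common Slonczewski spin-transfer extension for the unit moment m = M / M_s,
% where a_j and b_j are the damping-like and field-like torque amplitudes and
% p is the unit vector along the current's spin polarization
\frac{\mathrm{d}\mathbf{m}}{\mathrm{d}t}
  = -\gamma\, \mathbf{m} \times \mathbf{H}_{\mathrm{eff}}
    + \alpha\, \mathbf{m} \times \frac{\mathrm{d}\mathbf{m}}{\mathrm{d}t}
    - \gamma\, a_j\, \mathbf{m} \times \left(\mathbf{m} \times \mathbf{p}\right)
    - \gamma\, b_j\, \mathbf{m} \times \mathbf{p}
```

Sign conventions and the exact grouping of the prefactors vary between references, so these should be read as representative forms rather than the specific expressions the article displayed.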
Landau–Lifshitz–Gilbert equation
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
777
[ "Equations of physics", "Eponymous equations of physics", "Electric and magnetic fields in matter", "Materials science", "Magnetic ordering", "Condensed matter physics" ]
13,792,019
https://en.wikipedia.org/wiki/Stokes%27s%20law%20of%20sound%20attenuation
In acoustics, Stokes's law of sound attenuation is a formula for the attenuation of sound in a Newtonian fluid, such as water or air, due to the fluid's viscosity. It states that the amplitude of a plane wave decreases exponentially with distance traveled, at a rate given by where is the dynamic viscosity coefficient of the fluid, is the sound's angular frequency, is the fluid density, and is the speed of sound in the medium. The law and its derivation were published in 1845 by the Anglo-Irish physicist G. G. Stokes, who also developed Stokes's law for the friction force in fluid motion. A generalisation of Stokes attenuation taking into account the effect of thermal conductivity was proposed by the German physicist Gustav Kirchhoff in 1868. Sound attenuation in fluids is also accompanied by acoustic dispersion, meaning that the different frequencies are propagating at different sound speeds. Interpretation Stokes's law of sound attenuation applies to sound propagation in an isotropic and homogeneous Newtonian medium. Consider a plane sinusoidal pressure wave that has amplitude at some point. After traveling a distance from that point, its amplitude will be The parameter is a kind of attenuation constant, dimensionally the reciprocal of length. In the International System of Units (SI), it is expressed in neper per meter or simply reciprocal of meter (m). That is, if  = 1 m, the wave's amplitude decreases by a factor of for each meter traveled. Importance of volume viscosity The law is amended to include a contribution by the volume viscosity : The volume viscosity coefficient is relevant when the fluid's compressibility cannot be ignored, such as in the case of ultrasound in water. The volume viscosity of water at 15 C is 3.09 centipoise. Modification for very high frequencies Stokes's law is actually an asymptotic approximation for low frequencies of a more general formula involving relaxation time : The relaxation time for water is about per radian, corresponding to an angular frequency of radians (500 gigaradians) per second and therefore a frequency of about . See also Acoustic attenuation References Colloidal chemistry Fluid dynamics Acoustics
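The displayed formulas were lost from this article in extraction. The classical results it describes can be reconstructed as follows (standard forms, with the symbols matching the prose definitions above: η the dynamic viscosity, ω the angular frequency, ρ the density, c the sound speed):

```latex
% Stokes's classical attenuation coefficient
\alpha = \frac{2\,\eta\,\omega^{2}}{3\,\rho\,c^{3}}

% Exponential decay of a plane wave's amplitude with distance d
A(d) = A_{0}\, e^{-\alpha d}

% Amendment including the volume (bulk) viscosity \eta^{\mathrm{v}}
\alpha = \frac{\omega^{2}}{2\,\rho\,c^{3}} \left( \tfrac{4}{3}\,\eta + \eta^{\mathrm{v}} \right)
```

These are the conventional low-frequency expressions; the relaxation-time generalization mentioned in the text is not reproduced here.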
Stokes's law of sound attenuation
[ "Physics", "Chemistry", "Engineering" ]
467
[ "Colloidal chemistry", "Chemical engineering", "Classical mechanics", "Colloids", "Acoustics", "Surface science", "Piping", "Fluid dynamics" ]
13,793,480
https://en.wikipedia.org/wiki/Volume%20viscosity
Volume viscosity (also called bulk viscosity, or second viscosity or, dilatational viscosity) is a material property relevant for characterizing fluid flow. Common symbols are or . It has dimensions (mass / (length × time)), and the corresponding SI unit is the pascal-second (Pa·s). Like other material properties (e.g. density, shear viscosity, and thermal conductivity) the value of volume viscosity is specific to each fluid and depends additionally on the fluid state, particularly its temperature and pressure. Physically, volume viscosity represents the irreversible resistance, over and above the reversible resistance caused by isentropic bulk modulus, to a compression or expansion of a fluid. At the molecular level, it stems from the finite time required for energy injected in the system to be distributed among the rotational and vibrational degrees of freedom of molecular motion. Knowledge of the volume viscosity is important for understanding a variety of fluid phenomena, including sound attenuation in polyatomic gases (e.g. Stokes's law), propagation of shock waves, and dynamics of liquids containing gas bubbles. In many fluid dynamics problems, however, its effect can be neglected. For instance, it is 0 in a monatomic gas at low density (unless the gas is moderately relativistic), whereas in an incompressible flow the volume viscosity is superfluous since it does not appear in the equation of motion. Volume viscosity was introduced in 1879 by Sir Horace Lamb in his famous work Hydrodynamics. Although relatively obscure in the scientific literature at large, volume viscosity is discussed in depth in many important works on fluid mechanics, fluid acoustics, theory of liquids, rheology, and relativistic hydrodynamics. Derivation and use At thermodynamic equilibrium, the negative-one-third of the trace of the Cauchy stress tensor is often identified with the thermodynamic pressure, which depends only on equilibrium state variables like temperature and density (equation of state). In general, the trace of the stress tensor is the sum of thermodynamic pressure contribution and another contribution which is proportional to the divergence of the velocity field. This coefficient of proportionality is called volume viscosity. Common symbols for volume viscosity are and . Volume viscosity appears in the classic Navier-Stokes equation if it is written for compressible fluid, as described in most books on general hydrodynamics and acoustics. where is the shear viscosity coefficient and is the volume viscosity coefficient. The parameters and were originally called the first and bulk viscosity coefficients, respectively. The operator is the material derivative. By introducing the tensors (matrices) , and (where e is a scalar called dilation, and is the identity tensor), which describes crude shear flow (i.e. the strain rate tensor), pure shear flow (i.e. the deviatoric part of the strain rate tensor, i.e. the shear rate tensor) and compression flow (i.e. the isotropic dilation tensor), respectively, the classic Navier-Stokes equation gets a lucid form. Note that the term in the momentum equation that contains the volume viscosity disappears for an incompressible flow because there is no divergence of the flow, and so also no flow dilation e to which is proportional: So the incompressible Navier-Stokes equation can be simply written: In fact, note that for the incompressible flow the strain rate is purely deviatoric since there is no dilation (e=0). 
In other words, for an incompressible flow the isotropic stress component is simply the pressure: and the deviatoric (shear) stress is simply twice the product between the shear viscosity and the strain rate (Newton's constitutive law): Therefore, in the incompressible flow the volume viscosity plays no role in the fluid dynamics. However, in a compressible flow there are cases where , which are explained below. In general, moreover, is not just a property of the fluid in the classic thermodynamic sense, but also depends on the process, for example the compression/expansion rate. The same goes for shear viscosity. For a Newtonian fluid the shear viscosity is a pure fluid property, but for a non-Newtonian fluid it is not a pure fluid property due to its dependence on the velocity gradient. Neither shear nor volume viscosity are equilibrium parameters or properties, but transport properties. The velocity gradient and/or compression rate are therefore independent variables together with pressure, temperature, and other state variables. Landau's explanation According to Landau, He later adds: After an example, he concludes (with used to represent volume viscosity): Measurement A brief review of the techniques available for measuring the volume viscosity of liquids can be found in Dukhin & Goetz and Sharma (2019). One such method is by using an acoustic rheometer. Below are values of the volume viscosity for several Newtonian liquids at 25 °C (reported in cP): methanol - 0.8 ethanol - 1.4 propanol - 2.7 pentanol - 2.8 acetone - 1.4 toluene - 7.6 cyclohexanone - 7.0 hexane - 2.4 Recent studies have determined the volume viscosity for a variety of gases, including carbon dioxide, methane, and nitrous oxide. These were found to have volume viscosities which were hundreds to thousands of times larger than their shear viscosities. Fluids having large volume viscosities include those used as working fluids in power systems having non-fossil fuel heat sources, wind tunnel testing, and pharmaceutical processing. Modeling There are many publications dedicated to numerical modeling of volume viscosity. A detailed review of these studies can be found in Sharma (2019) and Cramer. In the latter study, a number of common fluids were found to have bulk viscosities which were hundreds to thousands of times larger than their shear viscosities. For relativistic liquids and gases, bulk viscosity is conveniently modeled in terms of a mathematical duality with chemically reacting relativistic fluids. References Colloidal chemistry Fluid dynamics Viscosity
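The displayed equations in this article were likewise lost in extraction. As a hedged reconstruction (standard constant-coefficient forms, using μ for the shear viscosity and ζ for the volume viscosity as defined in the prose):

```latex
% Cauchy stress split into pressure, deviatoric (shear) and dilatational parts
\boldsymbol{\sigma} = -p\,\mathbf{I}
  + 2\mu\left(\mathbf{S} - \tfrac{1}{3}(\nabla\cdot\mathbf{v})\,\mathbf{I}\right)
  + \zeta\,(\nabla\cdot\mathbf{v})\,\mathbf{I},
\qquad \mathbf{S} = \tfrac{1}{2}\left(\nabla\mathbf{v} + (\nabla\mathbf{v})^{\mathsf{T}}\right)

% Compressible Navier--Stokes momentum equation for constant \mu and \zeta
\rho\,\frac{D\mathbf{v}}{Dt} = -\nabla p + \mu\,\nabla^{2}\mathbf{v}
  + \left(\zeta + \tfrac{\mu}{3}\right)\nabla(\nabla\cdot\mathbf{v})

% For incompressible flow \nabla\cdot\mathbf{v} = 0, so the \zeta term vanishes,
% which is the point made in the text above.
```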
Volume viscosity
[ "Physics", "Chemistry", "Engineering" ]
1,344
[ "Physical phenomena", "Colloidal chemistry", "Physical quantities", "Chemical engineering", "Colloids", "Surface science", "Piping", "Wikipedia categories named after physical quantities", "Viscosity", "Physical properties", "Fluid dynamics" ]
13,793,747
https://en.wikipedia.org/wiki/Group%20method%20of%20data%20handling
Group method of data handling (GMDH) is a family of inductive algorithms for computer-based mathematical modeling of multi-parametric datasets that features fully automatic structural and parametric optimization of models. GMDH is used in such fields as data mining, knowledge discovery, prediction, complex systems modeling, optimization and pattern recognition. GMDH algorithms are characterized by inductive procedure that performs sorting-out of gradually complicated polynomial models and selecting the best solution by means of the external criterion. The last section of contains a summary of the applications of GMDH in the 1970s. Other names include "polynomial feedforward neural network", or "self-organization of models". It was one of the first deep learning methods, used to train an eight-layer neural net in 1971. Mathematical content Polynomial regression This section is based on. This is the general problem of statistical modelling of data: Consider a dataset , with points. Each point contains observations, and one target to predict. How to best predict the target based on the observations? First, we split the full dataset into two parts: a training set and a validation set. The training set would be used to fit more and more model parameters, and the validation set would be used to decide which parameters to include, and when to stop fitting completely. The GMDH starts by considering degree-2 polynomial in 2 variables. Suppose we want to predict the target using just the parts of the observation, and using only degree-2 polynomials, then the most we can do is this:where the parameters are computed by linear regression. Now, the parameters depend on which we have chosen, and we do not know which we should choose, so we choose all of them. That is, we perform all such polynomial regressions:obtaining polynomial models of the dataset. We do not want to accept all the polynomial models, since it would contain too many models. To only select the best subset of these models, we run each model on the validation dataset, and select the models whose mean-square-error is below a threshold. We also write down the smallest mean-square-error achieved as . Suppose that after this process, we have obtained a set of models. We now run the models on the training dataset, to obtain a sequence of transformed observations: . The same algorithm can now be run again. The algorithm continues, giving us . As long as each is smaller than the previous one, the process continues, giving us increasingly deep models. As soon as some , the algorithm terminates. The last layer fitted (layer ) is discarded, as it has overfit the training set. The previous layers are outputted. More sophisticated methods for deciding when to terminate are possible. For example, one might keep running the algorithm for several more steps, in the hope of passing a temporary rise in . In general Instead of a degree-2 polynomial in 2 variables, each unit may use higher-degree polynomials in more variables: And more generally, a GMDH model with multiple inputs and one output is a subset of components of the base function (1): where fi are elementary functions dependent on different sets of inputs, ai are coefficients and m is the number of the base function components. External criteria External criteria are optimization objectives for the model, such as minimizing mean-squared error on the validation set, as given above. The most common criteria are: Criterion of Regularity (CR) – least mean squares on a validation set. 
In general

Instead of a degree-2 polynomial in 2 variables, each unit may use higher-degree polynomials in more variables, up to the full Kolmogorov–Gabor polynomial

    Y(x_1, ..., x_n) = a_0 + Σ_i a_i x_i + Σ_i Σ_j a_ij x_i x_j + Σ_i Σ_j Σ_k a_ijk x_i x_j x_k + ...

More generally, a GMDH model with multiple inputs and one output is a subset of components of the base function (1):

    Y(x_1, ..., x_n) = a_0 + Σ_{i=1}^{m} a_i f_i,

where the f_i are elementary functions dependent on different sets of inputs, the a_i are coefficients and m is the number of base function components.

External criteria

External criteria are optimization objectives for the model, such as minimizing mean-squared error on the validation set, as given above. The most common criteria are:
Criterion of Regularity (CR) – least mean squares on a validation set.
Least squares on a cross-validation set.
Criterion of Minimum Bias or Consistency – the squared difference between the estimated outputs (or coefficient vectors) of two models fit on the A and B subsamples, divided by the squared predictions on the B subsample.

Idea

Like linear regression, which fits a linear equation to data, GMDH fits arbitrarily high orders of polynomial equations to data. To choose between models, two or more subsets of a data sample are used, similar to the train-validation-test split. GMDH combined ideas from black-box modeling, successive genetic selection of pairwise features, Gabor's principle of "freedom of decisions choice", and Beer's principle of external additions. Inspired by an analogy between constructing a model out of noisy data and sending messages through a noisy channel, the authors proposed "noise-immune modelling": the higher the noise, the fewer parameters the optimal model must have, since a noisy channel does not allow more bits to be sent through. The model is structured as a feedforward neural network, but without restrictions on the depth; the procedure generates model structures automatically, imitating the process of biological selection with pairwise genetic features.

History

The method was originated in 1968 by Prof. Alexey G. Ivakhnenko at the Institute of Cybernetics in Kyiv.

The period 1968–1971 is characterized by application of the regularity criterion alone to problems of identification, pattern recognition and short-term forecasting. Polynomials, logical nets, fuzzy Zadeh sets and Bayes probability formulas were used as reference functions. The authors were encouraged by the very high forecasting accuracy of the new approach. Noise immunity was not investigated.

Period 1972–1975: the problem of modeling noisy data and incomplete information bases was addressed. Multicriteria selection and the use of additional prior information to increase noise immunity were proposed. The best experiments showed that, with an extended definition of the optimal model by an additional criterion, the noise level can be ten times greater than the signal. The approach was then improved using Shannon's theorem of general communication theory.

Period 1976–1979: the convergence of multilayered GMDH algorithms was investigated. It was shown that some multilayered algorithms have a "multilayerness error", analogous to the static error of control systems. In 1977 a solution of objective systems analysis problems by multilayered GMDH algorithms was proposed. It turned out that sorting-out by an ensemble of criteria finds the only optimal system of equations, and therefore reveals the elements of a complex object and their main input and output variables.

Period 1980–1988: many important theoretical results were obtained. It became clear that full physical models cannot be used for long-term forecasting. It was proved that non-physical GMDH models are more accurate for approximation and forecasting than the physical models of regression analysis. Two-level algorithms which use two different time scales for modeling were developed.

Since 1989 new algorithms (AC, OCC, PF) for non-parametric modeling of fuzzy objects, and SLP for expert systems, have been developed and investigated. The present stage of GMDH development can be described as a blossoming of deep learning neural networks and parallel inductive algorithms for multiprocessor computers. Such procedures are currently used in deep learning networks.

GMDH-type neural networks

There are many different ways to choose an order in which partial models are considered.
The very first consideration order used in GMDH, originally called the multilayered inductive procedure, is the most popular one. It sorts out gradually complicated models generated from the base function. The best model is indicated by the minimum of the external criterion characteristic. The multilayered procedure is equivalent to an artificial neural network with a polynomial activation function of neurons. Therefore, the algorithm with such an approach is usually referred to as a GMDH-type neural network or polynomial neural network. Li showed that a GMDH-type neural network performed better than classical forecasting algorithms such as single exponential smoothing, double exponential smoothing, ARIMA and the back-propagation neural network.

Combinatorial GMDH

Another important approach to the consideration of partial models, which is becoming increasingly popular, is a combinatorial search that is either limited or exhaustive. This approach has some advantages over polynomial neural networks, but requires considerable computational power and thus is not effective for objects with a large number of inputs. An important achievement of combinatorial GMDH is that it fully outperforms the linear regression approach if the noise level in the input data is greater than zero. It guarantees that the most optimal model will be found during exhaustive sorting.

The basic combinatorial algorithm makes the following steps:
Divides the data sample into at least two samples A and B.
Generates subsamples from A according to partial models with steadily increasing complexity.
Estimates coefficients of partial models at each layer of model complexity.
Calculates the value of the external criterion for models on sample B.
Chooses the best model (or set of models) indicated by the minimal value of the criterion.
For the selected model of optimal complexity, recalculates coefficients on the whole data sample.

In contrast to GMDH-type neural networks, the combinatorial algorithm usually does not stop at a certain level of complexity, because a point of increase of the criterion value can be simply a local minimum, see Fig.1.
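For contrast with the multilayered procedure, a minimal sketch of the exhaustive (COMBI-style) search is given below, again in Python with NumPy. It fits every subset of input columns on sample A by least squares and scores each on sample B with the regularity criterion; restricting the model class to linear terms and the function name are simplifying assumptions for illustration.

```python
import numpy as np
from itertools import combinations

def combi_gmdh(XA, yA, XB, yB, max_terms=None):
    """Exhaustive (COMBI-style) search: fit every subset of input columns on
    sample A by least squares and score it on sample B (external criterion)."""
    n_features = XA.shape[1]
    max_terms = max_terms or n_features
    best = (np.inf, None, None)                      # (criterion, columns, coefficients)
    for size in range(1, max_terms + 1):             # steadily increasing complexity
        for cols in combinations(range(n_features), size):
            A = np.column_stack([np.ones(len(XA))] + [XA[:, c] for c in cols])
            coef, *_ = np.linalg.lstsq(A, yA, rcond=None)
            B = np.column_stack([np.ones(len(XB))] + [XB[:, c] for c in cols])
            crit = np.mean((B @ coef - yB) ** 2)     # regularity criterion on sample B
            if crit < best[0]:
                best = (crit, cols, coef)
    return best
```

In practice the winning subset would then be refitted on the whole sample, as in the final step listed above, and the search would be limited or pruned for objects with many inputs.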
Algorithms
Combinatorial (COMBI)
Multilayered Iterative (MIA)
GN
Objective System Analysis (OSA)
Harmonical
Two-level (ARIMAD)
Multiplicative–Additive (MAA)
Objective Computer Clusterization (OCC)
Pointing Finger (PF) clusterization algorithm
Analogues Complexing (AC)
Harmonical Rediscretization
Algorithm on the base of Multilayered Theory of Statistical Decisions (MTSD)
Group of Adaptive Models Evolution (GAME)

Software implementations
FAKE GAME Project — Open source. Cross-platform.
GEvom — Free upon request for academic use. Windows-only.
GMDH Shell — GMDH-based predictive analytics and time series forecasting software. Free academic licensing and free trial version available. Windows-only.
KnowledgeMiner — Commercial product. Mac OS X-only. Free demo version available.
PNN Discovery client — Commercial product.
Sciengy RPF! — Freeware, Open source.
wGMDH — Weka plugin, Open source.
R Package — Open source.
R Package for regression tasks — Open source.
Python library of MIA algorithm — Open source.
Python library of basic GMDH algorithms (COMBI, MULTI, MIA, RIA) — Open source.

References

Further reading
A.G. Ivakhnenko. Heuristic Self-Organization in Problems of Engineering Cybernetics, Automatica, vol. 6, 1970, pp. 207–219.
S.J. Farlow. Self-Organizing Methods in Modelling: GMDH Type Algorithms. New York, Basel: Marcel Dekker Inc., 1984, 350 p.
H.R. Madala, A.G. Ivakhnenko. Inductive Learning Algorithms for Complex Systems Modeling. CRC Press, Boca Raton, 1994.

External links
Library of GMDH books and articles

Group Method of Data Handling
Computational statistics
Artificial neural networks
Classification algorithms
Regression variable selection
Soviet inventions
Group method of data handling
[ "Mathematics" ]
2,161
[ "Computational statistics", "Computational mathematics" ]
13,793,754
https://en.wikipedia.org/wiki/Geophysical%20MASINT
Geophysical MASINT is a branch of Measurement and Signature Intelligence (MASINT) that involves phenomena transmitted through the earth (ground, water, atmosphere) and manmade structures including emitted or reflected sounds, pressure waves, vibrations, and magnetic field or ionosphere disturbances. According to the United States Department of Defense, MASINT has technically derived intelligence (excluding traditional imagery IMINT and signals intelligence SIGINT) that – when collected, processed, and analyzed by dedicated MASINT systems – results in intelligence that detects, tracks, identifies or describes the signatures (distinctive characteristics) of fixed or dynamic target sources. MASINT was recognized as a formal intelligence discipline in 1986. Another way to describe MASINT is a "non-literal" discipline. It feeds on a target's unintended emissive by-products, the "trails" - the spectral, chemical or RF that an object leaves behind. These trails form distinct signatures, which can be exploited as reliable discriminators to characterize specific events or disclose hidden targets." As with many branches of MASINT, specific techniques may overlap with the six major conceptual disciplines of MASINT defined by the Center for MASINT Studies and Research, which divides MASINT into Electro-optical, Nuclear, Geophysical, Radar, Materials, and Radiofrequency disciplines. Military requirements Geophysical sensors have a long history in conventional military and commercial applications, from weather prediction for sailing, to fish finding for commercial fisheries, to nuclear test ban verification. New challenges, however, keep emerging. For first-world military forces opposing other conventional militaries, there is an assumption that if a target can be located, it can be destroyed. As a result, concealment and deception have taken on new criticality. "Stealth" low-observability aircraft have gotten much attention, and new surface ship designs feature observability reduction. Operating in a confusing littoral environment produces a great deal of concealing interference. Of course, submariners feel they invented low observability, and others are simply learning from them. They know that going deep or at least ultraquiet, and hiding among natural features, makes them very hard to detect. Two families of military applications, among many, represent new challenges against which geophysical MASINT can be tried. Also, see Unattended Ground Sensors. Deeply buried structures One of the easiest ways for nations to protect weapons of mass destruction, command posts, and other critical structures is to bury them deeply, perhaps by enlarging natural caves or disused mines. Deep burial is not only a means of protection against physical attack, as even without the use of nuclear weapons, there are deeply penetrating precision-guided bombs that can attack them. Deep burial, with appropriate concealment during construction, is a way to avoid the opponent's knowing the buried facility's position well enough to direct precision-guided weapons against it. Finding deeply buried structures, therefore, is a critical military requirement. The usual first step in finding a deep structure is IMINT, especially using hyperspectral IMINT sensors to help eliminate concealment. "Hyperspectral images can help reveal information not obtainable through other forms of imagery intelligence such as the moisture content of soil. This data can also help distinguish camouflage netting from natural foliage." 
Still, a facility dug under a busy city would be extremely hard to find during construction. When the opponent knows that it is suspected that a deeply buried facility exists, there can be a variety of decoys and lures, such as buried heat sources to confuse infrared sensors, or simply digging holes and covering them, with nothing inside. MASINT using acoustic, seismic, and magnetic sensors would appear to have promise, but these sensors must be fairly close to the target. Magnetic Anomaly Detection (MAD) is used in antisubmarine warfare, for final localization before an attack. The existence of the submarine is usually established through passive listening and refined with directional passive sensors and active sonar. Once these sensors (as well as HUMINT and other sources) have failed, there is promise for surveying large areas and deeply concealed facilities using gravitimetric sensors. Gravity sensors are a new field, but military requirements are making it important while the technology to do it is becoming possible. Naval operations in shallow water Especially in today's "green water" and "brown water" naval applications, navies are looking at MASINT solutions to meet new challenges of operating in littoral areas of operations. This symposium found it useful to look at five technology areas, which are interesting to contrast to the generally accepted categories of MASINT: acoustics and geology and geodesy/sediments/transport, nonacoustical detection (biology/optics/chemistry), physical oceanography, coastal meteorology, and electromagnetic detection. Although it is unlikely there will ever be another World War II-style opposed landing on a fortified beach, another aspect of the littoral is being able to react to opportunities for amphibious warfare. Detecting shallow-water and beach mines remain a challenge since mine warfare is a deadly "poor man's weapon." While initial landings from an offshore force would be from helicopters or tiltrotor aircraft, with air cushion vehicles bringing ashore larger equipment, traditional landing craft, portable causeways, or other equipment will eventually be needed to bring heavy equipment across a beach. The shallow depth and natural underwater obstacles can block beach access to these crafts and equipment, as can shallow-water mines. Synthetic Aperture Radar (SAR), airborne laser detection and ranging (LIDAR) and the use of bioluminescence to detect wake trails around underwater obstacles all may help solve this challenge. Moving onto and across the beach has its own challenges. Remotely operated vehicles may be able to map landing routes, and they, as well as LIDAR and multispectral imaging, may be able to detect shallow water. Once on the beach, the soil has to support heavy equipment. Techniques here include estimating soil type from multispectral imaging, or from an airdropped penetrometer that actually measures the loadbearing capacity of the surface. Weather and sea intelligence MASINT The science and art of weather prediction used the ideas of measurement and signatures to predict phenomena, long before there were any electronic sensors. Masters of sailing ships might have no more sophisticated instrument than a wetted finger raised to the wind, and the flapping of sails. Weather information, in the normal course of military operations, has a major effect on tactics. High winds and low pressures can change artillery trajectories. High and low temperatures cause both people and equipment to require special protection. 
Aspects of weather, however, also can be measured and compared with signatures, to confirm or reject the findings of other sensors. The state of art is to fuse meteorological, oceanographic, and acoustic data in a variety of display modes. Temperature, salinity and sound speed can be displayed horizontally, vertically, or in a three-dimensional perspective. Predicting weather based on measurements and signatures While early sailors had no sensors beyond their five senses, the modern meteorologist has a wide range of geophysical and electro-optical measuring devices, operating on platforms from the bottom of the sea to deep space. Predictions based on these measurements are based on signatures of past weather events, a deep understanding of theory, and computational models. Weather predictions can give significant negative intelligence when the signature of some combat systems is such that they can operate only under certain weather conditions. The weather has long been an extremely critical part of modern military operations, as when the decision to land at Normandy on June 6, rather than June 5, 1944, depended on Dwight D. Eisenhower's trust in his staff weather advisor, Group Captain James Martin Stagg. It is rarely understood that something as fast as a ballistic missile reentry vehicle, or as "smart" as a precision guided munition, can still be affected by winds in the target area. As part of Unattended Ground Sensors,. The Remote Miniature Weather Station (RMWS), from System Innovations, is an air-droppable version with a lightweight, expendable and modular system with two components: a meteorological (MET) sensor and a ceilometer (cloud ceiling height) with limited MET. The basic MET system is surface-based and measures wind speed and direction, horizontal visibility, surface atmospheric pressure, air temperature and relative humidity. The ceilometer sensor determines cloud height and discrete cloud layers. The system provides near-real-time data capable of 24-hour operation for 60 days. The RMWS can also go in with US Air Force Special Operations combat weathermen The man-portable version, brought in by combat weathermen, has an additional function, as a remote miniature ceilometer. Designed to measure multiple layer cloud ceiling heights and then send that data via satellite communications link to an operator display, the system uses a Neodinum YAG (NdYAG), 4 megawatt non-eye safe laser. According to one weatherman, "We have to watch that one,” he said. “Leaving it out there basically we’re worried about civilian populace going out there and playing with it—firing the laser and there goes somebody’s eye. There are two different units [to RMWS]. One has the laser and one doesn’t. The basic difference is the one with the laser is going to give you cloud height." Hydrographic sensors Hydrographic MASINT is subtly different from weather, in that it considers factors such as water temperature and salinity, biological activities, and other factors that have a major effect on sensors and weapons used in shallow water. ASW equipment, especially acoustic performance depends on the season of the specific coastal site. Water column conditions, such as temperature, salinity, and turbidity are more variable in shallow than deep water. Water depth will influence bottom bounce conditions, as will the material of the bottom. Seasonal water column conditions (particularly summer versus winter) are inherently more variable in shallow water than in deep water. 
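Sound speed is not measured directly by most expendable probes; it is computed from temperature, salinity and depth. The sketch below uses Medwin's simplified formula as a rough illustration of how the fused temperature and salinity data translate into sound speed; the formula choice and the example profile values are assumptions for illustration, and operational systems use more complete equations of state.

```python
def sound_speed_medwin(T, S, z):
    """Approximate speed of sound in seawater (m/s), Medwin's simplified formula.
    T = temperature (deg C), S = salinity (psu), z = depth (m)."""
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.010 * T) * (S - 35.0) + 0.016 * z)

# Illustrative summer-vs-winter water column in shallow water (assumed values):
for label, T, S, z in [("summer surface", 25.0, 35.0, 5.0),
                       ("winter surface", 10.0, 35.0, 5.0),
                       ("below thermocline", 8.0, 35.0, 80.0)]:
    print(f"{label:20s} c = {sound_speed_medwin(T, S, z):7.1f} m/s")
```

Even small seasonal differences in such a profile shift the sound-speed gradient enough to change where a layer forms, which is why the seasonal variability noted above matters so much for sensor performance predictions.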
While much attention is given to shallow waters of the littoral, other areas have unique hydrographic characteristics. Regional areas with freshwater eddies Open ocean salinity fronts Near ice floes Under ice A submarine tactical development activity observed, "Freshwater eddies exist in many areas of the world. As we have experienced recently in the Gulf of Mexico using the Tactical Oceanographic Monitoring System (TOMS), there exist very distinct surface ducts that causes the Submarine Fleet Mission Program Library (SFMPL) sonar prediction to be unreliable. Accurate bathythermic information is paramount and a precursor for accurate sonar predictions.” Temperature and salinity Critical to the prediction of sound, needed by active and passive MASINT systems operating in water is knowing the temperature and salinity at specific depths. Antisubmarine aircraft, ships, and submarines can release independent sensors that measure the water temperature at various depths. The water temperature is critically important in acoustic detections, as changes in water temperature at thermoclines can act as a "barrier" or "layer" to acoustic propagation. To hunt a submarine, which is aware of water temperature, the hunter must drop acoustic sensors below the thermocline. Water conductivity is used as a surrogate marker for salinity. The current and most recently developed software, however, does not give information on suspended material in the water or bottom characteristics, both considered critical in shallow-water operations. The US Navy does this by dropping expendable probes, which transmit to a recorder, of 1978-1980 vintage, the AN/BQH-7 for submarines and the AN/BQH-71 for surface ships. While the redesign of the late seventies did introduce digital logic, the devices kept hard-to-maintain analog recorders, and maintainability became critical by 1995. A project was begun to extend with COTS components, to result in the AN/BQH-7/7A EC-3. In 1994-5, the maintainability of the in-service units became critical. Variables in selecting the appropriate probe include: Maximum depth sounded Speed of launching vessel Resolution of the vertical distance between data points (ft) Depth accuracy Biomass Large schools of fish contain enough entrapped air to conceal the sea floor, or manmade underwater vehicles and structures. Fishfinders, developed for commercial and recreational fishing, are specialized sonars that can identify acoustic reflections between the surface and the bottom. Variations on commercial equipment are apt to be needed, especially in littoral areas rich in marine life. Sea bottom measurement A variety of sensors can be used to characterise the sea bottom into, for example, mud, sand, and gravel. Active acoustic sensors are the most obvious, but there is potential information from gravimetric sensors, electro-optical and radar sensors for making inferences from the water surface, etc. Relatively simple sonars such as echo sounders can be promoted to seafloor classification systems via add-on modules, converting echo parameters into sediment type. Different algorithms exist, but they are all based on changes in the energy or shape of the reflected sounder pings. Side-scan sonars can be used to derive maps of the topography of an area by moving the sonar across it just above the bottom. Multibeam hull-mounted sonars are not as precise as a sensor near the bottom, but both can give reasonable three-dimensional visualization. 
Another approach comes from greater signal processing of existing military sensors. The US Naval Research Laboratory demonstrated both seafloor characterization, as well as subsurface characteristics of the seafloor. Sensors used, in different demonstrations, included normal incidence beams from the AM/UQN-4 surface ship depth finder, and AN/BQN-17 submarine fathometer; backscatter from the Kongsberg EM-121 commercial multibeam sonar; AN/UQN-4 fathometers on mine countermeasures (MCM) ships, and the AN/AQS-20 mine-hunting system. These produced the "Bottom and Subsurface Characterization" graphic. Weather effects on chemical, biological, and radiological weapon propagation One of the improvements in the Fuchs 2 reconnaissance vehicle is adding onboard weather instrumentations, including data such as wind direction and speed; air and ground temperature; barometric pressure and humidity. Acoustic MASINT This includes the collection of passive or active emitted or reflected sounds, pressure waves or vibrations in the atmosphere (ACOUSTINT) or in the water (ACINT) or conducted through the ground Going well back into the Middle Ages, military engineers would listen to the ground for sounds of telltale digging under fortifications. In modern times, acoustic sensors were first used in the air, as with artillery ranging in World War I. Passive hydrophones were used by the World War I Allies against German submarines; the UC-3 was sunk with the aid of a hydrophone on 23 April 1916. Since submerged submarines cannot use radar, passive and active acoustic systems are their primary sensors. Especially for the passive sensors, the submarine acoustic sensor operators must have extensive libraries of acoustic signatures, to identify sources of sound. In shallow water, there are sufficient challenges to conventional acoustic sensors that additional MASINT sensors may be required. Two major confounding factors are: Boundary interactions. The effects of the seafloor and the sea surface on acoustic systems in shallow water are highly complex, making range predictions difficult. Multi-path degradation affects the overall figure of merit and active classification. As a result, false target identifications are frequent. Practical limitations. Another key issue is the range dependence of shallow water propagation and reverberation. For example, shallow water limits the depth of towed sound detection arrays, thus increasing the possibility of the system's detecting its own noise. In addition, closer ship spacing increases the potential for mutual interference effects. It is believed that non-acoustic sensors, of magnetic, optical, bioluminescent, chemical, and hydrodynamic disturbances will be necessary for shallow-water naval operations. Counterbattery and countersniper location and ranging While now primarily of historical interest, one of the first applications of acoustic and optical MASINT was locating enemy artillery by the sound of their firing and flashes respectively during World War I. Effective sound ranging was pioneered by the British Army under the leadership of the Nobel Lauriate William Bragg. Flash spotting developed in parallel in the British, French and German armies. The combination of sound ranging (i.e., acoustic MASINT) and flash ranging (i.e., before modern optoelectronics) gave information unprecedented for the time, in both accuracy and timeliness. Enemy gun positions were located within 25 to 100 yards, with the information coming in three minutes or less. 
Initial WWI counterbattery acoustic systems In the "Sound Ranging" graphic, the manned Listening (or Advanced) Post, is sited a few 'sound seconds (or about 2000 yards) forward of the line of the unattended microphones, it sends an electrical signal to the recording station to switch on the recording apparatus. The positions of the microphones are precisely known. The differences in sound time of arrival, taken from the recordings, were then used to plot the source of the sound by one of several techniques. See http://nigelef.tripod.com/p_artyint-cb.htm#SoundRanging Where sound ranging is a time-of-arrival technique not dissimilar to that of modern multistatic sensors, flash spotting used optical instruments to take bearings on the flash from accurately surveyed observation posts. The location of the gun was determined by plotting the bearings reported to the same gun flashes. See http://nigelef.tripod.com/p_artyint-cb.htm#FieldSurveyCoy Flash ranging, today, would be called electro-optical MASINT. Artillery sound and flash ranging remained in use through World War II and in its latest forms until the present day, although flash spotting generally ceased in the 1950s due to the widespread adoption of flashless propellants and the increasing range of artillery. Mobile counterbattery radars able to detect guns, itself a MASINT radar sensor, became available in the late 1970s, although counter-mortar radars appeared in World War II. These techniques paralleled radio direction finding in SIGINT that started in World War I, using graphical bearing plotting and now, with the precision time synchronization from GPS, is often time-of-arrival. Modern acoustic artillery locators Artillery positions now are located primarily with Unmanned Air Systems and IMINT or counterartillery radar, such as the widely used Swedish ArtHuR. SIGINT also may give clues to positions, both with COMINT for firing orders, and ELINT for such things as weather radar. Still, there is renewed interest in both acoustic and electro-optical systems to complement counter-artillery radar. Acoustic sensors have come a long way since World War I. Typically, the acoustic sensor is part of a combined system, in which it cues radar or electro-optical sensors of greater precision, but a narrower field of view. HALO The UK's hostile artillery locating system (HALO) has been in service with the British Army since the 1990s. HALO is not as precise as radar, but especially complements directional radars. It passively detects artillery cannons, mortars and tank guns, with 360-degree coverage and can monitor over 2,000 square kilometers. HALO has worked in urban areas, the mountains of the Balkans, and the deserts of Iraq. The system consists of three or more unmanned sensor positions, each with four microphones and local processing, these deduce the bearing to a gun, mortar, etc. These bearings are automatically communicated to a central processor that combines them to triangulate the source of the sound. It can compute location data on up to 8 rounds per second, and display the data to the system operator. HALO may be used in conjunction with COBRA and ArtHur counter-battery radars, which are not omnidirectional, to focus on the correct sector. UTAMS Another acoustic system is the Unattended Transient Acoustic MASINT Sensor (UTAMS), developed by the U.S. Army Research Laboratory, which detects mortar and rocket launches and impacts. UTAMS remains the primary cueing sensor for the Persistent Threat Detection System (PTDS). 
ARL mounted aerostats with UTAMS, developing the system in a little over two months. After receiving a direct request from Iraq, ARL merged components from several programs to enable the rapid fielding of this capability. UTAMS has three to five acoustic arrays, each with four microphones, a processor, a radio link, a power source, and a laptop control computer. UTAMS, which was first operational in Iraq, first tested in November 2004 at a Special Forces Operating Base (SFOB) in Iraq. UTAMS was used in conjunction with AN/TPQ-36 and AN/TPQ-37 counter-artillery radar. While UTAMS was intended principally for detecting indirect artillery fire, Special Forces and their fire support officer learned it could pinpoint improvised explosive device (IED) explosions and small arms/rocket-propelled grenade (RPG) fires. It detected Points of Origin (POO) up to 10 kilometers from the sensor. Analyzing the UTAMS and radar logs revealed several patterns. The opposing force was firing 60 mm mortars during observed dining hours, presumably since that gave the largest groupings of personnel and the best chance of producing heavy casualties. That would have been obvious from the impact history alone, but these MASINT sensors established a pattern of the enemy firing locations. This allowed the US forces to move mortars into range of the firing positions, give coordinates to cannons when the mortars were otherwise committed, and use attack helicopters as a backup to both. The opponents changed to night fires, which, again, were countered with mortar, artillery, and helicopter fires. They then moved into an urban area where US artillery was not allowed to fire, but a combination of PSYOPS leaflet drops and deliberate near misses convinced the locals not to give sanctuary to the mortar crews. Originally for a Marine requirement in Afghanistan, UTAMS was combined with electro-optical MASINT to produce the Rocket Launch Spotter (RLS) system useful against both rockets and mortars. In the Rocket Launch Spotter (RLS) application, each array consists of four microphones and processing equipment. Analyzing the time delays between an acoustic wavefront’s interaction with each microphone in the array UTAMS provides an azimuth of origin. The azimuth from each tower is reported to the UTAMS processor at the control station, and a POO is triangulated and displayed. The UTAMS subsystem can also detect and locate the point of impact (POI), but, due to the difference between the speeds of sound and light, it may take UTAMS as long as 30 seconds to determine the POO for a rocket launch 13 km away. In this application, the electro-optical component of RLS will detect the rocket POO earlier, while UTAMS may do better with the mortar prediction. Passive sea-based acoustic sensors (hydrophones) Modern hydrophones convert sound to electrical energy, which then can undergo additional signal processing, or can be transmitted immediately to a receiving station. They may be directional or omnidirectional. Navies use a variety of acoustic systems, especially passive, in antisubmarine warfare, both tactical and strategic. For tactical use, passive hydrophones, both on ships and airdropped sonobuoys, are used extensively in antisubmarine warfare. They can detect targets far further away than with active sonar, but generally will not have the precision location of active sonar, approximating it with a technique called Target Motion Analysis (TMA). Passive sonar has the advantage of not revealing the position of the sensor. 
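A minimal sketch of the triangulation step described above: each UTAMS-style array contributes only an azimuth to the source, and two or more bearings from surveyed positions intersect at the point of origin. The array positions, bearings and function name are illustrative assumptions, not fielded system parameters.

```python
import numpy as np

def triangulate(p1, az1, p2, az2):
    """Intersect two bearing lines. p1, p2 are sensor positions (x east, y north) in
    meters; az1, az2 are azimuths in degrees measured clockwise from north."""
    d1 = np.array([np.sin(np.radians(az1)), np.cos(np.radians(az1))])  # unit vector along bearing 1
    d2 = np.array([np.sin(np.radians(az2)), np.cos(np.radians(az2))])  # unit vector along bearing 2
    # Solve p1 + t1*d1 = p2 + t2*d2 for the ranges t1, t2.
    A = np.column_stack([d1, -d2])
    t1, _ = np.linalg.solve(A, np.array(p2) - np.array(p1))
    return np.array(p1) + t1 * d1

# Two assumed acoustic arrays 3 km apart, both hearing the same launch:
poo = triangulate((0.0, 0.0), 45.0, (3000.0, 0.0), 315.0)
print("estimated point of origin:", poo)   # -> roughly (1500, 1500)
```

With three or more arrays the bearings are overdetermined, and a least-squares intersection both refines the estimate and flags inconsistent detections.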
The Integrated Undersea Surveillance System (IUSS) consists of multiple subsystems in SOSUS, Fixed Distributed System (FDS), and the Advanced Deployable System (ADS or SURTASS). Reducing the emphasis on Cold War blue-water operations put SOSUS, with more flexible "tuna boat" sensing vessels called SURTASS being the primary blue-water long-range sensors SURTASS used longer, more sensitive towed passive acoustic arrays than could be deployed from maneuvering vessels, such as submarines and destroyers. SURTASS is now being complemented by Low-Frequency Active (LFA) sonar; see the sonar section. Air-dropped passive acoustic sensors Passive sonobuoys, such as the AN/SSQ-53F, can be directional or omnidirectional and can be set to sink to a specific depth. These would be dropped from helicopters and maritime patrol aircraft such as the P-3. Fixed underwater passive acoustic sensors The US installed massive Fixed Surveillance System (FSS, also known as SOSUS) hydrophone arrays on the ocean floor, to track Soviet and other submarines. Surface ship passive acoustic sensors Purely from the standpoint of detection, towed hydrophone arrays offer a long baseline and exceptional measurement capability. Towed arrays, however, are not always feasible, because when deployed, their performance can suffer, or they can suffer outright damage, from fast speeds or radical turns. Steerable sonar arrays on the hull or bow usually have a passive as well as active mode, as do variable-depth sonars Surface ships may have warning receivers to detect hostile sonar. Submarine passive acoustic sensors Modern submarines have multiple passive hydrophone systems, such as a steerable array in a bow dome, fixed sensors along the sides of the submarines, and towed arrays. They also have specialized acoustic receivers, analogous to radar warning receivers, to alert the crew to the use of active sonar against their submarine. US submarines made extensive clandestine patrols to measure the signatures of Soviet submarines and surface vessels. This acoustic MASINT mission included both routine patrols of attack submarines, and submarines sent to capture the signature of a specific vessel. US antisubmarine technicians on air, surface, and subsurface platforms had extensive libraries of vessel acoustic signatures. Passive acoustic sensors can detect aircraft flying low over the sea. Land-based passive acoustic sensors (geophones) Vietnam-era acoustic MASINT sensors included "Acoubuoy (36 inches long, 26 pounds) floated down by camouflaged parachute and caught in the trees, where it hung to listen. The Spikebuoy (66 inches long, 40 pounds) planted itself in the ground like a lawn dart. Only the antenna, which looked like the stalks of weeds, was left showing above ground." This was part of Operation Igloo White. Part of the AN/GSQ-187 Improved Remote Battlefield Sensor System (I-REMBASS) is a passive acoustic sensor, which, with other MASINT sensors, detects vehicles and personnel on a battlefield. Passive acoustic sensors provide additional measurements that can be compared with signatures, and used to complement other sensors. I-REMBASS control will integrate, in approximately 2008, with the Prophet SIGINT/EW ground system. For example, a ground search radar may not be able to differentiate between a tank and a truck moving at the same speed. Adding acoustic information, however, may quickly distinguish between them. 
Active acoustic sensors and supporting measurements Combatant vessels, of course, made extensive use of active sonar, which is yet another acoustic MASINT sensor. Besides the obvious application in antisubmarine warfare, specialized active acoustic systems have roles in: Mapping the seafloor for navigation and collision avoidance. These include basic depth gauges, but quickly get into devices that do 3-dimensional underwater mapping Determining seafloor characteristics, for applications varying from understanding its sound-reflecting properties, to predicting the type of marine life that may be found there, to knowing when a surface is appropriate for anchoring or for using various equipment that will contact the seafloor Various synthetic aperture sonars have been built in the laboratory and some have entered use in mine-hunting and search systems. An explanation of their operation is given in synthetic aperture sonar. Water surface, fish interference and bottom characterization The water surface and bottom are reflecting and scattering boundaries. Large schools of fish, with air in their swim bladder balance apparatus, can also have a significant effect on acoustic propagation. For many purposes, but not all naval tactical applications, the sea-air surface can be thought of as a perfect reflector. "The effects of the seafloor and the sea surface on acoustic systems in shallow water are highly complex, making range predictions difficult. Multi-path degradation affects the overall figure of merit and active classification. As a result, false target identifications are frequent." The acoustic impedance mismatch between water and the bottom is generally much less than at the surface and is more complex. It depends on the bottom material types and the depth of the layers. Theories have been developed for predicting the sound propagation in the bottom in this case, for example by Biot and by Buckingham. Water surface For high-frequency sonars (above about 1 kHz) or when the sea is rough, some of the incident sounds is scattered, and this is taken into account by assigning a reflection coefficient whose magnitude is less than one. Rather than measuring surface effects directly from a ship, radar MASINT, in aircraft or satellites, may give better measurements. These measurements would then be transmitted to the vessel's acoustic signal processor. Under ice A surface covered with ice, of course, is tremendously different than even storm-driven water Purely from collision avoidance and acoustic propagation, a submarine needs to know how close it is to the bottom of the ice. Less obvious is the need to know the three-dimensional structure of the ice, because submarines may need to break through it to launch missiles, raise electronic masts, or surface the boat. Three-dimensional ice information also can tell the submarine captain whether antisubmarine warfare aircraft can detect or attack the boat. The state of the art is providing the submarine with a three-dimensional visualization of the ice above: the lowest part (ice keel) and the ice canopy. While sound will propagate differently in ice than in liquid water, the ice still needs to be considered as a volume, to understand the nature of reverberations within it. Bottom A typical basic depth measuring device is the US AN/UQN-4A. Both the water surface and bottom are reflecting and scattering boundaries. For many purposes, but not all naval tactical applications, the sea-air surface can be thought of as a perfect reflector. 
In reality, there are complex interactions of water surface activity, seafloor characteristics, water temperature and salinity, and other factors that make "...range predictions difficult. Multi-path degradation affects overall figure of merit and active classification. As a result, false target identifications are frequent." This device, however, does not give information on the characteristics of the bottom. In many respects, commercial fishing and marine scientists have equipment that is perceived as needed for shallow-water operations. Biologic effects on sonar reflection A further complication is the presence of wind-generated bubbles or fish close to the sea surface. . The bubbles can also form plumes that absorb some of the incidents and scattered sound, and scatter some of sound themselves. . This problem is distinct from biologic interference caused by acoustic energy generated by marine life, such as the squeaks of porpoises and other cetaceans, and measured by acoustic receivers. The signatures of biological sound generators need to be differentiated from more deadly denizens of the depths. Classifying biologics is a very good example of an acoustic MASINT process. Surface combatants Modern surface combatants with an ASW mission will have a variety of active systems, with a hull- or bow-mounted array, protected from water by a rubber dome; a "variable-depth" dipping sonar on a cable, and, especially on smaller vessels, a fixed acoustic generator and receiver. Some, but not all, vessels carry passive towed arrays or combined active-passive arrays. These depend on target noise, which, in the combined littoral environment of ultraquiet submarines in the presence of much ambient noise. Vessels that have deployed towed arrays cannot make radical course maneuvers. Especially when active capabilities are included, the array can be treated as a bistatic or multistatic sensor, and act as a synthetic aperture sonar (SAS) For ships that cooperate with aircraft, they will need a data link to sonobuoys and a sonobuoy signal processor, unless the aircraft has extensive processing capability and can send information that can be accepted directly by tactical computers and displays. Signal processors not only analyze the signals but constantly track propagation conditions. The former is usually considered part of a particular sonar, but the US Navy has a separate propagation predictor called the AN/UYQ-25B(V) Sonar in situ Mode Assessment System (SIMAS) Echo Tracker Classifiers (ETC) are adjuncts, with a clear MASINT flavor, to existing surface ship sonars . ETC is an application of synthetic aperture sonar (SAS). SAS is already used for minehunting but could help existing surface combatants, as well as future vessels and unmanned surface vehicles (USV), detect threats, such as very silent air-independent propulsion non-nuclear submarines, outside torpedo range. The torpedo range, especially in shallow water, is considered anything greater than 10 nmi. Conventional active sonar may be more effective than towed arrays, but the small size of modern littoral submarines makes them difficult threats. Highly variable bottom paths, biologics, and other factors complicate sonar detection. If the target is slow-moving or waiting on the bottom, they have little or no Doppler effect, which current sonars use to recognize threats. 
Continual active tracking measurement of all acoustically detected objects, with recognition of signatures as deviations from ambient noise, still gives a high false alarm rate (FAR) with conventional sonar. SAS processing, however, improves the resolution, especially of azimuth measurements, by assembling the data from multiple pings into a synthetic beam that gives the effect of a far larger receiver. MASINT-oriented SAS measures shape characteristics and eliminate acoustically detected objects that do not conform to the signature of threats. Shape recognition is only one of the parts of the signature, which include course and Doppler when available. Air-dropped active sonobuoys Active sonobuoys, containing a sonar transmitter and receiver, can be dropped from fixed-wing maritime patrol aircraft (e.g., P-3, Nimrod, Chinese Y-8, Russian and Indian Bear ASW variants), antisubmarine helicopters, and carrier-based antisubmarine aircraft (e.g., S-3). While there have been some efforts to use other aircraft simply as carriers of sonobuoys, the general assumption is that the sonobuoy-carrying aircraft can issue commands to the sonobuoys and receive, and to some extent process, their signals. The Directional Hydrophone Command Activated Sonobuoy system (DICASS) both generates sound and listen for it. A typical modern active sonobuoy, such as the AN/SSQ 963D, generates multiple acoustic frequencies . Other active sonobuoys, such as the AN/SSQ 110B, generate small explosions as acoustic energy sources. Airborne dipping sonar Antisubmarine helicopters can carry a "dipping" sonar head at the end of a cable, which the helicopter can raise from or lower into the water. The helicopter would typically dip the sonar when trying to localize a target submarine, usually in cooperation with other ASW platforms or with sonobuoys. Typically, the helicopter would raise its head after dropping an ASW weapon, to avoid damaging the sensitive receiver. Not all variants of the same basic helicopter, even assigned to ASW, carry dipping sonar; some may trade the weight of the sonar for more sonobuoy or weapon capacity. The EH101 helicopter, used by a number of nations, has a variety of dipping sonars. The (British) Royal Navy version has Ferranti/Thomson-CSF (now Thales) sonar, while the Italian version uses the HELRAS. Russian Ka-25 helicopters carry dipping sonar, as does the US LAMPS, US MH-60R helicopter, which carries the Thales AQS-22 dipping sonar. The older SH-60F helicopter carries the AQS-13F dipping sonar. Surveillance vessel low-frequency active Newer Low-Frequency Active (LFA) systems are controversial, as their very high sound pressures may be hazardous to whales and other marine life . A decision has been made to employ LFA on SURTASS vessels, after an environmental impact statement that indicated, if LFA is used with decreased power levels in certain high-risk areas for marine life, it would be safe when employed from a moving ship. The ship motion, and the variability of the LFA signal, would limit the exposure to individual sea animals. LFA operates in the low-frequency (LF) acoustic band of 100–500 Hz. It has an active component, the LFA proper, and the passive SURTASS hydrophone array. "The active component of the system, LFA, is a set of 18 LF acoustic transmitting source elements (called projectors) suspended by cable from underneath an oceanographic surveillance vessel, such as the Research Vessel (R/V) Cory Chouest, USNS Impeccable (T-AGOS 23), and the Victorious class (TAGOS 19 class). 
"The source level of an individual projector is 215 dB. These projectors produce the active sonar signal or “ping.” A "ping," or transmission, can last between 6 and 100 seconds. The time between transmissions is typically 6 to 15 minutes with an average transmission of 60 seconds. Average duty cycle (ratio of sound “on” time to total time) is less than 20 percent. The typical duty cycle, based on historical LFA operational parameters (2003 to 2007), is normally 7.5 to 10 percent." This signal "...is not a continuous tone, but rather a transmission of waveforms that vary in frequency and duration. The duration of each continuous frequency sound transmission is normally 10 seconds or less. The signals are loud at the source, but levels diminish rapidly over the first kilometer." Submarine active acoustic sensors The primary tactical active sonar of a submarine is usually in the bow, covered with a protective dome. Submarines for blue-water operations used active systems such as the AN/SQS-26 and AN/SQS-53 have been developed but were generally designed for convergence zone and single bottom bounce environments. Submarines that operate in the Arctic also have specialized sonar for under-ice operation; think of an upside-down fathometer. Submarines also may have minehunting sonar. Using measurements to differentiate between biologic signatures and signatures of objects that will permanently sink the submarine is as critical a MASINT application as could be imagined. Active acoustic sensors for minehunting Sonars optimized to detect objects of the size and shapes of mines can be carried by submarines, remotely operated vehicles, surface vessels (often on a boom or cable) and specialized helicopters. The classic emphasis on minesweeping, and detonating the mine released from its tether using gunfire, has been replaced with the AN/SLQ-48(V)2 mine neutralization system (MNS)AN/SLQ-48 - (remotely operated) Mine Neutralization Vehicle. This works well for rendering save mines in deep water, by placing explosive charges on the mine and/or its tether. The AN/SLQ-48 is not well suited to the neutralization of shallow-water mines. The vehicle tends to be underpowered and may leave on the bottom a mine that looks like a mine to any subsequent sonar search and an explosive charge subject to later detonation under proper impact conditions. There is mine-hunting sonar, as well as an (electro-optical) television on the ROV, and AN/SQQ-32 minehunting sonar on the ship. Acoustic sensing of large explosions An assortment of time-synchronized sensors can characterize conventional or nuclear explosions. One pilot study, the Active Radio Interferometer for Explosion Surveillance (ARIES). This technique implements an operational system for monitoring ionospheric pressure waves resulting from surface or atmospheric nuclear or chemical explosives. Explosions produce pressure waves that can be detected by measuring phase variations between signals generated by ground stations along two different paths to a satellite. This is a very modernized version, on a larger scale, of World War I sound ranging. As can many sensors, ARIES can be used for additional purposes. 
Collaborations are being pursued with the Space Forecast Center to use ARIES data for total electron content measures on a global scale, and with the meteorology/global environment community to monitor global climate change (via tropospheric water vapor content measurements), and by the general ionospheric physics community to study travelling ionospheric disturbances. Sensors relatively close to a nuclear event, or a high-explosive test simulating a nuclear event, can detect, using acoustic methods, the pressure produced by the blast. These include infrasound microbarographs (acoustic pressure sensors) that detect very low-frequency sound waves in the atmosphere produced by natural and man-made events. Closely related to the microbarographs, but detecting pressure waves in water, are hydro-acoustic sensors, both underwater microphones and specialized seismic sensors that detect the motion of islands. Seismic MASINT US Army Field Manual 2-0 defines seismic intelligence as "The passive collection and measurement of seismic waves or vibrations in the earth surface." One strategic application of seismic intelligence makes use of the science of seismology to locate and characterize nuclear testing, especially underground testing. Seismic sensors also can characterize large conventional explosions that are used in testing the high-explosive components of nuclear weapons. Seismic intelligence also can help locate such things as large underground construction projects. Since many areas of the world have a great deal of natural seismic activity, seismic MASINT is one of the emphatic arguments that there must be a long-term commitment to measuring, even during peacetime, so that the signatures of natural behavior is known before it is necessary to search for variations from signatures. Strategic seismic MASINT For nuclear test detection, seismic intelligence is limited by the "threshold principle" coined in 1960 by George Kistiakowsky, which recognized that while detection technology would continue to improve, there would be a threshold below which small explosions could not be detected. Tactical seismic MASINT The most common sensor in the Vietnam-era "McNamara Line" of remote sensors was the ADSID (Air-Delivered Seismic Intrusion Detector) sensed earth motion to detect people and vehicles. It resembled the Spikebuoy, except it was smaller and lighter (31 inches long, 25 pounds). The challenge for the seismic sensors (and for the analysts) was not so much in detecting the people and the trucks as it was in separating out the false alarms generated by wind, thunder, rain, earth tremors, and animals—especially frogs." Vibration MASINT This subdiscipline is also called piezoelectric MASINT after the sensor most often used to sense vibration, but vibration detectors need not be piezoelectric. Note that some discussions treat seismic and vibration sensors as a subset of acoustic MASINT. Other possible detectors could be moving coil or surface acoustic wave. . Vibration, as a form of geophysical energy to be sensed, has similarities to acoustic and seismic MASINT, but also has distinct differences that make it useful, especially in unattended ground sensors (UGS). In the UGS application, one advantage of a piezoelectric sensor is that it generates electricity when triggered, rather than consuming electricity, an important consideration for remote sensors whose lifetime may be determined by their battery capacity. 
While acoustic signals at sea travel through water, on land, they can be assumed to come through the air. Vibration, however, is conducted through a solid medium on land. It has a higher frequency than is typical of seismic conducted signals. A typical detector, the Thales MA2772 vibration is a piezoelectric cable, shallowly buried below the ground surface, and extended for 750 meters. Two variants are available, a high-sensitivity version for personnel detection, and a lower-sensitivity version to detect vehicles. Using two or more sensors will determine the direction of travel, from the sequence in which the sensors trigger. In addition to being buried, piezoelectric vibration detectors, in a cable form factor, also are used as part of high-security fencing. They can be embedded in walls or other structures that need protection. Magnetic MASINT A magnetometer is a scientific instrument used to measure the strength and/or direction of the magnetic field in the vicinity of the instrument. The measurements they make can be compared to signatures of vehicles on land, submarines underwater, and atmospheric radio propagation conditions. They come in two basic types: Scalar magnetometers measure the total strength of the magnetic field to which they are subjected, and Vector magnetometers have the capability to measure the component of the magnetic field in a particular direction. Earth's magnetism varies from place to place and differences in the Earth's magnetic field (the magnetosphere) can be caused by two things: The differing nature of rocks The interaction between charged particles from the sun and the magnetosphere Metal detectors use electromagnetic induction to detect metal. They can also determine the changes in existing magnetic fields caused by metallic objects. Indicating loops for detecting submarines One of the first means for detecting submerged submarines, first installed by the Royal Navy in 1914, was the effect of their passage over an anti-submarine indicator loop on the bottom of a body of water. A metal object passing over it, such as a submarine, will, even if degaussed, have enough magnetic properties to induce a current in the loop's cable. . In this case, the motion of the metal submarine across the indicating coil acts as an oscillator, producing electric current. MAD A magnetic anomaly detector (MAD) is an instrument used to detect minute variations in the Earth's magnetic field. The term refers specifically to magnetometers used either by military forces to detect submarines (a mass of ferromagnetic material creates a detectable disturbance in the magnetic field)Magnetic anomaly detectors were first employed to detect submarines during World War II. MAD gear was used by both Japanese and U.S. anti-submarine forces, either towed by ship or mounted in aircraft to detect shallow submerged enemy submarines. After the war, the U.S. Navy continued to develop MAD gear as a parallel development with sonar detection technologies. To reduce interference from electrical equipment or metal in the fuselage of the aircraft, the MAD sensor is placed at the end of a boom or a towed aerodynamic device. Even so, the submarine must be very near the aircraft's position and close to the sea surface for detection of the change or anomaly. The detection range is normally related to the distance between the sensor and the submarine. The size of the submarine and its hull composition determine the detection range. MAD devices are usually mounted on aircraft or helicopters. 
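To illustrate why the sensor must be so close to the target, the sketch below models a vessel's residual magnetism as a simple dipole, whose anomaly falls off with the cube of range. The assumed magnetic moment and the ranges are purely illustrative values, not measured signatures of any platform.

```python
MU0_OVER_4PI = 1e-7          # magnetic constant over 4*pi, T*m/A
EARTH_FIELD_NT = 50_000.0    # typical background geomagnetic field, nanotesla

def dipole_anomaly_nT(moment_A_m2, range_m):
    """Peak (on-axis) field of a magnetic dipole at the given range, in nanotesla."""
    return MU0_OVER_4PI * 2.0 * moment_A_m2 / range_m**3 * 1e9

moment = 1.0e5   # assumed residual moment of a degaussed hull, A*m^2 (illustrative)
for r in (100.0, 300.0, 600.0, 1000.0):
    b = dipole_anomaly_nT(moment, r)
    print(f"range {r:6.0f} m  anomaly {b:8.3f} nT  ({b / EARTH_FIELD_NT:.1e} of background)")
```

The cubic falloff means that doubling the standoff distance cuts the anomaly by a factor of eight, which is why MAD is used for final localization rather than wide-area search.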
There is some misunderstanding of the mechanism of detection of submarines in water using the MAD boom system. Magnetic moment displacement is ostensibly the main disturbance, yet submarines are detectable even when oriented parallel to the Earth's magnetic field, and even when constructed with non-ferromagnetic hulls. For example, the Soviet-Russian Alfa-class submarine was constructed of titanium. This light, strong material, together with a unique nuclear power plant, allowed the boat to break speed and depth records for operational submarines, and the nonferrous titanium hull was expected to defeat magnetic ASW sensors, giving protection from MAD detection as well as dramatic submerged performance. That was not the case: the hull is still detectable.

Since titanium structures are detectable, MAD sensors do not directly detect deviations in the Earth's magnetic field. Instead, they may be described as long-range electric and electromagnetic field detector arrays of great sensitivity. An electric field is set up in conductors experiencing a variation in physical environmental conditions, provided that they are contiguous and possess sufficient mass. In submarine hulls in particular, there is a measurable temperature difference between the bottom and top of the hull, producing a related salinity difference, as salinity is affected by the temperature of the water. The difference in salinity creates an electric potential across the hull. An electric current then flows through the hull, between the laminae of sea water separated by depth and temperature. The resulting dynamic electric field produces an electromagnetic field of its own, and thus even a titanium hull will be detectable on a MAD scope, as will a surface ship for the same reason.

Vehicle detectors

The Remotely Monitored Battlefield Sensor System (REMBASS) is a US Army program for detecting the presence, speed, and direction of a ferrous object, such as a tank. Coupled with acoustic sensors that recognize the sound signature of a tank, it could offer high accuracy. It also collects weather information. The Army's AN/GSQ-187 Improved Remote Battlefield Sensor System (I-REMBASS) includes both magnetic-only and combined passive infrared/magnetic intrusion detectors. The DT-561/GSQ hand-emplaced magnetic sensor "detects vehicles (tracked or wheeled) and personnel carrying ferrous metal. It also provides information on which to base a count of objects passing through its detection zone and reports their direction of travel relative to their location." The monitor uses two different (MAG and IR) sensors and their identification codes to determine the direction of travel.

Magnetic detonators and countermeasures

Magnetic sensors, much more sophisticated than the early inductive loops, can trigger the explosion of mines or torpedoes. Early in World War II, the US tried to field a magnetic torpedo exploder that was far beyond the limits of the technology of the time and had to disable it, and then work on the also-unreliable contact fuzing, to make torpedoes more than blunt objects that banged into hulls. Since water is incompressible, an explosion under the keel of a vessel is far more destructive than one at the air-water interface. Torpedo and mine designers want to place the explosion in that vulnerable spot, and countermeasures designers want to hide the magnetic signature of a vessel. Signature is especially relevant here, as mines may be made selective for warships, merchant vessels unlikely to be hardened against underwater explosions, or submarines.
A basic countermeasure, started in World War II, was degaussing, but it is impossible to remove all magnetic properties. Detecting landmines Landmines often contain enough ferrous metal to be detectable with appropriate magnetic sensors. Sophisticated mines, however, may also sense a metal-detection oscillator and, under preprogrammed conditions, detonate to deter demining personnel. Not all landmines have enough metal to activate a magnetic detector. While, unfortunately, the greatest number of unmapped minefields are in parts of the world that cannot afford high technology, a variety of MASINT sensors could help demining. These would include ground-mapping radar, thermal and multispectral imaging, and perhaps synthetic aperture radar to detect disturbed soil. Gravimetric MASINT Gravity is a function of mass. While the average value of Earth's surface gravity is approximately 9.8 meters per second squared, given sufficiently sensitive instrumentation, it is possible to detect local variations in gravity arising from the different densities of natural materials: the value of gravity will be greater on top of a granite monolith than over a sand beach. Again with sufficiently sensitive instrumentation, it should be possible to detect gravitational differences between solid rock and rock excavated for a hidden facility. Streland (2003) points out that the instrumentation indeed must be sensitive: variations of the force of gravity on the earth's surface are on the order of 10−6 of the average value. A practical gravimetric detector of buried facilities would need to be able to measure "less than one one-millionth of the force that caused the apple to fall on Sir Isaac Newton's head." To be practical, the sensor would have to be usable while in motion, measuring the change in gravity between locations. This change over distance is called the gravity gradient, which can be measured with a gravity gradiometer. Developing an operationally useful gravity gradiometer is a major technical challenge. One type, the SQUID (Superconducting Quantum Interference Device) gradiometer, may have adequate sensitivity, but it needs extreme cryogenic cooling, a logistic nightmare even if deployed in space. Another technique, far more operationally practical but lacking the necessary sensitivity, is the Gravity Recovery and Climate Experiment (GRACE) technique, currently using radar to measure the distance between pairs of satellites, whose orbits will change based on gravity. Substituting lasers for radar will make GRACE more sensitive, but probably not sensitive enough. A more promising technique, although still in the laboratory, is quantum gradiometry, which is an extension of atomic clock techniques, much like those in GPS. Off-the-shelf atomic clocks measure changes in atomic waves over time rather than the spatial changes measured in a quantum gravity gradiometer. One advantage of using GRACE in satellites is that measurements can be made from a number of points over time, with a resulting improvement as seen in synthetic aperture radar and sonar. Still, finding deeply buried structures of human scale is a tougher problem than the initial goals of finding mineral deposits and ocean currents. To make this operationally feasible, there would have to be a launcher to put fairly heavy satellites into polar orbits, and as many earth stations as possible to reduce the need for large on-board storage of the large amounts of data the sensors will produce.
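To make the order-of-magnitude argument concrete, a buried excavated void can be modeled as a sphere of "missing" rock and its gravity deficit computed from Newton's law. The sketch below is illustrative only; the cavity size, depth, and rock density are hypothetical values chosen for the example, not figures taken from Streland or any other source.

```python
# Illustrative sketch: order-of-magnitude surface gravity anomaly above an
# excavated cavity, modeled as a sphere of "missing" rock. All numbers below
# are hypothetical and chosen only for illustration.
import math

G = 6.674e-11        # gravitational constant (m^3 kg^-1 s^-2)
G_SURFACE = 9.8      # nominal surface gravity (m s^-2)

def cavity_anomaly(radius_m, depth_m, rock_density=2700.0):
    """Gravity deficit directly above a spherical cavity at the given depth."""
    missing_mass = (4.0 / 3.0) * math.pi * radius_m ** 3 * rock_density
    return G * missing_mass / depth_m ** 2   # m s^-2

if __name__ == "__main__":
    dg = cavity_anomaly(radius_m=10.0, depth_m=50.0)
    print(f"anomaly: {dg:.2e} m/s^2  =  {dg / G_SURFACE:.2e} of g")
    # Roughly 3e-8 of g for this example, well below the ~1e-6 natural
    # variations quoted above, illustrating why the sensor must be so sensitive.
```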
Finally, there needs to be a way to convert the measurements into a form that can be compared against available signatures in geodetic databases. Those databases would need significant improvement, from measured data, to become sufficiently precise that a buried facility signature would stand out. References Measurement and signature intelligence Acoustics Signal processing Military intelligence Sonar Anti-submarine warfare Navigational equipment Surveillance Ultrasound Effects of gravity Geophysics Synthetic aperture radar
Geophysical MASINT
[ "Physics", "Technology", "Engineering" ]
11,180
[ "Telecommunications engineering", "Applied and interdisciplinary physics", "Computer engineering", "Signal processing", "Classical mechanics", "Acoustics", "Geophysics" ]
13,793,909
https://en.wikipedia.org/wiki/Penman%E2%80%93Monteith%20equation
The Penman-Monteith equation approximates net evapotranspiration (ET) from meteorological data as a replacement for direct measurement of evapotranspiration. The equation is widely used, and was adopted by the United Nations Food and Agriculture Organization for modeling reference evapotranspiration ET0. Significance Evapotranspiration contributions are significant in a watershed's water balance, yet are often not emphasized in results because the precision of this component is often weak relative to more directly measured phenomena, e.g., rain and stream flow. In addition to weather uncertainties, the Penman-Monteith equation is sensitive to vegetation-specific parameters, e.g., stomatal resistance or conductance. Various forms of crop coefficients (Kc) account for differences between the specific vegetation modeled and a reference evapotranspiration (RET or ET0) standard. Stress coefficients (Ks) account for reductions in ET due to environmental stress (e.g. soil saturation reduces root-zone O2, low soil moisture induces wilt, air pollution effects, and salinity). Models of native vegetation cannot assume crop management to avoid recurring stress. Equation Per Monteith's Evaporation and Environment, the equation is: λv E = [Δ (Rn − G) + ρa cp δe ga] / [Δ + γ (1 + ga / gs)], where: λv = Latent heat of vaporization. The energy required per unit mass of water vaporized. (J g−1) Lv = Volumetric latent heat of vaporization. The energy required per unit volume of water vaporized. (Lv = 2453 MJ m−3) E = Mass water evapotranspiration rate (g s−1 m−2) ET = Water volume evapotranspired (mm s−1) Δ = Rate of change of saturation specific humidity with air temperature. (Pa K−1) Rn = Net irradiance (W m−2), the external source of energy flux G = Ground heat flux (W m−2), usually difficult to measure cp = Specific heat capacity of air (J kg−1 K−1) ρa = dry air density (kg m−3) δe = vapor pressure deficit (Pa) ga = Conductivity of air, atmospheric conductance (m s−1) gs = Conductivity of stoma, surface or stomatal conductance (m s−1) γ = Psychrometric constant (γ ≈ 66 Pa K−1) Note: Often, resistances are used rather than conductivities, with ga = 1 / ra and gs = 1 / rc, where rc refers to the resistance to flux from a vegetation canopy to the extent of some defined boundary layer. The atmospheric conductance ga accounts for aerodynamic effects like the zero plane displacement height and the roughness length of the surface. The stomatal conductance gs accounts for the effect of leaf density (Leaf Area Index), water stress, and CO2 concentration in the air, that is to say plant reaction to external factors. Different models exist to link the stomatal conductance to these vegetation characteristics, like the ones from P.G. Jarvis (1976) or Jacobs et al. (1996). Accuracy While the Penman-Monteith method is widely considered accurate for practical purposes and is recommended by the Food and Agriculture Organization of the United Nations, errors when compared to direct measurement or other techniques can range from −9% to 40%. Variations and alternatives FAO 56 Penman-Monteith equation To avoid the inherent complexity of determining stomatal and atmospheric conductance, the Food and Agriculture Organization proposed in 1998 a simplified equation for the reference evapotranspiration ET0. It is defined as the evapotranspiration for "[an] hypothetical reference crop with an assumed crop height of 0.12 m, a fixed surface resistance of 70 s m−1 and an albedo of 0.23." 
This reference surface is defined to represent "an extensive surface of green grass of uniform height, actively growing, completely shading the ground and with adequate water". The corresponding equation is: ET0 = [0.408 Δ (Rn − G) + γ (900 / T) u2 δe] / [Δ + γ (1 + 0.34 u2)], where: ET0 = Reference evapotranspiration, Water volume evapotranspired (mm day−1) Δ = Rate of change of saturation specific humidity with air temperature. (Pa K−1) Rn = Net irradiance (MJ m−2 day−1), the external source of energy flux G = Ground heat flux (MJ m−2 day−1), usually equivalent to zero on a day T = Air temperature at 2 m (K) u2 = Wind speed at 2 m height (m/s) δe = vapor pressure deficit (kPa) γ = Psychrometric constant (γ ≈ 66 Pa K−1) N.B.: The coefficients 0.408 and 900 are not unitless but account for the conversion from energy values to equivalent water depths: radiation [mm day−1] = 0.408 radiation [MJ m−2 day−1]. This reference evapotranspiration ET0 can then be used to evaluate the evapotranspiration rate ET from unstressed plants through crop coefficients Kc: ET = Kc * ET0. Variations The standard methods of the American Society of Civil Engineers modify the standard Penman-Monteith equation for use with an hourly time step. The SWAT model is one of many GIS-integrated hydrologic models estimating ET using Penman-Monteith equations. Priestley–Taylor The Priestley–Taylor equation was developed as a substitute for the Penman-Monteith equation to remove dependence on observations. For Priestley–Taylor, only radiation (irradiance) observations are required. This is done by removing the aerodynamic terms from the Penman-Monteith equation and adding an empirically derived constant factor, α. The underlying concept behind the Priestley–Taylor model is that an air mass moving above a vegetated area with abundant water would become saturated with water. In these conditions, the actual evapotranspiration would match the Penman rate of reference evapotranspiration. However, observations revealed that actual evaporation was 1.26 times greater than reference evaporation. Therefore, the equation for actual evaporation was found by taking reference evapotranspiration and multiplying it by α ≈ 1.26. The assumption here is for vegetation with an abundant water supply (i.e. the plants have low moisture stress). Areas like arid regions with high moisture stress are estimated to have higher α values. The assumption that an air mass moving over a vegetated surface with abundant water saturates has since been questioned. The atmosphere's lowest and most turbulent part, the atmospheric boundary layer, is not a closed box but constantly brings in dry air from higher up in the atmosphere towards the surface. As water evaporates more readily into a dry atmosphere, evapotranspiration is enhanced. This explains the larger-than-unity value of the Priestley–Taylor parameter α. The proper equilibrium of the system has been derived. It involves the characteristics of the interface of the atmospheric boundary layer and the overlying free atmosphere. History The equation is named after Howard Penman and John Monteith. Penman published his equation in 1948, and Monteith revised it in 1965. References External links Derivation of the equation Equations Hydrology Agronomy Meteorological concepts
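As an illustration of how the FAO 56 daily equation above is evaluated in practice, the following sketch implements it directly. This is not an official FAO or library implementation; the function name and the sample inputs are hypothetical, and Δ and γ are expressed in kPa K−1 (γ ≈ 0.066 kPa K−1, i.e. 66 Pa K−1) so that with δe in kPa the result comes out in mm day−1.

```python
# Minimal sketch (not an official FAO implementation) of the daily FAO-56
# reference evapotranspiration equation as reproduced above.
# Units assumed: delta, gamma in kPa/K; rn, g in MJ m-2 day-1; t_kelvin in K;
# u2 in m/s; vpd (vapour pressure deficit, delta-e) in kPa; result in mm/day.
def fao56_reference_et(delta, rn, g, t_kelvin, u2, vpd, gamma=0.066):
    numerator = 0.408 * delta * (rn - g) + gamma * (900.0 / t_kelvin) * u2 * vpd
    denominator = delta + gamma * (1.0 + 0.34 * u2)
    return numerator / denominator

if __name__ == "__main__":
    # Hypothetical mid-summer day: delta ~0.145 kPa/K near 20 C, Rn = 15 MJ/m2/day,
    # G ~ 0, T = 293 K, wind 2 m/s, vapour pressure deficit 0.8 kPa.
    et0 = fao56_reference_et(delta=0.145, rn=15.0, g=0.0,
                             t_kelvin=293.0, u2=2.0, vpd=0.8)
    print(f"ET0 ≈ {et0:.2f} mm/day")   # roughly 4.7 mm/day for these inputs
```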
Penman–Monteith equation
[ "Chemistry", "Mathematics", "Engineering", "Environmental_science" ]
1,485
[ "Hydrology", "Mathematical objects", "Equations", "Environmental engineering" ]
13,794,001
https://en.wikipedia.org/wiki/Janus-faced%20molecule
A Janus molecule (or Janus-faced molecule) is a molecule which can have both beneficial and toxic effects. The term Janus-faced molecule is derived from the ancient Roman god, Janus. Janus is depicted as having two faces; one facing the past and one facing the future. This is analogous to a Janus molecule having two distinct purposes: a beneficial and a toxic purpose depending on its quantity. Examples Examples of a Janus-faced molecule are nitric oxide and cholesterol. In the case of cholesterol, the property that makes cholesterol useful in cell membranes, namely its absolute insolubility in water, also makes it lethal. When cholesterol accumulates in the wrong place, for example within the walls of an artery, it cannot be readily mobilized, and its presence eventually leads to the development of an atherosclerotic plaque. One such example of a Janus-faced molecule is the S100A8/A9 protein complex; this complex is associated with autoimmune disorders and disorders of abnormal cell growth. S100 is integral in the fight against cancer: it induces phagocytes that phagocytize malignant tumor cells, which results in apoptosis. Proteoglycans are another class of molecules that display this duality; under certain chemical conditions these molecules can act as inhibitors or promoters. Recent studies have shown that proteoglycans can play an integral role in the metastasis of cancer. Another molecule that falls within this class is DKK1. This molecule's presence can trigger cancers to display both metastatic and anti-metastatic properties, especially pertaining to breast cancers. Studies indicate that DKK1 secretion can be associated with promoting breast cancer metastasis to the bone as well as with the suppression of metastasis to the lungs. Botulinum neurotoxins also portray these dichotomous roles. These molecules are formed by Clostridium botulinum, a spore-forming bacterium. If this bacterium contaminates food, the results can be fatal. Yet, despite their toxicity, which is lethal even in small doses, these molecules can be used in a wide array of pharmacological applications; one such application is in cosmetology. Gamma peptide nucleic acid (PNA) (a synthetic DNA and RNA analog) is another Janus molecule, which slips between DNA strands. The gamma PNA can be inserted between strands of DNA or RNA to recognize sequences or elements that could potentially cause known diseases through its bifacial recognition. It does so by inserting itself when the DNA or RNA strand is undergoing transcription, in order to conduct transcriptional regulation. However, there are still ongoing challenges with this Janus molecule that require further research and experimentation. Some fungi are capable of producing secondary metabolites called mycotoxins, which are toxic and affect human and animal health. Mycotoxins are often found in farmed ingredients such as corn and rice while they are being harvested or kept in storage; when these ingredients are processed on a large scale into food for humans and animals, there is the possibility that these toxins will be consumed. The toxicity of these mycotoxins has been intensively studied, and they appear to be effective in killing microbes as well as inhibiting or killing tumor cells. This exhibits Janus-faced characteristics because they kill indiscriminately. A consequence of using mycotoxins against tumor cell growth in cancer treatment is an increased risk of mutations. See also Janus Toxicity References Molecules
Janus-faced molecule
[ "Physics", "Chemistry" ]
737
[ "Molecular physics", "Molecules", "Physical objects", "nan", "Atoms", "Matter" ]
13,794,087
https://en.wikipedia.org/wiki/Apamin
Apamin is an 18 amino acid globular peptide neurotoxin found in apitoxin (bee venom). Dry bee venom consists of 2–3% of apamin. Apamin selectively blocks SK channels, a type of Ca2+-activated K+ channel expressed in the central nervous system. Toxicity is caused by only a few amino acids, in particular cysteine1, lysine4, arginine13, arginine14 and histidine18. These amino acids are involved in the binding of apamin to the Ca2+-activated K+ channel. Due to its specificity for SK channels, apamin is used as a drug in biomedical research to study the electrical properties of SK channels and their role in the afterhyperpolarizations occurring immediately following an action potential. Origin The first symptoms of apitoxin (bee venom), that are now thought to be caused by apamin, were described back in 1936 by Hahn and Leditschke. Apamin was first isolated by Habermann in 1965 from Apis mellifera, the Western honey bee. Apamin was named after this bee. Bee venom contains many other compounds, like histamine, phospholipase A2, hyaluronidase, MCD peptide, and the main active component melittin. Apamin was separated from the other compounds by gel filtration and ion exchange chromatography. Structure and active site Apamin is a polypeptide possessing an amino acid sequence of H-Cys-Asn-Cys-Lys-Ala-Pro-Glu-Thr-Ala-Leu-Cys-Ala-Arg-Arg-Cys-Gln-Gln-His-NH2 (one-letter sequence CNCKAPETALCARRCQQH-NH2, with disulfide bonds between Cys1-Cys11 and Cys3-Cys15). Apamin is very rigid because of the two disulfide bridges and seven hydrogen bonds. The three-dimensional structure of apamin has been studied with several spectroscopical techniques: HNMR, Circular Dichroism, Raman spectroscopy, FT-IR. The structure is presumed to consist of an alpha-helix and beta-turns, but the exact structure is still unknown. By local alterations it is possible to find the amino acids that are involved in toxicity of apamin. It was found by Vincent et al. that guanidination of the ε-amino group of lysine4 does not decrease toxicity. When the ε-amino group of lysine4 and the α-amino group of cysteine1 are acetylated or treated with fluorescamine, toxicity decreases with a factor of respectively 2.5 and 2.8. This is only a small decrease, which indicates that neither the ε-amino group of lysine4 nor the α-amino group of cysteine1 is essential for the toxicity of apamin. Glutamine7 was altered by formation of an amide bond with glycine ethyl ester, this resulted in a decrease in toxicity of a factor 2.0. Glutamine7 also doesn't appear to be essential for toxicity. When histidine18 is altered by carbethoxylation, toxicity decreases only by a factor 2.6. But when histidine18, the ε-amino group of lysine4 and the α-amino group of cysteine1 all are carbethoxylated and acetylated toxicity decreases drastically. This means that these three amino acids are not essential for toxicity on their own, but the three of them combined are. Chemical alteration of arginine13 and arginine14 by treatment of 1,2-cyclohexanedione and cleavage by trypsin decreases toxicity by a factor greater than 10. The amino acids that cause toxicity of apamin are cysteine1, lysine4, arginine13, arginine14 and histidine18. Toxicodynamics Apamin is the smallest neurotoxin polypeptide known, and the only one that passes the blood-brain barrier. Apamin thus reaches its target organ, the central nervous system. Here it inhibits small-conductance Ca2+-activated K+ channels (SK channels) in neurons. 
These channels are responsible for the afterhyperpolarizations that follow action potentials, and therefore regulate the repetitive firing frequency. Three different types of SK channels show different characteristics. Only SK2 and SK3 are blocked by apamin, whereas SK1 is apamin insensitive. SK channels function as a tetramer of subunits. Heteromers have intermediate sensitivity. SK channels are activated by the binding of intracellular Ca2+ to the protein calmodulin, which is constitutively associated to the channel. Transport of potassium ions out of the cell along their concentration gradient causes the membrane potential to become more negative. The SK channels are present in a wide range of excitable and non-excitable cells, including cells in the central nervous system, intestinal myocytes, endothelial cells, and hepatocytes. Binding of apamin to SK channels is mediated by amino acids in the pore region as well as extracellular amino acids of the SK channel. It is likely that the inhibition of SK channels is caused by blocking of the pore region, which hinders the transport of potassium ions. This will increase the neuronal excitability and lower the threshold for generating an action potential. Other toxins that block SK channels are tamapin and scyllatoxin. Toxicokinetics The kinetics of labeled derivatives of apamin were studied in vitro and in vivo in mice by Cheng-Raude et al. This shed some light on the kinetics of apamin itself. The key organ for excretion is likely to be the kidney, since enrichment of the labeled derivatives was found there. The peptide apamin is small enough to pass the glomerular barrier, facilitating renal excretion. The central nervous system, contrarily, was found to contain only very small amounts of apamin. This is unexpected, as this is the target organ for neurotoxicity caused by apamin. This low concentration thus appeared to be sufficient to cause the toxic effects. However, these results disagree with a study of Vincent et al. After injection of a supralethal dose of radioactive acetylated apamin in mice, enrichment was found in the spinal cord, which is part of the target organ. Some other organs, including kidney and brain, contained only small amounts of the apamin derivative. Symptoms Symptoms following bee sting may include: Local effects: burning or stinging pain, swelling, redness. Severe systemic reactions: swelling of the tongue and throat, difficulty breathing, and shock. Development of optic neuritis and atrophy. Atrial fibrillation, cerebral infarction, acute myocardial infarction, Fisher's syndrome, acute inflammatory polyradiculopathy (Guillain–Barré syndrome), claw hand (through a central action of apamin on the spinal cord and a peripheral action in the form of median and ulnar neuritis, causing spasms of the long flexors in the forearm). Patients poisoned with bee venom can be treated with anti-inflammatory medication, antihistamines and oral prednisolone. Apamin is an element in bee venom. A person can come into contact with apamin through bee venom, so the symptoms that are known are not caused by apamin directly, but by the venom as a whole. Apamin is the only neurotoxin acting purely on the central nervous system. The symptoms of apamin toxicity are not well known, because people are not easily exposed to the toxin alone. Through research about the neurotoxicity of apamin some symptoms were discovered. In mice, the injection of apamin produces convulsions and long-lasting spinal spasticity. 
Also, it is known that the polysynaptic spinal reflexes are disinhibited in cats. Polysynaptic reflex is a reflex action that transfers an impulse from a sensory neuron to a motor neuron via an interneuron in the spinal cord. In rats, apamin was found to cause tremor and ataxia, as well as dramatic haemorrhagic effects in the lungs. Furthermore, apamin has been found to be 1000 times more efficient when applied into the ventricular system instead of the peripheral nervous system. The ventricular system is a set of structures in the brain containing cerebrospinal fluid. The peripheral nervous system contains the nerves and ganglia outside of the brain and spinal cord. This difference in efficiency can easily be explained. Apamin binds to the SK channels, which differ slightly in different tissues. So apamin binding is probably stronger in SK channels in the ventricular system than in other tissues. Toxicity rates In earlier years it was thought that apamin was a rather nontoxic compound (LD50 = 15 mg/kg in mice) compared to the other compounds in bee venom. The current lethal dose values of apamin measured in mice are given below. There are no data known specific for humans. Intraperitoneal (mouse) LD50: 3.8 mg/kg Subcutaneous (mouse) LD50: 2.9 mg/kg Intravenous (mouse) LD50: 4 mg/kg Intracerebral (mouse) LD50: 1800 ng/kg Parenteral (mouse) LD50: 600 mg/kg Therapeutic use Recent studies have shown that SK channels do not only regulate afterhyperpolarization, they also have an effect on synaptic plasticity. This is the activity-dependent adaptation of the strength of synaptic transmission. Synaptic plasticity is an important mechanism underlying learning and memory processes. Apamin is expected to influence these processes by inhibiting SK channels. It has been shown that apamin enhances learning and memory in rats and mice. This may provide a basis for the use of apamin as a treatment for memory disorders and cognitive dysfunction. However, due to the risk of toxic effects, the therapeutic window is very narrow. SK channel blockers may have a therapeutic effect on Parkinson's disease. Dopamine, which is depleted in this disease, will be released from midbrain dopaminergic neurons when these SK channels are inhibited. SK channels have also been proposed as targets for the treatment of epilepsy, emotional disorders and schizophrenia. References External links Neurochemistry Neurotoxins Ion channel toxins Peptides Cyclic peptides
Apamin
[ "Chemistry", "Biology" ]
2,274
[ "Biomolecules by chemical classification", "Molecular biology", "Biochemistry", "Neurochemistry", "Neurotoxins", "Peptides" ]
13,794,111
https://en.wikipedia.org/wiki/Syndite
Syndite is a composite material which combines the hardness, abrasion resistance and thermal conductivity of diamond with the toughness of tungsten carbide. Applications cutting tools for machining a wide variety of abrasive materials wear part applications Advantages improved life of the tool or wear part improved process reliability improved frictional behaviour Grades Syndite is produced in five standard grades: CTB002 CTC002 CTB010 CTB025 CTH025 The numbers refer to the average dimensions in micrometres of the starting diamond material. The designation CTB indicates standard polycrystalline diamond (PCD) products, whereas CTC and CTH indicate modified PCD grades. Syndite CTB010 may, in most cases, be regarded as the general-purpose grade. Sources Syndite Superhard materials
Syndite
[ "Physics" ]
176
[ "Materials", "Superhard materials", "Matter" ]
13,794,551
https://en.wikipedia.org/wiki/Polar%20see-saw
The polar see-saw (also: bipolar seesaw) is the phenomenon that temperature changes in the northern and southern hemispheres may be out of phase. The hypothesis states that large changes, for example when the glaciers are intensely growing or depleting, in the formation of ocean bottom water in both poles take a long time to exert their effect in the other hemisphere. Estimates of the period of delay vary; one typical estimate is 1,500 years. This is usually studied in the context of ice cores taken from Antarctica and Greenland. See also Polar amplification Climate of the Arctic Climate of Antarctica References Meteorological phenomena Environment of Antarctica Climate and weather statistics
Polar see-saw
[ "Physics" ]
133
[ "Physical phenomena", "Earth phenomena", "Weather", "Climate and weather statistics", "Meteorological phenomena" ]
13,796,065
https://en.wikipedia.org/wiki/Freudenthal%20magic%20square
In mathematics, the Freudenthal magic square (or Freudenthal–Tits magic square) is a construction relating several Lie algebras (and their associated Lie groups). It is named after Hans Freudenthal and Jacques Tits, who developed the idea independently. It associates a Lie algebra to a pair of division algebras A, B. The resulting Lie algebras have Dynkin diagrams according to the table at the right. The "magic" of the Freudenthal magic square is that the constructed Lie algebra is symmetric in A and B, despite the original construction not being symmetric, though Vinberg's symmetric method gives a symmetric construction. The Freudenthal magic square includes all of the exceptional Lie groups apart from G2, and it provides one possible approach to justify the assertion that "the exceptional Lie groups all exist because of the octonions": G2 itself is the automorphism group of the octonions (also, it is in many ways like a classical Lie group because it is the stabilizer of a generic 3-form on a 7-dimensional vector space – see prehomogeneous vector space). Constructions See history for context and motivation. These were originally constructed circa 1958 by Freudenthal and Tits, with more elegant formulations following in later years. Tits' approach Tits' approach, discovered circa 1958 and published in , is as follows. Associated with any normed real division algebra A (i.e., R, C, H or O) there is a Jordan algebra, J3(A), of 3 × 3 A-Hermitian matrices. For any pair (A, B) of such division algebras, one can define a Lie algebra where denotes the Lie algebra of derivations of an algebra, and the subscript 0 denotes the trace-free part. The Lie algebra L has as a subalgebra, and this acts naturally on . The Lie bracket on (which is not a subalgebra) is not obvious, but Tits showed how it could be defined, and that it produced the following table of compact Lie algebras. By construction, the row of the table with A=R gives , and similarly vice versa. Vinberg's symmetric method The "magic" of the Freudenthal magic square is that the constructed Lie algebra is symmetric in A and B. This is not obvious from Tits' construction. Ernest Vinberg gave a construction which is manifestly symmetric, in . Instead of using a Jordan algebra, he uses an algebra of skew-hermitian trace-free matrices with entries in A ⊗ B, denoted . Vinberg defines a Lie algebra structure on When A and B have no derivations (i.e., R or C), this is just the Lie (commutator) bracket on . In the presence of derivations, these form a subalgebra acting naturally on as in Tits' construction, and the tracefree commutator bracket on is modified by an expression with values in . Triality A more recent construction, due to Pierre Ramond and Bruce Allison and developed by Chris Barton and Anthony Sudbery, uses triality in the form developed by John Frank Adams; this was presented in , and in streamlined form in . Whereas Vinberg's construction is based on the automorphism groups of a division algebra A (or rather their Lie algebras of derivations), Barton and Sudbery use the group of automorphisms of the corresponding triality. The triality is the trilinear map obtained by taking three copies of the division algebra A, and using the inner product on A to dualize the multiplication. The automorphism group is the subgroup of SO(A1) × SO(A2) × SO(A3) preserving this trilinear map. It is denoted Tri(A). The following table compares its Lie algebra to the Lie algebra of derivations. 
Barton and Sudbery then identify the magic square Lie algebra corresponding to (A,B) with a Lie algebra structure on the vector space The Lie bracket is compatible with a Z2 × Z2 grading, with tri(A) and tri(B) in degree (0,0), and the three copies of A ⊗ B in degrees (0,1), (1,0) and (1,1). The bracket preserves tri(A) and tri(B) and these act naturally on the three copies of A ⊗ B, as in the other constructions, but the brackets between these three copies are more constrained. For instance when A and B are the octonions, the triality is that of Spin(8), the double cover of SO(8), and the Barton-Sudbery description yields where V, S+ and S− are the three 8-dimensional representations of (the fundamental representation and the two spin representations), and the hatted objects are an isomorphic copy. With respect to one of the Z2 gradings, the first three summands combine to give and the last two together form one of its spin representations Δ+128 (the superscript denotes the dimension). This is a well known symmetric decomposition of E8. The Barton–Sudbery construction extends this to the other Lie algebras in the magic square. In particular, for the exceptional Lie algebras in the last row (or column), the symmetric decompositions are: Generalizations Split composition algebras In addition to the normed division algebras, there are other composition algebras over R, namely the split-complex numbers, the split-quaternions and the split-octonions. If one uses these instead of the complex numbers, quaternions, and octonions, one obtains the following variant of the magic square (where the split versions of the division algebras are denoted by a prime). Here all the Lie algebras are the split real form except for so3, but a sign change in the definition of the Lie bracket can be used to produce the split form so2,1. In particular, for the exceptional Lie algebras, the maximal compact subalgebras are as follows: A non-symmetric version of the magic square can also be obtained by combining the split algebras with the usual division algebras. According to Barton and Sudbery, the resulting table of Lie algebras is as follows. The real exceptional Lie algebras appearing here can again be described by their maximal compact subalgebras. Arbitrary fields The split forms of the composition algebras and Lie algebras can be defined over any field K. This yields the following magic square. There is some ambiguity here if K is not algebraically closed. In the case K = C, this is the complexification of the Freudenthal magic squares for R discussed so far. More general Jordan algebras The squares discussed so far are related to the Jordan algebras J3(A), where A is a division algebra. There are also Jordan algebras Jn(A), for any positive integer n, as long as A is associative. These yield split forms (over any field K) and compact forms (over R) of generalized magic squares. For n = 2, J2(O) is also a Jordan algebra. In the compact case (over R) this yields a magic square of orthogonal Lie algebras. The last row and column here are the orthogonal algebra part of the isotropy algebra in the symmetric decomposition of the exceptional Lie algebras mentioned previously. These constructions are closely related to hermitian symmetric spaces – cf. prehomogeneous vector spaces. Symmetric spaces Riemannian symmetric spaces, both compact and non-compact, can be classified uniformly using a magic square construction, in . 
The irreducible compact symmetric spaces are, up to finite covers, either a compact simple Lie group, a Grassmannian, a Lagrangian Grassmannian, or a double Lagrangian Grassmannian of subspaces of for normed division algebras A and B. A similar construction produces the irreducible non-compact symmetric spaces. History Rosenfeld projective planes Following Ruth Moufang's discovery in 1933 of the Cayley projective plane or "octonionic projective plane" P2(O), whose symmetry group is the exceptional Lie group F4, and with the knowledge that G2 is the automorphism group of the octonions, it was proposed by that the remaining exceptional Lie groups E6, E7, and E8 are isomorphism groups of projective planes over certain algebras over the octonions: the , C ⊗ O, the , H ⊗ O, the , O ⊗ O. This proposal is appealing, as there are certain exceptional compact Riemannian symmetric spaces with the desired symmetry groups and whose dimension agree with that of the putative projective planes (dim(P2(K ⊗ K′)) = 2 dim(K)dim(K′)), and this would give a uniform construction of the exceptional Lie groups as symmetries of naturally occurring objects (i.e., without an a priori knowledge of the exceptional Lie groups). The Riemannian symmetric spaces were classified by Cartan in 1926 (Cartan's labels are used in sequel); see classification for details, and the relevant spaces are: the octonionic projective plane – FII, dimension 16 = 2 × 8, F4 symmetry, Cayley projective plane P2(O), the bioctonionic projective plane – EIII, dimension 32 = 2 × 2 × 8, E6 symmetry, complexified Cayley projective plane, P2(C ⊗ O), the "" – EVI, dimension 64 = 2 × 4 × 8, E7 symmetry, P2(H ⊗ O), the "" – EVIII, dimension 128 = 2 × 8 × 8, E8 symmetry, P2(O ⊗ O). The difficulty with this proposal is that while the octonions are a division algebra, and thus a projective plane is defined over them, the bioctonions, quateroctonions and octooctonions are not division algebras, and thus the usual definition of a projective plane does not work. This can be resolved for the bioctonions, with the resulting projective plane being the complexified Cayley plane, but the constructions do not work for the quateroctonions and octooctonions, and the spaces in question do not obey the usual axioms of projective planes, hence the quotes on "(putative) projective plane". However, the tangent space at each point of these spaces can be identified with the plane (H ⊗ O)2, or (O ⊗ O)2 further justifying the intuition that these are a form of generalized projective plane. Accordingly, the resulting spaces are sometimes called Rosenfeld projective planes and notated as if they were projective planes. More broadly, these compact forms are the Rosenfeld elliptic projective planes, while the dual non-compact forms are the Rosenfeld hyperbolic projective planes. A more modern presentation of Rosenfeld's ideas is in , while a brief note on these "planes" is in . The spaces can be constructed using Tits' theory of buildings, which allows one to construct a geometry with any given algebraic group as symmetries, but this requires starting with the Lie groups and constructing a geometry from them, rather than constructing a geometry independently of a knowledge of the Lie groups. Magic square While at the level of manifolds and Lie groups, the construction of the projective plane P2(K ⊗ K′) of two normed division algebras does not work, the corresponding construction at the level of Lie algebras does work. 
That is, if one decomposes the Lie algebra of infinitesimal isometries of the projective plane P2(K) and applies the same analysis to P2(K ⊗ K′), one can use this decomposition, which holds when P2(K ⊗ K′) can actually be defined as a projective plane, as a definition of a "magic square Lie algebra" M(K,K′). This definition is purely algebraic, and holds even without assuming the existence of the corresponding geometric space. This was done independently circa 1958 in and by Freudenthal in a series of 11 papers, starting with and ending with , though the simplified construction outlined here is due to . See also E6 (mathematics) E7 (mathematics) E8 (mathematics) F4 (mathematics) G2 (mathematics) Euclidean Hurwitz algebra Euclidean Jordan algebra Jordan triple system Notes References – 4.3: The Magic Square (reprint of 1951 article) Lie groups Representation theory
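For reference, a commonly quoted compact form of the magic square — the table of Lie algebras that the Tits, Vinberg, and triality constructions described above all produce — is sketched below. This is the standard table from the literature, supplied here as a supplement rather than a reconstruction of any particular table cited in the text.

```latex
% Compact real forms of the Freudenthal magic square (standard form from the
% literature, supplied for reference).
\begin{array}{c|cccc}
A \backslash B & \mathbb{R} & \mathbb{C} & \mathbb{H} & \mathbb{O} \\ \hline
\mathbb{R} & \mathfrak{so}_3 & \mathfrak{su}_3 & \mathfrak{sp}_3 & \mathfrak{f}_4 \\
\mathbb{C} & \mathfrak{su}_3 & \mathfrak{su}_3 \oplus \mathfrak{su}_3 & \mathfrak{su}_6 & \mathfrak{e}_6 \\
\mathbb{H} & \mathfrak{sp}_3 & \mathfrak{su}_6 & \mathfrak{so}_{12} & \mathfrak{e}_7 \\
\mathbb{O} & \mathfrak{f}_4 & \mathfrak{e}_6 & \mathfrak{e}_7 & \mathfrak{e}_8
\end{array}
```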
Freudenthal magic square
[ "Mathematics" ]
2,595
[ "Lie groups", "Mathematical structures", "Fields of abstract algebra", "Algebraic structures", "Representation theory" ]
13,796,485
https://en.wikipedia.org/wiki/Caretaker%20gene
Caretaker genes encode products that stabilize the genome. Fundamentally, mutations in caretaker genes lead to genomic instability. Tumor cells arise from two distinct classes of genomic instability: mutational instability arising from changes in the nucleotide sequence of DNA and chromosomal instability arising from improper rearrangement of chromosomes. Changes in the genome that allow uncontrolled cell proliferation or cell immortality are responsible for cancer. It is believed that the major changes in the genome that lead to cancer arise from mutations in tumor suppressor genes. In 1997, Kinzler and Bert Vogelstein grouped these cancer susceptibility genes into two classes: "caretakers" and "gatekeepers". In 2004, a third classification of tumor suppressor genes was proposed by Franziska Michor, Yoh Iwasa, and Martin Nowak; "landscaper" genes. In contrast to caretaker genes, gatekeeper genes encode gene products that act to prevent growth of potential cancer cells and prevent accumulation of mutations that directly lead to increased cellular proliferation. The third classification of genes, the landscapers, encode products that, when mutated, contribute to the neoplastic growth of cells by fostering a stromal environment conducive to unregulated cell proliferation. Genes in context Pathways to cancer via the caretakers The process of DNA replication inherently places cells at risk of acquiring mutations. Thus, caretaker genes are vitally important to cellular health. Rounds of cell replication allow fixation of mutated genes into the genome. Caretaker genes provide genome stability by preventing the accumulation of these mutations. Factors that contribute to genome stabilization include proper cell-cycle checkpoints, DNA repair pathways, and other actions that ensure cell survival following DNA damage. Specific DNA maintenance operations encoded by caretaker genes include nucleotide excision repair, base excision repair, non-homologous end joining recombination pathways, mismatch repair pathways, and telomere metabolism. Loss of function mutations in caretaker genes allow mutations in other genes to survive that can result in increased conversion of a normal cell to a neoplastic cell, a cell that; (1) divides more often than it should or (2) does not die when conditions warrant cell death. Thus, caretaker genes do not directly regulate cell proliferation. Instead, they prevent other mutations from surviving for example by slowing the cell division process to enable DNA repair to complete, or by initiating apoptosis of the cell. In genetic knock-out and rescue experiments, restoration of a caretaker gene from the mutated form to the wildtype version does not limit tumorigenesis. This is because caretaker genes only indirectly contribute to the pathway to cancer. Cells deficient in a DNA repair process tend to accumulate unrepaired DNA damages. Cells defective in apoptosis tend to survive even with excessive DNA damage, thus permitting replication of the damaged DNA and consequently carcinogenic mutations. Some key caretaker proteins that contribute to cell survival by acting in DNA repair processes when the level of damage is manageable, become executioners by inducing apoptosis when there is excess DNA damage. Inactivation of caretaker genes is environmentally equivalent to exposing the cell to mutagens incessantly. For example, a mutation in a caretaker gene coding for a DNA repair pathway that leads to the inability to properly repair DNA damage could allow uncontrolled cell growth. 
This is the result of mutations of other genes that accumulate unchecked as a result of faulty gene products encoded by the caretakers. In addition to providing genomic stability, caretakers also provide chromosomal stability. Chromosomal instability resulting from dysfunctional caretaker genes is the most common form of genetic instability that leads to cancer in humans. In fact, it has been proposed that these caretaker genes are responsible for many hereditary predispositions to cancers. In individuals predisposed to cancer via mutations in caretaker genes, a total of three subsequent somatic mutations are required to acquire the cancerous phenotype. Mutations must occur in the remaining normal caretaker allele in addition to both alleles of gatekeeper genes within that cell for the said cell to turn to neoplasia. Thus, the risk of cancer in these affected populations is much less when compared to cancer risk in families predisposed to cancer via the gatekeeper pathway. Pathways to cancer via the gatekeepers In many cases, gatekeeper genes encode a system of checks and balances that monitor cell division and death. When tissue damage occurs, for example, products of gatekeeper genes ensure that balance of cell growth over cellular death remains in check. In the presence of competent gatekeeper genes, mutations of other genes do not lead to on-going growth imbalances. Mutations altering these genes lead to irregular growth regulation and differentiation. Each cell type has only one, or at least only very few, gatekeeper genes. If a person is predisposed to cancer, they have inherited a mutation in one of two copies of a gatekeeper gene. Mutation of the alternate allele leads to progression to neoplasia. Historically, the term gatekeeper gene was first coined in association with the APC gene, a tumor suppressor that is consistently found to be mutated in colorectal tumors. Gatekeeper genes are in fact specific to the tissues in which they reside. The probability that mutations occur in other genes increases when DNA repair pathway mechanisms are damaged as a result of mutations in caretaker genes. Thus, the probability that a mutation will take place in a gatekeeper gene increases when the caretaker gene has been mutated. Apoptosis, or induced cell suicide, usually serves as a mechanism to prevent excessive cellular growth. Gatekeeper genes regulate apoptosis. However, in instances where tissue growth or regrowth is warranted, these signals must be inactivated or net tissue regeneration would be impossible. Thus, mutations in growth-controlling genes would lead to the characteristics of uncontrolled cellular proliferation, neoplasia, while in a parallel cell that had no mutations in the gatekeeper function, simple cell death would ensue. Pathways to cancer via the landscapers A third group of genes in which mutations lead to a significant susceptibility to cancer is the class of landscaper genes. Products encoded by landscaper genes do not directly affect cellular growth, but when mutated, contribute to the neoplastic growth of cells by fostering stromal environments conducive to unregulated cell proliferation. Landscaper genes encode gene products that control the microenvironment in which cells grow. Growth of cells depends both on cell-to-cell interactions and cell-to-extracellular matrix (ECM) interactions. Mechanisms of control via regulation of extracellular matrix proteins, cellular surface markers, cellular adhesion molecules, and growth factors have been proposed. 
Cells communicate with each other via the ECM through both direct contact and through signaling molecules. Stromal cell abnormalities arising from gene products coded by faulty landscaper genes could induce abnormal cell growth on the epithelium, leading to cancer of that tissue. Biochemical cascades consisting of signaling proteins occur in the ECM and play an important role to the regulation of many aspects of cell life. Landscaper genes encode products that determine the composition of the membranes in which cells live. For example, large molecular weight glycoproteins and proteoglycans have been found to in association with signaling and structural roles. There exist proteolytic molecules in the ECM that are essential for clearing unwanted molecules, such as growth factors, cell adhesion molecules, and others from the space surrounding cells. It is proposed that landscaper genes control the mechanisms by which these factors are properly cleared. Different characteristics of these membranes lead to different cellular effects, such as differing rates of cell proliferation or differentiation. If, for example, the ECM is disrupted, incoming cells, such as those of the immune system, can overload the area and release chemical signals that induce abnormal cell proliferation. These conditions lead to an environment conducive to tumor growth and the cancerous phenotype. Gatekeepers, caretakers, and cellular aging Because mechanisms that control the accumulation of damage through the lifetime of a cell are essential to longevity, it is logical that caretaker and gatekeeper genes play a significant role in cellular aging. Increased activity of caretaker genes postpones aging, increasing lifespan. This is because of the regulatory function associated with caretaker genes in maintaining the stability of the genome. The actions of caretaker genes contribute to increasing lifespan of the cell. A specific purpose of caretaker genes has been outlined in chromosomal duplication. Caretakers have been identified as crucial to encoding products that maintain the telomeres. It is believed that degradation of telomeres, the ends of chromosomes, through repeated cell cycle divisions, is a main component of cellular aging and death. It has been suggested that gatekeeper genes confer beneficial anti-cancer affects but may provide deleterious effects that increase aging. This is because young organisms experiencing times of rapid growth necessitate significant anti-cancer mechanisms. As the organism ages, however, these formerly beneficial pathways become deleterious by inducing apoptosis in cells of renewable tissues, causing degeneration of the structure. Studies have shown an increased expression of pro-apoptotic genes in age-related pathologies. This is because the products of gatekeeper genes are directly involved in coding for cellular growth and proliferation. However, dysfunctional caretaker genes do not always lead to a cancerous phenotype. For example, defects in nucleotide excision repair pathways are associated with premature aging phenotypes in diseases such as Xeroderma pigmentosum and Trichothiodystrophy. These patients exhibit brittle hair, nails, scaly skin, and hearing loss – characteristics associated with simple human aging. This is important because the nucleotide excision repair pathway is a mechanism thought to be encoded by a caretaker gene. Geneticists studying these premature-aging syndromes propose that caretaker genes that determine cell fate also play a significant role in aging. 
Accumulation of DNA damage with age may be especially prevalent in the central nervous system because of low DNA repair capability in postmitotic brain tissue. Similarly, gatekeeper genes have been identified as having a role in aging disorders that exhibit mutations in such genes without an increased susceptibility to cancer. Experiments with mice that have increased gatekeeper function in the p53 gene show reduced cancer incidence (due to the protective activities of products encoded by p53) but a faster rate of aging. Cellular senescence, also encoded by a gatekeeper gene, is arrest of the cell cycle in the G1 phase. Qualitative differences have been found between senescent cells and normal cells, including differential expression of cytokines and other factors associated with inflammation. It is believed that this may contribute, in part, to cellular aging. In sum, although mechanisms encoded by gatekeeper and caretaker genes to protect individuals from cancer early in life, namely induction of apoptosis or senescence, later in life these functions may promote the aging phenotype. Mutations in context It has been proposed that mutations in gatekeeper genes could, to an extent, offer a sort of selective advantage to the individual in which the change occurs. This is because cells with these mutations are able to replicate at a faster rate than nearby cells. This is known as "increased somatic fitness". Caretaker genes, on the other hand, confer selective disadvantage because the result is inherently decreased cellular success. However, increased somatic fitness could also arise from a mutation in a caretaker gene if mutations in tumor suppressor genes increase the net reproductive rate of the cell. Although mutations in gatekeeper genes may lead to the same result as those of caretaker genes, namely cancer, the transcripts that gatekeeper genes encode are significantly different from those encoded by caretaker genes. In many cases, gatekeeper genes encode a system of checks and balances that monitor cell division and death. In cases of tissue damage, for example, gatekeeper genes would ensure that balance of cell growth over cellular death remains in check. In the presence of competent gatekeeper genes, mutations of other genes would not lead to on-going growth imbalances. Whether or not mutations in these genes confer beneficial or deleterious effects to the animal depends partially on the environmental context in which these changes occur, a context encoded by the landscaper genes. For example, tissues of the skin and colon reside in compartments of cells that rarely mix with one another. These tissues are replenished by stem cells. Mutations that occur within these cell lineages remain confined to the compartment in which they reside, increasing the future risk of cancer. This is also protective, however, because the cancer will remain confined to that specific area, rather than invading the rest of the body, a phenomenon known as metastasis. In areas of the body compartmentalized into small subsets of cells, mutations that lead to cancer most often begin with caretaker genes. On the other hand, cancer progression in non-compartmentalized or large cell populations may be a result of initial mutations in gatekeepers. These delineations offer a suggestion why different types of tissue within the body progress to cancer by differing mechanisms. 
Notes Although the classification of tumor suppressor genes into these categories is helpful to the scientific community, the potential role of many genes cannot be reliably identified as the functions of many genes are rather ill-defined. In some contexts, genes exhibit discrete caretaker function while in other situations gatekeeper characteristics are recognized. An example of one such gene is p53. Patients with Li-Fraumeni syndrome, for example, have mutations in the p53 gene that suggest caretaker function. p53 has an identified role, however, in regulating the cell cycle as well, which is an essential gatekeeper function. Sources Gene expression
Caretaker gene
[ "Chemistry", "Biology" ]
2,821
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
1,657,665
https://en.wikipedia.org/wiki/Edge-of-the-wedge%20theorem
In mathematics, Bogoliubov's edge-of-the-wedge theorem implies that holomorphic functions on two "wedges" with an "edge" in common are analytic continuations of each other provided they both give the same continuous function on the edge. It is used in quantum field theory to construct the analytic continuation of Wightman functions. The formulation and the first proof of the theorem were presented by Nikolay Bogoliubov at the International Conference on Theoretical Physics, Seattle, USA (September, 1956) and also published in the book Problems in the Theory of Dispersion Relations. Further proofs and generalizations of the theorem were given by Res Jost and Harry Lehmann (1957), Freeman Dyson (1958), H. Epstein (1960), and by other researchers. The one-dimensional case Continuous boundary values In one dimension, a simple case of the edge-of-the-wedge theorem can be stated as follows. Suppose that f is a continuous complex-valued function on the complex plane that is holomorphic on the upper half-plane, and on the lower half-plane. Then it is holomorphic everywhere. In this example, the two wedges are the upper half-plane and the lower half plane, and their common edge is the real axis. This result can be proved from Morera's theorem. Indeed, a function is holomorphic provided its integral round any contour vanishes; a contour which crosses the real axis can be broken up into contours in the upper and lower half-planes and the integral round these vanishes by hypothesis. Distributional boundary values on a circle The more general case is phrased in terms of distributions. This is technically simplest in the case where the common boundary is the unit circle in the complex plane. In that case holomorphic functions f, g in the regions and have Laurent expansions absolutely convergent in the same regions and have distributional boundary values given by the formal Fourier series Their distributional boundary values are equal if for all n. It is then elementary that the common Laurent series converges absolutely in the whole region . Distributional boundary values on an interval In general given an open interval on the real axis and holomorphic functions defined in and satisfying for some non-negative integer N, the boundary values of can be defined as distributions on the real axis by the formulas Existence can be proved by noting that, under the hypothesis, is the -th complex derivative of a holomorphic function which extends to a continuous function on the boundary. If f is defined as above and below the real axis and F is the distribution defined on the rectangle by the formula then F equals off the real axis and the distribution is induced by the distribution on the real axis. In particular if the hypotheses of the edge-of-the-wedge theorem apply, i.e. , then By elliptic regularity it then follows that the function F is holomorphic in . In this case elliptic regularity can be deduced directly from the fact that is known to provide a fundamental solution for the Cauchy–Riemann operator . Using the Cayley transform between the circle and the real line, this argument can be rephrased in a standard way in terms of Fourier series and Sobolev spaces on the circle. Indeed, let and be holomorphic functions defined exterior and interior to some arc on the unit circle such that locally they have radial limits in some Sobolev space, Then, letting the equations can be solved locally in such a way that the radial limits of G and F tend locally to the same function in a higher Sobolev space. 
For k large enough, this convergence is uniform by the Sobolev embedding theorem. By the argument for continuous functions, F and G therefore patch to give a holomorphic function near the arc and hence so do f and g. The general case A wedge is a product of a cone with some set. Let be an open cone in the real vector space , with vertex at the origin. Let E be an open subset of , called the edge. Write W for the wedge in the complex vector space , and write W' for the opposite wedge . Then the two wedges W and W' meet at the edge E, where we identify E with the product of E with the tip of the cone. Suppose that f is a continuous function on the union that is holomorphic on both the wedges W and W' . Then the edge-of-the-wedge theorem says that f is also holomorphic on E (or more precisely, it can be extended to a holomorphic function on a neighborhood of E). The conditions for the theorem to be true can be weakened. It is not necessary to assume that f is defined on the whole of the wedges: it is enough to assume that it is defined near the edge. It is also not necessary to assume that f is defined or continuous on the edge: it is sufficient to assume that the functions defined on either of the wedges have the same distributional boundary values on the edge. Application to quantum field theory In quantum field theory the Wightman distributions are boundary values of Wightman functions W(z1, ..., zn) depending on variables zi in the complexification of Minkowski spacetime. They are defined and holomorphic in the wedge where the imaginary part of each zi−zi−1 lies in the open positive timelike cone. By permuting the variables we get n! different Wightman functions defined in n! different wedges. By applying the edge-of-the-wedge theorem (with the edge given by the set of totally spacelike points) one can deduce that the Wightman functions are all analytic continuations of the same holomorphic function, defined on a connected region containing all n! wedges. (The equality of the boundary values on the edge that we need to apply the edge-of-the-wedge theorem follows from the locality axiom of quantum field theory.) Connection with hyperfunctions The edge-of-the-wedge theorem has a natural interpretation in the language of hyperfunctions. A hyperfunction is roughly a sum of boundary values of holomorphic functions, and can also be thought of as something like a "distribution of infinite order". The analytic wave front set of a hyperfunction at each point is a cone in the cotangent space of that point, and can be thought of as describing the directions in which the singularity at that point is moving. In the edge-of-the-wedge theorem, we have a distribution (or hyperfunction) f on the edge, given as the boundary values of two holomorphic functions on the two wedges. If a hyperfunction is the boundary value of a holomorphic function on a wedge, then its analytic wave front set lies in the dual of the corresponding cone. So the analytic wave front set of f lies in the duals of two opposite cones. But the intersection of these duals is empty, so the analytic wave front set of f is empty, which implies that f'' is analytic. This is the edge-of-the-wedge theorem. In the theory of hyperfunctions there is an extension of the edge-of-the-wedge theorem to the case when there are several wedges instead of two, called Martineau's edge-of-the-wedge theorem. See the book by Hörmander for details. Notes References Further reading . The connection with hyperfunctions is described in: . 
For the application of the edge-of-the-wedge theorem to quantum field theory see: Axiomatic quantum field theory Theorems in complex analysis Theorems in mathematical physics
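The Morera-style argument used in the one-dimensional case above can be illustrated numerically. The sketch below is only an illustration under stated assumptions (the choice of exp as the test function, the particular rectangle, and the trapezoidal quadrature are all arbitrary, not part of the theorem): it checks that the integral around a rectangle crossing the real axis agrees with the sum of the integrals around its upper and lower halves, since the shared segments on the axis are traversed in opposite directions and cancel, and that both quantities are numerically close to zero.

```python
import numpy as np

def segment_integral(f, a, b, n=4000):
    # Trapezoidal approximation of the integral of f along the straight segment a -> b.
    t = np.linspace(0.0, 1.0, n)
    z = a + t * (b - a)
    w = f(z)
    return np.sum((w[:-1] + w[1:]) / 2 * np.diff(z))

def contour_integral(f, vertices):
    # Integral of f around the closed polygon through the given vertices.
    return sum(segment_integral(f, a, b)
               for a, b in zip(vertices, vertices[1:] + vertices[:1]))

f = np.exp  # entire, so it trivially satisfies the hypotheses of the one-dimensional theorem

full_rect  = [-1 - 1j, 1 - 1j, 1 + 1j, -1 + 1j]  # contour crossing the real axis
upper_half = [-1 + 0j, 1 + 0j, 1 + 1j, -1 + 1j]  # closed off along the real axis
lower_half = [-1 - 1j, 1 - 1j, 1 + 0j, -1 + 0j]

# The two copies of the real-axis segment are traversed in opposite directions,
# so the half-contour integrals sum to the integral around the full rectangle.
print(abs(contour_integral(f, full_rect)))                                      # ~0
print(abs(contour_integral(f, upper_half) + contour_integral(f, lower_half)))   # ~0
```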
Edge-of-the-wedge theorem
[ "Physics", "Mathematics" ]
1,613
[ "Theorems in mathematical analysis", "Functions and mappings", "Mathematical theorems", "Equations of physics", "Several complex variables", "Theorems in complex analysis", "Mathematical objects", "Theorems in mathematical physics", "Mathematical relations", "Mathematical problems", "Physics the...
1,659,215
https://en.wikipedia.org/wiki/Log%E2%80%93log%20plot
In science and engineering, a log–log graph or log–log plot is a two-dimensional graph of numerical data that uses logarithmic scales on both the horizontal and vertical axes. Power functions – relationships of the form – appear as straight lines in a log–log graph, with the exponent corresponding to the slope, and the coefficient corresponding to the intercept. Thus these graphs are very useful for recognizing these relationships and estimating parameters. Any base can be used for the logarithm, though most commonly base 10 (common logs) is used. Relation with monomials Given a monomial equation taking the logarithm of the equation (with any base) yields: Setting and which corresponds to using a log–log graph, yields the equation where m = k is the slope of the line (gradient) and b = log a is the intercept on the (log y)-axis, meaning where log x = 0, so, reversing the logs, a is the y value corresponding to x = 1. Equations The equation for a line on a log–log scale would be: where m is the slope and b is the intercept point on the log plot. Slope of a log–log plot To find the slope of the plot, two points are selected on the x-axis, say x1 and x2. Using the below equation: and The slope m is found taking the difference: where F1 is shorthand for F(x1) and F2 is shorthand for F(x2). The figure at right illustrates the formula. Notice that the slope in the example of the figure is negative. The formula also provides a negative slope, as can be seen from the following property of the logarithm: Finding the function from the log–log plot The above procedure is now reversed to find the form of the function F(x) using its (assumed) known log–log plot. To find the function F, pick some fixed point (x0, F0), where F0 is shorthand for F(x0), somewhere on the straight line in the above graph, and further some other arbitrary point (x1, F1) on the same graph. Then from the slope formula above: which leads to Notice that 10^(log10(F1)) = F1. Therefore, the logs can be inverted to find: or which means that In other words, F is proportional to x to the power of the slope of the straight line of its log–log graph. Specifically, a straight line on a log–log plot containing points (x0, F0) and (x1, F1) will have the function: Of course, the inverse is true too: any function of the form will have a straight line as its log–log graph representation, where the slope of the line is m. Finding the area under a straight-line segment of log–log plot To calculate the area under a continuous, straight-line segment of a log–log plot (or estimating an area of an almost-straight line), take the function defined previously and integrate it. Since it is only operating on a definite integral (two defined endpoints), the area A under the plot takes the form Rearranging the original equation and plugging in the fixed point values, it is found that Substituting back into the integral, you find that for A over x0 to x1 Therefore, For m = −1, the integral becomes Log-log linear regression models Log–log plots are often used for visualizing log-log linear regression models with (roughly) log-normal, or Log-logistic, errors. In such models, after log-transforming the dependent and independent variables, a Simple linear regression model can be fitted, with the errors becoming homoscedastic. This model is useful when dealing with data that exhibits exponential growth or decay, while the errors continue to grow as the independent value grows (i.e., heteroscedastic error). 
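The slope, function-recovery, and area formulas above lend themselves to a short computation. The sketch below is illustrative only; the sample points and the power law F(x) = 5x^-2 are invented for the example.

```python
import numpy as np

def slope_from_points(x1, F1, x2, F2):
    # Slope of the straight line through (x1, F1) and (x2, F2) on a log-log plot.
    return (np.log10(F2) - np.log10(F1)) / (np.log10(x2) - np.log10(x1))

def power_law_through(x0, F0, m):
    # The function F(x) = F0 * (x / x0)**m whose log-log graph is that straight line.
    return lambda x: F0 * (x / x0) ** m

# Two points read off a hypothetical log-log plot of F(x) = 5 * x**-2.
x1, F1 = 1.0, 5.0
x2, F2 = 10.0, 0.05

m = slope_from_points(x1, F1, x2, F2)   # -2.0
F = power_law_through(x1, F1, m)
print(m, F(4.0))                        # -2.0 and 5 * 4**-2 = 0.3125

# Area under the straight-line segment from x1 to x2 (here m != -1).
area = F1 * x1 ** -m * (x2 ** (m + 1) - x1 ** (m + 1)) / (m + 1)
print(area)                             # 4.5, i.e. the integral of 5/x**2 from 1 to 10
```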
As above, in a log-log linear model the relationship between the variables is expressed as a power law. Every unit change in the independent variable will result in a constant percentage change in the dependent variable. The model is expressed as: Taking the logarithm of both sides, we get: This is a linear equation in the logarithms of and , with as the intercept and as the slope. In which , and . Figure 1 illustrates how this looks. It presents two plots generated using 10,000 simulated points. The left plot, titled 'Concave Line with Log-Normal Noise', displays a scatter plot of the observed data (y) against the independent variable (x). The red line represents the 'Median line', while the blue line is the 'Mean line'. This plot illustrates a dataset with a power-law relationship between the variables, represented by a concave line. When both variables are log-transformed, as shown in the right plot of Figure 1, titled 'Log-Log Linear Line with Normal Noise', the relationship becomes linear. This plot also displays a scatter plot of the observed data against the independent variable, but after both axes are on a logarithmic scale. Here, both the mean and median lines are the same (red) line. This transformation allows us to fit a Simple linear regression model (which can then be transformed back to the original scale - as the median line). The transformation from the left plot to the right plot in Figure 1 also demonstrates the effect of the log transformation on the distribution of noise in the data. In the left plot, the noise appears to follow a log-normal distribution, which is right-skewed and can be difficult to work with. In the right plot, after the log transformation, the noise appears to follow a normal distribution, which is easier to reason about and model. This normalization of noise is further analyzed in Figure 2, which presents a line plot of three error metrics (Mean Absolute Error - MAE, Root Mean Square Error - RMSE, and Mean Absolute Logarithmic Error - MALE) calculated over a sliding window of size 28 on the x-axis. The y-axis gives the error, plotted against the independent variable (x). Each error metric is represented by a different color, with the corresponding smoothed line overlaying the original line (since this is just simulated data, the error estimation is a bit jumpy). These error metrics provide a measure of the noise as it varies across different x values. Log-log linear models are widely used in various fields, including economics, biology, and physics, where many phenomena exhibit power-law behavior. They are also useful in regression analysis when dealing with heteroscedastic data, as the log transformation can help to stabilize the variance. Applications These graphs are useful when the parameters a and b need to be estimated from numerical data. Specifications such as this are used frequently in economics. One example is the estimation of money demand functions based on inventory theory, in which it can be assumed that money demand at time t is given by where M is the real quantity of money held by the public, R is the rate of return on an alternative, higher yielding asset in excess of that on money, Y is the public's real income, U is an error term assumed to be lognormally distributed, A is a scale parameter to be estimated, and b and c are elasticity parameters to be estimated. Taking logs yields where m = log M, a = log A, r = log R, y = log Y, and u = log U with u being normally distributed. 
This equation can be estimated using ordinary least squares. Another economic example is the estimation of a firm's Cobb–Douglas production function, which is the right side of the equation in which Q is the quantity of output that can be produced per month, N is the number of hours of labor employed in production per month, K is the number of hours of physical capital utilized per month, U is an error term assumed to be lognormally distributed, and A, , and are parameters to be estimated. Taking logs gives the linear regression equation where q = log Q, a = log A, n = log N, k = log K, and u = log U. Log–log regression can also be used to estimate the fractal dimension of a naturally occurring fractal. However, going in the other direction – observing that data appears as an approximate line on a log–log scale and concluding that the data follows a power law – is not always valid. In fact, many other functional forms appear approximately linear on the log–log scale, and simply evaluating the goodness of fit of a linear regression on logged data using the coefficient of determination (R2) may be invalid, as the assumptions of the linear regression model, such as Gaussian error, may not be satisfied; in addition, tests of fit of the log–log form may exhibit low statistical power, as these tests may have low likelihood of rejecting power laws in the presence of other true functional forms. While simple log–log plots may be instructive in detecting possible power laws, and have been used dating back to Pareto in the 1890s, validation as a power law requires more sophisticated statistics. These graphs are also extremely useful when data are gathered by varying the control variable along an exponential function, in which case the control variable x is more naturally represented on a log scale, so that the data points are evenly spaced, rather than compressed at the low end. The output variable y can either be represented linearly, yielding a lin–log graph (log x, y), or its logarithm can also be taken, yielding the log–log graph (log x, log y). A Bode plot (a graph of the frequency response of a system) is also a log–log plot. In chemical kinetics, the general form of the dependence of the reaction rate on concentration takes the form of a power law (law of mass action), so a log–log plot is useful for estimating the reaction parameters from experiment. See also Semi-log plot (lin–log or log–lin) Power law Zipf law Log-linear model Log-normal distribution Log-logistic distribution Data transformation (statistics) Variance-stabilizing transformation References External links Non-Newtonian calculus website Logarithmic scales of measurement Statistical charts and diagrams Non-Newtonian calculus de:Logarithmenpapier#Doppeltlogarithmisches Papier
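As a sketch of the ordinary-least-squares estimation described above, the following simulates a power law with multiplicative log-normal noise and recovers its parameters by fitting a straight line to the logged data; all numerical values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate y = A * x**b with multiplicative log-normal noise (illustrative values).
A_true, b_true = 2.0, 0.75
x = rng.uniform(1.0, 100.0, size=500)
y = A_true * x ** b_true * rng.lognormal(mean=0.0, sigma=0.3, size=x.size)

# Ordinary least squares on the logs: log y = log A + b * log x.
b_hat, log_A_hat = np.polyfit(np.log(x), np.log(y), deg=1)
print(b_hat, np.exp(log_A_hat))   # close to 0.75 and 2.0

# Back-transforming exp(log_A_hat) * x**b_hat gives the median trend of y given x;
# recovering the mean would require the extra log-normal factor exp(sigma**2 / 2).
```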
Log–log plot
[ "Physics", "Mathematics" ]
2,168
[ "Physical quantities", "Calculus", "Quantity", "Non-Newtonian calculus", "Logarithmic scales of measurement" ]
1,659,281
https://en.wikipedia.org/wiki/Asbestos%20cement
Asbestos cement, genericized as fibro, fibrolite (short for "fibrous (or fibre) cement sheet"; but different from the natural mineral fibrolite), or AC sheet, is a composite building material consisting of cement and asbestos fibres pressed into thin rigid sheets and other shapes. Invented at the end of the 19th century, the material was adopted extensively during World War II to make easily-built, sturdy and inexpensive structures for military purposes. It continued to be used widely following the war as an affordable external cladding for buildings. Advertised as a fireproof alternative to other roofing materials such as asphalt, asbestos-cement roofs were popular, not only for safety but also for affordability. Due to asbestos cement's imitation of more expensive materials such as wood siding and shingles, brick, slate, and stone, the product was marketed as an affordable renovation material. Asbestos cement competed with aluminum alloy, available in large quantities after WWII, and the reemergence of wood clapboard and vinyl siding in the mid to late 20th century. Asbestos cement is usually formed into flat or corrugated sheets or into pipes, but can be molded into any shape that can be formed using wet cement. In Europe, cement sheets came in a wide variety of shapes, while there was less variation in the US, due to labor and production costs. Although fibro was used in a number of countries, in Australia and New Zealand its use was most widespread. Predominantly manufactured and sold by James Hardie until the mid-1980s, fibro in all its forms was a popular building material, largely due to its durability. The reinforcing fibres used in the product were almost always asbestos. The use of fibro that contains asbestos has been banned in several countries, including Australia, but the material was discovered in new components sold for construction projects. Health effects When exposed to weathering and erosion, particularly when used on roofs, the surface deterioration of asbestos cement can release toxic airborne fibres. Exposure to asbestos causes or increases the risk of several life-threatening diseases, including asbestosis, pleural mesothelioma (lung), and peritoneal mesothelioma (abdomen). Safer asbestos-free fibre cement sheet is still readily available, but the reinforcing fibres are cellulose. The name "fibro" is still traditionally applied to fibre cement. Products used in the building industry Roofs - most usually on industrial or farmyard buildings and domestic garages. Flat sheets for house walls and ceilings were usually thick, wide, and from long. Battens  wide ×  thick, used to cover the joints in fibro sheets. "Super Six" corrugated roof sheeting and fencing. Internal wet area sheeting, "Tilux". Pipes of various sizes for water reticulation and drainage. Drainage pipes tend to be made of pitch fibre, with asbestos cement added to strengthen. Moulded products ranging from plant pots to outdoor telephone cabinet roofs and cable pits. Cleaning of asbestos cement Some Australian states, such as Queensland, prohibit the cleaning of fibro with pressure washers, because it can spread the embedded asbestos fibres over a wide area. Safer cleaning methods involve using a fungicide and a sealant. In popular culture The 1973 song, "Way Out West", by The Dingoes, later covered by James Blundell & James Reyne, mentions living in a "house made of fibro cement". Fibro is also referred to several times on the Australian TV show Housos. 
See also Cemesto Eternit Fibre cement Transite, a brand of fibre cement originally produced as asbestos cement References External links Fibro and Asbestos - A Renovator and Homeowner's Guide, NSW (archived 2013) Advice if you have FAC in your home Building materials Asbestos
Asbestos cement
[ "Physics", "Engineering", "Environmental_science" ]
782
[ "Toxicology", "Building engineering", "Construction", "Materials", "Building materials", "Asbestos", "Matter", "Architecture" ]
12,197,312
https://en.wikipedia.org/wiki/Chromatin%20remodeling
Chromatin remodeling is the dynamic modification of chromatin architecture to allow access of condensed genomic DNA to the regulatory transcription machinery proteins, and thereby control gene expression. Such remodeling is principally carried out by 1) covalent histone modifications by specific enzymes, e.g., histone acetyltransferases (HATs), deacetylases, methyltransferases, and kinases, and 2) ATP-dependent chromatin remodeling complexes which either move, eject or restructure nucleosomes. Besides actively regulating gene expression, dynamic remodeling of chromatin imparts an epigenetic regulatory role in several key biological processes, e.g., DNA replication and repair; apoptosis; chromosome segregation; as well as development and pluripotency. Aberrations in chromatin remodeling proteins are found to be associated with human diseases, including cancer. Targeting chromatin remodeling pathways is currently evolving as a major therapeutic strategy in the treatment of several cancers. Overview The transcriptional regulation of the genome is controlled primarily at the preinitiation stage by binding of the core transcriptional machinery proteins (namely, RNA polymerase, transcription factors, and activators and repressors) to the core promoter sequence on the coding region of the DNA. However, DNA is tightly packaged in the nucleus with the help of packaging proteins, chiefly histone proteins, to form repeating units of nucleosomes which further bundle together to form condensed chromatin structure. Such condensed structure occludes many DNA regulatory regions, not allowing them to interact with transcriptional machinery proteins and regulate gene expression. To overcome this issue and allow dynamic access to condensed DNA, a process known as chromatin remodeling alters nucleosome architecture to expose or hide regions of DNA for transcriptional regulation. By definition, chromatin remodeling is the enzyme-assisted process to facilitate access of nucleosomal DNA by remodeling the structure, composition and positioning of nucleosomes. Classification Access to nucleosomal DNA is governed by two major classes of protein complexes: Covalent histone-modifying complexes. ATP-dependent chromatin remodeling complexes. Covalent histone-modifying complexes Specific protein complexes, known as histone-modifying complexes, catalyze addition or removal of various chemical elements on histones. These enzymatic modifications include acetylation, methylation, phosphorylation, and ubiquitination and primarily occur at N-terminal histone tails. Such modifications affect the binding affinity between histones and DNA, thus loosening or tightening the condensed DNA wrapped around histones. For example, methylation of specific lysine residues in H3 and H4 causes further condensation of DNA around histones, thereby preventing the binding of transcription factors to the DNA and leading to gene repression. On the contrary, histone acetylation relaxes chromatin condensation and exposes DNA for TF binding, leading to increased gene expression. Known modifications Well characterized modifications to histones include: Methylation Both lysine and arginine residues are known to be methylated. Methylated lysines are the best understood marks of the histone code, as specific methylated lysines match well with gene expression states. Methylation of lysines H3K4 and H3K36 is correlated with transcriptional activation while demethylation of H3K4 is correlated with silencing of the genomic region. 
Methylation of lysines H3K9 and H3K27 is correlated with transcriptional repression. Particularly, H3K9me3 is highly correlated with constitutive heterochromatin. Acetylation - by HAT (histone acetyl transferase); deacetylation - by HDAC (histone deacetylase) Acetylation tends to define the 'openness' of chromatin as acetylated histones cannot pack as well together as deacetylated histones. Phosphorylation Ubiquitination However, there are many more histone modifications, and sensitive mass spectrometry approaches have recently greatly expanded the catalog. Histone code hypothesis The histone code is a hypothesis that the transcription of genetic information encoded in DNA is in part regulated by chemical modifications to histone proteins, primarily on their unstructured ends. Together with similar modifications such as DNA methylation it is part of the epigenetic code. Cumulative evidence suggests that such code is written by specific enzymes which can (for example) methylate or acetylate DNA ('writers'), removed by other enzymes having demethylase or deacetylase activity ('erasers'), and finally readily identified by proteins ('readers') that are recruited to such histone modifications and bind via specific domains, e.g., bromodomain, chromodomain. These triple action of 'writing', 'reading' and 'erasing' establish the favorable local environment for transcriptional regulation, DNA-damage repair, etc. The critical concept of the histone code hypothesis is that the histone modifications serve to recruit other proteins by specific recognition of the modified histone via protein domains specialized for such purposes, rather than through simply stabilizing or destabilizing the interaction between histone and the underlying DNA. These recruited proteins then act to alter chromatin structure actively or to promote transcription. A very basic summary of the histone code for gene expression status is given below (histone nomenclature is described here): ATP-dependent chromatin remodeling ATP-dependent chromatin-remodeling complexes regulate gene expression by either moving, ejecting or restructuring nucleosomes. These protein complexes have a common ATPase domain and energy from the hydrolysis of ATP allows these remodeling complexes to reposition nucleosomes (often referred to as "nucleosome sliding") along the DNA, eject or assemble histones on/off of DNA or facilitate exchange of histone variants, and thus creating nucleosome-free regions of DNA for gene activation. Also, several remodelers have DNA-translocation activity to carry out specific remodeling tasks. All ATP-dependent chromatin-remodeling complexes possess a sub unit of ATPase that belongs to the SNF2 superfamily of proteins. In association to the sub unit's identity, two main groups have been classified for these proteins. These are known as the SWI2/SNF2 group and the imitation SWI (ISWI) group. The third class of ATP-dependent complexes that has been recently described contains a Snf2-like ATPase and also demonstrates deacetylase activity. Known chromatin remodeling complexes There are at least four families of chromatin remodelers in eukaryotes: SWI/SNF, ISWI, NuRD/Mi-2/CHD, and INO80 with first two remodelers being very well studied so far, especially in the yeast model. Although all of remodelers share common ATPase domain, their functions are specific based on several biological processes (DNA repair, apoptosis, etc.). 
This is due to the fact that each remodeler complex has unique protein domains (Helicase, bromodomain, etc.) in their catalytic ATPase region and also has different recruited subunits. Specific functions Several in-vitro experiments suggest that ISWI remodelers organize nucleosome into proper bundle form and create equal spacing between nucleosomes, whereas SWI/SNF remodelers disorder nucleosomes. The ISWI-family remodelers have been shown to play central roles in chromatin assembly after DNA replication and maintenance of higher-order chromatin structures. INO80 and SWI/SNF-family remodelers participate in DNA double-strand break (DSB) repair and nucleotide-excision repair (NER) and thereby plays crucial role in TP53 mediated DNA-damage response. NuRD/Mi-2/CHD remodeling complexes primarily mediate transcriptional repression in the nucleus and are required for the maintenance of pluripotency of embryonic stem cells. Significance In normal biological processes Chromatin remodeling plays a central role in the regulation of gene expression by providing the transcription machinery with dynamic access to an otherwise tightly packaged genome. Further, nucleosome movement by chromatin remodelers is essential to several important biological processes, including chromosome assembly and segregation, DNA replication and repair, embryonic development and pluripotency, and cell-cycle progression. Deregulation of chromatin remodeling causes loss of transcriptional regulation at these critical check-points required for proper cellular functions, and thus causes various disease syndromes, including cancer. Response to DNA damage Chromatin relaxation is one of the earliest cellular responses to DNA damage. Several experiments have been performed on the recruitment kinetics of proteins involved in the response to DNA damage. The relaxation appears to be initiated by PARP1, whose accumulation at DNA damage is half complete by 1.6 seconds after DNA damage occurs. This is quickly followed by accumulation of chromatin remodeler Alc1, which has an ADP-ribose–binding domain, allowing it to be quickly attracted to the product of PARP1. The maximum recruitment of Alc1 occurs within 10 seconds of DNA damage. About half of the maximum chromatin relaxation, presumably due to action of Alc1, occurs by 10 seconds. PARP1 action at the site of a double-strand break allows recruitment of the two DNA repair enzymes MRE11 and NBS1. Half maximum recruitment of these two DNA repair enzymes takes 13 seconds for MRE11 and 28 seconds for NBS1. Another process of chromatin relaxation, after formation of a DNA double-strand break, employs γH2AX, the phosphorylated form of the H2AX protein. The histone variant H2AX constitutes about 10% of the H2A histones in human chromatin. γH2AX (phosphorylated on serine 139 of H2AX) was detected at 20 seconds after irradiation of cells (with DNA double-strand break formation), and half maximum accumulation of γH2AX occurred in one minute. The extent of chromatin with phosphorylated γH2AX is about two million base pairs at the site of a DNA double-strand break. γH2AX does not, by itself, cause chromatin decondensation, but within seconds of irradiation the protein "Mediator of the DNA damage checkpoint 1" (MDC1) specifically attaches to γH2AX. This is accompanied by simultaneous accumulation of RNF8 protein and the DNA repair protein NBS1 which bind to MDC1 as MDC1 attaches to γH2AX. 
RNF8 mediates extensive chromatin decondensation, through its subsequent interaction with CHD4 protein, a component of the nucleosome remodeling and deacetylase complex NuRD. CHD4 accumulation at the site of the double-strand break is rapid, with half-maximum accumulation occurring by 40 seconds after irradiation. The fast initial chromatin relaxation upon DNA damage (with rapid initiation of DNA repair) is followed by a slow recondensation, with chromatin recovering a compaction state close to its pre-damage level in ~ 20 min. Cancer Chromatin remodeling provides fine-tuning at crucial cell growth and division steps, like cell-cycle progression, DNA repair and chromosome segregation, and therefore exerts tumor-suppressor function. Mutations in such chromatin remodelers and deregulated covalent histone modifications potentially favor self-sufficiency in cell growth and escape from growth-regulatory cell signals - two important hallmarks of cancer. Inactivating mutations in SMARCB1, formerly known as hSNF5/INI1 and a component of the human SWI/SNF remodeling complex have been found in large number of rhabdoid tumors, commonly affecting pediatric population. Similar mutations are also present in other childhood cancers, such as choroid plexus carcinoma, medulloblastoma and in some acute leukemias. Further, mouse knock-out studies strongly support SMARCB1 as a tumor suppressor protein. Since the original observation of SMARCB1 mutations in rhabdoid tumors, several more subunits of the human SWI/SNF chromatin remodeling complex have been found mutated in a wide range of neoplasms. The SWI/SNF ATPase BRG1 (or SMARCA4) is the most frequently mutated chromatin remodeling ATPase in cancer. Mutations in this gene were first recognized in human cancer cell lines derived from lung. In cancer, mutations in BRG1 show an unusually high preference for missense mutations that target the ATPase domain. Mutations are enriched at highly conserved ATPase sequences, which lie on important functional surfaces such as the ATP pocket or DNA-binding surface. These mutations act in a genetically dominant manner to alter chromatin regulatory function at enhancers and promoters. Inactivating mutations in BCL7A in Diffuse large B-cell lymphoma (DLBCL) and in other haematological malignancies PML-RARA fusion protein in acute myeloid leukemia recruits histone deacetylases. This leads to repression of genes responsible for myelocytes to differentiate, leading to leukemia. Tumor suppressor Rb protein functions by the recruitment of the human homologs of the SWI/SNF enzymes BRG1, histone deacetylase and DNA methyltransferase. Mutations in BRG1 are reported in several cancers causing loss of tumor suppressor action of Rb. Recent reports indicate DNA hypermethylation in the promoter region of major tumor suppressor genes in several cancers. Although few mutations are reported in histone methyltransferases yet, correlation of DNA hypermethylation and histone H3 lysine-9 methylation has been reported in several cancers, mainly in colorectal and breast cancers. Mutations in Histone Acetyl Transferases (HAT) p300 (missense and truncating type) are most commonly reported in colorectal, pancreatic, breast and gastric carcinomas. Loss of heterozygosity in coding region of p300 (chromosome 22q13) is present in large number of glioblastomas. 
Further, HATs have diverse roles as transcription factors besides having histone acetylase activity; e.g., the HAT subunit hADA3 may act as an adaptor protein linking transcription factors with other HAT complexes. In the absence of hADA3, TP53 transcriptional activity is significantly reduced, suggesting a role for hADA3 in activating TP53 function in response to DNA damage. Similarly, TRRAP, the human homolog to yeast Tra1, has been shown to directly interact with c-Myc and E2F1, known oncoproteins. Cancer genomics Rapid advances in cancer genomics and high-throughput ChIP-chip, ChIP-Seq and bisulfite sequencing methods are providing more insight into the role of chromatin remodeling in transcriptional regulation and in cancer. Therapeutic intervention Epigenetic instability caused by deregulation in chromatin remodeling is studied in several cancers, including breast cancer, colorectal cancer, and pancreatic cancer. Such instability largely causes widespread silencing of genes, with a primary impact on tumor-suppressor genes. Hence, strategies are now being tried to overcome epigenetic silencing with a synergistic combination of HDAC inhibitors (HDIs) and DNA-demethylating agents. HDIs are primarily used as adjunct therapy in several cancer types. HDAC inhibitors can induce p21 (WAF1) expression, a regulator of p53's tumor suppressor activity. HDACs are involved in the pathway by which the retinoblastoma protein (pRb) suppresses cell proliferation. Estrogen is well-established as a mitogenic factor implicated in the tumorigenesis and progression of breast cancer via its binding to the estrogen receptor alpha (ERα). Recent data indicate that chromatin inactivation mediated by HDAC and DNA methylation is a critical component of ERα silencing in human breast cancer cells. Approved usage: Vorinostat was licensed by the U.S. FDA in October 2006 for the treatment of cutaneous T cell lymphoma (CTCL). Romidepsin (trade name Istodax) was licensed by the US FDA in Nov 2009 for cutaneous T-cell lymphoma (CTCL). Phase III Clinical trials: Panobinostat (LBH589) is in clinical trials for various cancers including a phase III trial for cutaneous T cell lymphoma (CTCL). Valproic acid (as Mg valproate) is in phase III trials for cervical cancer and ovarian cancer. Started pivotal phase II clinical trials: Belinostat (PXD101) has had a phase II trial for relapsed ovarian cancer, and reported good results for T cell lymphoma. Current front-runner candidates for new drug targets are Histone Lysine Methyltransferases (KMT) and Protein Arginine Methyltransferases (PRMT). Other disease syndromes ATRX-syndrome (α-thalassemia X-linked mental retardation) and α-thalassemia myelodysplasia syndrome are caused by mutations in ATRX, a SNF2-related ATPase with a PHD finger domain. CHARGE syndrome, an autosomal dominant disorder, has been linked recently to haploinsufficiency of CHD7, which encodes the CHD family ATPase CHD7. Senescence Chromatin architectural remodeling is implicated in the process of cellular senescence, which is related to, and yet distinct from, organismal aging. Replicative cellular senescence refers to a permanent cell cycle arrest where post-mitotic cells continue to exist as metabolically active cells but fail to proliferate. Senescence can arise due to age-associated degradation, telomere attrition, progerias, pre-malignancies, and other forms of damage or disease. 
Senescent cells undergo distinct repressive phenotypic changes, potentially to prevent the proliferation of damaged or cancerous cells, with modified chromatin organization, fluctuations in remodeler abundance, and changes in epigenetic modifications. Senescent cells undergo chromatin landscape modifications as constitutive heterochromatin migrates to the center of the nucleus and displaces euchromatin and facultative heterochromatin to regions at the edge of the nucleus. This disrupts chromatin-lamin interactions and inverts the pattern typically seen in a mitotically active cell. Individual Lamin-Associated Domains (LADs) and Topologically Associating Domains (TADs) are disrupted by this migration, which can affect cis interactions across the genome. Additionally, there is a general pattern of canonical histone loss, particularly in terms of the nucleosome histones H3 and H4 and the linker histone H1. Histone variants with two exons are upregulated in senescent cells to produce modified nucleosome assembly, which contributes to chromatin permissiveness to senescent changes. Although transcription of variant histone proteins may be elevated, canonical histone proteins are not expressed as they are only made during the S phase of the cell cycle and senescent cells are post-mitotic. During senescence, portions of chromosomes can be exported from the nucleus for lysosomal degradation, which results in greater organizational disarray and disruption of chromatin interactions. Chromatin remodeler abundance may be implicated in cellular senescence as knockdown or knockout of ATP-dependent remodelers such as NuRD, ACF1, and SWI/SNF can result in DNA damage and senescent phenotypes in yeast, C. elegans, mice, and human cell cultures. ACF1 and NuRD are downregulated in senescent cells, which suggests that chromatin remodeling is essential for maintaining a mitotic phenotype. Genes involved in signaling for senescence can be silenced by chromatin conformation and polycomb repressive complexes, as seen in PRC1/PRC2 silencing of p16. Specific remodeler depletion results in activation of proliferative genes through a failure to maintain silencing. Some remodelers act on enhancer regions of genes rather than the specific loci to prevent re-entry into the cell cycle by forming regions of dense heterochromatin around regulatory regions. Senescent cells undergo widespread fluctuations in epigenetic modifications in specific chromatin regions compared to mitotic cells. Human and murine cells undergoing replicative senescence experience a general global decrease in methylation; however, specific loci can differ from the general trend. Specific chromatin regions, especially those around the promoters or enhancers of proliferative loci, may exhibit elevated methylation states with an overall imbalance of repressive and activating histone modifications. Proliferative genes may show increases in the repressive mark H3K27me3 while genes involved in silencing or aberrant histone products may be enriched with the activating modification H3K4me3. Additionally, upregulating histone deacetylases, such as members of the sirtuin family, can delay senescence by removing acetyl groups that contribute to greater chromatin accessibility. General loss of methylation, combined with the addition of acetyl groups, results in a more accessible chromatin conformation with a propensity towards disorganization when compared to mitotically active cells. 
General loss of histones precludes addition of histone modifications and contributes to changes in enrichment in some chromatin regions during senescence. See also Epigenetics Histone Nucleosomes Chromatin Histone acetyltransferase Transcription factors CAF-1 (Chromatin assembly factor-1) - histone chaperone that executes a coordinating role in chromatin remodeling. References Further reading External links MBInfo - Chromatin MBInfo - DNA Packaging YouTube - Chromatin, Histones and Modifications YouTube - Epigenetics Overview Gene expression Cancer Epigenetics Nuclear organization
Chromatin remodeling
[ "Chemistry", "Biology" ]
4,840
[ "Gene expression", "Nuclear organization", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
12,198,134
https://en.wikipedia.org/wiki/Abiraterone%20acetate
Abiraterone acetate, sold under the brand name Zytiga among others, is a medication used to treat prostate cancer. Specifically it is used together with a corticosteroid for metastatic castration-resistant prostate cancer (mCRPC) and metastatic high-risk castration-sensitive prostate cancer (mCSPC). It should either be used following removal of the testicles or along with a gonadotropin-releasing hormone (GnRH) analog. It is taken by mouth. Common side effects include tiredness, vomiting, headache, joint pain, high blood pressure, swelling, low blood potassium, high blood sugar, hot flashes, diarrhea, and cough. Other severe side effects may include liver failure and adrenocortical insufficiency. In males whose partners can become pregnant, birth control is recommended. Supplied as abiraterone acetate it is converted in the body to abiraterone. Abiraterone acetate works by suppressing the production of androgens – specifically it inhibits CYP17A1 – and thereby decreases the production of testosterone. In doing so, it prevents the effects of these hormones in prostate cancer. Abiraterone acetate was described in 1995, and approved for medical use in the United States and the European Union in 2011. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. Medical uses Abiraterone acetate is used in combination with prednisone, a corticosteroid, as a treatment for mCRPC (previously called hormone-resistant or hormone-refractory prostate cancer). This is a form of prostate cancer that is not responding to first-line androgen deprivation therapy or treatment with androgen receptor antagonists. Abiraterone acetate has received Food and Drug Administration (FDA) (28 April 2011), European Medicines Agency (EMA) (23 September 2011), Medicines and Healthcare products Regulatory Agency (MHRA) (5 September 2011) and Therapeutic Goods Administration (TGA) (1 March 2012) approval for this indication. In Australia it is covered by the Pharmaceutical Benefits Scheme when being used to treat castration-resistant prostate cancer and given in combination with prednisone/prednisolone (subject to the conditions that the patient is not currently receiving chemotherapy, is either resistant or intolerant of docetaxel, has a WHO performance status of <2, and his disease has not since become progressive since treatment with PBS-subsidised abiraterone acetate has commenced). Abiraterone acetate/methylprednisolone, sold under the brand name Yonsa Mpred, is a composite package that contains both abiraterone acetate (Yonsa) and methylprednisolone. It was approved for medical use in Australia in March 2022. Contraindications Contraindications include hypersensitivity to abiraterone acetate. Although documents state that it should not be taken by women who are or who may become pregnant, there is no medical reason that any woman should take it. Women who are pregnant should not even touch the pills unless they are wearing gloves. Other cautions include severe baseline hepatic impairment, mineralocorticoid excess, cardiovascular disease including heart failure and hypertension, uncorrected hypokalemia, and adrenocorticoid insufficiency. 
Side effects Side effects by frequency: Very common (>10% frequency): Urinary tract infection Hypokalemia Hypertension Diarrhea Peripheral edema Common (1-10% frequency): Hypertriglyceridaemia Sepsis Cardiac failure Angina pectoris Arrhythmia Atrial fibrillation Tachycardia Dyspepsia (indigestion) Rash Alanine aminotransferase increased Aspartate aminotransferase increased Fractures Hematuria Uncommon (0.1-1% frequency): Adrenal insufficiency Myopathy Rhabdomyolysis Rare (<0.1% frequency): Allergic alveolitis Overdose Experience with overdose of abiraterone acetate is limited. There is no specific antidote for abiraterone acetate overdose, and treatment should consist of general supportive measures, including monitoring of cardiac and liver function. Interactions Abiraterone acetate is a CYP3A4 substrate and hence should not be administered concurrently with strong CYP3A4 inhibitors such as ketoconazole, itraconazole, clarithromycin, atazanavir, nefazodone, saquinavir, telithromycin, ritonavir, indinavir, nelfinavir, voriconazole) or inducers such as phenytoin, carbamazepine, rifampin, rifabutin, rifapentine, phenobarbital. It also inhibits CYP1A2, CYP2C9, and CYP3A4 and likewise should not be taken concurrently with substrates of any of these enzymes that have a narrow therapeutic index. Spironolactone generally exerts anti-androgenic effects, but experimental evidence exists that it acts as an androgen receptor agonist in an androgen-depleted environment, capable of inducing prostate cancer proliferation. This is supported by the observations described in several case reports. Pharmacology Pharmacodynamics Antiandrogenic activity Abiraterone, the active metabolite of abiraterone acetate, inhibits CYP17A1, which manifests as two enzymes, 17α-hydroxylase ( = 2.5 nM) and 17,20-lyase ( = 15 nM) (approximately 6-fold more selective for inhibition of 17α-hydroxylase over 17,20-lyase) that are expressed in testicular, adrenal, and prostatic tumor tissues. CYP17A1 catalyzes two sequential reactions: (a) the conversion of pregnenolone and progesterone to their 17α-hydroxy derivatives by its 17α-hydroxylase activity, and (b) the subsequent formation of dehydroepiandrosterone (DHEA) and androstenedione, respectively, by its 17,20-lyase activity. DHEA and androstenedione are androgens and precursors of testosterone. Inhibition of CYP17A1 activity by abiraterone acetate thus decreases circulating levels of androgens such as DHEA, testosterone, and dihydrotestosterone (DHT). Abiraterone acetate, via abiraterone, has the capacity to lower circulating testosterone levels to less than 1 ng/dL (i.e., undetectable) when added to castration. These concentrations are considerably lower than those achieved by castration alone (~20 ng/dL). The addition of abiraterone acetate to castration was found to reduce levels of DHT by 85%, DHEA by 97 to 98%, and androstenedione by 77 to 78% relative to castration alone. In accordance with its antiandrogenic action, abiraterone acetate decreases the weights of the prostate gland, seminal vesicles, and testes. Abiraterone also acts as a partial antagonist of the androgen receptor (AR), and as an inhibitor of the enzymes 3β-hydroxysteroid dehydrogenase (3β-HSD), CYP11B1 (steroid 11β-hydroxylase), CYP21A2 (Steroid 21-hydroxylase), and other CYP450s (e.g., CYP1A2, CYP2C9, and CYP3A4). 
In addition to abiraterone itself, part of the activity of the drug has been found to be due to a more potent active metabolite, δ4-abiraterone (D4A), which is formed from abiraterone by 3β-HSD. D4A is an inhibitor of CYP17A1, 3β-hydroxysteroid dehydrogenase/Δ5-4 isomerase, and 5α-reductase, and has also been found to act as a competitive antagonist of the AR reportedly comparable to the potent antagonist enzalutamide. However, the initial 5α-reduced metabolite of D4A, 3-keto-5α-abiraterone, is an agonist of the AR, and promotes prostate cancer progression. Its formation can be blocked by the coadministration of dutasteride, a potent and selective 5α-reductase inhibitor. Estrogenic activity There has been interest in the use of abiraterone acetate for the treatment of breast cancer due to its ability to lower estrogen levels. However, abiraterone has been found to act as a direct agonist of the estrogen receptor, and induces proliferation of human breast cancer cells in vitro. If abiraterone acetate is used in the treatment of breast cancer, it should be combined with an estrogen receptor antagonist like fulvestrant. In spite of its antiandrogenic and estrogenic properties, abiraterone acetate does not appear to produce gynecomastia as a side effect. Other activities Due to inhibition of glucocorticoid biosynthesis, abiraterone acetate can cause glucocorticoid deficiency, mineralocorticoid excess, and associated adverse effects. This is why the medication is combined with prednisone, a corticosteroid, which serves as a means of glucocorticoid replacement and prevents mineralocorticoid excess. Abiraterone acetate, along with galeterone, has been identified as an inhibitor of sulfotransferases (SULT2A1, SULT2B1b, SULT1E1), which are involved in the sulfation of DHEA and other endogenous steroids and compounds, with Ki values in the sub-micromolar range. Pharmacokinetics After oral administration, abiraterone acetate, the prodrug form in the commercial preparation, is converted into the active form, abiraterone. This conversion is likely to be esterase-mediated and not CYP-mediated. Administration with food increases absorption of the drug and thus has the potential to result in increased and highly variable exposures; the drug should be consumed on an empty stomach at least one hour before or two hours after food. The drug is highly protein bound (>99%), and is metabolized in the liver by CYP3A4 and SULT2A1 to inactive metabolites. The drug is excreted in feces (~88%) and urine (~5%), and has a terminal half-life of 12 ± 5 hours. Chemistry Abiraterone acetate, also known as 17-(3-pyridinyl)androsta-5,16-dien-3β-ol acetate, is a synthetic androstane steroid and a derivative of androstadienol (androsta-5,16-dien-3β-ol), an endogenous androstane pheromone. It is specifically a derivative of androstadienol with a pyridine ring attached at the C17 position and an acetate ester attached to the C3β hydroxyl group. Abiraterone acetate is the C3β acetate ester of abiraterone. History In the early 1990s, Mike Jarman, Elaine Barrie, and Gerry Potter of the Cancer Research UK Centre for Cancer Therapeutics in the Institute of Cancer Research in London set out to develop drug treatments for prostate cancer. With the nonsteroidal androgen synthesis inhibitor ketoconazole as a model, they developed abiraterone acetate, filing a patent in 1993 and publishing the first paper describing it the following year. 
Rights for commercialization of the drug were assigned to BTG, a UK-based specialist healthcare company. BTG then licensed the product to Cougar Biotechnology, which began development of the commercial product. In 2009, Cougar was acquired by Johnson & Johnson, which developed and sells the commercial product, and is conducting ongoing clinical trials to expand its clinical uses. Abiraterone acetate was approved by the United States Food and Drug Administration on 28 April 2011 for mCRPC. The FDA press release made reference to a phase III clinical trial in which abiraterone acetate use was associated with a median survival of 14.8 months versus 10.9 months with placebo; the study was stopped early because of the successful outcome. Abiraterone acetate was also licensed by the European Medicines Agency. Until May 2012 the National Institute for Health and Clinical Excellence (NICE) did not recommend use of the drug within the NHS on cost-effectiveness grounds. This position was reversed when the manufacturer submitted revised costs. The use is currently limited to men who have already received one docetaxel-containing chemotherapy regimen. It was subsequently approved for the treatment of mCSPC in 2018. Society and culture Names Abiraterone is the and of abiraterone acetate's major active metabolite abiraterone. Abiraterone acetate is the , , and of abiraterone acetate. It is also known by its developmental code names CB-7630 and JNJ-212082, while CB-7598 was the developmental code name of abiraterone. Abiraterone acetate is marketed by Janssen Biotech (a subsidiary of Johnson & Johnson) under the brand name Zytiga, and by Sun Pharmaceutical under the brand name Yonsa. Generic versions of abiraterone acetate have been approved in the United States. Generic versions of Yonsa are not available . In May 2019, the United States Court of Appeals for the Federal Circuit upheld a Patent Trial and Appeal Board decision invalidating a patent by Johnson & Johnson on abiraterone acetate. Intas Pharmaceuticals markets the drug under the brand name Abiratas, Cadila Pharmaceuticals markets the drug as Abretone, and Glenmark Pharmaceuticals as Abirapro. It is marketed as Yonsa by Sun Pharmaceutical Industries (licensed from Churchill Pharmaceuticals). Brand names Abiraterone acetate is marketed widely throughout the world, including in the United States, Canada, the United Kingdom, Ireland, elsewhere in Europe, Australia, New Zealand, Latin America, Asia, and Israel. Economics A generic version is available in India at a price of $238 a month . The National Centre for Pharmacoeconomics initially found abiraterone acetate to not be cost effective based on prices in 2012, however following an agreement to supply at a lower price it was accepted in 2014. A generic Zytiga version is available in India at a price of under $230 a month as of 2020. Research Abiraterone acetate is under development for the treatment of breast cancer and ovarian cancer and as of March 2018, is in phase II clinical trials for these indications. It was also under investigation for the treatment of congenital adrenal hyperplasia, but no further development has been reported for this potential use. Prostate cancer In people previously treated with docetaxel survival is increased by 3.9 months (14.8 months versus 10.9 months for placebo). 
In people with castration-refractory prostate cancer but who had not received chemotherapy those who received abiraterone acetate had a progression-free survival of 16.5 months rather than 8.3 months with placebo. After a median follow-up period of 22.2 months, overall survival was better with abiraterone acetate. Abiraterone acetate may be useful for prevention of the testosterone flare at the initiation of GnRH agonist therapy in men with prostate cancer. References 11β-Hydroxylase inhibitors 3β-Hydroxysteroid dehydrogenase inhibitors 5α-Reductase inhibitors Acetate esters Androstanes Antiestrogens Antiglucocorticoids Combination cancer drugs Conjugated dienes CYP2D6 inhibitors CYP17A1 inhibitors Hormonal antineoplastic drugs Drugs developed by Johnson & Johnson Prodrugs Prostate cancer 3-Pyridyl compounds Steroid sulfotransferase inhibitors Steroidal antiandrogens Synthetic estrogens World Health Organization essential medicines Wikipedia medicine articles ready to translate
Abiraterone acetate
[ "Chemistry" ]
3,513
[ "Chemicals in medicine", "Prodrugs" ]
12,199,703
https://en.wikipedia.org/wiki/Process%20manufacturing
Process manufacturing is a branch of manufacturing that is associated with formulas and manufacturing recipes, and can be contrasted with discrete manufacturing, which is concerned with discrete units, bills of materials and the assembly of components. Process manufacturing is also referred to as a 'process industry' which is defined as an industry, such as the chemical or petrochemical industry, that is concerned with the processing of bulk resources into other products. Process manufacturing is common in the food, beverage, chemical, pharmaceutical, nutraceutical, consumer packaged goods, cannabis, and biotechnology industries. In process manufacturing, the relevant factors are ingredients, not parts; formulas, not bills of materials; and bulk materials rather than individual units. Although there is invariably cross-over between the two branches of manufacturing, the major contents of the finished product and the majority of the resource intensity of the production process generally allow manufacturing systems to be classified as one or the other. For example, a bottle of juice is a discrete item, but juice is process manufactured. The plastic used in injection moulding is process manufactured, but the components it is shaped into are generally discrete, and subject to further assembly. Examples of process industries Bulk-drug pharmaceuticals Chemical, tire, and process industries (CTP) Cosmeceuticals and personal care Food and beverage, Food processing Nutraceuticals Paints and coatings Semiconductor fabrication Specialty chemicals Steel and aluminium processing Textiles Formulation Formulation is a simple concept, but it is often incorrectly equated with a bill of materials. Formulation specifies the ingredients and the amounts (e.g., pounds, gallons, liters) needed to make the product. The first thing to recognize is that to be able to work with a formula, the units of measure must correspond; a flexible unit of measure conversion engine running under an ERP software cover is needed. Furthermore, conversion rules must be specified to account for the unique requirements of the business in question. This formulation then needs to be scaled up to the development and then manufacturing scales, and must often be transferred and validated in different manufacturing sites around the world. The proportions of ingredients in a formula also highlight the need for another feature, namely scalability. A formula to make 500 liters of a chemical must be scalable to make 250 liters or 1,000 liters. Another aspect of scalability is that it makes possible manufacturing based on how much of an ingredient is available. An example will illustrate this point. If you are making a car and only have two of the required four tires, you cannot make half a car. In other words, you must have all the parts in the required quantities to make the finished product; they are not scalable. But in process manufacturing, if you want to make 1,000 gallons of soda and you only have 500 gallons of the required 1,000 gallons of carbonated water, you have the option of making half as much soda. In process manufacturing you can make as much of a finished product as is specified in the formula for the smallest quantity in stock of one of the ingredients. Packaging A packaging recipe is similar to a formula, but rather than describing the proportion of ingredients, it specifies how the finished product gets to its final assembly. 
A packaging recipe addresses such things as containers, labels, corrugated cartons, and shrink-wrapping. In process manufacturing, the finished product is usually produced in bulk, but is rarely delivered in bulk form to the customer. For example, the beverage manufacturer makes soda in batches of thousands of gallons. However, a consumer purchases soda in 12-ounce aluminum cans, or in 16-ounce plastic bottles, or in 1-liter bottles. And a restaurateur may have the option of getting a 5- or 50-gallon metal container with the beverage in syrup form, so that carbonated water can be added later. Why is this concept important? Compare how often Coca-Cola changes the formula for Coke with how often the packaging is changed. If the formula and packaging recipes are linked, then every time the packaging changes, the formula would need modification. Likewise, when the formula is changed, all of the packaging recipes would have to be changed. This increases maintenance costs and chances for error. In process manufacturing, the formula for making the product and the recipe for packaging the product exist in separate structures to reduce the ongoing maintenance function. There is a difference between discrete manufacturing and process manufacturing in terms of flow patterns. An example given is that discrete manufacturing follows an "A" type process and process manufacturing follows a “V” type process. In the production cycle, a work order or process order is issued to make the product in bulk. Separate pack orders are issued to signify how the bulk material is to be containerized and shipped to the customer. This is important in process industries that make “brite” stock or private labels. For example, large grocery chains sell products, such as soups, soda, and meats, under their own brand names, hence "private labels". But these chains do not have their own manufacturing plants; they contract for these products. In the case of soups, process manufacturers create and warehouse nondescript, unlabeled (hence “brite”) aluminum cans of soup. (Since the cans are filled, sealed, and then cooked under pressure, their shelf life is long.) By separating the product formula from a packaging recipe, a production or process order can be issued to make and store the cans of soup and later, when the customer is ready to order soup, a work order can be issued to label the cans according to customer specifications before they are shipped to the store. Thus segregation of the formula and pack recipe makes the world of process manufacturing efficient and effective. Process manufacturing systems and methodologies Enterprise resource planning Just like the products that they produce, discrete manufacturing and process manufacturing use different Enterprise resource planning (ERP) systems which have different focal points and solve different problems. For the same reason that the proverbial square peg does not fit in the round hole, ERP software geared toward discrete manufacturing, or even hybrid manufacturing will not work smoothly in a process manufacturing setting. With process manufacturing, the end-product is unable to be broken down to its original ingredients, for example beer or pasta sauce. Thus, the ERP software must be able to account for these intricacies in its ability to convert and transform raw materials to finished goods. 
Critical aspects such as recipe formulation, forward and backward lot traceability, handling of mixed units of measure and conversion, raw material calculations, and scalable batch tickets with revision tracking and recording of manufacturing steps and production notes are specific to process manufacturers and key functionality of process manufacturing ERP systems. An example is the SAP module, Production Planning - Process Industries (PP-PI). In Process Inspections and Statistical Process Control In process inspection for process manufacturing refers to inspection at any point in producing a product, and is also referred to as in process product verification. The objective of in process inspection is to ensure the requirements of the product are being met before they are finalized and continue to the next stage. Identifying a problem at an early stage in the production process allows for correction and preventative action to avoid wasted time and resources at the end of a production run. Statistical Process Control complements process manufacturing and in process inspections to ensure that the process operates efficiently, producing more specification-conforming products with less waste (rework or scrap). Process approach in Management Systems The process approach is one of seven quality management principles that ISO management system standards are based on, and includes establishing the organization’s processes to operate as an integrated and complete system. In Food processing, complying product has to come from a process to comply, in comparison to discrete manufacturing where a finished product is inspected to comply. An example how the process approach complements a process industry is implementation of ISO 22000 as a Food Safety Management System (FSMS). The process approach involves the systematic definition and management of processes, and their interactions, so as to achieve the intended results in accordance with the food safety policy and strategic direction of the organization. Management of the processes and the system as a whole can be achieved using the PDCA cycle, with an overall focus on risk-based thinking aimed at taking advantage of opportunities and preventing undesirable results. References Manufacturing Further reading Salimi, Fabienne; Salimi, Frederic, A Systems Approach to Managing the Complexities of Process Industries, 2017
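The statistical process control idea mentioned above can be sketched with a toy control chart. This is illustrative only: the fill volumes are invented, and a real in-process inspection would typically use moving-range based limits and the product's actual specification rather than the plain sample standard deviation used here.

```python
# Sketch: Shewhart-style individuals control limits for an in-process
# measurement (e.g. fill volume). Data values are invented for illustration.
import statistics

fill_ml = [355.2, 354.8, 355.5, 355.1, 354.9, 355.3, 355.0, 355.4, 354.7, 355.2]

mean = statistics.mean(fill_ml)
sd = statistics.stdev(fill_ml)
ucl, lcl = mean + 3 * sd, mean - 3 * sd    # classic 3-sigma limits

print(f"centre line = {mean:.2f} ml, UCL = {ucl:.2f} ml, LCL = {lcl:.2f} ml")
out_of_control = [x for x in fill_ml if not (lcl <= x <= ucl)]
print("points outside limits:", out_of_control or "none")
```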
Process manufacturing
[ "Engineering" ]
1,714
[ "Manufacturing", "Mechanical engineering" ]
12,200,213
https://en.wikipedia.org/wiki/Haj%C3%B3s%27s%20theorem
In group theory, Hajós's theorem states that if a finite abelian group is expressed as the Cartesian product of simplexes, that is, sets of the form {e, a, a², ..., a^(s−1)} where e is the identity element, then at least one of the factors is a subgroup. The theorem was proved by the Hungarian mathematician György Hajós in 1941 using group rings. Rédei later proved the statement when the factors are only required to contain the identity element and be of prime cardinality. Rédei's proof of Hajós's theorem was simplified by Tibor Szele. An equivalent statement on homogeneous linear forms was originally conjectured by Hermann Minkowski. A consequence is Minkowski's conjecture on lattice tilings, which says that in any lattice tiling of space by cubes, there are two cubes that meet face to face. Keller's conjecture is the same conjecture for non-lattice tilings, which turns out to be false in high dimensions. References Theorems in group theory Conjectures that have been proved
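As a small illustration of the statement (not part of the original article; the group and factors are chosen for convenience), the sketch below checks a factorization of the cyclic group Z_8, written additively, into the simplexes {0, 1} and {0, 2, 4, 6}: every element arises exactly once as a sum of one element from each factor, and the second factor is indeed a subgroup, consistent with the theorem.

```python
# Sketch: verify a simplex factorization of Z_8 and check which factors
# are subgroups. Illustrative example only.
from itertools import product

n = 8
A = [0, 1]          # simplex {e, a} with a = 1
B = [0, 2, 4, 6]    # simplex {e, b, 2b, 3b} with b = 2

sums = [(a + b) % n for a, b in product(A, B)]
assert sorted(sums) == list(range(n)), "not a factorization of Z_8"

def is_subgroup(S, n):
    S = set(S)
    return 0 in S and all((x + y) % n in S for x in S for y in S)

print("A subgroup?", is_subgroup(A, n))  # False
print("B subgroup?", is_subgroup(B, n))  # True, as Hajós's theorem requires of at least one factor
```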
Hajós's theorem
[ "Mathematics" ]
208
[ "Mathematical theorems", "Mathematical problems", "Conjectures that have been proved" ]
12,200,818
https://en.wikipedia.org/wiki/Dean%20number
The Dean number (De) is a dimensionless group in fluid mechanics, which occurs in the study of flow in curved pipes and channels. It is named after the British scientist W. R. Dean, who was the first to provide a theoretical solution of the fluid motion through curved pipes for laminar flow by using a perturbation procedure from a Poiseuille flow in a straight pipe to a flow in a pipe with very small curvature. Physical Context If a fluid is moving along a straight pipe that after some point becomes curved, then the flow entering a curved portion develops a centrifugal force in an asymmetrical geometry. Such asymmetricity affects the parabolic velocity profile and causes a shift in the location of the maximum velocity compared to a straight pipe. Therefore, the maximum velocity shifts from the centerline towards the concave outer wall and forms an asymmetric velocity profile. There will be an adverse pressure gradient generated from the curvature with an increase in pressure, therefore a decrease in velocity close to the convex wall, and the contrary occurring towards the concave outer wall of the pipe. This gives rise to a secondary motion superposed on the primary flow, with the fluid in the centre of the pipe being swept towards the outer side of the bend and the fluid near the pipe wall will return towards the inside of the bend. This secondary motion is expected to appear as a pair of counter-rotating cells, which are called Dean vortices. Definition The Dean number is typically denoted by De (or Dn). For a flow in a pipe or tube it is defined as: where is the density of the fluid is the dynamic viscosity is the axial velocity scale is the diameter (for non-circular geometry, an equivalent diameter is used; see Reynolds number) is the radius of curvature of the path of the channel. is the Reynolds number. The Dean number is therefore the product of the Reynolds number (based on axial flow through a pipe of diameter ) and the square root of the curvature ratio. Turbulence transition The flow is completely unidirectional for low Dean numbers (De < 40~60). As the Dean number increases between 40~60 to 64~75, some wavy perturbations can be observed in the cross-section, which evidences some secondary flow. At higher Dean numbers than that (De > 64~75) the pair of Dean vortices becomes stable, indicating a primary dynamic instability. A secondary instability appears for De > 75~200, where the vortices present undulations, twisting, and eventually merging and pair splitting. Fully turbulent flow forms for De > 400. Transition from laminar to turbulent flow has also been examined in a number of studies, even though no universal solution exists since the parameter is highly dependent on the curvature ratio. Somewhat unexpectedly, laminar flow can be maintained for larger Reynolds numbers (even by a factor of two for the highest curvature ratios studied) than for straight pipes, even though curvature is known to cause instability. Dean equations The Dean number appears in the so-called Dean equations. These are an approximation to the full Navier–Stokes equations for the steady axially uniform flow of a Newtonian fluid in a toroidal pipe, obtained by retaining just the leading order curvature effects (i.e. the leading-order equations for ). We use orthogonal coordinates with corresponding unit vectors aligned with the centre-line of the pipe at each point. The axial direction is , with being the normal in the plane of the centre-line, and the binormal. 
For an axial flow driven by a pressure gradient , the axial velocity is scaled with . The cross-stream velocities are scaled with , and cross-stream pressures with . Lengths are scaled with the tube radius . In terms of these non-dimensional variables and coordinates, the Dean equations are then where is the convective derivative. The Dean number De is the only parameter left in the system, and encapsulates the leading order curvature effects. Higher-order approximations will involve additional parameters. For weak curvature effects (small De), the Dean equations can be solved as a series expansion in De. The first correction to the leading-order axial Poiseuille flow is a pair of vortices in the cross-section carrying flow from the inside to the outside of the bend across the centre and back around the edges. This solution is stable up to a critical Dean number . For larger De, there are multiple solutions, many of which are unstable. References Further reading Dimensionless numbers of fluid mechanics Fluid dynamics
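As a worked illustration of the definition section above (a sketch: the fluid properties and pipe geometry are assumed values, and De = Re·sqrt(D/(2·Rc)) is one common convention for combining the Reynolds number with the curvature ratio, since the article's own formula is not reproduced here):

```python
# Sketch: Dean number for laminar water flow in a coiled tube.
# All numbers below are assumed for illustration, not from the article.
import math

rho = 998.0        # density of water, kg/m^3
mu = 1.0e-3        # dynamic viscosity, Pa.s
V = 0.1            # axial velocity scale, m/s
D = 0.01           # tube diameter, m
Rc = 0.10          # radius of curvature of the channel path, m

Re = rho * V * D / mu                 # Reynolds number, here about 998
De = Re * math.sqrt(D / (2.0 * Rc))   # one common convention for the Dean number

print(f"Re = {Re:.0f}, De = {De:.0f}")
# With these values De is roughly 223, well above the 40-60 range where
# wavy secondary-flow (Dean vortex) perturbations first appear.
```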
Dean number
[ "Chemistry", "Engineering" ]
936
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
12,201,130
https://en.wikipedia.org/wiki/Acid%20neutralizing%20capacity
Acid-neutralizing capacity or ANC in short is a measure for the overall buffering capacity against acidification of a solution, e.g. surface water or soil water. ANC is defined as the difference between cations of strong bases and anions of strong acids (see below), or dynamically as the amount of acid needed to change the pH value from the sample's value to a chosen different value. The concepts alkalinity are nowadays often used as a synonym to positive ANC and similarly acidity is often used to mean negative ANC. Alkalinity and acidity however also have definitions based on an experimental setup (titration). ANC is often used in models to calculate acidification levels from acid rain pollution in different geographical areas, and as a basis for calculating critical loads for forest soils and surface waters. The relation between pH and ANC in natural waters depends on three conditions: Carbon dioxide, organic acids and aluminium solubility. The amount of dissolved carbon dioxide is usually higher than would be the case if there was an equilibrium with the carbon dioxide pressure in the atmosphere. This is due to biological activity: Decomposition of organic material releases carbon dioxide and thus increases the amount of dissolved carbon dioxide. An increase in carbon dioxide decreases pH but has no effect on ANC. Organic acids, often expressed as dissolved organic carbon (DOC), also decrease pH and have no effect on ANC. Soil water in the upper layers usually have higher organic content than the lower soil layers. Surface waters with high DOC are typically found in areas where there is a lot of peat and bogs in the catchment. Aluminium solubility is a bit tricky and there are several curve fit variants used in modelling, one of the more common being: In the illustration to the right, the relation between pH and ANC is shown for four different solutions. In the blue line the solution has 1 mg/L DOC, a dissolved amount of carbon dioxide that is equivalent to a solution being in equilibrium with an atmosphere with twice the carbon dioxide pressure of our atmosphere. For the other lines, all three parameters except one is the same as for the blue line. Thus the orange line is a solution loaded with organic acids, having a DOC of 80 mg/L (typically very brown lake water or water in the top soil layer in a forest soil). The red line has a high amount of dissolved carbon dioxide (pCO2=20 times ambient), a level that is not uncommon in ground water. Finally the black dotted line is a water with a lower aluminium solubility. The reason why ANC is often defined as the difference between cations of strong bases and anions of strong acids is that ANC is derived from a charge balance: If we for simplicity consider a solution with only a few species and use the fact that a water solution is electrically neutral we get where R− denote an anion of an organic acid. ANC is then defined by collecting all species controlled by equilibrium (i.e. species related to weak acids and weak bases) on one side and species not controlled by equilibrium (i.e. species related to strong acids and strong bases) on the other side. Thus, with the species above we get or Note that a change in DOC or CO2 (or for that matter Aluminium solubility, but Aluminium solubility is not something that is easily controlled) does NOT have any effect on ANC. that once a pH-ANC relation for has been established for a lake the pH-ANC relation can be used to easily calculate the amount of limestone needed to raise lake pH to e.g. 
5.5; that not all acid lakes are acid due to human influence, since high DOC gives low pH; and that the concentrations are multiplied with the charge of the species, hence the unit mol charge per liter. References Environmental chemistry Water pollution Acid–base chemistry
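To make the charge-balance definition above concrete, here is a minimal sketch (the water composition is invented for illustration) that computes ANC as the equivalent sum of base cations minus the equivalent sum of strong-acid anions, with each concentration multiplied by the charge of the species:

```python
# Sketch: ANC from a charge balance, in milliequivalents per litre (meq/L).
# Concentrations below are hypothetical, given in mmol/L as (value, charge).
cations = {"Ca2+": (0.20, 2), "Mg2+": (0.05, 2), "Na+": (0.10, 1), "K+": (0.02, 1)}
anions  = {"SO4 2-": (0.15, 2), "Cl-": (0.10, 1), "NO3-": (0.02, 1)}

def equivalents(ions):
    # concentration (mmol/L) times charge -> mmol charge per litre
    return sum(conc * charge for conc, charge in ions.values())

anc = equivalents(cations) - equivalents(anions)
print(f"ANC = {anc:.2f} meq/L")   # positive here, so some buffering capacity remains
```

Note that carbonate species and organic acids do not appear in the sum, which is why changes in CO2 or DOC leave ANC unchanged even though they move the pH.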
Acid neutralizing capacity
[ "Chemistry", "Environmental_science" ]
790
[ "Acid–base chemistry", "Environmental chemistry", "Water pollution", "Equilibrium chemistry", "nan" ]
12,201,337
https://en.wikipedia.org/wiki/Bracket%20algebra
In mathematics, a bracket algebra is an algebraic system that connects the notion of a supersymmetry algebra with a symbolic representation of projective invariants. Given that L is a proper signed alphabet and Super[L] is the supersymmetric algebra, the bracket algebra Bracket[L] of dimension n over the field K is the quotient of the algebra Brace{L} obtained by imposing the congruence relations below, where w, w′, ..., w″ are any monomials in Super[L]: {w} = 0 if length(w) ≠ n; {w}{w′}...{w″} = 0 whenever any positive letter a of L occurs more than n times in the monomial {w}{w′}...{w″}. Let {w}{w′}...{w″} be a monomial in Brace{L} in which some positive letter a occurs more than n times, and let b, c, d, e, ..., f, g be any letters in L. See also Bracket ring References Invariant theory Algebras
Bracket algebra
[ "Physics", "Mathematics" ]
245
[ "Symmetry", "Mathematical structures", "Group actions", "Algebras", "Algebraic structures", "Invariant theory" ]
12,201,787
https://en.wikipedia.org/wiki/Hutchinson%20metric
In mathematics, the Hutchinson metric, otherwise known as the Kantorovich metric, is a function which measures "the discrepancy between two images for use in fractal image processing" and "can also be applied to describe the similarity between DNA sequences expressed as real or complex genomic signals". Formal definition Consider only nonempty, compact, and finite metric spaces. For such a space X, let P(X) denote the space of Borel probability measures on X, with δ : X → P(X) the embedding associating to each point x the point measure δ_x. The support of a measure in P(X) is the smallest closed subset of measure 1. If f : X1 → X2 is Borel measurable then the induced map f* : P(X1) → P(X2) associates to a measure μ the measure f*(μ) defined by f*(μ)(B) = μ(f⁻¹(B)) for all Borel B in X2. Then the Hutchinson metric is given by d(μ1, μ2) = sup { ∫ u dμ1 − ∫ u dμ2 }, where the sup is taken over all real-valued functions u with Lipschitz constant at most 1. Then δ is an isometric embedding of X into P(X), and if f is Lipschitz then f* is Lipschitz with the same Lipschitz constant. See also Wasserstein metric Acoustic metric Apophysis (software) Complete metric Fractal image compression Image differencing Metric tensor Multifractal system Sources and notes Metric geometry Topology
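For measures on the real line the supremum over 1-Lipschitz functions can be evaluated directly; the sketch below (an illustration, not part of the original article, with made-up support points and weights) uses SciPy's wasserstein_distance, which computes this same Kantorovich/Hutchinson distance between two discrete probability measures.

```python
# Sketch: Hutchinson (Kantorovich / 1-Wasserstein) distance between two
# discrete probability measures on the real line.
import numpy as np
from scipy.stats import wasserstein_distance

# Support points and weights of two Borel probability measures.
x1, w1 = np.array([0.0, 1.0, 2.0]), np.array([0.5, 0.25, 0.25])
x2, w2 = np.array([0.5, 1.5]),      np.array([0.5, 0.5])

d = wasserstein_distance(x1, x2, u_weights=w1, v_weights=w2)
print(f"Hutchinson/Kantorovich distance: {d:.3f}")

# For point masses delta_x and delta_y the distance reduces to |x - y|,
# i.e. the embedding x -> delta_x is an isometry, as stated above.
print(wasserstein_distance([3.0], [7.0]))   # 4.0
```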
Hutchinson metric
[ "Physics", "Mathematics" ]
233
[ "Spacetime", "Topology", "Space", "Geometry" ]
5,777,979
https://en.wikipedia.org/wiki/Bond%20albedo
The Bond albedo (also called spheric albedo, planetary albedo, and bolometric albedo), named after the American astronomer George Phillips Bond (1825–1865), who originally proposed it, is the fraction of power in the total electromagnetic radiation incident on an astronomical body that is scattered back out into space. Because the Bond albedo accounts for all of the light scattered from a body at all wavelengths and all phase angles, it is a necessary quantity for determining how much energy a body absorbs. This, in turn, is crucial for determining the equilibrium temperature of a body. Because bodies in the outer Solar System are always observed at very low phase angles from the Earth, the only reliable data for measuring their Bond albedo comes from spacecraft. Phase integral The Bond albedo (A) is related to the geometric albedo (p) by the expression A = pq, where q is termed the phase integral and is given in terms of the directional scattered flux I(α) into phase angle α (averaged over all wavelengths and azimuthal angles) as q = 2 ∫₀^π (I(α)/I(0)) sin(α) dα. The phase angle α is the angle between the source of the radiation (usually the Sun) and the observing direction, and varies from zero for light scattered back towards the source, to 180° for observations looking towards the source. For example, during opposition or looking at the full moon, α is very small, while backlit objects or the new moon have α close to 180°. Examples The Bond albedo is a value strictly between 0 and 1, as it includes all possible scattered light (but not radiation from the body itself). This is in contrast to other definitions of albedo such as the geometric albedo, which can be above 1. In general, though, the Bond albedo may be greater or smaller than the geometric albedo, depending on the surface and atmospheric properties of the body in question. Some examples: See also Albedo Geometric albedo References External links discussion of Lunar albedo Concepts in astrophysics Electromagnetic radiation Radiometry Scattering, absorption and radiative transfer (optics)
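As a numerical check of the relation A = pq (a sketch assuming an idealized Lambertian sphere, a special case not discussed in the article; the geometric albedo value is invented), the phase integral can be evaluated directly from the Lambert phase function Φ(α) = [sin α + (π − α) cos α]/π, which gives q = 3/2:

```python
# Sketch: phase integral q = 2 * integral of Phi(alpha) * sin(alpha) d(alpha)
# for an idealized Lambertian sphere, then the Bond albedo A = p * q.
import numpy as np
from scipy.integrate import quad

def lambert_phase(alpha):
    # Normalized phase function of a Lambert sphere, Phi(0) = 1.
    return (np.sin(alpha) + (np.pi - alpha) * np.cos(alpha)) / np.pi

q, _ = quad(lambda a: 2.0 * lambert_phase(a) * np.sin(a), 0.0, np.pi)
print(f"phase integral q = {q:.4f}")   # 1.5000 for a Lambert sphere

p = 0.2                                # assumed geometric albedo (illustrative)
A = p * q
print(f"Bond albedo A = p*q = {A:.2f}")
```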
Bond albedo
[ "Physics", "Chemistry", "Engineering" ]
418
[ "Physical phenomena", " absorption and radiative transfer (optics)", "Telecommunications engineering", "Concepts in astrophysics", "Electromagnetic radiation", "Astrophysics", "Scattering", "Radiation", "Radiometry" ]
5,778,255
https://en.wikipedia.org/wiki/Rabi%20frequency
The Rabi frequency is the frequency at which the probability amplitudes of two atomic energy levels fluctuate in an oscillating electromagnetic field. It is proportional to the transition dipole moment of the two levels and to the amplitude (not intensity) of the electromagnetic field. Population transfer between the levels of such a 2-level system illuminated with light exactly resonant with the difference in energy between the two levels will occur at the Rabi frequency; when the incident light is detuned from this energy difference (detuned from resonance) then the population transfer occurs at the generalized Rabi frequency. The Rabi frequency is a semiclassical concept since it treats the atom as an object with quantized energy levels and the electromagnetic field as a continuous wave. In the context of a nuclear magnetic resonance experiment, the Rabi frequency is the nutation frequency of a sample's net nuclear magnetization vector about a radio-frequency field. (Note that this is distinct from the Larmor frequency, which characterizes the precession of a transverse nuclear magnetization about a static magnetic field.) Derivation Consider two energy eigenstates of a quantum system with Hamiltonian (for example, this could be the Hamiltonian of a particle in a potential, like the Hydrogen atom or the Alkali atoms): We want to consider the time dependent Hamiltonian where is the potential of the electromagnetic field. Treating the potential as a perturbation, we can expect the eigenstates of the perturbed Hamiltonian to be some mixture of the eigenstates of the original Hamiltonian with time dependent coefficients: Plugging this into the time dependent Schrödinger equation taking the inner product with each of and , and using the orthogonality condition of eigenstates , we arrive at two equations in the coefficients and : where . The two terms in parentheses are dipole matrix elements dotted into the polarization vector of the electromagnetic field. In considering the spherically symmetric spatial eigenfunctions of the Hydrogen atom potential, the diagonal matrix elements go to zero, leaving us with or Here , where is the Rabi Frequency. Intuition In the numerator we have the transition dipole moment for the transition, whose squared amplitude represents the strength of the interaction between the electromagnetic field and the atom, and is the vector electric field amplitude, which includes the polarization. The numerator has dimensions of energy, so dividing by gives an angular frequency. By analogy with a classical dipole, it is clear that an atom with a large dipole moment will be more susceptible to perturbation by an electric field. The dot product includes a factor of , where is the angle between the polarization of the light and the transition dipole moment. When they are parallel the interaction is strongest, when they are perpendicular there is no interaction at all. If we rewrite the differential equations found above: and apply the rotating-wave approximation, which assumes that , such that we can discard the high frequency oscillating terms, we have where is called the detuning between the laser and the atomic frequencies. We can solve these equations, assuming at time the atom is in (i.e. ) to find This is the probability as a function of detuning and time of the population of state . A plot as a function of detuning and ramping the time from 0 to gives: We see that for the population will oscillate between the two states at the Rabi frequency. 
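A small numerical sketch of the result just described (not part of the original derivation; the Rabi frequency and detunings below are arbitrary illustrative values): on resonance the excited-state population oscillates fully at the Rabi frequency, while off resonance it oscillates at the generalized Rabi frequency sqrt(Ω² + Δ²) with reduced amplitude Ω²/(Ω² + Δ²).

```python
# Sketch: two-level Rabi oscillations,
# P_excited(t) = (Omega^2 / Omega_gen^2) * sin^2(Omega_gen * t / 2),
# with generalized Rabi frequency Omega_gen = sqrt(Omega^2 + Delta^2).
import numpy as np

def excited_population(t, rabi, detuning):
    gen = np.hypot(rabi, detuning)          # generalized Rabi frequency
    return (rabi / gen) ** 2 * np.sin(gen * t / 2.0) ** 2

omega = 2 * np.pi * 1.0e6                   # assumed Rabi frequency, rad/s
t = np.linspace(0, 1e-6, 5)                 # a few sample times, s

print("on resonance     :", np.round(excited_population(t, omega, 0.0), 3))
print("detuned, D = 1.73*Omega:", np.round(excited_population(t, omega, np.sqrt(3) * omega), 3))
# Off resonance the population peaks at 0.25 instead of 1 and flops twice as
# fast, matching the +/-1.73 detuning example mentioned in the article.
```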
Generalized Rabi frequency The quantity is commonly referred to as the "generalized Rabi frequency." For cases in which , Rabi flopping actually occurs at this frequency, where is the detuning, a measure of how far the light is off-resonance relative to the transition. For instance, examining the above animation at an offset frequency of ±1.73, one can see that during the 1/2 Rabi cycle (at resonance) shown during the animation, the oscillation instead undergoes one full cycle, thus at twice the (normal) Rabi frequency , just as predicted by this equation. Also note that as the incident light frequency shifts further from the transition frequency, the amplitude of the Rabi oscillation decreases, as is illustrated by the dashed envelope in the above plot. Two-Photon Rabi Frequency Coherent Rabi oscillations may also be driven by two-photon transitions. In this case we consider a system with three atomic energy levels, , , and , where is an intermediate state with corresponding frequency , and an electromagnetic field with two frequency components: A two-photon transition is not the same as excitation from the ground to intermediate state, and then out of the intermediate state to the excited state. Instead, the atom absorbs two photons simultaneously and is promoted directly between the initial and final states. The beat note of the two photons must be resonant with the two-photon transition (difference between initial and final state frequencies): Delta determines the rate of scattering off of the intermediate state. The greater it is the longer the coherence time. We may derive the two-photon Rabi frequency by returning to the equations which now describe excitation between the ground and intermediate states. We know we have the solution where is the generalized Rabi frequency for the transition from the initial to intermediate state. Similarly for the intermediate to final state transition we have the equations Now we plug into the above equation for The coefficient is proportional to: This is the two-photon Rabi frequency. It is the product of the individual Rabi frequencies for the and transitions, divided by the detuning from the intermediate state . See also Rabi cycle Vacuum Rabi oscillation Rabi resonance method References Quantum optics Atomic physics Atomic, molecular, and optical physics Optical quantities
Rabi frequency
[ "Physics", "Chemistry", "Mathematics" ]
1,181
[ "Physical quantities", "Quantum optics", "Quantity", "Quantum mechanics", "Atomic physics", " molecular", "Atomic", "Optical quantities", " and optical physics" ]
5,781,055
https://en.wikipedia.org/wiki/Mitointeractome
Mitointeractome is a mitochondrial protein interactome database. References External links Mitointeractome Molecular biology
Mitointeractome
[ "Chemistry", "Biology" ]
25
[ "Biochemistry", "Molecular biology" ]
5,782,826
https://en.wikipedia.org/wiki/Fractional%20crystallization%20%28chemistry%29
In chemistry, fractional crystallization is a stage-wise separation technique that relies on the liquid–solid phase change. This technique fractionates via differences in crystallization temperature and enables the purification of multi-component mixtures, as long as none of the constituents can act as solvents to the others. Due to the high selectivity of the solid–liquid equilibrium, very high purities can be achieved for the selected component. Principle of separation The crystallization process starts with the partial freezing of the initial liquid mixture by slowly decreasing its temperature. The frozen solid phase subsequently has a different composition than the remaining liquid. This is the fundamental physical principle behind the melt fractionating process and quite comparable to distillation, which operates between a liquid and the gas phase. The crystals will grow on a cooled surface or alternatively as a suspension in the liquid. The heat released by the solidification process is withdrawn through a cooling surface or via the liquid. In theory, 100% of the product could be solidified and recovered. In practice, various strategies such as partial melting of the solid fraction (sweating) need to be applied in order to reach high purity levels. Advantages Fractional crystallization has various advantages over other separation technologies. First of all, it makes the purification of close boilers possible. This allows for very high purities even for challenging components. Furthermore, because of the lower operating temperature, the thermal stress applied to the product is very low. This is in particular relevant for products that would otherwise oligomerize or degrade. Next, fractional crystallization is usually an inherently safe technology, because it operates at low pressures and low temperatures. Also, it does not use any solvents and is emission-free. Finally, since the latent heat of solidification is 3–6x lower than the heat of evaporation, the energy consumption is – in comparison to distillation – much lower. Process steps Fractional crystallization involves several key steps: Crystallization: This is the initial phase where the material to be purified is cooled. As it cools, high-purity crystals begin to form on the cooling surface. The purity is achieved because the impurities tend to remain in the liquid phase rather than being incorporated into the crystal structure. Draining: After the formation of the crystals, the next step is to remove the residual liquid that contains a higher concentration of impurities. This process of draining helps to separate the pure crystals from the impure liquid. Sweating: This phase is a controlled partial melting process. It further purifies the product by melting only a small portion of the crystal. The melting causes the impurities trapped within or between the crystal structures to be released and separated. Total Melting: In the final step, the remaining crystallized material, which is now the purified product, is completely melted. This total melting facilitates the removal of the pure substance from the crystallization equipment and prepares it for downstream processing. Crystallizers There are three differenct fractional crystallization technologies available: Falling-film In the falling-film crystallizer, crystals grow from a melt that forms a thin film along the inside of cooled tubes. A concurrent cooling medium flows on the outside of these tubes. 
This arrangement allows for reproducible and high transfer rates of heat, facilitating the growth of crystals from the falling film of melt. The solid–liquid separation of the resulting slurry can be accomplished using a wash column or a centrifuge. This technology is more complex than others but offers the advantage of high separation efficiency and very high purities. A typical feed has concentrations between 90 and 99%, which is purified up to 99.99 wt.-% or greater. For example, glacial acrylic acid, optical grade bisphenol-A and battery grade ethylene carbonate can be purified to their highest grade using a falling-film crystallizer. Static The static crystallizer allows crystals to grow from a stagnant melt, making it a versatile and robust technology. It can purify highly challenging products, including those with the most challenging properties, such as high viscosities and high or low melting points. Examples of applications include isopulegol, phosphoric acid, wax and paraffins, anthracene / carbazole and even satellite-grade hydrazine. Suspension In suspension crystallization, crystals are generated on a cooling surface and then scraped off to continue growing in size within a stirred vessel in suspension or slurry. The solid–liquid separation is performed either through a wash-column or a centrifuge. This method is more complex to operate, but offers the advantage of a high separation efficiency, which translates to considerable energy savings. Examples of applications include paraxylene, halogenated aromatics, and also aqueous feeds. See also Cold Water Extraction Fractional crystallization (geology) Fractional freezing Laser-heated pedestal growth Pumpable ice technology Recrystallization (chemistry) Seed crystal Single crystal References "Small Molecule Crystalization" (PDF) at Illinois Institute of Technology website "Fractional Solvent-Free Melt Crystallization" at Chemical Engineering website Sulzer Fractional Crystallization Technologies C. A. Soch, Fractional Crystallization, The Journal of Physical Chemistry 1898 2 (1), 43-50; DOI: 10.1021/j150001a002 Fractionation Phase transitions Methods of crystal growth
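A simple mass-balance sketch (with invented numbers, not taken from the article) shows how the yield and purity of a single crystallization stage are linked: once the feed, crystal, and residual-liquid compositions are known, the lever rule gives the crystallized fraction.

```python
# Sketch: single-stage crystallization mass balance (lever rule).
# Compositions are mass fractions of the desired product; values are assumed.
x_feed = 0.95       # feed purity, e.g. a crude organic melt
x_crystal = 0.999   # purity of the crystal layer after sweating
x_residue = 0.80    # purity of the drained residual melt

# Overall and component balances: F = C + R and F*x_F = C*x_C + R*x_R
crystal_fraction = (x_feed - x_residue) / (x_crystal - x_residue)
feed_mass = 1000.0  # kg of feed (assumed)
crystal_mass = crystal_fraction * feed_mass
residue_mass = feed_mass - crystal_mass

print(f"crystal yield: {crystal_fraction:.1%} of the feed")
print(f"crystal: {crystal_mass:.0f} kg at {x_crystal:.1%} purity")
print(f"residue: {residue_mass:.0f} kg at {x_residue:.0%} purity")
```

The residue can then be recycled to a further stage, which is why the technique is described above as stage-wise.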
Fractional crystallization (chemistry)
[ "Physics", "Chemistry", "Materials_science" ]
1,105
[ "Fractionation", "Physical phenomena", "Phase transitions", "Separation processes", "Methods of crystal growth", "Phases of matter", "Critical phenomena", "Crystallography", "Statistical mechanics", "Matter" ]
17,749,634
https://en.wikipedia.org/wiki/Numerical%20sign%20problem
In applied mathematics, the numerical sign problem is the problem of numerically evaluating the integral of a highly oscillatory function of a large number of variables. Numerical methods fail because of the near-cancellation of the positive and negative contributions to the integral. Each has to be integrated to very high precision in order for their difference to be obtained with useful accuracy. The sign problem is one of the major unsolved problems in the physics of many-particle systems. It often arises in calculations of the properties of a quantum mechanical system with large number of strongly interacting fermions, or in field theories involving a non-zero density of strongly interacting fermions. Overview In physics the sign problem is typically (but not exclusively) encountered in calculations of the properties of a quantum mechanical system with large number of strongly interacting fermions, or in field theories involving a non-zero density of strongly interacting fermions. Because the particles are strongly interacting, perturbation theory is inapplicable, and one is forced to use brute-force numerical methods. Because the particles are fermions, their wavefunction changes sign when any two fermions are interchanged (due to the anti-symmetry of the wave function, see Pauli principle). So unless there are cancellations arising from some symmetry of the system, the quantum-mechanical sum over all multi-particle states involves an integral over a function that is highly oscillatory, hence hard to evaluate numerically, particularly in high dimension. Since the dimension of the integral is given by the number of particles, the sign problem becomes severe in the thermodynamic limit. The field-theoretic manifestation of the sign problem is discussed below. The sign problem is one of the major unsolved problems in the physics of many-particle systems, impeding progress in many areas: Condensed matter physics — It prevents the numerical solution of systems with a high density of strongly correlated electrons, such as the Hubbard model. Nuclear physics — It prevents the ab initio calculation of properties of nuclear matter and hence limits our understanding of nuclei and neutron stars. Quantum field theory — It prevents the use of lattice QCD to predict the phases and properties of quark matter. (In lattice field theory, the problem is also known as the complex action problem.) The sign problem in field theory In a field-theory approach to multi-particle systems, the fermion density is controlled by the value of the fermion chemical potential . One evaluates the partition function by summing over all classical field configurations, weighted by , where is the action of the configuration. The sum over fermion fields can be performed analytically, and one is left with a sum over the bosonic fields (which may have been originally part of the theory, or have been produced by a Hubbard–Stratonovich transformation to make the fermion action quadratic) where represents the measure for the sum over all configurations of the bosonic fields, weighted by where is now the action of the bosonic fields, and is a matrix that encodes how the fermions were coupled to the bosons. The expectation value of an observable is therefore an average over all configurations weighted by : If is positive, then it can be interpreted as a probability measure, and can be calculated by performing the sum over field configurations numerically, using standard techniques such as Monte Carlo importance sampling. 
The sign problem arises when is non-positive. This typically occurs in theories of fermions when the fermion chemical potential is nonzero, i.e. when there is a nonzero background density of fermions. If , there is no particle–antiparticle symmetry, and , and hence the weight , is in general a complex number, so Monte Carlo importance sampling cannot be used to evaluate the integral. Reweighting procedure A field theory with a non-positive weight can be transformed to one with a positive weight by incorporating the non-positive part (sign or complex phase) of the weight into the observable. For example, one could decompose the weighting function into its modulus and phase: where is real and positive, so Note that the desired expectation value is now a ratio where the numerator and denominator are expectation values that both use a positive weighting function . However, the phase is a highly oscillatory function in the configuration space, so if one uses Monte Carlo methods to evaluate the numerator and denominator, each of them will evaluate to a very small number, whose exact value is swamped by the noise inherent in the Monte Carlo sampling process. The "badness" of the sign problem is measured by the smallness of the denominator : if it is much less than 1, then the sign problem is severe. It can be shown that where is the volume of the system, is the temperature, and is an energy density. The number of Monte Carlo sampling points needed to obtain an accurate result therefore rises exponentially as the volume of the system becomes large, and as the temperature goes to zero. The decomposition of the weighting function into modulus and phase is just one example (although it has been advocated as the optimal choice since it minimizes the variance of the denominator). In general one could write where can be any positive weighting function (for example, the weighting function of the theory). The badness of the sign problem is then measured by which again goes to zero exponentially in the large-volume limit. Methods for reducing the sign problem The sign problem is NP-hard, implying that a full and generic solution of the sign problem would also solve all problems in the complexity class NP in polynomial time. If (as is generally suspected) there are no polynomial-time solutions to NP problems (see P versus NP problem), then there is no generic solution to the sign problem. This leaves open the possibility that there may be solutions that work in specific cases, where the oscillations of the integrand have a structure that can be exploited to reduce the numerical errors. In systems with a moderate sign problem, such as field theories at a sufficiently high temperature or in a sufficiently small volume, the sign problem is not too severe and useful results can be obtained by various methods, such as more carefully tuned reweighting, analytic continuation from imaginary to real , or Taylor expansion in powers of . List: Current Approaches There are various proposals for solving systems with a severe sign problem: Contour deformation: The field space is complexified and the path integral contour is deformed from to another -dimensional manifold embedded in complex space. Meron-cluster algorithms: These achieve an exponential speed-up by decomposing the fermion world lines into clusters that contribute independently. Cluster algorithms have been developed for certain theories, but not for the Hubbard model of electrons, nor for QCD i.e. the theory of quarks. 
Stochastic quantization: The sum over configurations is obtained as the equilibrium distribution of states explored by a complex Langevin equation. So far, the algorithm has been found to evade the sign problem in test models that have a sign problem but do not involve fermions. Majorana algorithms: Using Majorana fermion representation to perform Hubbard-Stratonovich transformations can help to solve the fermion sign problem in a class of fermionic many-body models. Fixed-node Monte Carlo: One fixes the location of nodes (zeros) of the multiparticle wavefunction, and uses Monte Carlo methods to obtain an estimate of the energy of the ground state, subject to that constraint. Diagrammatic Monte Carlo: Stochastically and strategically sampling Feynman diagrams can also render the sign problem more tractable for a Monte Carlo approach which would otherwise be computationally unworkable. See also Method of stationary phase Oscillatory integral Footnotes References Statistical mechanics Numerical artifacts Unsolved problems in physics
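Returning to the reweighting procedure described above, the following toy Monte Carlo sketch (not a field theory; the oscillatory "action" is invented purely for illustration) estimates an observable as the ratio of two positive-weight averages and shows how the denominator, the average phase, shrinks as the oscillation strengthens, which is exactly when the statistical errors blow up.

```python
# Toy sketch of reweighting for a complex weight w(x) = exp(-x^2/2 + i*mu*x).
# The phase exp(i*mu*x) is moved into the observable; samples are drawn from
# the positive weight |w(x)| = exp(-x^2/2).
import numpy as np

rng = np.random.default_rng(0)

def reweighted_average(mu, n_samples=200_000):
    x = rng.normal(0.0, 1.0, n_samples)      # importance sampling of |w|
    phase = np.exp(1j * mu * x)
    observable = x**2                         # some observable O(x)
    avg_phase = phase.mean()                  # denominator <e^{i theta}>
    estimate = (observable * phase).mean() / avg_phase
    return estimate.real, abs(avg_phase)

# Exact answer for this toy weight is <O> = 1 - mu^2.
for mu in (0.5, 2.0, 4.0):
    o, s = reweighted_average(mu)
    print(f"mu = {mu}: <O> estimate {o:+.3f}, |average phase| {s:.2e}")
# The average phase falls roughly like exp(-mu^2/2), so the mu = 4 estimate
# is already dominated by noise at this sample size: a toy version of the
# exponential cost in volume and inverse temperature discussed above.
```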
Numerical sign problem
[ "Physics" ]
1,633
[ "Statistical mechanics", "Unsolved problems in physics" ]
17,755,916
https://en.wikipedia.org/wiki/Xiaoliang%20Sunney%20Xie
Xiaoliang Sunney Xie (; born 24 June 1962) is a Chinese biophysicist well known for his contributions to the fields of single-molecule biophysical chemistry, coherent Raman Imaging and single-molecule genomics. In 2023, Xie renounced his U.S. citizenship in order to reclaim his Chinese citizenship. Early life Xie was born in Beijing in 1962 with ancestral roots in Hepu County, Guangxi. He received his B.Sc. in chemistry from Peking University in 1984, and his Ph.D. in physical chemistry in 1990 from University of California at San Diego. After a brief postdoctoral appointment at University of Chicago, he joined Pacific Northwest National Laboratory, where he rose from senior research scientist to chief scientist. In 1998, he became the first tenured professor recruited by Harvard University among Chinese scholars who came to the United States since Chinese economic reform. Research He had been the Mallinckrodt Professor of Chemistry and Chemical Biology at Harvard University until 2018, when he became the Lee Shau-kee Professor of Peking University. He was the Director of Biomedical Pioneering Innovation Center (BIOPIC) in 2010-2021, and the Director of Beijing Advanced Innovation Center for Genomics (ICG) in 2016-2021, both at Peking University. As a pioneer of single-molecule biophysical chemistry, Coherent Raman scattering microscopy, and single-cell genomics, he made major contributions to the emergence of these fields. Furthermore, he has made significant advances on medical applications of label-free optical imaging and single-cell genomics. In particular, his inventions in single-cell genomics have been used in in vitro fertilization benefited thousands of families by avoiding the transmission of monogenic diseases to their newborns. More than fifty of his students and post-doctorates have become professors at major universities around the world, and two are co-founders of start-up companies. Professor Xie’s current research interests include the following scientific, technological, and medical areas: Scientific: Single-molecule enzymology, Single-molecule biophysical chemistry, Gene expression and regulation, Epigenetics, Mechanism of cell differentiation and reprogramming, Chromosome structure and dynamics, and Genomic instability; Technological: Single-molecule imaging, Single-cell genomics, Coherent Raman scattering microscopy, DNA sequencing; Medical: Preimplantation genetic testing in in vitro fertilization, COVID19 vaccine and neutralizing antibody drugs for SARS-CoV-2 variants and Early cancer diagnosis. Honors and awards 2017: Qiu Shi Outstanding Scientist Award, Qiu Shi Science & Technologies Foundation 2017: Foreign member of the Chinese Academy of Sciences (As of 2023, member of the CAS after Chinese citizenship reclaimed) 2016: Member of the National Academy of Medicine 2015: Albany Medical Center Prize 2015: Peter Debye Award in Physical Chemistry, American Chemical Society 2014: Fellow of the Optical Society of America 2013: NIH Director's Pioneer Award 2013: Ellis R. Lippincott Award, Optical Society of America and Society for Applied Spectroscopy 2012: Edward Mack, Jr. 
Lecture, OSU 2012: Harrison Howe Award, Rochester Section of the American Chemical Society 2012: Fellow of the American Academy of Microbiology 2012: Biophysical Society Founders Award 2011: Member of the National Academy of Sciences 2009: Ernest Orlando Lawrence Award 2008: Fellow of the American Physical Society 2008: Berthold Leibinger Zukunftspreis for Applied Laser Technology 2008: Fellow of the American Academy of Arts and Sciences 2007: Willis E. Lamb Award for Laser Sciences and Quantum Optics 2006: Fellow of Biophysical Society 2006: Fellow of the American Association for the Advancement of Science 2004: NIH Director's Pioneer Award 2003: Raymond and Beverly Sackler Prize in the Physical Sciences 1996: Coblentz Award Selected Literature COVID-19 Research Single-Cell Genomics Gene Expression and Regulation Single Molecule Enzymology Coherent Raman Imaging Single Molecule Imaging See also Cho Minhaeng References External links Homepage of the Xie Group at Harvard 1962 births Living people Harvard University faculty Peking University alumni University of California, San Diego alumni University of Chicago alumni Fellows of the American Association for the Advancement of Science Fellows of the American Academy of Arts and Sciences Spectroscopists Members of the United States National Academy of Sciences Scientists from Beijing Chinese emigrants to the United States Foreign members of the Chinese Academy of Sciences Members of the National Academy of Medicine Members of the Standing Committee of the 14th Chinese People's Political Consultative Conference
Xiaoliang Sunney Xie
[ "Physics", "Chemistry" ]
921
[ "Physical chemists", "Spectrum (physical sciences)", "Analytical chemists", "Spectroscopists", "Spectroscopy" ]
9,058,508
https://en.wikipedia.org/wiki/Seconds%20pendulum
A seconds pendulum is a pendulum whose period is precisely two seconds; one second for a swing in one direction and one second for the return swing, a frequency of 0.5 Hz. Pendulum A pendulum is a weight suspended from a pivot so that it can swing freely. When a pendulum is displaced sideways from its resting equilibrium position, it is subject to a restoring force due to gravity that will accelerate it back toward the equilibrium position. When released, the restoring force combined with the pendulum's mass causes it to oscillate about the equilibrium position, swinging back and forth. The time for one complete cycle, a left swing and a right swing, is called the period. The period depends on the length of the pendulum, and also to a slight degree on its weight distribution (the moment of inertia about its own center of mass) and the amplitude (width) of the pendulum's swing. For a point mass on a weightless string of length L swinging with an infinitesimally small amplitude, without resistance, the length of the string of a seconds pendulum is equal to L = g/π² where g is the acceleration due to gravity, with units of length per second squared, and L is the length of the string in the same units. Using the SI recommended acceleration due to gravity of g0 = 9.80665 m/s2, the length of the string will be approximately 993.6 millimetres, i.e. less than a centimetre short of one metre everywhere on Earth. This is because the value of g, expressed in m/s2, is very close to π². Defining the second The pendulum clock was invented in 1656 by Dutch scientist and inventor Christiaan Huygens, and patented the following year. Huygens contracted the construction of his clock designs to clockmaker Salomon Coster, who actually built the clock. Huygens was inspired by investigations of pendulums by Galileo Galilei beginning around 1602. Galileo discovered the key property that makes pendulums useful timekeepers: isochronism, which means that the period of swing of a pendulum is approximately the same for different sized swings. Galileo had the idea for a pendulum clock in 1637, which was partly constructed by his son in 1649, but neither lived to finish it. The introduction of the pendulum, the first harmonic oscillator used in timekeeping, increased the accuracy of clocks enormously, from about 15 minutes per day to 15 seconds per day leading to their rapid spread as existing 'verge and foliot' clocks were retrofitted with pendulums. These early clocks, due to their verge escapements, had wide pendulum swings of 80–100°. In his 1673 analysis of pendulums, Horologium Oscillatorium, Huygens showed that wide swings made the pendulum inaccurate, causing its period, and thus the rate of the clock, to vary with unavoidable variations in the driving force provided by the movement. Clockmakers' realisation that only pendulums with small swings of a few degrees are isochronous motivated the invention of the anchor escapement around 1670, which reduced the pendulum's swing to 4–6°. The anchor became the standard escapement used in pendulum clocks. In addition to increased accuracy, the anchor's narrow pendulum swing allowed the clock's case to accommodate longer, slower pendulums, which needed less power and caused less wear on the movement. The seconds pendulum (also called the Royal pendulum), 0.994 m (39.1 in) long, in which each swing takes one second, became widely used in quality clocks.
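As a quick numerical check of the small-amplitude formula above (a sketch that uses standard gravity rather than any particular local value), solving T = 2π·sqrt(L/g) for a period of exactly two seconds reproduces the roughly 993.6 mm length quoted in the text:

```python
# Sketch: length of an ideal seconds pendulum from T = 2*pi*sqrt(L/g).
import math

g = 9.80665          # standard gravity, m/s^2
T = 2.0              # period of a seconds pendulum, s

L = g * (T / (2 * math.pi)) ** 2     # equivalently L = g / pi**2 for T = 2 s
print(f"L = {L * 1000:.1f} mm")      # about 993.6 mm

# Local gravity varies with latitude, so the true seconds-pendulum length
# differs slightly from place to place, as the article goes on to describe.
```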
The long narrow clocks built around these pendulums, first made by William Clement around 1680, became known as grandfather clocks. The increased accuracy resulting from these developments caused the minute hand, previously rare, to be added to clock faces beginning around 1690. The 18th- and 19th-century wave of horological innovation that followed the invention of the pendulum brought many improvements to pendulum clocks. The deadbeat escapement invented in 1675 by Richard Towneley and popularised by George Graham around 1715 in his precision "regulator" clocks gradually replaced the anchor escapement and is now used in most modern pendulum clocks. The observation that pendulum clocks slowed down in summer brought the realisation that thermal expansion and contraction of the pendulum rod with changes in temperature was a source of error. This was solved by the invention of temperature-compensated pendulums; the mercury pendulum by George Graham in 1721 and the gridiron pendulum by John Harrison in 1726. With these improvements, by the mid-18th century precision pendulum clocks achieved accuracies of a few seconds per week. At the time the second was defined as a fraction of the Earth's rotation time or mean solar day and determined by clocks whose precision was checked by astronomical observations. Solar time is a calculation of the passage of time based on the position of the Sun in the sky. The fundamental unit of solar time is the day. Two types of solar time are apparent solar time (sundial time) and mean solar time (clock time). Mean solar time is the hour angle of the mean Sun plus 12 hours. This 12 hour offset comes from the decision to make each day start at midnight for civil purposes whereas the hour angle or the mean sun is measured from the zenith (noon). The duration of daylight varies during the year but the length of a mean solar day is nearly constant, unlike that of an apparent solar day. An apparent solar day can be 20 seconds shorter or 30 seconds longer than a mean solar day. Long or short days occur in succession, so the difference builds up until mean time is ahead of apparent time by about 14 minutes near February 6 and behind apparent time by about 16 minutes near November 3. The equation of time is this difference, which is cyclical and does not accumulate from year to year. Mean time follows the mean sun. Jean Meeus describes the mean sun as follows: "Consider a first fictitious Sun travelling along the ecliptic with a constant speed and coinciding with the true sun at the perigee and apogee (when the Earth is in perihelion and aphelion, respectively). Then consider a second fictitious Sun travelling along the celestial equator at a constant speed and coinciding with the first fictitious Sun at the equinoxes. This second fictitious sun is the mean Sun..." In 1936 French and German astronomers found that Earth's rotation speed is irregular. Since 1967 atomic clocks define the second. Usage in metrology The length of a seconds pendulum was determined (in toises) by Marin Mersenne in 1644. In 1660, the Royal Society proposed that it be the standard unit of length. In 1671 Jean Picard measured this length at the Paris observatory. He found the value of 440.5 lignes of the Toise of Châtelet which had been recently renewed. He proposed a universal toise (French: Toise universelle) which was twice the length of the seconds pendulum. 
However, it was soon discovered that the length of a seconds pendulum varies from place to place: French astronomer Jean Richer had measured the 0.3% difference in length between Cayenne (in what is now French Guiana) and Paris. Relationship to the figure of the Earth Jean Richer and Giovanni Domenico Cassini measured the parallax of Mars between Paris and Cayenne in French Guiana when Mars was at its closest to Earth in 1672. They arrived at a figure for the solar parallax of 9.5 arcseconds, equivalent to an Earth–Sun distance of about 22000 Earth radii. They were also the first astronomers to have access to an accurate and reliable value for the radius of Earth, which had been measured by their colleague Jean Picard in 1669 as 3269 thousand toises. Picard's geodetic observations had been confined to the determination of the magnitude of the Earth considered as a sphere, but the discovery made by Jean Richer turned the attention of mathematicians to its deviation from a spherical form. Christiaan Huygens found out the centrifugal force which explained variations of gravitational acceleration depending on latitude. He also discovered that the seconds pendulum length was a means to measure gravitational acceleration. In the 18th century, in addition of its significance for cartography, geodesy grew in importance as a means of empirically demonstrating the theory of gravity, which Émilie du Châtelet promoted in France in combination with Leibniz's mathematical work and because the radius of the Earth was the unit to which all celestial distances were to be referred. Indeed, Earth proved to be an oblate spheroid through geodetic surveys in Ecuador and Lapland and this new data called into question the value of Earth radius as Picard had calculated it. The English physicist Sir Isaac Newton, who used Picard's Earth measurement for establishing his law of universal gravity, explained this variation of the seconds pendulum's length in his Principia Mathematica (1687) in which he outlined his theory and calculations on the shape of the Earth. Newton theorised correctly that the Earth was not precisely a sphere but had an oblate ellipsoidal shape, slightly flattened at the poles due to the centrifugal force of its rotation. Since the surface of the Earth is closer to its centre at the poles than at the equator, gravity is stronger there. Using geometric calculations, he gave a concrete argument as to the hypothetical ellipsoid shape of the Earth. The goal of Principia was not to provide exact answers for natural phenomena, but to theorise potential solutions to these unresolved factors in science. Newton pushed for scientists to look further into the unexplained variables. Two prominent researchers whom he inspired were Alexis Clairaut and Pierre Louis Maupertuis. They both sought to prove the validity of Newton's theory on the shape of the Earth. In order to do so, they went on an expedition to Lapland in an attempt to accurately measure the meridian arc. From such measurements they could calculate the eccentricity of the Earth, its degree of departure from a perfect sphere. Clairaut confirmed that Newton's theory that the Earth was ellipsoidal was correct, but his calculations were in error; he wrote a letter to the Royal Society of London with his findings. The society published an article in Philosophical Transactions the following year in 1737 that revealed his discovery. Clairaut showed how Newton's equations were incorrect, and did not prove an ellipsoid shape to the Earth. 
However, he corrected problems with the theory, that in effect would prove Newton's theory correct. Clairaut believed that Newton had reasons for choosing the shape that he did, but he did not support it in Principia. Clairaut's article did not provide a valid equation to back up his argument. This created much controversy in the scientific community. It was not until Clairaut wrote Théorie de la figure de la terre in 1743 that a proper answer was provided. In it, he promulgated what is more formally known today as Clairaut's theorem. By applying Clairaut's theorem, Laplace found from 15 gravity values that the flattening of the Earth was . A modern estimate is . In 1790, one year before the metre was ultimately based on a quadrant of the Earth, Talleyrand proposed that the metre be the length of the seconds pendulum at a latitude of 45°. This option, with one-third of this length defining the foot, was also considered by Thomas Jefferson and others for redefining the yard in the United States shortly after gaining independence from the British Crown. Instead of the seconds pendulum method, the commission of the French Academy of Sciences – whose members included Lagrange, Laplace, Monge and Condorcet – decided that the new measure should be equal to one ten-millionth of the distance from the North Pole to the Equator (the quadrant of the Earth's circumference), measured along the meridian passing through Paris. Apart from the obvious consideration of safe access for French surveyors, the Paris meridian was also a sound choice for scientific reasons: a portion of the quadrant from Dunkirk to Barcelona (about 1000 km, or one-tenth of the total) could be surveyed with start- and end-points at sea level, and that portion was roughly in the middle of the quadrant, where the effects of the Earth's oblateness were expected to be the largest. The Spanish-French geodetic mission combined with an earlier measurement of the Paris meridian arc and the Lapland geodetic mission had confirmed that the Earth was an oblate spheroid. Moreover, observations were made with a pendulum to determine the local acceleration due to local gravity and centrifugal acceleration; and these observations coincided with the geodetic results in proving that the Earth is flattened at the poles. The acceleration of a body near the surface of the Earth, which is measured with the seconds pendulum, is due to the combined effects of local gravity and centrifugal acceleration. The gravity diminishes with the distance from the center of the Earth while the centrifugal force augments with the distance from the axis of the Earth's rotation, it follows that the resulting acceleration towards the ground is 0.5% greater at the poles than at the Equator and that the polar diameter of the Earth is smaller than its equatorial diameter. The Academy of Sciences planned to infer the flattening of the Earth from the length's differences between meridional portions corresponding to one degree of latitude. Pierre Méchain and Jean-Baptiste Delambre combined their measurements with the results of the Spanish-French geodetic mission and found a value of 1/334 for the Earth's flattening, and they then extrapolated from their measurement of the Paris meridian arc between Dunkirk and Barcelona the distance from the North Pole to the Equator which was 5 130 740 toises. As the metre had to be equal to one ten-millionth of this distance, it was defined as 0.513074 toise or 3 feet and 11.296 lines of the Toise of Peru. 
The Toise of Peru had been constructed in 1735 as the standard of reference in the Spanish-French Geodesic Mission, conducted in actual Ecuador from 1735 to 1744. Jean-Baptiste Biot and François Arago published in 1821 their observations completing those of Delambre and Mechain. It was an account of the length's variation of the degrees of latitude along the Paris meridian as well as the account of the variation of the seconds pendulum's length along the same meridian between Shetland and the Baleares. The seconds pendulum's length is a mean to measure g, the local acceleration due to local gravity and centrifugal acceleration, which varies depending on one's position on Earth (see Earth's gravity). The task of surveying the Paris meridian arc took more than six years (1792–1798). The technical difficulties were not the only problems the surveyors had to face in the convulsed period of the aftermath of the French Revolution: Méchain and Delambre, and later Arago, were imprisoned several times during their surveys, and Méchain died in 1804 of yellow fever, which he contracted while trying to improve his original results in northern Spain. In the meantime, the commission of the French Academy of Sciences calculated a provisional value from older surveys of 443.44 lignes. This value was set by legislation on 7 April 1795. While Méchain and Delambre were completing their survey, the commission had ordered a series of platinum bars to be made based on the provisional metre. When the final result was known, the bar whose length was closest to the meridional definition of the metre was selected and placed in the National Archives on 22 June 1799 (4 messidor An VII in the Republican calendar) as a permanent record of the result. This standard metre bar became known as the Committee metre (French : Mètre des Archives). See also Pendulum (mechanics) Kater's pendulum Metre Convention Notes References Units of time Units of length Timekeeping components Pendulums
Seconds pendulum
[ "Physics", "Mathematics", "Technology" ]
3,329
[ "Physical quantities", "Time", "Units of length", "Units of time", "Quantity", "Timekeeping components", "Spacetime", "Components", "Units of measurement" ]
9,065,428
https://en.wikipedia.org/wiki/Acoustic%20suspension
Acoustic suspension is a loudspeaker cabinet design that uses one or more loudspeaker drivers mounted in a sealed box. Acoustic suspension systems reduce bass distortion which can be caused by stiff suspensions required on drivers used for open cabinet designs. A compact acoustic suspension loudspeaker was described in 1954 by Edgar Villchur, and it was brought to commercial production by Villchur and Henry Kloss with the founding of Acoustic Research in Cambridge, Massachusetts. In 1960, Villchur reiterated that: The first aim of the acoustic suspension design, over and above uniformity of frequency response, compactness, and extension of response into the low-bass range, is to reduce significantly the level of bass distortion that had previously been tolerated in loudspeakers. This is accomplished by substituting an air-spring for a mechanical one. Subsequently, the theory of closed-box loudspeakers was extensively described by Richard H. Small. Speaker cabinets with acoustic suspension can provide well-controlled bass response, especially in comparison with an equivalently-sized speaker enclosure that has a bass reflex port or vent. The bass vent boosts low-frequency output, but with the tradeoff of introducing phase delay and accuracy problems in reproducing transient signals. Sealed boxes are generally less efficient than a bass-reflex cabinet for the same low-frequency cut-off and cabinet volume, so a sealed-box speaker cabinet will need more electrical power to deliver the same amount of acoustic low-frequency bass output. Theory The acoustic suspension woofer uses the elastic cushion of air within a sealed enclosure to provide the restoring force for the woofer diaphragm. The cushion of air acts like a compression spring. This is in contrast to the stiff physical suspension built into the driver of conventional speakers. Because the air in the cabinet serves to control the woofer's excursion, the physical stiffness of the driver can be reduced. The air suspension provides a more linear restoring force for the woofer's diaphragm, enabling it to oscillate a greater distance (excursion) in a linear fashion. This is a requirement for low distortion and loud reproduction of deep bass by drivers with relatively small cones. Even though acoustic suspension cabinets are often called sealed box designs, they are not entirely airtight. A small amount of airflow must be allowed so that the speaker can adjust to changes in atmospheric pressure. A semi-porous cone surround allows enough air movement for this purpose. Most Acoustic Research designs used a PVA sealer on the foam surrounds to enable a longer component life and enhance performance. The venting was via the cloth spider and cloth dust caps, and not so much through the cone surround. Acoustic suspension woofers remain popular in hi-fi systems due to their low distortion. They also have lower group delay at low frequencies compared to bass reflex designs, resulting in better transient response. However, the audibility of this benefit is somewhat contested. As noted by Small, an analysis performed by Thiele suggested that the differences among correctly adjusted systems of both types are likely to be inaudible. In the 2000s, most subwoofers, bass amplifier cabinets and sound reinforcement system speaker cabinets use bass reflex ports, rather than a sealed-box design, in order to obtain more extended low-frequency response and to get higher sound pressure level (SPL). 
The speaker enclosure designers and their customers view the risk of increased distortion and phase delay as an acceptable price to pay for increased bass output and higher maximum SPL. Acoustic performance The two most common types of speaker enclosure are acoustic suspension (sometimes called pneumatic suspension) and bass reflex. In both cases, the tuning affects the lower end of the driver's response, but above a certain frequency, the driver itself becomes the dominant factor and the size of the enclosure and ports (if any) become irrelevant. In general, acoustic suspension systems (driver plus enclosure) have a second-order acoustic (12 dB/octave) roll-off below the −3 dB point. Bass reflex designs have a fourth-order acoustic roll-off (24 dB/octave). Given a driver that is suitable for either type of enclosure, the ideal bass reflex cabinet will be larger and have a lower −3 dB point, but both systems will have equal voltage sensitivity in the passband. A simulation of the low-frequency response of a typical 5" mid-woofer, the FaitalPRO 5FE120, generated using WinISD for ideal sealed (yellow) and ported (cyan) enclosure configurations, illustrates the tradeoff. The ported version adds about an octave of bass extension, dropping the −3 dB point from 100 Hz to 50 Hz, but the tradeoff is that the cabinet size is more than twice as large, 8 litres of interior space versus 3.8 litres. It is also worth noting that above 200 Hz the simulations converge and there is no difference in output, and below 32 Hz the sealed enclosure produces more low-frequency output. Small presented the physical efficiency-bandwidth-volume limitation of closed-box system design. By considering the variation in the reference efficiency of the driver operating in the system enclosure, the relationship of maximum reference efficiency to cut-off frequency and enclosure volume for closed-box loudspeaker systems was determined. Subsequently, Small derived a similar relationship for vented-box loudspeaker systems. When Small compared these two sets of results, they revealed that the closed-box system has a maximum theoretical value of reference efficiency that is 2.9 dB lower than that of the vented-box system. This suggests that an acoustic suspension loudspeaker with the same enclosure volume and low-frequency −3 dB cut-off as a vented-box system will be up to 2.9 dB less sensitive than its counterpart. If the reference efficiency and cut-off frequency of the two systems are the same, then the enclosure volume of the acoustic suspension loudspeaker will be approximately twice as large as that of the vented system. In multi-driver speakers While boxed hi-fi speakers are often described as being acoustic suspension or ported (bass reflex), depending on the absence or presence of a port tube or vent, it is also true that, in typical box speakers with more than two drivers, the midrange drivers between the woofer and tweeter are usually designed as acoustic suspension, with a separate, sealed air-space, even if the woofer itself is not. However, one notable exception to this was the Sonus Faber Stradivari Homage, which used a ported enclosure for the midrange. See also Passive radiator (speaker) Transmission line loudspeaker References Acoustics Loudspeaker technology
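The closed-box behaviour discussed in the Acoustic performance section above (system resonance and Q rising as the box shrinks, second-order roll-off below the −3 dB point) can be sketched with the standard Thiele/Small closed-box relations. This is a simplified model, not the WinISD simulation referenced above, and the driver parameters are illustrative placeholders rather than any specific datasheet values.

# Minimal sketch of sealed-box (acoustic suspension) alignment math, assuming the
# standard Thiele/Small closed-box relations:
#   fc  = fs * sqrt(1 + Vas/Vb)    (system resonance)
#   Qtc = Qts * sqrt(1 + Vas/Vb)   (total system Q)
# and the second-order high-pass magnitude |H| = x^2 / sqrt((x^2 - 1)^2 + (x/Qtc)^2), x = f/fc.
# Driver parameters below are illustrative placeholders, not a particular driver's datasheet.
import math

def closed_box(fs, qts, vas_litres, vb_litres):
    alpha = vas_litres / vb_litres
    return fs * math.sqrt(1.0 + alpha), qts * math.sqrt(1.0 + alpha)

def response_db(f, fc, qtc):
    x = f / fc
    mag = x**2 / math.sqrt((x**2 - 1.0)**2 + (x / qtc)**2)
    return 20.0 * math.log10(mag)

fs, qts, vas = 55.0, 0.40, 10.0        # hypothetical driver: resonance (Hz), total Q, compliance volume (L)
for vb in (12.0, 6.0, 3.0):            # shrinking the sealed box raises both fc and Qtc
    fc, qtc = closed_box(fs, qts, vas, vb)
    print(f"Vb = {vb:4.1f} L -> fc = {fc:5.1f} Hz, Qtc = {qtc:4.2f}, "
          f"output at 50 Hz = {response_db(50.0, fc, qtc):6.1f} dB")

The 12 dB/octave slope mentioned above appears directly in the second-order high-pass term; a vented box would add another second-order factor, giving the steeper 24 dB/octave roll-off.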
Acoustic suspension
[ "Physics" ]
1,378
[ "Classical mechanics", "Acoustics" ]
9,068,275
https://en.wikipedia.org/wiki/Extinction%20risk%20from%20climate%20change
There are several plausible pathways that could lead to extinction from climate change. Every plant and animal species has evolved to exist within a certain ecological niche. But climate change leads to changes of temperature and average weather patterns. These changes can push climatic conditions outside of the species' niche, and ultimately render it extinct. Normally, species faced with changing conditions can either adapt in place through microevolution or move to another habitat with suitable conditions. However, recent climate change is occurring very rapidly. Due to this rapid change, cold-blooded animals, for example (a category which includes amphibians, reptiles and all invertebrates), may struggle to find a suitable habitat within 50 km of their current location at the end of this century (for a mid-range scenario of future global warming). Climate change also increases both the frequency and intensity of extreme weather events, which can directly wipe out regional populations of species. Those species occupying coastal and low-lying island habitats can also become extinct due to sea level rise. This has already happened with the Bramble Cay melomys in Australia. Finally, climate change has been linked with the increased prevalence and global spread of certain diseases affecting wildlife. This includes Batrachochytrium dendrobatidis, a fungus that is one of the main drivers of the worldwide decline in amphibian populations. So far, climate change has not yet been a major contributor to the ongoing Holocene extinction. In fact, nearly all of the irreversible biodiversity loss to date has been caused by other anthropogenic pressures such as habitat destruction. Yet, its effects are certain to become more prevalent in the future. As of 2021, 19% of species on the IUCN Red List of Threatened Species are already being impacted by climate change. Out of 4000 species analyzed by the IPCC Sixth Assessment Report, half were found to have shifted their distribution to higher latitudes or elevations in response to climate change. According to the IUCN, once a species has lost over half of its geographic range, it is classified as "endangered", which is considered equivalent to a >20% likelihood of extinction over the next 10–100 years. If it loses 80% or more of its range, it is considered "critically endangered", and has a very high (over 50%) likelihood of going extinct over the next 10–100 years. The IPCC Sixth Assessment Report projected that in the future, 9%-14% of the species assessed would be at a very high risk of extinction under 1.5 °C of global warming over the preindustrial levels, and more warming means more widespread risk, with 3 °C placing 12%-29% at very high risk, and 5 °C 15%-48%. In particular, at the higher warming levels, 15% of invertebrates (including 12% of pollinators), 11% of amphibians and 10% of flowering plants would be at a very high risk of extinction, while ~49% of insects, 44% of plants, and 26% of vertebrates would be at a high risk of extinction. In contrast, even the more modest Paris Agreement goal of limiting warming to 2 °C reduces the fraction of invertebrates, amphibians and flowering plants at a very high risk of extinction to below 3%. However, while the more ambitious 1.5 °C goal dramatically cuts the proportion of insects, vertebrates, and plants at high risk of extinction to 6%, 4% and 8%, the less ambitious target triples (to 18%) and doubles (to 8% and 16%) the proportion of the respective species at risk. 
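The range-loss thresholds quoted above can be summarized as a small helper. This is only a paraphrase of the thresholds as this article states them, not an implementation of the actual IUCN Red List criteria, and the function name is illustrative.

# Minimal sketch of the range-loss thresholds described above: losing more than half of a
# species' geographic range maps to "endangered" (>20% extinction likelihood over 10-100 years),
# and losing 80% or more maps to "critically endangered" (>50% likelihood).
# This paraphrases the article's description, not the full IUCN Red List criteria.
def category_from_range_loss(fraction_lost):
    if not 0.0 <= fraction_lost <= 1.0:
        raise ValueError("fraction_lost must be between 0 and 1")
    if fraction_lost >= 0.8:
        return "critically endangered (>50% likelihood of extinction over 10-100 years)"
    if fraction_lost > 0.5:
        return "endangered (>20% likelihood of extinction over 10-100 years)"
    return "below these range-loss thresholds"

for loss in (0.3, 0.6, 0.85):
    print(f"{loss:.0%} of range lost -> {category_from_range_loss(loss)}")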
Causes Climate change has already adversely affected marine and terrestrial ecoregions, including tundras, mangroves, coral reefs, and caves. Consequently, increasing global temperatures have already been pushing some species out of their habitats for decades. When the IPCC Fourth Assessment Report was published in 2007, expert assessments concluded that over the last three decades, human-induced warming had likely had a discernible influence on many physical and biological systems, and that regional temperature trends had already affected species and ecosystems around the world. By the time of the Sixth Assessment Report, it was found that for all species for which long-term records are available, half have shifted their ranges poleward (and/or upward for mountain species), while two-thirds have had their spring events occur earlier. Many of the species at risk are Arctic and Antarctic fauna such as polar bears In the Arctic, the waters of Hudson Bay are ice-free for three weeks longer than they were thirty years ago, affecting polar bears, which prefer to hunt on sea ice. Species that rely on cold weather conditions such as gyrfalcons, and snowy owls that prey on lemmings that use the cold winter to their advantage may be negatively affected. Climate change is also leading to a mismatch between the snow camouflage of arctic animals such as snowshoe hares with the increasingly snow-free landscape. Then, many species of freshwater and saltwater plants and animals are dependent on glacier-fed waters to ensure a cold water habitat that they have adapted to. Some species of freshwater fish need cold water to survive and to reproduce, and this is especially true with salmon and cutthroat trout. Reduced glacier runoff can lead to insufficient stream flow to allow these species to thrive. Ocean krill, a cornerstone species, prefer cold water and are the primary food source for aquatic mammals such as the blue whale. Marine invertebrates achieve peak growth at the temperatures they have adapted to, and cold-blooded animals found at high latitudes and altitudes generally grow faster to compensate for the short growing season. Warmer-than-ideal conditions result in higher metabolism and consequent reductions in body size despite increased foraging, which in turn elevates the risk of predation. Indeed, even a slight increase in temperature during development impairs growth efficiency and survival rate in rainbow trout. Species of fish living in cold or cool water can see a reduction in population of up to 50% in the majority of U.S. freshwater streams, according to most climate change models. The increase in metabolic demands due to higher water temperatures, in combination with decreasing amounts of food will be the main contributors to their decline. Additionally, many fish species (such as salmon) use seasonal water levels of streams as a means of reproducing, typically breeding when water flow is high and migrating to the ocean after spawning. Because snowfall is expected to be reduced due to climate change, water runoff is expected to decrease which leads to lower flowing streams, affecting the spawning of millions of salmon. To add to this, rising seas will begin to flood coastal river systems, converting them from fresh water habitats to saline environments where indigenous species will likely perish. In southeast Alaska, the sea rises by 3.96 cm/year, redepositing sediment in various river channels and bringing salt water inland. 
This rise in sea level not only contaminates streams and rivers with saline water, but also the reservoirs they are connected to, where species such as sockeye salmon live. Although this species of Salmon can survive in both salt and fresh water, the loss of a body of fresh water stops them from reproducing in the spring, as the spawning process requires fresh water. Furthermore, climate change may disrupt ecological partnerships among interacting species, via changes on behaviour and phenology, or via climate niche mismatch. The disruption of species-species associations is a potential consequence of climate-driven movements of each individual species towards opposite directions. Climate change may, thus, lead to another extinction, more silent and mostly overlooked: the extinction of species' interactions. As a consequence of the spatial decoupling of species-species associations, ecosystem services derived from biotic interactions are also at risk from climate niche mismatch. Whole ecosystem disruptions will occur earlier under more intense climate change: under the high-emissions RCP8.5 scenario, ecosystems in the tropical oceans would be the first to experience abrupt disruption before 2030, with tropical forests and polar environments following by 2050. In total, 15% of ecological assemblages would have over 20% of their species abruptly disrupted if as warming eventually reaches ; in contrast, this would happen to fewer than 2% if the warming were to stay below . Extinctions attributed to climate change Besides Bramble Cay melomys (see below), few recorded species extinctions are thought to have been caused by climate change, as opposed to the other drivers of the Holocene extinction. For example, only 20 of 864 species extinctions are considered by the IUCN to potentially be the result of climate change, either wholly or in part, and the evidence linking them to climate change is typically considered as weak or insubstantial. These species’ extinctions are listed in the table below. However, there is abundant evidence for local extinctions from contractions at the warm edges of species' ranges. Hundreds of animal species have been documented to shift their range (usually polewards and upwards) as a signal of biotic change due to climate warming. Warm-edge populations tend to be the most logical place to search for causes of climate-related extinctions since these species may already be at the limits of their climatic tolerances. This pattern of warm-edge contraction provides indications that many local extinctions have already occurred as a result of climate change. Further, an Australian review of 519 observational studies over 74 years found more than 100 cases where extreme weather events reduced animal species abundance by over 25%, including 31 cases of complete local extirpation. 60% of the studies followed the ecosystem for over a year, and populations did not recover to pre-disturbance levels in 38% of the cases. Extinction risk estimates Early estimates The first major attempt to estimate the impact of climate change on generalized species' extinction risks was published in the journal Nature in 2004. It suggested that between 15% and 37% of 1103 endemic or near-endemic known plant and animal species around the world would be "committed to extinction" by 2050, as their habitat will no longer be able to support their survival range by then. 
However, there was limited knowledge at the time about the species' average ability to disperse or otherwise adapt in response to climate change, and about the minimum average area needed for their persistence, which limited the reliability of their estimate in the eyes of the scientific community. In response, another 2004 paper found that different, yet still plausible assumptions about those factors could result in as few as 5.6% or as many as 78.6% of those 1103 species being committed to extinction, although this was disputed by the original authors. Between 2005 and 2011, 74 studies analyzing the impact of climate change on various species' extinction risk were published. A 2011 review of those studies found that on average, they projected the loss of 11.2% of species by 2100. However, the average of predictions based on the extrapolation of observed responses was 14.7%, while the model-based estimates were at 6.7%. Further, when using IUCN criteria, 7.6% of species would become threatened based on model predictions, yet 31.7% based on extrapolated observations. The following year, this mismatch between models and observations was primarily attributed to the models failing to properly account for different rates of species relocation and for the emerging competition among species, thus causing them to underestimate extinction risk. A 2018 study from a University of East Anglia team analyzed the impacts of 2 °C and 4.5 °C of warming on 80,000 plant and animal species in 35 of the world's biodiversity hotspots. It found that these areas could lose up to 25% and 50% of their species, respectively: they may or may not be able to survive outside of them. Madagascar alone would lose 60% of its species under 4.5 °C, while the Fynbos in the Western Cape region of South Africa would lose a third of its species. All species In 2019, the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) released the summary of its Global Assessment Report on Biodiversity and Ecosystem Services. The report estimated that there are 8 million animal and plant species, including 5.5 million insect species. It found that one million species, including 40 percent of amphibians, almost a third of reef-building corals, more than a third of marine mammals, and 10 percent of all insects are threatened with extinction due to five main stressors. Change in land use and sea use was considered the most important stressor, followed by direct exploitation of organisms (i.e. overfishing). Climate change ranked third, followed by pollution and invasive species. The report concluded that global warming of 2 °C over the preindustrial levels would threaten an estimated 5% of all the Earth's species with extinction even in the absence of the other four factors, while if the warming reached 4.3 °C, 16% of the Earth's species would be threatened with extinction. Finally, even lower levels of warming would "profoundly" reduce the geographical ranges of the majority of the world's species, thus making them more vulnerable than they would have been otherwise. In 2020, a paper studied 538 plant and animal species from around the world and how they responded to rising temperatures. From that sample, they estimated that 16% of all species could go extinct by 2070 under the "moderate" climate change scenario RCP4.5, but it could be one-third under RCP8.5, the scenario of continually increasing emissions. This finding was later cited in the IPCC Sixth Assessment Report. 
An August 2021 paper found that the "Big Five" mass extinctions were associated with episodes of large and rapid warming, and estimated that warming of this magnitude over the preindustrial baseline occurring today would also result in a mass extinction event of the same magnitude (~75% of marine animals wiped out). The following year, this was disputed by the Tohoku University Earth science scholar Kunio Kaiho. Based on his reanalysis of the sedimentary rock record, he estimated that the loss of over 60% of marine species and over 35% of marine genera was correlated with global cooling or global warming beyond certain thresholds, and that for terrestrial tetrapods the same losses would be seen under corresponding thresholds of global cooling or warming. Kaiho's follow-up paper estimated that what he considered the most likely scenario of climate change (based on the average of Representative Concentration Pathways 4.5 and 6.0) would result in 8% marine species extinctions, 16–20% terrestrial animal species extinctions, and a combined average of 12–14% animal species extinctions. This was defined by the paper as a minor mass extinction, comparable to the end-Guadalupian and Jurassic–Cretaceous boundary events. It also cautioned that warming needed to be kept below a certain threshold to prevent an extinction of >10% of animal species. Finally, it estimated that a minor nuclear war (defined as a nuclear exchange between India and Pakistan or an event of equivalent magnitude) would cause extinctions of 10–20% of species on its own, while a major nuclear war (defined as a nuclear exchange between the United States and Russia) would cause the extinction of 40-50% of species. In July 2022, a survey of 3331 biodiversity experts estimated that since the year 1500, around 30% (between 16% and 50%) of all species have been threatened with extinction – including the species which had already gone extinct. With regards to climate change, the experts estimated that it threatens or drives to extinction about 25% of the species, although their estimates ranged from 15% to 40%. When asked about higher levels of warming, they believed it would threaten or drive into extinction 50% of the species, with the range between 32% and 70%. The February 2022 IPCC Sixth Assessment Report included median and maximum estimates of the percentage of species at high risk of extinction for every level of warming, with the maximum estimates increasing much more than the medians. For instance, for 1.5 °C the median was 9% and the maximum 14%, for 2 °C the median was 10% and the maximum 18%, for 3 °C the median was 12% and the maximum 29%, for 4 °C the median was 13% and the maximum 39%, and for 5 °C the median was 15% but the maximum 48%. In January 2024, Wiens and Zelinka estimated that 22.7–31.6% of species will be lost to extinction under RCP 8.5, with 23%–31% of plants, 23%–31% of insects, 36%–44% of vertebrates, 3%–87% of marine animals and 23%–31% of fungi species lost. This decreases to 13.9%–27.6% of species lost under RCP 4.5, with 8%–16% of plants, 14%–27% of insects, 19%–34% of vertebrates, and 8%–27% of fungi becoming extinct. Vertebrates A 2013 paper looked at 12 900 islands in the Pacific Ocean and Southeast Asia which host over 3000 vertebrates, and how they would be affected by sea level rise of 1, 3 and 6 meters (with the last two levels not anticipated until after this century). Depending on the extent of sea level rise, 15–62% of islands studied would be completely underwater, and 19–24% will lose 50–99% of their area. 
This was correlated with the total habitat loss for 37 species under 1 meter of sea level rise, and for 118 species under 3 meters. A subsequent paper found that under RCP8.5, the scenario of continually increasing greenhouse gas emissions, numerous vulnerable and endangered vertebrate species living on the low-lying islands in the Pacific Ocean would be threatened by high waves at the end of the century, with the risk substantially reduced under the more moderate RCP4.5 scenario. A 2018 Science Magazine paper estimated that at 1.5 °C, 2 °C and 3.2 °C, over half of the climatically determined geographic range would be lost by 4%, 8% and 26% of vertebrate species, respectively. This estimate was later directly cited in the IPCC Sixth Assessment Report. According to the IUCN Red List criteria, such a range loss is sufficient to classify a species as "endangered", and it is considered equivalent to a >20% likelihood of extinction over the next 10–100 years. In 2022, a Science Advances paper estimated that local extinctions of 6% of vertebrates alone would occur by 2050 under the "intermediate" SSP2-4.5 scenario, and 10.8% under the pathway of continually increasing emissions SSP5-8.5. By 2100, those would increase to ~13% and ~27%, respectively. These estimates included local extinctions from all causes, not just climate change: however, climate change was estimated to account for the majority (~62%) of extinctions, followed by secondary extinctions or coextinctions (~20%), with land use change and invasive species combined accounting for less than 20%. In 2023, a study estimated the proportion of vertebrates which would be exposed to extreme heat beyond what they were known to have experienced historically in at least half their distribution by the end of the century. Under the highest-emission pathway SSP5–8.5, this would include ~41% of all land vertebrates (31.1% mammals, 25.8% birds, 55.5% amphibians and 51% reptiles). On the other hand, SSP1–2.6 would only see 6.1% of vertebrate species exposed to unprecedented heat in at least half of their area, while SSP2–4.5 and SSP3–7.0 would see 15.1% and 28.8%, respectively. Another 2023 paper suggested that under SSP5-8.5, around 55.29% of terrestrial vertebrate species would experience some local habitat loss by 2100 due to unprecedented aridity alone, while 16.56% would lose over half of their original habitat to aridity. Around 7.18% of those species will find all of their original habitat too dry to survive in by 2100, presumably going extinct unless migration or some form of adaptation to a drier environment can occur. Under SSP2-4.5, 41.22% of the terrestrial vertebrates will lose some habitat to aridity, 8.62% will lose over half, and 4.69% will lose all of it, and under SSP1-2.6, these figures go down to 25.16%, 4.62% and 3.04%, respectively. Amphibians A 2013 study estimated that 670–933 amphibian species (11–15%) are both highly vulnerable to climate change and already on the IUCN Red List of threatened species. A further 698–1,807 (11–29%) amphibian species are not currently threatened, but could become threatened in the future due to their high vulnerability to climate change. The IPCC Sixth Assessment Report concluded that while at lower levels of warming, fewer than 3% of most amphibian species would be at a very high risk of extinction, salamanders are more than twice as vulnerable, with nearly 7% of species highly threatened. At higher levels of warming, 11% of amphibians and 24% of salamanders would be at a very high risk of extinction. 
A 2023 paper concluded that under the high-warming SSP5–8.5 scenario, 64.15% of amphibians would lose at least some habitat by 2100 purely due to an increase in aridity, with 33.26% losing over half of it, and 16.21% finding their entire current habitat too dry for them to survive in. These figures go down to 47.46%, 18.60% and 10.31% under the "intermediate" SSP2-4.5 scenario and to 31.69%, 11.18% and 7.36% under the high-mitigation SSP1-2.6. A 2022 study estimated that while right now, 14.8% of the global range of all anurans (frogs) is in an extinction risk area, this will increase to 30.7% by 2100 under Shared Socioeconomic Pathway SSP1-2.6 (low emission pathway), 49.9% under SSP2-4.5, 59.4% under SSP3-7.0 and 64.4% under the highest-emitting SSP5-8.5. Extreme-sized anuran species are disproportionately affected: while currently only 0.3% of these species have >70% of their range in a risk area, this number will increase to 3.9% under SSP1-2.6, 14.2% under SSP2-4.5, 21.5% under SSP3-7 and 26% under SSP5-8.5. A 2018 paper estimated that both the Miombo Woodlands of South Africa and southwestern Australia would lose around 90% of their amphibians if the warming were to reach 4.5 °C. Birds In 2012, it was estimated that on average, every degree of warming results in between 100 and 500 land bird extinctions. For the warming projected by 2100, the same research estimated between 600 and 900 land bird extinctions, with 89% occurring in tropical environments. A 2013 study estimated that 608–851 bird species (6–9%) are highly vulnerable to climate change while being on the IUCN Red List of threatened species, and 1,715–4,039 (17–41%) bird species are not currently threatened but could become threatened due to climate change in the future. A 2023 paper concluded that under the high-warming SSP5–8.5 scenario, 51.79% of birds would lose at least some habitat by 2100 as the conditions become more arid, but only 5.25% would lose over half of their habitat due to an increase in dryness alone, while 1.29% could be expected to lose their entire habitat. These figures go down to 38.65%, 2.02% and 0.95% under the "intermediate" SSP2-4.5 scenario and to 22.83%, 0.70% and 0.49% under the high-mitigation SSP1-2.6. In 2015, it was projected that native forest birds in Hawaii would be threatened with extinction due to the spread of avian malaria under the high-warming RCP8.5 scenario or a similar scenario from earlier modelling, but would persist under the "intermediate" RCP4.5. For the 604 bird species in mainland North America, 2020 research concluded that under 1.5 °C of warming, 207 would be moderately vulnerable to extinction and 47 would be highly vulnerable. At 2 °C, this changes to 198 moderately vulnerable and 91 highly vulnerable. At 3 °C, there are more highly vulnerable species (205) than moderately vulnerable species (140). Relative to 3 °C, stabilizing the warming at 1.5 °C represents a reduction in extinction risk for 76% of those species, and 38% stop being vulnerable. The Miombo Woodlands of South Africa are predicted to lose about 86% of their birds if the warming reaches 4.5 °C. In 2019, it was also estimated that multiple bird species endemic to southern Africa's Kalahari Desert (Southern Pied Babblers, Southern Yellow-billed Hornbills and Southern Fiscals) would either be all-but-lost from it or reduced to its eastern fringes by the end of the century, depending on the emission scenario. 
While the temperatures are not projected to become so high as to kill the birds outright, they would still be high enough to prevent them from sustaining sufficient body mass and energy for breeding. By 2022, breeding success of the Southern Yellow-billed Hornbills was already observed to collapse in the hottest, southern parts of the desert. It was predicted that those particular subpopulations would disappear by 2027. Similarly, it was found that two Ethiopian bird species, the White-tailed Swallow and the Ethiopian Bush-crow, would lose 68-84% and >90% of their range by 2070. As their existing geographical range is already very limited, this means that it would likely end up too small to support a viable population even under the scenario of limited climate change, rendering these species extinct in the wild. Climate change is particularly threatening to penguins. As early as 2008, it was estimated that each small increase in Southern Ocean temperatures reduces king penguin populations by 9%. Under the worst-case warming trajectory, king penguins will permanently lose at least two out of their current eight breeding sites, and 70% of the species (1.1 million pairs) will have to relocate to avoid disappearance. Emperor penguin populations may be at a similar risk, with 80% of populations being at risk of extinction by 2100 with no mitigation. With Paris Agreement temperature goals in place, however, that number may decline to 31% under the 2 °C goal or 19% under the 1.5 °C goal. A 27-year study of the largest colony of Magellanic penguins in the world, published in 2014, found that extreme weather caused by climate change kills 7% of penguin chicks in an average year, accounting for up to 50% of all chick deaths in some years. Since 1987, the number of breeding pairs in the colony has reduced by 24%. Chinstrap penguins are also known to be in decline, mainly due to corresponding declines of Antarctic krill. And while it was estimated that Adélie penguins will retain some of their habitat past 2099, one-third of colonies along the West Antarctic Peninsula (WAP) will be in decline by 2060. Those colonies are believed to represent about 20% of the entire species. Fish It has been projected in 2015 that many fish species will migrate towards the North and South poles as a result of climate change. Under the highest emission scenario RCP8.5, 2 new species would enter (invade) per 0.5° of latitude in the Arctic Ocean and 1.5 in the Southern Ocean. It would also result in an average of 6.5 local extinctions per 0.5° of latitude outside of the poles. A 2022 paper found that 45% of all marine species at risk of extinction are affected by climate change, but it is currently less damaging to their survival than overfishing, transportation, urban development and water pollution. However, if the emissions were to rise unchecked, then by the end of the century climate change would become as important as all of them combined. Continued high emissions until 2300 would then risk a mass extinction equivalent to the Permian–Triassic extinction event, or "The Great Dying". On the other hand, staying at low emissions would reduce future climate-driven extinctions in the oceans by over 70%. A 2021 study which analyzed around 11,500 freshwater fish species concluded that 1-4% of those species would be likely to lose over half of their current geographic range at 1.5 °C of warming and 1-9% at 2 °C. A warming of 3.2 °C would threaten 8-36% of freshwater fish species with such range loss, and 4.5 °C would threaten 24-63%. 
The different percentages represent different assumptions about how well freshwater fishes could disperse to new areas and thus offset past range losses, with the highest percentages assuming no dispersal is possible. According to the IUCN Red List criteria, such a range loss is sufficient to classify a species as "endangered", and it is considered equivalent to a >20% likelihood of extinction over the next 10–100 years. In 2023, a study looked at freshwater fish in 900 lakes of the American state of Minnesota. It found that if their water temperature in July increases (an increase said to occur under approximately the same amount of global warming), then cold-water fish species like the cisco would disappear from 167 lakes, which represents 61% of their habitat in Minnesota. Cool-water yellow perch would see their numbers decline by about 7% across all of Minnesota's lakes, while warm-water bluegill would increase by around 10%. Mammals A 2023 paper concluded that under the high-warming SSP5–8.5 scenario, 50.29% of mammals would lose at least some habitat by 2100 as the conditions become more arid. Out of those, 9.50% would lose over half of their habitat due to an increase in dryness alone, while 3.21% could be expected to lose their entire habitat as a result. These figures go down to 38.27%, 4.96% and 2.22% under the "intermediate" SSP2-4.5 scenario, and to 22.65%, 2.03% and 1.15% under the high-mitigation SSP1-2.6. In 2020, a study in Nature Climate Change estimated the effects of Arctic sea ice decline on polar bear populations (which rely on the sea ice to hunt seals) under two climate change scenarios. Under high greenhouse gas emissions, at most a few high-Arctic populations will remain by 2100; under a more moderate scenario, the species will survive this century, but several major subpopulations will still be wiped out. In 2019, it was estimated that the current great ape range in Africa will decline massively under both the severe RCP8.5 scenario and the more moderate RCP4.5. The apes could potentially disperse to new habitats, but those would lie almost completely outside of their current protected areas, meaning that conservation planning needs to be "urgently" updated to account for this. A 2017 analysis found that the mountain goat populations of coastal Alaska would go extinct sometime between 2015 and 2085 in half of the considered scenarios of climate change. Another analysis found that the Miombo Woodlands of South Africa are predicted to lose about 80% of their mammal species if the warming reached 4.5 °C. In 2008, the white lemuroid possum was reported to be the first known mammal species to be driven extinct by climate change. However, these reports were based on a misunderstanding. One population of these possums in the mountain forests of North Queensland is severely threatened by climate change as the animals cannot survive extended periods of unusually high temperatures. However, another population 100 kilometres south remains in good health. On the other hand, the Bramble Cay melomys, which lived on a Great Barrier Reef island, was reported as the first mammal to go extinct due to human-induced sea level rise, with the Australian government officially confirming its extinction in 2019. Another Australian species, the greater stick-nest rat (Leporillus conditor), may be next. Similarly, the 2019–20 Australian bushfire season caused a near-complete extirpation of Kangaroo Island dunnarts, as only one individual may have survived out of the population of 500. 
Those bushfires have also caused the loss of 8,000 koalas in New South Wales alone, further endangering the species. Reptiles A 2023 paper concluded that under the high-warming SSP5–8.5 scenario, 56.36% of reptiles would lose at least some habitat by 2100 as the conditions become more arid. Out of those, 23.97% would lose over half of their habitat due to an increase in dryness alone, while 10.94% could be expected to lose their entire habitat as a result. These figures go down to 41.69%, 12.35% and 7.15% under the "intermediate" SSP2-4.5 scenario, and to 24.59%, 6.56% and 4.43% under the high-mitigation SSP1-2.6. In a 2010 study led by Barry Sinervo, researchers surveyed 200 sites in Mexico which showed 24 local extinctions (also known as extirpations) of Sceloporus lizards since 1975. Using a model developed from these observed extinctions, the researchers surveyed other extinctions around the world and found that the model predicted those observed extirpations, thus attributing the extirpations around the world to climate warming. These models predict that extinctions of the lizard species around the world will reach 20% by 2080, but up to 40% extinctions in tropical ecosystems where the lizards are closer to their ecophysiological limits than lizards in the temperate zone. A 2015 study looked at the persistence of common lizard populations in Europe under future climate change. It found that under moderate warming, 11% of the lizard populations would be threatened with local extinction around 2050 and 14% by 2100. Under stronger warming by 2100, 21% of the populations are threatened, and under the strongest scenario considered, 30% of the populations are. Following the 2019–20 Australian bushfire season, Kate's leaf tailed gecko lost over 80% of its available habitat. Sex ratios for sea turtles in the Caribbean are being affected by climate change. Environmental data were collected from the annual rainfall and tide temperatures over the course of 200 years and showed an increase in air temperature (mean of 31.0 degrees Celsius). These data were used to relate the decline of the sex ratios of sea turtles in the North East Caribbean to climate change. The species of sea turtles include Dermochelys coriacea, Chelonia mydas, and Eretmochelys imbricata. Extinction is a risk for these species as the sex ratio is being affected, causing a higher female to male ratio. Projections estimate the declining rate of male Chelonia mydas as 2.4% of hatchlings being male by 2030 and 0.4% by 2090. Invertebrates The IPCC Sixth Assessment Report estimates that while at lower levels of warming, fewer than 3% of invertebrates would be at a very high risk of extinction, 15% would be at a very high risk at the highest level considered. This includes 12% of pollinator species. Spiders A 2018 study examined the impact of climate change on Troglohyphantes cave spiders in the Alps and found that even the low-emission scenario RCP2.6 would reduce their habitat by ~45% by 2050, while the high emission scenario would reduce it by ~55% by 2050 and ~70% by 2070. The authors suggested that this may be sufficient to drive the most restricted species to extinction. Corals Almost no other ecosystem is as vulnerable to climate change as coral reefs. Updated 2022 estimates show that even at a global average increase of 1.5 °C (2.7 °F) over pre-industrial temperatures, only 0.2% of the world's coral reefs would still be able to withstand marine heatwaves, as opposed to 84% being able to do so now, with the figure dropping to 0% at 2 °C and beyond. 
However, it was found in 2021 that each square meter of coral reef area contains about 30 individual corals, and their total number is estimated at half a trillion, equivalent to all the trees in the Amazon, or all the birds in the world. As such, most individual coral reef species are predicted to avoid extinction even as coral reefs would cease to function as the ecosystems we know. A 2013 study found that 47–73 coral species (6–9%) are vulnerable to climate change while already threatened with extinction according to the IUCN Red List, and 74–174 (9–22%) coral species were not vulnerable to extinction at the time of publication, but could be threatened under continued climate change, making them a future conservation priority. The authors of the recent coral number estimates suggest that those older projections were too high, although this has been disputed. Insects Insects account for the vast majority of invertebrate species. One of the earliest studies to link insect extinctions to recent climate change was published in 2002, when observations of two populations of the Bay checkerspot butterfly found that they were threatened by changes in precipitation. A 2020 long-term study of more than 60 bee species published in the journal Science found that climate change causes drastic declines in the population and diversity of bumblebees across the two continents studied, independent of land use change and at rates "consistent with a mass extinction." When the 1901-1974 "baseline" period was compared with the recent period of 2000 to 2014, North America's bumblebee populations were found to have fallen by 46%, while Europe's population fell by 14%. The strongest effects were seen in the southern regions, where rapid increases in the frequency of extreme warm years had exceeded the species' historical temperature ranges. A 2018 Science Magazine paper estimated that at 1.5 °C, 2 °C and 3.2 °C, over half of the climatically determined geographic range would be lost by 6%, 18% and ~49% of insect species, with this loss corresponding to a >20% likelihood of extinction over the next 10–100 years according to the IUCN criteria. In 2022, it was found that the warming which occurred over the past 40 years in Germany's Bavaria region pushed out cold-adapted grasshopper, butterfly and dragonfly species, while allowing warm-adapted species from those taxa to become more widespread. Altogether, 27% of dragonfly and 41% of butterfly and grasshopper species occupied less area, while 52% of dragonflies became more widespread, along with 27% of grasshoppers (41%, 20 species) and 20% of butterflies, with the rest showing no trend in area change. The study only measured geographic spread and not total abundance. While the paper looked at both climate and land use change, it suggested the latter was only a significant negative factor for specialist butterfly species. Around the same time, it was predicted that in Bangladesh, between 2% and 34% of the native butterfly species could lose their entire habitat under scenarios SSP1-2.6 and SSP5-8.5, respectively. Plants Data from 2018 found that at 1.5 °C, 2 °C and 3.2 °C of global warming, over half of the climatically determined geographic range would be lost by 8%, 16%, and 44% of plant species. This corresponds to more than a 20% likelihood of extinction over the next 10–100 years under the IUCN criteria. The 2022 IPCC Sixth Assessment Report estimates that while at lower levels of global warming, fewer than 3% of flowering plants would be at a very high risk of extinction, this increases to 10% at the highest level considered. 
A 2020 meta-analysis found that while 39% of vascular plant species were likely threatened with extinction, only 4.1% of this figure could be attributed to climate change, with land use change activities predominating. However, the researchers suggested that this may be more representative of the slower pace of research on the effects of climate change on plants. For fungi, it estimated that 9.4% are threatened due to climate change, while 62% are threatened by other forms of habitat loss. Alpine and mountain plant species are known to be some of the most vulnerable to climate change. In 2010, a study looking at 2,632 species located in and around European mountain ranges found that depending on the climate scenario, 36–55% of alpine species, 31–51% of subalpine species and 19–46% of montane species would lose more than 80% of their suitable habitat by 2070–2100. In 2012, it was estimated that for the 150 plant species in the European Alps, their range would, on average, decline by 44%-50% by the end of the century; moreover, lags in their shifts would mean that around 40% of their remaining range would soon become unsuitable as well, often leading to an extinction debt. In 2022, it was found that those earlier studies simulated abrupt, "stepwise" climate shifts, while more realistic gradual warming would see a rebound in alpine plant diversity after mid-century under the "intermediate" and most intense global warming scenarios RCP4.5 and RCP8.5. However, for RCP8.5, that rebound would be deceptive, followed by the same collapse in biodiversity at the end of the century as simulated in the earlier papers. This is because on average, every degree of warming reduces total species population growth by 7%, and the rebound was driven by colonization of niches left behind by the most vulnerable species, like Androsace chamaejasme and Viola calcarata, going extinct by mid-century or earlier. It has been estimated that by 2050, climate change alone could reduce the species richness of trees in the Amazon Rainforest by 31–37%, while deforestation alone could be responsible for 19–36%, and the combined effect might reach 58%. The paper's worst-case scenario for both stressors had only 53% of the original rainforest area surviving as a continuous ecosystem by 2050, with the rest reduced to a severely fragmented block. Another study estimated that the rainforest would lose 69% of its plant species under high levels of warming. Another estimate suggests that two prominent species of seagrasses in the Mediterranean Sea would be substantially affected under the worst-case greenhouse gas emission scenario, with Posidonia oceanica losing 75% of its habitat by 2050 and potentially becoming functionally extinct by 2100, while Cymodocea nodosa would lose ~46% of its habitat and then stabilize due to expansion into previously unsuitable areas. Impacts of species degradation on livelihoods The livelihoods of nature-dependent communities depend on the abundance and availability of certain species. Climate change conditions such as increases in atmospheric temperature and carbon dioxide concentration directly affect the availability of biomass energy, food, fiber and other ecosystem services. Degradation of the species supplying such products directly affects the livelihoods of people relying on them, especially in Africa. The situation is likely to be exacerbated by changes in rainfall variability, which are likely to give dominance to invasive species, especially those that are spread across large latitudinal gradients. 
The effects that climate change has on both plant and animal species within certain ecosystems can directly affect the human inhabitants who rely on natural resources. Frequently, the extinction of plant and animal species creates a cyclic relationship of species endangerment in ecosystems which are directly affected by climate change. Species adaptation Many species are already responding to climate change by moving into different areas. For instance, Antarctic hair grass is colonizing areas of Antarctica where previously its survival range was limited. Similarly, 5-20% of the United States land area is likely to end up with a different biome at the end of the century, as vegetation undergoes range shifts. However, such shifts can only go so far to protect species: globally, only 5% of ectotherm species' present locations are within 50 km of a location which would remain fully suitable and not impose evolutionary fitness costs on them by 2100, even under "mid-range" warming scenarios. Completely random dispersal may have an 87% chance of sending the species to a less suitable location. Species in the tropics have the least extensive dispersal options, while species in the temperate mountains face the greatest risks of moving to a wrong location. Similarly, an artificial selection experiment demonstrated that evolution of tolerance to warming can occur in fish, but the rate of evolution appears limited to a small increment per generation, which is too slow to protect the vulnerable species from impacts of climate change. Rising temperatures are beginning to have a noticeable impact on birds and butterflies: nearly 160 species from 10 different zones have shifted their ranges northward by 200 km in Europe and North America. The migration range of larger animals has been substantially constrained by human development. In Britain, spring butterflies are appearing an average of 6 days earlier than two decades ago. Climate change has affected the gene pool of the red deer population on Rùm, one of the Inner Hebrides islands, Scotland. Warmer temperatures resulted in deer giving birth on average three days earlier for each decade of the study. The gene which selects for earlier birth has increased in the population because those with the gene have more calves over their lifetime. Prevention In addition to reducing future warming to the lowest possible levels, preserving the current and likely near-future habitat of endangered species in protected areas, in efforts like 30x30, is a crucial aspect of helping species survive. A more radical approach is the assisted migration of species endangered by climate change to new habitats, whether passively (through measures like the creation of wildlife corridors that allow them to move to a new area unimpeded) or actively, by transporting them to new areas. This approach is more controversial, since some of the rescued species may end up invasive in their new locations. For instance, while it would be relatively easy to move polar bears, which are currently threatened by Arctic sea ice decline, to Antarctica, the damage to Antarctica's ecosystem is considered too great to allow this. Finally, species which are extinct in the wild may be kept alive in artificial surroundings until a suitable natural habitat can be restored. In cases where captive breeding fails, embryo cryopreservation has been proposed as an option of last resort. 
Apiculture initiatives to prevent human-wildlife conflict in Zimbabwe Women in rural communities in Hurungwe rural district Zimbabwe have resorted to placing beehives at the border of fields and villages (bio fencing) to protect themselves and their crops from elephants. Assisted migration Assisted migration is the act of moving plants or animals to a different habitat. It has been proposed as a way to rescue species which may not be able to disperse easily, have long generation times or have small populations. This strategy has already been implemented to save multiple tree species in North America. For instance, the Torreya Guardians have coordinated an assisted migration program to save the Torreya taxifolia from extinction. See also Atelopus varius Biodiversity loss Chytridiomycosis Ecosystem services Gastric-brooding frog Golden toad Global catastrophic risk Guajira stubfoot toad Keystone species Paleocene–Eocene Thermal Maximum References External links Environmental conservation Climate change, Risk from Effects of climate change
Extinction risk from climate change
[ "Biology" ]
9,806
[ "Evolution of the biosphere", "Extinction events" ]
646,088
https://en.wikipedia.org/wiki/Sparse%20grid
Sparse grids are numerical techniques to represent, integrate or interpolate high dimensional functions. They were originally developed by the Russian mathematician Sergey A. Smolyak, a student of Lazar Lyusternik, and are based on a sparse tensor product construction. Computer algorithms for efficient implementations of such grids were later developed by Michael Griebel and Christoph Zenger. Curse of dimensionality The standard way of representing multidimensional functions is tensor or full grids. The number of basis functions or nodes (grid points) that have to be stored and processed depends exponentially on the number of dimensions. The curse of dimensionality is expressed in the order of the integration error that is made by a quadrature of level l, with N_l points. The function has regularity r, i.e. it is r times differentiable. The number of dimensions is d. The integration error then behaves as E_l = O(N_l^(-r/d)). Smolyak's quadrature rule Smolyak found a computationally more efficient method of integrating multidimensional functions based on a univariate quadrature rule Q^(1). The d-dimensional Smolyak integral Q^(d) of a function f can be written as a recursion formula with the tensor product: Q^(d)_l f = (Σ_{i=1..l} (Q^(1)_i − Q^(1)_{i−1}) ⊗ Q^(d−1)_{l−i+1}) f, with Q^(1)_0 = 0. The index of Q is the level of the discretization. If a one-dimensional integration on level i is computed by the evaluation of O(2^i) points, the error estimate for a function of regularity r will be E_l = O(N_l^(−r) (log N_l)^((d−1)(r+1))). Further reading External links A memory efficient data structure for regular sparse grids Finite difference scheme on sparse grids Visualization on sparse grids Datamining on sparse grids, J.Garcke, M.Griebel (pdf) Numerical analysis
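To make the recursion above concrete, here is a minimal sketch of Smolyak quadrature on the unit cube [0, 1]^d. It uses Gauss-Legendre with i nodes as the univariate rule at level i (an arbitrary illustrative choice) and the expanded difference form of the recursion; the function names are illustrative and not taken from any sparse-grid library.

# Minimal sketch of Smolyak sparse-grid quadrature on [0, 1]^d.
# Univariate rule at level i: Gauss-Legendre with i nodes (illustrative choice).
# Uses the expanded form Q^(d)_l = sum over multi-indices i, |i| <= l + d - 1,
# of Delta_{i_1} x ... x Delta_{i_d}, where Delta_i = Q^(1)_i - Q^(1)_{i-1};
# this is equivalent to the recursion given in the text.
import itertools
import numpy as np

def univariate_rule(level):
    """Univariate quadrature rule Q^(1)_level on [0, 1]; level 0 is the empty (zero) rule."""
    if level == 0:
        return np.array([]), np.array([])
    x, w = np.polynomial.legendre.leggauss(level)   # nodes/weights on [-1, 1]
    return 0.5 * (x + 1.0), 0.5 * w                  # map to [0, 1]

def difference_rule(level):
    """Difference rule Delta_level = Q^(1)_level - Q^(1)_{level-1} as (nodes, signed weights)."""
    xi, wi = univariate_rule(level)
    xp, wp = univariate_rule(level - 1)
    return np.concatenate([xi, xp]), np.concatenate([wi, -wp])

def smolyak_quadrature(f, level, dim):
    """Approximate the integral of f over [0, 1]^dim with the level-`level` Smolyak rule."""
    total = 0.0
    for idx in itertools.product(range(1, level + 1), repeat=dim):
        if sum(idx) > level + dim - 1:
            continue
        rules = [difference_rule(i) for i in idx]
        # Tensor product of the selected difference rules.
        for pts in itertools.product(*(zip(x, w) for (x, w) in rules)):
            point = np.array([p for p, _ in pts])
            weight = np.prod([w for _, w in pts])
            total += weight * f(point)
    return total

if __name__ == "__main__":
    f = lambda x: np.prod(np.cos(x))     # smooth test integrand on [0, 1]^3
    exact = np.sin(1.0) ** 3             # exact value of its integral
    for l in range(1, 6):
        print(f"level {l}: error = {abs(smolyak_quadrature(f, l, dim=3) - exact):.2e}")

For a smooth integrand such as the one above, the error decays much faster with the number of function evaluations than the full-grid rate O(N^(-r/d)), in line with the Smolyak error estimate quoted in the text.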
Sparse grid
[ "Mathematics" ]
329
[ "Mathematical analysis", "Mathematical analysis stubs", "Computational mathematics", "Mathematical relations", "Numerical analysis", "Approximations" ]
646,120
https://en.wikipedia.org/wiki/Hyperk%C3%A4hler%20manifold
In differential geometry, a hyperkähler manifold is a Riemannian manifold (M, g) endowed with three integrable almost complex structures I, J, K that are Kähler with respect to the Riemannian metric and satisfy the quaternionic relations I^2 = J^2 = K^2 = IJK = -1. In particular, it is a hypercomplex manifold. All hyperkähler manifolds are Ricci-flat and are thus Calabi–Yau manifolds. Hyperkähler manifolds were defined by Eugenio Calabi in 1979. Early history Marcel Berger's 1955 paper on the classification of Riemannian holonomy groups first raised the issue of the existence of non-symmetric manifolds with holonomy Sp(n)·Sp(1). Interesting results were proved in the mid-1960s in pioneering work by Edmond Bonan and Kraines, who independently proved that any such manifold admits a parallel 4-form. The long-awaited analogue of the strong Lefschetz theorem was published in 1982. Equivalent definition in terms of holonomy Equivalently, a hyperkähler manifold is a Riemannian manifold (M, g) of dimension 4k whose holonomy group is contained in the compact symplectic group Sp(k). Indeed, if (M, g, I, J, K) is a hyperkähler manifold, then the tangent space T_xM is a quaternionic vector space for each point x of M, i.e. it is isomorphic to H^k for some integer k, where H is the algebra of quaternions. The compact symplectic group Sp(k) can be considered as the group of orthogonal transformations of H^k which are linear with respect to I, J and K. From this, it follows that the holonomy group of the Riemannian manifold (M, g) is contained in Sp(k). Conversely, if the holonomy group of a Riemannian manifold (M, g) of dimension 4k is contained in Sp(k), choose complex structures I_x, J_x and K_x on T_xM which make T_xM into a quaternionic vector space. Parallel transport of these complex structures gives the required complex structures I, J, K on M making (M, g, I, J, K) into a hyperkähler manifold. Two-sphere of complex structures Every hyperkähler manifold has a 2-sphere of complex structures with respect to which the metric is Kähler. Indeed, for any real numbers a, b, c such that a^2 + b^2 + c^2 = 1, the linear combination aI + bJ + cK is a complex structure that is Kähler with respect to g. If ω_I, ω_J, ω_K denote the Kähler forms of (g, I), (g, J), (g, K), respectively, then the Kähler form of aI + bJ + cK is aω_I + bω_J + cω_K. Holomorphic symplectic form A hyperkähler manifold (M, g, I, J, K), considered as a complex manifold (M, I), is holomorphically symplectic (equipped with a holomorphic, non-degenerate, closed 2-form). More precisely, if ω_J, ω_K denote the Kähler forms of (g, J), (g, K), respectively, then Ω := ω_J + iω_K is holomorphic symplectic with respect to I. Conversely, Shing-Tung Yau's proof of the Calabi conjecture implies that a compact, Kähler, holomorphically symplectic manifold is always equipped with a compatible hyperkähler metric. Such a metric is unique in a given Kähler class. Compact hyperkähler manifolds have been extensively studied using techniques from algebraic geometry, sometimes under the name holomorphically symplectic manifolds. The holonomy group of any Calabi–Yau metric on a simply connected compact holomorphically symplectic manifold of complex dimension 2n with h^{2,0} = 1 is exactly Sp(n); and if the simply connected Calabi–Yau manifold instead has h^{2,0} ≥ 2, it is just the Riemannian product of lower-dimensional hyperkähler manifolds. This fact immediately follows from the Bochner formula for holomorphic forms on a Kähler manifold, together with the Berger classification of holonomy groups; ironically, it is often attributed to Bogomolov, who incorrectly went on to claim in the same paper that compact hyperkähler manifolds actually do not exist! Examples For any integer n ≥ 1, the space H^n of n-tuples of quaternions endowed with the flat Euclidean metric is a hyperkähler manifold. 
The first non-trivial example discovered is the Eguchi–Hanson metric on the cotangent bundle of the two-sphere. It was also independently discovered by Eugenio Calabi, who showed the more general statement that the cotangent bundle of any complex projective space has a complete hyperkähler metric. More generally, Birte Feix and Dmitry Kaledin showed that the cotangent bundle of any Kähler manifold has a hyperkähler structure on a neighbourhood of its zero section, although it is generally incomplete. Due to Kunihiko Kodaira's classification of complex surfaces, we know that any compact hyperkähler 4-manifold is either a K3 surface or a compact torus. (Every Calabi–Yau manifold in 4 (real) dimensions is a hyperkähler manifold, because is isomorphic to .) As was discovered by Beauville, the Hilbert scheme of points on a compact hyperkähler 4-manifold is a hyperkähler manifold of dimension . This gives rise to two series of compact examples: Hilbert schemes of points on a K3 surface and generalized Kummer varieties. Non-compact, complete, hyperkähler 4-manifolds which are asymptotic to , where denotes the quaternions and is a finite subgroup of , are known as asymptotically locally Euclidean, or ALE, spaces. These spaces, and various generalizations involving different asymptotic behaviors, are studied in physics under the name gravitational instantons. The Gibbons–Hawking ansatz gives examples invariant under a circle action (an explicit form is sketched at the end of this entry). Many examples of noncompact hyperkähler manifolds arise as moduli spaces of solutions to certain gauge theory equations which arise from the dimensional reduction of the anti-self-dual Yang–Mills equations: instanton moduli spaces, monopole moduli spaces, spaces of solutions to Nigel Hitchin's self-duality equations on Riemann surfaces, and the space of solutions to the Nahm equations. Another class of examples are the Nakajima quiver varieties, which are of great importance in representation theory.

Cohomology
It has been shown that the cohomology of any compact hyperkähler manifold embeds into the cohomology of a torus, in a way that preserves the Hodge structure.

Notes

See also
Quaternion-Kähler manifold
Hypercomplex manifold
Quaternionic manifold
Calabi–Yau manifold
Gravitational instanton
Hyperkähler quotient
Twistor theory

References
Kieran G. O'Grady (2011), "Higher-dimensional analogues of K3 surfaces." MR2931873

Structures on manifolds Complex manifolds Riemannian manifolds Differential geometry Quaternions
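The Gibbons–Hawking ansatz mentioned among the examples above can be written down explicitly. The following sketch uses standard notation that is supplied here and is not taken from this entry: on an open subset of R^3 with coordinates x = (x_1, x_2, x_3), choose a positive harmonic function V and a 1-form α with dα = ∗dV (the Hodge star of R^3); then

    g = V (dx_1^2 + dx_2^2 + dx_3^2) + V^{-1} (dτ + α)^2

is a hyperkähler metric invariant under the circle action on the τ coordinate. Taking V to be a sum of poles 1/|x − p_i| (up to normalization) recovers the ALE gravitational instantons, including the Eguchi–Hanson metric in the case of two poles.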
Hyperkähler manifold
[ "Mathematics" ]
1,319
[ "Riemannian manifolds", "Space (mathematics)", "Metric spaces" ]
646,257
https://en.wikipedia.org/wiki/Machine%20press
A forming press, commonly shortened to press, is a machine tool that changes the shape of a work-piece by the application of pressure. The operator of a forming press is known as a press-tool setter, often shortened to tool-setter.

Presses can be classified according to
their mechanism: hydraulic, mechanical, pneumatic;
their function: forging presses, stamping presses, press brakes, punch presses, etc.;
their structure, e.g. knuckle-joint press, screw press, expeller press;
their controllability: conventional vs. servo-presses.

Shop press
A shop press typically consists of a simple rectangular frame, often fabricated from C-channel or tubing, containing a bottle jack or hydraulic cylinder that applies pressure via a ram to a work-piece. It is often used for general-purpose forming work in the auto mechanic shop, machine shop, garage or basement shop, etc. Typical shop presses are capable of applying between 1 and 30 tons of pressure, depending on size and construction. Lighter-duty versions are often called arbor presses. A shop press is commonly used to press interference-fit parts together, such as gears onto shafts or bearings into housings.

Other presses by application
A press brake is a special type of machine press that bends sheet metal into shape. A good example of the type of work a press brake can do is the back-plate of a computer case. Other examples include brackets, frame pieces and electronic enclosures. Some press brakes have CNC controls and can form parts with accuracy to a fraction of a millimeter. Bending forces can range up to 3,000 tons.
A punch press is used to form holes.
A screw press is also known as a fly press.
A stamping press is a machine press used to shape or cut metal by deforming it with a die. It generally consists of a press frame, a bolster plate, and a ram.
Capping presses form caps from rolls of aluminium foil at up to 660 caps per minute.

An example of peculiar press control: servo-press
A servomechanism press, also known as a servo press or an electro-press, is a press driven by an AC servo motor. The torque produced is converted to a linear force via a ball screw. Pressure and position are controlled through a load cell and an encoder. The main advantage of a servo press is its low energy consumption: only 10–20% of that of other press machines. In stamping, what matters is the energy delivered between the die and the work-piece rather than simply the tonnage the machine can exert. Until recently, the way to increase tonnage between the die and work-piece on a mechanical press was through bigger machines with bigger motors.

Types of presses
The press style used is in direct correlation to the end product. Press types are straight-side, BG (back geared), geared, gap, OBI (open back inclinable) and OBS (open back stationary). Hydraulic and mechanical presses are classified by the frame the moving elements are mounted on. The most common are the gap-frame, also known as C-frame, and the straight-side press. A straight-side press has vertical columns on either side of the machine and eliminates angular deflection. A C-frame allows easy access to the die area on three sides and requires less floor space. A type of gap-frame, the OBI pivots the frame for easier scrap or part discharge. The OBS uses timed air blasts, devices or a conveyor for scrap or part discharge.

History
Historically, metal was shaped by hand using a hammer. Later, larger hammers were constructed to press more metal at once, or to press thicker materials.
Often a smith would employ a helper or apprentice to swing the hammer while the smith concentrated on positioning the work-piece. Drop hammers and trip hammers utilize a mechanism to lift the hammer, which then falls by gravity onto the work. In the mid-19th century, manual and rotary-cam hammers began to be replaced in industry by the steam hammer, which was first described in 1784 by James Watt, the British inventor and mechanical engineer who also contributed to the earliest steam engines and condensers, but was not built until 1840 by the British inventor James Nasmyth. By the late 19th century, steam hammers had increased greatly in size; in 1891 the Bethlehem Iron Company made an enhancement allowing a steam hammer to deliver a 125-ton blow. Most modern machine presses use a combination of electric motors and hydraulics to achieve the necessary pressure. Along with the evolution of presses came the evolution of the dies used within them.

Safety
Machine presses can be hazardous, so safety measures must always be taken. Bi-manual controls (controls which require both hands to be on the buttons in order to operate the press) are a very good way to prevent accidents, as are light curtains that keep the machine from working if the operator is in range of the die.

References

External links

Metal forming Machine tools Articles containing video clips
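The torque-to-force conversion in the servo press described above (an AC servo motor driving a ball screw) can be illustrated with the usual back-of-the-envelope relation; the symbols and the sample numbers below are illustrative assumptions, not figures from this article:

    F ≈ 2π · η · T / L

where T is the motor torque, L is the ball-screw lead (linear travel per revolution) and η is the screw efficiency. For example, T = 50 N·m, L = 10 mm and η ≈ 0.9 would give F ≈ 2π × 0.9 × 50 / 0.010 ≈ 28 kN, on the order of 3 tonnes of force, before any additional gearing or mechanical advantage is applied.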
Machine press
[ "Engineering" ]
1,021
[ "Machine tools", "Industrial machinery" ]
646,359
https://en.wikipedia.org/wiki/Bc%20%28programming%20language%29
bc, for basic calculator, is "an arbitrary-precision calculator language" with syntax similar to the C programming language. bc is typically used as either a mathematical scripting language or as an interactive mathematical shell. Overview A typical interactive usage is typing the command bc on a Unix command prompt and entering a mathematical expression, such as , whereupon will be output. While bc can work with arbitrary precision, it actually defaults to zero digits after the decimal point, so the expression yields (results are truncated, not rounded). This can surprise new bc users unaware of this fact. The option to bc sets the default scale (digits after the decimal point) to 20 and adds several additional mathematical functions to the language. History bc first appeared in Version 6 Unix in 1975. It was written by Lorinda Cherry of Bell Labs as a front end to dc, an arbitrary-precision calculator written by Robert Morris and Cherry. dc performed arbitrary-precision computations specified in reverse Polish notation. bc provided a conventional programming-language interface to the same capability via a simple compiler (a single yacc source file comprising a few hundred lines of code), which converted a C-like syntax into dc notation and piped the results through dc. In 1991, POSIX rigorously defined and standardized bc. Four implementations of this standard survive today: The first is the traditional Unix implementation, a front-end to dc, which survives in Unix and Plan 9 systems. The second is the free software GNU bc, first released in 1991 by Philip A. Nelson. The GNU implementation has numerous extensions beyond the POSIX standard and is no longer a front-end to dc (it is a bytecode interpreter). The third is a re-implementation by OpenBSD in 2003. The fourth is an independent implementation by Gavin Howard that is included in Android (operating system), FreeBSD as of 13.3-RELEASE, and macOS as of 13.0. Implementations POSIX bc The POSIX standardized bc language is traditionally written as a program in the dc programming language to provide a higher level of access to the features of the dc language without the complexities of dc's terse syntax. In this form, the bc language contains single-letter variable, array and function names and most standard arithmetic operators, as well as the familiar control-flow constructs (if(cond)..., while(cond)... and for(init;cond;inc)...) from C. Unlike C, an if clause may not be followed by an else. Functions are defined using a define keyword, and values are returned from them using a return followed by the return value in parentheses. The auto keyword (optional in C) is used to declare a variable as local to a function. All numbers and variable contents are arbitrary-precision numbers whose precision (in decimal places) is determined by the global scale variable. The numeric base of input (in interactive mode), output and program constants may be specified by setting the reserved ibase (input base) and obase (output base) variables. Output is generated by deliberately not assigning the result of a calculation to a variable. Comments may be added to bc code by use of the C /* and */ (start and end comment) symbols. Mathematical operators Exactly as C The following POSIX bc operators behave exactly like their C counterparts: + - * / += -= *= /= ++ -- < > == != <= >= ( ) [ ] { } Similar to C The modulus operators, % and %= behave exactly like their C counterparts only when the global scale variable is set to 0, i.e. 
all calculations are integer-only. Otherwise the computation is done with the appropriate scale. a%b is defined as a-(a/b)*b. Examples:

$ bc
bc 1.06
Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
scale=0; 5%3
2
scale=1; 5%3
.2
scale=20; 5%3
.00000000000000000002

Conflicting with C
The operators ^ ^= superficially resemble the C bitwise exclusive-or operators, but are in fact the bc integer exponentiation operators. Of particular note, the use of the ^ operator with negative numbers does not follow the C operator precedence: -2^2 gives the answer of 4 under bc rather than −4.

"Missing" operators relative to C
The bitwise, Boolean and conditional operators
& | ^ && || &= |= ^= &&= ||= << >> <<= >>= ?:
are not available in POSIX bc.

Built-in functions
The sqrt() function for calculating square roots is POSIX bc's only built-in mathematical function. Other functions are available in an external standard library. The scale() function for determining the precision (as with the scale variable) of its argument and the length() function for determining the number of significant decimal digits in its argument are also built-in.

Standard library functions
bc's standard math library (defined with the -l option) contains functions for calculating sine, cosine, arctangent, natural logarithm, the exponential function and the two-parameter Bessel function J. Most standard mathematical functions (including the other inverse trigonometric functions) can be constructed using these. See external links for implementations of many other functions. The -l option changes the scale to 20, so operations such as modulo may work unexpectedly. For example, writing bc -l and then the command print 3%2 outputs 0. But writing scale=0 after bc -l and then the command print 3%2 will output 1.

Plan 9 bc
Plan 9 bc is identical to POSIX bc but for an additional print statement.

GNU bc
GNU bc derives from the POSIX standard and includes many extensions. It is entirely separate from dc-based implementations of the POSIX standard and is instead written in C. Nevertheless, it is fully backwards compatible, as all POSIX bc programs will run unmodified as GNU bc programs. GNU bc variables, arrays and function names may contain more than one character, some more operators have been included from C, and notably, an if clause may be followed by an else. Output is achieved either by deliberately not assigning a result of a calculation to a variable (the POSIX way) or by using the added print statement. Furthermore, a read statement allows the interactive input of a number into a running calculation. In addition to C-style comments, a # character will cause everything after it until the next new-line to be ignored. The value of the last calculation is always stored within the additional built-in last variable.

Extra operators
The following logical operators are additional to those in POSIX bc:
&& || !
They are available for use in conditional statements (such as within an if statement). Note, however, that there are still no equivalent bitwise or assignment operations.

Functions
All functions available in GNU bc are inherited from POSIX. No further functions are provided as standard with the GNU distribution.
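The GNU extensions described above (multi-character names, else, the print statement and # comments) can be seen together in a short function. This snippet is a sketch written for this description rather than part of the original article, and it assumes GNU bc rather than POSIX bc; the names are made up for illustration:

define double_or_zero(value) {
    # '#' comments and multi-character names are GNU extensions
    if (value < 0) {
        print "negative input, returning 0\n"
        return (0)
    } else {
        return (2 * value)
    }
}
result = double_or_zero(21)
print "result is ", result, "\n"   # writes: result is 42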
Example code
Since the bc ^ operator only allows an integer power to its right, one of the first functions a bc user might write is a power function with a floating-point exponent. Both of the power functions below assume the standard library has been included:

A "power" function in POSIX bc

/* A function to return the integer part of x */
define i(x) {
    auto s
    s = scale
    scale = 0
    x /= 1 /* round x down */
    scale = s
    return (x)
}

/* Use the fact that x^y == e^(y*log(x)) */
define p(x,y) {
    if (y == i(y)) {
        return (x ^ y)
    }
    return ( e( y * l(x) ) )
}

Calculating π to 10000 digits
Calculate pi using the builtin arctangent function:

$ bc -lq
scale=10000
4*a(1)  # The atan of 1 is 45 degrees, which is pi/4 in radians.
        # This may take several minutes to calculate.

A translated C function
Because the syntax of bc is similar to that of C, published numerical functions written in C can often be translated into bc quite easily, which immediately provides the arbitrary precision of bc. For example, in the Journal of Statistical Software (July 2004, Volume 11, Issue 5), George Marsaglia published the following C code for the cumulative normal distribution:

double Phi(double x) {
    long double s=x, t=0, b=x, q=x*x, i=1;
    while(s!=t)
        s=(t=s)+(b*=q/(i+=2));
    return .5+s*exp(-.5*q-.91893853320467274178L);
}

With some necessary changes to accommodate bc's different syntax, and noting that the constant "0.9189..." is actually log(2*PI)/2, this can be translated to the following GNU bc code:

define phi(x) {
    auto s,t,b,q,i,const
    s=x; t=0; b=x; q=x*x; i=1
    while(s!=t)
        s=(t=s)+(b*=q/(i+=2))
    const=0.5*l(8*a(1))   # 0.91893...
    return .5+s*e(-.5*q-const)
}

Using bc in shell scripts
bc can be used non-interactively, with input through a pipe. This is useful inside shell scripts. For example:

$ result=$(echo "scale=2; 5 * 7 /3;" | bc)
$ echo $result
11.66

In contrast, note that the bash shell only performs integer arithmetic, e.g.:

$ result=$((5 * 7 /3))
$ echo $result
11

One can also use the here-string idiom (in bash, ksh, csh):

$ bc -l <<< "5*7/3"
11.66666666666666666666

See also
dc programming language
C programming language
hoc programming language

References
GNU bc manual page
POSIX bc manual page
7th Edition Unix bc manual page
A comp.compilers article on the design and implementation of C-bc
6th Edition Unix bc source code, the first release of bc, from May 1975, compiling bc syntax into dc syntax
GNU bc source code

External links
Dittmer, I. 1993. Error in Unix commands dc and bc for multiple-precision-arithmetic. SIGNUM Newsl. 28, 2 (Apr. 1993), 8–11.
Collection of useful GNU bc functions
GNU bc (and an alpha version) from the Free Software Foundation
bc for Windows from GnuWin32
Gavin Howard bc - another open source implementation of bc by Gavin Howard with GNU and BSD extensions
X-bc - A Graphical User Interface to bc
extensions.bc - contains functions of trigonometry, exponential functions, functions of number theory and some mathematical constants
scientific_constants.bc - contains particle masses, basic constants, such as speed of light in the vacuum and the gravitational constant

Software calculators Cross-platform free software Free mathematics software Numerical programming languages Standard Unix programs Unix SUS2008 utilities Plan 9 commands
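As a further illustration of piping input to bc from a shell (an added example, not from the original article; it assumes a Unix-like shell and either GNU or POSIX bc), the ibase and obase variables described in the overview convert between number bases:

$ echo "ibase=16; FF" | bc
255
$ echo "obase=2; 240" | bc
11110000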
Bc (programming language)
[ "Mathematics", "Technology" ]
2,464
[ "Software calculators", "Standard Unix programs", "Free mathematics software", "Computing commands", "Plan 9 commands", "Mathematical software" ]
646,478
https://en.wikipedia.org/wiki/Abiogenic%20petroleum%20origin
The abiogenic petroleum origin hypothesis proposes that most of earth's petroleum and natural gas deposits were formed inorganically, commonly known as abiotic oil. Scientific evidence overwhelmingly supports a biogenic origin for most of the world's petroleum deposits. Mainstream theories about the formation of hydrocarbons on earth point to an origin from the decomposition of long-dead organisms, though the existence of hydrocarbons on extraterrestrial bodies like Saturn's moon Titan indicates that hydrocarbons are sometimes naturally produced by inorganic means. A historical overview of theories of the abiogenic origins of hydrocarbons has been published. Thomas Gold's "deep gas hypothesis" proposes that some natural gas deposits were formed out of hydrocarbons deep in the Earth's mantle. Earlier studies of mantle-derived rocks from many places have shown that hydrocarbons from the mantle region can be found widely around the globe. However, the content of such hydrocarbons is in low concentration. While there may be large deposits of abiotic hydrocarbons, globally significant amounts of abiotic hydrocarbons are deemed unlikely. Overview hypotheses Some abiogenic hypotheses have proposed that oil and gas did not originate from fossil deposits, but have instead originated from deep carbon deposits, present since the formation of the Earth. The abiogenic hypothesis regained some support in 2009 when researchers at the Royal Institute of Technology (KTH) in Stockholm reported they believed they had proven that fossils from animals and plants are not necessary for crude oil and natural gas to be generated. History An abiogenic hypothesis was first proposed by Georgius Agricola in the 16th century and various additional abiogenic hypotheses were proposed in the 19th century, most notably by Prussian geographer Alexander von Humboldt (1804), the Russian chemist Dmitri Mendeleev (1877) and the French chemist Marcellin Berthelot. Abiogenic hypotheses were revived in the last half of the 20th century by Soviet scientists who had little influence outside the Soviet Union because most of their research was published in Russian. The hypothesis was re-defined and made popular in the West by astronomer Thomas Gold, a prominent proponent of the abiogenic hypothesis, who developed his theories from 1979 to 1998 and published his research in English. Abraham Gottlob Werner and the proponents of neptunism in the 18th century regarded basaltic sills as solidified oils or bitumen. While these notions proved unfounded, the basic idea of an association between petroleum and magmatism persisted. Von Humboldt proposed an inorganic abiogenic hypothesis for petroleum formation after he observed petroleum springs in the Bay of Cumaux (Cumaná) on the northeast coast of Venezuela. He is quoted as saying, "the petroleum is the product of a distillation from great depth and issues from the primitive rocks beneath which the forces of all volcanic action lie". Other early prominent proponents of what would become the generalized abiogenic hypothesis included Dmitri Mendeleev and Berthelot. In 1951, the Soviet geologist Nikolai Alexandrovitch Kudryavtsev proposed the modern abiotic hypothesis of petroleum. On the basis of his analysis of the Athabasca Oil Sands in Alberta, Canada, he concluded that no "source rocks" could form the enormous volume of hydrocarbons, and therefore offered abiotic deep petroleum as the most plausible explanation. (Humic coals have since been proposed for the source rocks.) 
Others who continued Kudryavtsev's work included Petr N. Kropotkin, Vladimir B. Porfir'ev, Emmanuil B. Chekaliuk, Vladilen A. Krayushkin, Georgi E. Boyko, Georgi I. Voitov, Grygori N. Dolenko, Iona V. Greenberg, Nikolai S. Beskrovny, and Victor F. Linetsky. Following Thomas Gold's death in 2004, Jack Kenney of Gas Resources Corporation has recently come into prominence as a proponent of the theories, supported by studies by researchers at the Royal Institute of Technology (KTH) in Stockholm, Sweden. Foundations of abiogenic hypotheses Within the mantle, carbon may exist as hydrocarbons—chiefly methane—and as elemental carbon, carbon dioxide, and carbonates. The abiotic hypothesis is that the full suite of hydrocarbons found in petroleum can either be generated in the mantle by abiogenic processes, or by biological processing of those abiogenic hydrocarbons, and that the source-hydrocarbons of abiogenic origin can migrate out of the mantle into the crust until they escape to the surface or are trapped by impermeable strata, forming petroleum reservoirs. Abiogenic hypotheses generally reject the supposition that certain molecules found within petroleum, known as biomarkers, are indicative of the biological origin of petroleum. They contend that these molecules mostly come from microbes feeding on petroleum in its upward migration through the crust, that some of them are found in meteorites, which have presumably never contacted living material, and that some can be generated abiogenically by plausible reactions in petroleum. Some of the evidence used to support abiogenic theories includes: Recent investigation of abiogenic hypotheses , little research is directed towards establishing abiogenic petroleum or methane, although the Carnegie Institution for Science has reported that ethane and heavier hydrocarbons can be synthesized under conditions of the upper mantle. Research mostly related to astrobiology and the deep microbial biosphere and serpentinite reactions, however, continues to provide insight into the contribution of abiogenic hydrocarbons into petroleum accumulations. rock porosity and migration pathways for abiogenic petroleum mantle peridotite serpentinization reactions and other natural Fischer–Tropsch analogs Primordial hydrocarbons in meteorites, comets, asteroids and the solid bodies of the Solar System Primordial or ancient sources of hydrocarbons or carbon in Earth Primordial hydrocarbons formed from hydrolysis of metal carbides of the iron peak of cosmic elemental abundance (chromium, iron, nickel, vanadium, manganese, cobalt) isotopic studies of groundwater reservoirs, sedimentary cements, formation gases and the composition of the noble gases and nitrogen in many oil fields Common criticisms include: If oil was created in the mantle, it would be expected that oil would be most commonly found in fault zones, as that would provide the greatest opportunity for oil to migrate into the crust from the mantle. Additionally, the mantle near subduction zones tends to be more oxidizing than the rest. However, the locations of oil deposits have not been found to be correlated with fault zones, with some exceptions. Proposed mechanisms of abiogenic petroleum Primordial deposits Thomas Gold's work was focused on hydrocarbon deposits of primordial origin. Meteorites are believed to represent the major composition of material from which the Earth was formed. Some meteorites, such as carbonaceous chondrites, contain carbonaceous material. 
If a large amount of this material is still within the Earth, it could have been leaking upward for billions of years. The thermodynamic conditions within the mantle would allow many hydrocarbon molecules to be at equilibrium under high pressure and high temperature. Although molecules in these conditions may disassociate, resulting fragments would be reformed due to the pressure. An average equilibrium of various molecules would exist depending upon conditions and the carbon-hydrogen ratio of the material. Creation within the mantle Russian researchers concluded that hydrocarbon mixes would be created within the mantle. Experiments under high temperatures and pressures produced many hydrocarbons—including n-alkanes through C10H22—from iron oxide, calcium carbonate, and water. Because such materials are in the mantle and in subducted crust, there is no requirement that all hydrocarbons be produced from primordial deposits. Hydrogen generation Hydrogen gas and water have been found more than deep in the upper crust in the Siljan Ring boreholes and the Kola Superdeep Borehole. Data from the western United States suggests that aquifers from near the surface may extend to depths of to . Hydrogen gas can be created by water reacting with silicates, quartz, and feldspar at temperatures in the range of to . These minerals are common in crustal rocks such as granite. Hydrogen may react with dissolved carbon compounds in water to form methane and higher carbon compounds. One reaction not involving silicates which can create hydrogen is: Ferrous oxide + water → magnetite + hydrogen The above reaction operates best at low pressures. At pressures greater than almost no hydrogen is created. Thomas Gold reported that hydrocarbons were found in the Siljan Ring borehole and in general increased with depth, although the venture was not a commercial success. However, several geologists analysed the results and said that no hydrocarbon was found. Serpentinite mechanism In 1967, the Soviet scientist Emmanuil B. Chekaliuk proposed that petroleum could be formed at high temperatures and pressures from inorganic carbon in the form of carbon dioxide, hydrogen or methane . This mechanism is supported by several lines of evidence which are accepted by modern scientific literature. This involves synthesis of oil within the crust via catalysis by chemically reductive rocks. A proposed mechanism for the formation of inorganic hydrocarbons is via natural analogs of the Fischer–Tropsch process known as the serpentinite mechanism or the serpentinite process. (2n+1) Serpentinites are ideal rocks to host this process as they are formed from peridotites and dunites, rocks which contain greater than 80% olivine and usually a percentage of Fe-Ti spinel minerals. Most olivines also contain high nickel concentrations (up to several percent) and may also contain chromite or chromium as a contaminant in olivine, providing the needed transition metals. However, serpentinite synthesis and spinel cracking reactions require hydrothermal alteration of pristine peridotite-dunite, which is a finite process intrinsically related to metamorphism, and further, requires significant addition of water. Serpentinite is unstable at mantle temperatures and is readily dehydrated to granulite, amphibolite, talc–schist and even eclogite. This suggests that methanogenesis in the presence of serpentinites is restricted in space and time to mid-ocean ridges and upper levels of subduction zones. 
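The hydrogen-generating reaction and the Fischer–Tropsch-type synthesis referred to above are commonly written in the following balanced forms; the stoichiometry shown is the standard textbook one, supplied here for illustration rather than taken from this article:

    3 FeO + H2O → Fe3O4 + H2              (ferrous oxide + water → magnetite + hydrogen)
    (2n+1) H2 + n CO → CnH2n+2 + n H2O    (generic Fischer–Tropsch synthesis of n-alkanes)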
However, water has been found as deep as , so water-based reactions are dependent upon the local conditions. Oil being created by this process in intracratonic regions is limited by the materials and temperature. Serpentinite synthesis A chemical basis for the abiotic petroleum process is the serpentinization of peridotite, beginning with methanogenesis via hydrolysis of olivine into serpentine in the presence of carbon dioxide. Olivine, composed of Forsterite and Fayalite metamorphoses into serpentine, magnetite and silica by the following reactions, with silica from fayalite decomposition (reaction 1a) feeding into the forsterite reaction (1b). Reaction 1a: Fayalite + water → magnetite + aqueous silica + hydrogen Reaction 1b: Forsterite + aqueous silica → serpentinite When this reaction occurs in the presence of dissolved carbon dioxide (carbonic acid) at temperatures above Reaction 2a takes place. Reaction 2a: Olivine + water + carbonic acid → serpentine + magnetite + methane or, in balanced form: However, reaction 2(b) is just as likely, and supported by the presence of abundant talc-carbonate schists and magnesite stringer veins in many serpentinised peridotites; Reaction 2b: Olivine + water + carbonic acid → serpentine + magnetite + magnesite + silica The upgrading of methane to higher n-alkane hydrocarbons is via dehydrogenation of methane in the presence of catalyst transition metals (e.g. Fe, Ni). This can be termed spinel hydrolysis. Spinel polymerization mechanism Magnetite, chromite and ilmenite are Fe-spinel group minerals found in many rocks but rarely as a major component in non-ultramafic rocks. In these rocks, high concentrations of magmatic magnetite, chromite and ilmenite provide a reduced matrix which may allow abiotic cracking of methane to higher hydrocarbons during hydrothermal events. Chemically reduced rocks are required to drive this reaction and high temperatures are required to allow methane to be polymerized to ethane. Note that reaction 1a, above, also creates magnetite. Reaction 3: Methane + magnetite → ethane + hematite Reaction 3 results in n-alkane hydrocarbons, including linear saturated hydrocarbons, alcohols, aldehydes, ketones, aromatics, and cyclic compounds. Carbonate decomposition Calcium carbonate may decompose at around through the following reaction: Reaction 5: Hydrogen + calcium carbonate → methane + calcium oxide + water Note that CaO (lime) is not a mineral species found within natural rocks. Whilst this reaction is possible, it is not plausible. Evidence of abiogenic mechanisms Theoretical calculations by J.F. Kenney using scaled particle theory (a statistical mechanical model) for a simplified perturbed hard-chain predict that methane compressed to or kbar at (conditions in the mantle) is relatively unstable in relation to higher hydrocarbons. However, these calculations do not include methane pyrolysis yielding amorphous carbon and hydrogen, which is recognized as the prevalent reaction at high temperatures. Experiments in diamond anvil high pressure cells have resulted in partial conversion of methane and inorganic carbonates into light hydrocarbons. Biotic (microbial) hydrocarbons The "deep biotic petroleum hypothesis", similar to the abiogenic petroleum origin hypothesis, holds that not all petroleum deposits within the Earth's rocks can be explained purely according to the orthodox view of petroleum geology. Thomas Gold used the term "the deep hot biosphere" to describe the microbes which live underground. 
This hypothesis is different from biogenic oil in that the role of deep-dwelling microbes is a biological source for oil which is not of a sedimentary origin and is not sourced from surface carbon. Deep microbial life is only a contaminant of primordial hydrocarbons. Parts of microbes yield molecules as biomarkers. Deep biotic oil is considered to be formed as a byproduct of the life cycle of deep microbes. Shallow biotic oil is considered to be formed as a byproduct of the life cycles of shallow microbes. Microbial biomarkers Thomas Gold, in a 1999 book, cited the discovery of thermophile bacteria in the Earth's crust as new support for the postulate that these bacteria could explain the existence of certain biomarkers in extracted petroleum. A rebuttal of biogenic origins based on biomarkers has been offered by Kenney, et al. (2001). Isotopic evidence Methane is ubiquitous in crustal fluid and gas. Research continues to attempt to characterise crustal sources of methane as biogenic or abiogenic using carbon isotope fractionation of observed gases (Lollar & Sherwood 2006). There are few clear examples of abiogenic methane-ethane-butane, as the same processes favor enrichment of light isotopes in all chemical reactions, whether organic or inorganic. δ13C of methane overlaps that of inorganic carbonate and graphite in the crust, which are heavily depleted in 12C, and attain this by isotopic fractionation during metamorphic reactions. One argument for abiogenic oil cites the high carbon depletion of methane as stemming from the observed carbon isotope depletion with depth in the crust. However, diamonds, which are definitively of mantle origin, are not as depleted as methane, which implies that methane carbon isotope fractionation is not controlled by mantle values. Commercially extractable concentrations of helium (greater than 0.3%) are present in natural gas from the Panhandle-Hugoton fields in the US, as well as from some Algerian and Russian gas fields. Helium trapped within most petroleum occurrences, such as the occurrence in Texas, is of a distinctly crustal character with an Ra ratio of less than 0.0001 that of the atmosphere. Biomarker chemicals Certain chemicals found in naturally occurring petroleum contain chemical and structural similarities to compounds found within many living organisms. These include terpenoids, terpenes, pristane, phytane, cholestane, chlorins and porphyrins, which are large, chelating molecules in the same family as heme and chlorophyll. Materials which suggest certain biological processes include The presence of these chemicals in crude oil is a result of the inclusion of biological material in the oil; these chemicals are released by kerogen during the production of hydrocarbon oils, as these are chemicals highly resistant to degradation and plausible chemical paths have been studied. Abiotic defenders state that biomarkers get into oil during its way up as it gets in touch with ancient fossils. However a more plausible explanation is that biomarkers are traces of biological molecules from bacteria (archaea) that feed on primordial hydrocarbons and die in that environment. For example, hopanoids are just parts of the bacterial cell wall present in oil as a contaminant. Trace metals Nickel (Ni), vanadium (V), lead (Pb), arsenic (As), cadmium (Cd), mercury (Hg) and others metals frequently occur in oils. 
Some heavy crude oils, such as Venezuelan heavy crude have up to 45% vanadium pentoxide content in their ash, high enough that it is a commercial source for vanadium. Abiotic supporters argue that these metals are common in Earth's mantle, but relatively high contents of nickel, vanadium, lead and arsenic can be usually found in almost all marine sediments. Analysis of 22 trace elements in oils correlate significantly better with chondrite, serpentinized fertile mantle peridotite, and the primitive mantle than with oceanic or continental crust, and shows no correlation with seawater. Reduced carbon Sir Robert Robinson studied the chemical makeup of natural petroleum oils in great detail, and concluded that they were mostly far too hydrogen-rich to be a likely product of the decay of plant debris, assuming a dual origin for Earth hydrocarbons. However, several processes which generate hydrogen could supply kerogen hydrogenation which is compatible with the conventional explanation. Olefins, the unsaturated hydrocarbons, would have been expected to predominate by far in any material that was derived in that way. He also wrote: "Petroleum ... [seems to be] a primordial hydrocarbon mixture into which bio-products have been added." This hypothesis was later demonstrated to have been a misunderstanding by Robinson, related to the fact that only short duration experiments were available to him. Olefins are thermally very unstable (which is why natural petroleum normally does not contain such compounds) and in laboratory experiments that last more than a few hours, the olefins are no longer present. The presence of low-oxygen and hydroxyl-poor hydrocarbons in natural living media is supported by the presence of natural waxes (n=30+), oils (n=20+) and lipids in both plant matter and animal matter, for instance fats in phytoplankton, zooplankton and so on. These oils and waxes, however, occur in quantities too small to significantly affect the overall hydrogen/carbon ratio of biological materials. However, after the discovery of highly aliphatic biopolymers in algae, and that oil generating kerogen essentially represents concentrates of such materials, no theoretical problem exists anymore. Also, the millions of source rock samples that have been analyzed for petroleum yield by the petroleum industry have confirmed the large quantities of petroleum found in sedimentary basins. Empirical evidence Occurrences of abiotic petroleum in commercial amounts in the oil wells in offshore Vietnam are sometimes cited, as well as in the Eugene Island block 330 oil field, and the Dnieper-Donets Basin. However, the origins of all these wells can also be explained with the biotic theory. Modern geologists think that commercially profitable deposits of abiotic petroleum could be found, but no current deposit has convincing evidence that it originated from abiotic sources. The Soviet school of thought saw evidence of their hypothesis in the fact that some oil reservoirs exist in non-sedimentary rocks such as granite, metamorphic or porous volcanic rocks. However, opponents noted that non-sedimentary rocks served as reservoirs for biologically originated oil expelled from nearby sedimentary source rock through common migration or re-migration mechanisms. 
The following observations have been commonly used to argue for the abiogenic hypothesis, however each observation of actual petroleum can also be fully explained by biotic origin: Lost City hydrothermal vent field The Lost City hydrothermal field was determined to have abiogenic hydrocarbon production. Proskurowski et al. wrote, "Radiocarbon evidence rules out seawater bicarbonate as the carbon source for FTT reactions, suggesting that a mantle-derived inorganic carbon source is leached from the host rocks. Our findings illustrate that the abiotic synthesis of hydrocarbons in nature may occur in the presence of ultramafic rocks, water, and moderate amounts of heat." Siljan Ring crater The Siljan Ring meteorite crater, Sweden, was proposed by Thomas Gold as the most likely place to test the hypothesis because it was one of the few places in the world where the granite basement was cracked sufficiently (by meteorite impact) to allow oil to seep up from the mantle; furthermore it is infilled with a relatively thin veneer of sediment, which was sufficient to trap any abiogenic oil, but was modelled as not having been subjected to the heat and pressure conditions (known as the "oil window") normally required to create biogenic oil. However, some geochemists concluded by geochemical analysis that the oil in the seeps came from the organic-rich Ordovician Tretaspis shale, where it was heated by the meteorite impact. In 1986–1990 The Gravberg-1 borehole was drilled through the deepest rock in the Siljan Ring in which proponents had hoped to find hydrocarbon reservoirs. It stopped at the depth of due to drilling problems, after private investors spent $40 million. Some eighty barrels of magnetite paste and hydrocarbon-bearing sludge were recovered from the well; Gold maintained that the hydrocarbons were chemically different from, and not derived from, those added to the borehole, but analyses showed that the hydrocarbons were derived from the diesel fuel-based drilling fluid used in the drilling. This well also sampled over of methane-bearing inclusions. In 1991–1992, a second borehole, Stenberg-1, was drilled a few miles away to a depth of , finding similar results. Bacterial mats Direct observation of bacterial mats and fracture-fill carbonate and humin of bacterial origin in deep boreholes in Australia are also taken as evidence for the abiogenic origin of petroleum. Examples of proposed abiogenic methane deposits Panhandle-Hugoton field (Anadarko Basin) in the south-central United States is the most important gas field with commercial helium content. Some abiogenic proponents interpret this as evidence that both the helium and the natural gas came from the mantle. The Bạch Hổ oil field in Vietnam has been proposed as an example of abiogenic oil because it is 4,000 m of fractured basement granite, at a depth of 5,000 m. However, others argue that it contains biogenic oil which leaked into the basement horst from conventional source rocks within the Cuu Long basin. A major component of mantle-derived carbon is indicated in commercial gas reservoirs in the Pannonian and Vienna basins of Hungary and Austria. Natural gas pools interpreted as being mantle-derived are the Shengli Field and Songliao Basin, northeastern China. The Chimaera gas seep, near Çıralı, Antalya (southwest Turkey), has been continuously active for millennia and it is known to be the source of the first Olympic fire in the Hellenistic period. 
On the basis of chemical composition and isotopic analysis, the Chimaera gas is said to be about half biogenic and half abiogenic gas, the largest emission of biogenic methane discovered; deep and pressurized gas accumulations necessary to sustain the gas flow for millennia, posited to be from an inorganic source, may be present. The local geology of the Chimaera flames, at the exact position of the flames, reveals contact between serpentinized ophiolite and carbonate rocks. The Fischer–Tropsch process could be a suitable reaction to form hydrocarbon gases there.

Geological arguments

Incidental arguments for abiogenic oil
Given the known occurrence of methane and the probable catalysis of methane into higher-atomic-weight hydrocarbon molecules, various abiogenic theories consider the following to be key observations in support of abiogenic hypotheses:
the serpentinite synthesis, graphite synthesis and spinel catalysation models prove the process is viable
the likelihood that abiogenic oil seeping up from the mantle is trapped beneath sediments which effectively seal mantle-tapping faults
outdated mass-balance calculations for supergiant oilfields which argued that the calculated source rock could not have supplied the reservoir with the known accumulation of oil, implying deep recharge
the presence of hydrocarbons encapsulated in diamonds
The proponents of abiogenic oil also use several arguments which draw on a variety of natural phenomena in order to support the hypothesis:
the modeling of some researchers shows the Earth was accreted at relatively low temperature, thereby perhaps preserving primordial carbon deposits within the mantle, to drive abiogenic hydrocarbon production
the presence of methane within the gases and fluids of mid-ocean ridge spreading-centre hydrothermal fields
the presence of diamond within kimberlites and lamproites which sample the mantle depths proposed as being the source region of mantle methane (by Gold et al.)

Incidental arguments against abiogenic oil
Arguments against chemical reactions, such as the serpentinite mechanism, being a source of hydrocarbon deposits within the crust include:
the lack of available pore space within rocks as depth increases; this is contradicted by numerous studies which have documented the existence of hydrologic systems operating over a range of scales and at all depths in the continental crust
the lack of any hydrocarbon within the crystalline shield areas of the major cratons, especially around key deep-seated structures which are predicted to host oil by the abiogenic hypothesis (see Siljan Lake)
lack of conclusive proof that carbon isotope fractionation observed in crustal methane sources is entirely of abiogenic origin (Lollar et al. 2006)
drilling of the Siljan Ring failed to find commercial quantities of oil, thus providing a counterexample to Kudryavtsev's Rule and failing to locate the predicted abiogenic oil
helium in the Siljan Gravberg-1 well was depleted in 3He and not consistent with a mantle origin
the Gravberg-1 well only produced of oil, which was later shown to derive from organic additives, lubricants and mud used in the drilling process
Kudryavtsev's Rule has been explained for oil and gas (not coal): gas deposits which are below oil deposits can be created from that oil or its source rocks. Because natural gas is less dense than oil, as kerogen and hydrocarbons are generating gas, the gas fills the top of the available space.
Oil is forced down, and can reach the spill point where oil leaks around the edge(s) of the formation and flows upward. If the original formation becomes completely filled with gas then all the oil will have leaked above the original location. ubiquitous diamondoids in natural hydrocarbons such as oil, gas and condensates are composed of carbon from biological sources, unlike the carbon found in normal diamonds. Field test evidence What unites both theories of oil origin is the low success rate in predicting the locations of giant oil/gas fields: according to the statistics discovering a giant demands drilling 500+ exploration wells. A team of American-Russian scientists (mathematicians, geologists, geophysicists, and computer scientists) developed an Artificial Intelligence software and the appropriate technology for geological applications, and used it for predicting places of giant oil/gas deposits. In 1986 the team published a prognostic map for discovering giant oil and gas fields at the Andes in South America based on abiogenic petroleum origin theory. The model proposed by Prof. Yury Pikovsky (Moscow State University) assumes that petroleum moves from the mantle to the surface through permeable channels created at the intersection of deep faults. The technology uses 1) maps of morphostructural zoning, which outlines the morphostructural nodes (intersections of faults), and 2) pattern recognition program that identify nodes containing giant oil/gas fields. It was forecast that eleven nodes, which had not been developed at that time, contain giant oil or gas fields. These 11 sites covered only 8% of the total area of all the Andes basins. 30 years later (in 2018) was published the result of comparing the prognosis and the reality. Since publication of the prognostic map in 1986 six giant oil/gas fields were discovered in the Andes region: Caño Limón oilfield, Cusiana, Capiagua, Colombia, and Volcanera (Llanos basin, Colombia), Camisea (Ukayali basin, Peru), and Incahuasi (Chaco basin, Bolivia). All discoveries were made in places shown on the 1986 prognostic map as promising areas. During the 1960s, Donald Hings was issued numerous patents for developing practical methods for locating likely locations of the deep morphological nodes most likely to indicate the presence of abiogenic hydrocarbons. His methods and technologies are used to this day by geophysicists to locate deep hydrocarbon deposits. Extraterrestrial argument The presence of methane on Saturn's moon Titan and in the atmospheres of Jupiter, Saturn, Uranus and Neptune is cited as evidence of the formation of hydrocarbons without biological intermediate forms, for example by Thomas Gold. (Terrestrial natural gas is composed primarily of methane). Some comets contain massive amounts of organic compounds, the equivalent of cubic kilometers of such mixed with other material; for instance, corresponding hydrocarbons were detected during a probe flyby through the tail of Comet Halley in 1986. Drill samples from the surface of Mars taken in 2015 by the Curiosity rover's Mars Science Laboratory have found organic molecules of benzene and propane in 3 billion year old rock samples in Gale Crater. See also Eugene Island block 330 oil field Fischer–Tropsch process Fossil fuel Nikolai Alexandrovitch Kudryavtsev Peak oil Thomas Gold References Bibliography Kudryavtsev N.A., 1959. Geological proof of the deep origin of Petroleum. Trudy Vsesoyuz. Neftyan. Nauch. Issledovatel Geologoraz Vedoch. Inst. No.132, pp. 
242–262

External links
Deep Carbon Observatory
"Geochemist Says Oil Fields May Be Refilled Naturally", New York Times article by Malcolm W. Browne, September 26, 1995
"No Free Lunch, Part 1: A Critique of Thomas Gold's Claims for Abiotic Oil", by Jean Laherrere, in From The Wilderness
"No Free Lunch, Part 2: If Abiotic Oil Exists, Where Is It?", by Dale Allen Pfeiffer, in From The Wilderness
The Origin of Methane (and Oil) in the Crust of the Earth, Thomas Gold abstracts from AAPG Origin of Petroleum Conference 06/18/05 Calgary Alberta, Canada
Gas Origin Theories to be Studied, Abiogenic Gas Debate 11:2002 (AAPG Explorer)
Gas Resources Corporation - J. F. Kenney's collection of documents

Peak oil Extremophiles Biological hypotheses Petroleum geology Hypothetical processes Hypotheses
Abiogenic petroleum origin
[ "Chemistry", "Biology", "Environmental_science" ]
6,665
[ "Organisms by adaptation", "Petroleum", "Extremophiles", "Environmental microbiology", "Bacteria", "Biological hypotheses", "Petroleum geology" ]
646,745
https://en.wikipedia.org/wiki/Momo%20%28novel%29
Momo, also known as The Grey Gentlemen or The Men in Grey, is a fantasy novel by Michael Ende, published in 1973. It is about the concept of time and how it is used by humans in modern societies. The full title in German (Momo oder Die seltsame Geschichte von den Zeit-Dieben und von dem Kind, das den Menschen die gestohlene Zeit zurückbrachte) translates to Momo, or the strange story of the time-thieves and the child who brought the stolen time back to the people. The book won the Deutscher Jugendliteraturpreis in 1974. Plot In the ruins of an amphitheatre just outside an unnamed city lives Momo, a little girl of mysterious origin. She came to the ruin, parentless and wearing a long, used coat. She is illiterate and cannot count, and she doesn't know how old she is. When asked, she replies, "As far as I remember, I've always been around." She is remarkable in the neighbourhood because she has the extraordinary ability to listen—really listen. By simply being with people and listening to them, she can help them find answers to their problems, make up with each other, and think of fun games. The advice given to people "go and see Momo!" has become a household phrase and Momo makes many friends, especially an honest, silent street-cleaner, Beppo, and a poetic, extroverted tour guide, Gigi (Guido in some translations). This pleasant atmosphere is spoiled by the arrival of the Men in Grey, eventually revealed as a species of paranormal parasites stealing the time of humans. Appearing in the form of grey-clad, grey-skinned, bald men, these strange individuals present themselves as representing the Timesavings Bank and promote the idea of "timesaving" among the population: supposedly, time can be deposited in the Bank and returned to the client later with interest. After encountering the Men in Grey, people are made to forget all about them, but not about the resolution to save as much time as possible for later use. Gradually, the sinister influence of the Men in Grey affects the whole city: life becomes sterile, devoid of all things considered time-wasting, like social activities, recreation, art, imagination, or sleeping. Buildings and clothing are made exactly the same for everyone, and the rhythms of life become hectic. In reality, the more time people save, the less they have; the time they save is actually lost to them. Instead, it is consumed by the Men in Grey in the form of cigars made from the dried petals of the hour-lilies that represent time. Without these cigars, the Men in Grey cannot exist. Momo, however, is a wrench in the plans of the Men in Grey and the Timesavings Bank, thanks to her special personality. The Men in Grey try various plans to deal with her, to derail her from stopping their scheme, but they all fail. When even her closest friends fall under the influence of the Men in Grey in one way or another, Momo's only hope to save the time of mankind are the administrator of Time, Master Secundus Minutus Hora, and Cassiopeia, a tortoise who can communicate through writing on her shell and can see thirty minutes into the future. Momo's adventure takes her from the depths of her heart, which her own time flows from in the form of hour-lilies, to the lair of the Men in Grey themselves, where the time that people believe they are saving is hoarded. After Master Hora stops time, but gives Momo a single hour-lily to carry with her, she has exactly one hour to defeat the Men in Grey in a frozen world where only they and she are still moving. 
She surreptitiously follows them to their underground lair and observes as they decimate their own number in order to stretch their supply of time as far as possible. With the advice of Cassiopeia and by using the hour-lily, Momo is able to shut the door to the vault where the stolen lilies are kept. Now facing extinction as soon as their cigars are consumed, the few remaining Men in Grey pursue Momo, perishing one by one. The last Man in Grey finally begs her to give him the hour-lily so that he can open the vault. When she refuses, he too vanishes, remarking that "it is good it is over". Using the last minute she has before her hour-lily crumbles, Momo opens the vault again, releasing the millions of hour-lilies stored within. The stolen time returns to its proper owners and goes back to their hearts, causing time to start again (without people knowing it had ever halted). Momo is reunited with her friends, and elsewhere Master Hora rejoices together with Cassiopeia.

Major themes
The main theme of Momo can be seen as a criticism of consumerism and stress. It describes the personal and social losses produced by unnecessary consumption, and the danger of being driven by a hidden interest group with enough power to induce people into this lifestyle. Michael Ende has also stated that he had the concept of currency demurrage in mind when writing Momo.
Childhood is also an important subject in many of Ende's books. In Momo it is used to offer contrast with the adult society. As children have "all the time in the world", they are a difficult target for the Men in Grey: children can't be convinced that their games are time-wasting. The author uses a mockery of Barbie dolls and other expensive toys as symbols to show how anyone can be persuaded, even indirectly, into consumerism.
Robert N. Peck wrote that Momo has five principal elements: taking time, listening, imagining, persons and music.

Literary significance
An article by philosopher David Loy and literature professor Linda Goodhew called Momo "one of the most remarkable novels of the late twentieth century". They further state that: "One of the most amazing things about Momo is that it was published in 1973. Since then, the temporal nightmare it depicts has become our reality." Ende himself has said that "Momo is a tribute of gratitude to Italy and also a declaration of love," indicating that the author idealized the Italian way of life. Loy and Goodhew suggested that Ende's perspective on time coincided with his interest in Buddhism and that, for example, the deliberately slow character of Beppo might be regarded as a Zen master, even though Ende wrote the book long before his visits to Japan. When the book was published in the U.S. in 1985, Natalie Babbit from the Washington Post commented: "Is it a children's book? Not here in America." Momo was republished by Puffin Press on January 19, 2009.
The then Norwegian Prime Minister Thorbjørn Jagland, in his New Year Address to the nation on January 1, 1997, referenced Ende's book and its plot: "People are persuaded to save time by eliminating everything not useful. One of the people so influenced cuts out his girlfriend, sells his pet, stops singing, reading and visiting friends. In this way he will supposedly become an efficient man getting something out of life. What is strange is that he is in a greater hurry than ever. The saved-up time disappears - and he never sees it again."
Prime Minister Jagland went on to say that to many people, time has become the scarcest resource of all, contrary to their attempt at saving as much of it as possible. Adaptations Momo was made into a film of Italian/German production in 1986, in which Michael Ende himself played a small role as the narrator who encounters Professor Hora (performed by John Huston) at the beginning of the film (and at the end of the book). The role of Momo was performed by German actress and model Radost Bokel. Momo (2001) is an Italian animated film based on the novel. It is directed by Enzo D'Alò and features a soundtrack by Gianna Nannini, an Italian popular singer. In 2003, a 26-episode animated television series based directly on this adaptation was released, serving as an extended re-telling of the film. The book has also been acted in radio programmes. A German dramatized audiobook under the title Momo (Karussell/Universal Music Group 1984, directed by Anke Beckert, narrated by Harald Leipnitz, music by Frank Duval, 3 parts on LP and MC, 2 parts on CD) There have been a number of stage adaptations, including an opera written by Ende himself and an English-language version by Andy Thackeray. Legend of Raana (2014), animated miniseries directed by Majid Ahmady In 2015, the Royal Danish Opera commissioned composer Svitlana Azarova to write Momo and the Time Thieves. Its world premiere occurred at the Copenhagen Opera House in October, 2017. In January 2023, it was announced that Christian Ditter would direct an English-language film adaptation of the novel with Christian Becker producing. Translations The book was originally published in 1973 in Germany as Momo. Momo has been translated into various languages including Arabic, Asturian, Bulgarian, Croatian, Catalan, Chinese (Simplified), Chinese (Traditional), Czech, Danish, Dutch, English, Estonian, Finnish, French, Galician, Greek, Hebrew, Hungarian, Icelandic, Indonesian, Italian, Japanese, Korean, Latvian, Lithuanian, Mongolian, Norwegian, Persian, Polish, Portuguese, Romanian, Russian, Serbian, Slovenian, Spanish, Swedish, Turkish, Thai, Ukrainian, Vietnamese and Sinhalese. The original English translation The Grey Gentlemen by Frances Lobb was published in 1974. A new English translation, Momo, was published in 1984. A newly translated, newly illustrated U.S. edition was released by McSweeney's in August 2013, in celebration of the book's fortieth anniversary. The McSweeney edition was scheduled for a new release in January 2017. However, some illustrations (created by Marcel Dzama) in the McSweeney release, such as Momo's appearance, do not conform to the text in the book. The Spanish translation Momo, o la extraña historia de los ladrones del tiempo y la niña que devolvió el tiempo a los hombres was made by Susana Constante in 1978 for Ediciones Alfaguara: it was a great success in Spain and Latin America, having dozens of reprints since. The Persian translation was published several times (first time in 1988) by Zarrin Publishers in Tehran. At the time of publication, it enjoyed great popularity in Iran, but due to the absence of any new printings since 1992, it is now inaccessible to the Iranian children. This, along with a stop in publishing other children's books by German and other European writers, is part of an ongoing trend in publishing American and English children's fiction in Iran. 
In popular culture An episode of the anime adaptation of Sailor Moon features a plot similar to that of the Men in Grey, in which the villain Jadeite steals the time of the people of Tokyo. The story of Momo plays a role in the Korean TV series My Lovely Sam Soon, where the main character's niece chooses not to speak due to the post-traumatic stress of having lost both her parents in a car accident. The lead character buys the book and reads it (by himself and also to his niece) to try to understand his love interest better. The novel is referred to in each episode of the Japanese TV series A Girl of 35, starring Ko Shibasaki and Kentaro Sakaguchi (2020). The lead character wakes up after 25 years in a coma and feels her time has been stolen. She uses the Momo characters to learn how to grow up and come to terms with the tragic consequences her accident had for her whole family. The Korean girl group Momoland was named after this novel. One of the short stories within the Monogatari Series (Yotsugi Future) focuses on one character's perception of the novel; she compares its concept of time to herself and to adults who waste valuable time. K-pop singer and leader of the group Exo, Suho, was inspired by the novel in creating the concept and title for his sophomore album Grey Suit, as well as its title track. References External links Quotations from the book 1973 German novels Novels by Michael Ende 1973 fantasy novels German fantasy novels German children's novels Novels about orphans Fiction about time German novels adapted into films 1973 children's books German novels adapted into operas German-language children's books
Momo (novel)
[ "Physics" ]
2,656
[ "Spacetime", "Fiction about time", "Physical quantities", "Time" ]
646,974
https://en.wikipedia.org/wiki/Wilkinson%27s%20polynomial
In numerical analysis, Wilkinson's polynomial is a specific polynomial which was used by James H. Wilkinson in 1963 to illustrate a difficulty when finding the roots of a polynomial: the location of the roots can be very sensitive to perturbations in the coefficients of the polynomial. The polynomial is w(x) = (x − 1)(x − 2)⋯(x − 20), the monic polynomial of degree 20 whose roots are the integers 1 through 20. Sometimes, the term Wilkinson's polynomial is also used to refer to some other polynomials appearing in Wilkinson's discussion. Background Wilkinson's polynomial arose in the study of algorithms for finding the roots of a polynomial p(x). It is a natural question in numerical analysis to ask whether the problem of finding the roots of p from the coefficients is well-conditioned. That is, we hope that a small change in the coefficients will lead to a small change in the roots. Unfortunately, this is not the case here. The problem is ill-conditioned when the polynomial has a multiple root. For instance, the polynomial x^2 has a double root at x = 0. However, the polynomial x^2 − ε (a perturbation of size ε) has roots at ±√ε, which is much bigger than ε when ε is small. It is therefore natural to expect that ill-conditioning also occurs when the polynomial has zeros which are very close. However, the problem may also be extremely ill-conditioned for polynomials with well-separated zeros. Wilkinson used the polynomial w(x) to illustrate this point (Wilkinson 1963). In 1984, he described the personal impact of this discovery: Speaking for myself I regard it as the most traumatic experience in my career as a numerical analyst. Wilkinson's polynomial is often used to illustrate the undesirability of naively computing eigenvalues of a matrix by first calculating the coefficients of the matrix's characteristic polynomial and then finding its roots, since using the coefficients as an intermediate step may introduce an extreme ill-conditioning even if the original problem was well conditioned. Conditioning of Wilkinson's polynomial Wilkinson's polynomial clearly has 20 roots, located at x = 1, 2, ..., 20. These roots are far apart. However, the polynomial is still very ill-conditioned. Expanding the polynomial, one finds that the coefficient of x^19 is −210 (minus the sum of the roots) and that the constant term is 20! ≈ 2.4 × 10^18, with other low-order coefficients of comparably enormous size. If the coefficient of x^19 is decreased from −210 by 2^−23 to −210.0000001192, then the polynomial value w(20) decreases from 0 to −2^−23 · 20^19 = −6.25 × 10^17, and the root at x = 20 grows to x ≈ 20.8. The roots at x = 18 and x = 19 collide into a double root, which turns into a pair of complex conjugate roots near x ≈ 19.5 ± 1.9i as the perturbation increases further. Tabulated to 5 decimals, the 20 perturbed roots show that some of the roots are greatly displaced, even though the change to the coefficient is tiny and the original roots seem widely spaced. Wilkinson showed by the stability analysis discussed in the next section that this behavior is related to the fact that some roots α (such as α = 15) have many roots β that are "close" in the sense that |α − β| is smaller than |α|. Wilkinson chose the perturbation of 2^−23 because his Pilot ACE computer had 30-bit floating point significands, so for numbers around 210, 2^−23 was an error in the first bit position not represented in the computer. The two real numbers, −210 and −210 − 2^−23, are represented by the same floating point number, which means that 2^−23 is the unavoidable error in representing a real coefficient close to −210 by a floating point number on that computer. The perturbation analysis shows that 30-bit coefficient precision is insufficient for separating the roots of Wilkinson's polynomial. Stability analysis Suppose that we perturb a polynomial p(x) with roots αj by adding a small multiple t·c(x) of a polynomial c(x), and ask how this affects the roots αj(t).
To first order, the change in the roots will be controlled by the derivative dαj/dt = −c(αj)/p′(αj) = −c(αj) / ∏_{k≠j}(αj − αk). When the derivative is small, the roots will be more stable under variations of t, and conversely if this derivative is large the roots will be unstable. In particular, if αj is a multiple root, then the denominator vanishes. In this case, αj is usually not differentiable with respect to t (unless c happens to vanish there), and the roots will be extremely unstable. For small values of t the perturbed root is given by a power series expansion in t, and one expects problems when |t| is larger than the radius of convergence of this power series, which is given by the smallest value of |t| such that the root becomes multiple. A very crude estimate for this radius takes half the distance from αj to the nearest root, and divides by the derivative above. In the example of Wilkinson's polynomial of degree 20, the roots are given by αj = j for j = 1, ..., 20, and c(x) is equal to x^19. So the derivative is given by dαj/dt = −αj^19 / ∏_{k≠j}(αj − αk). This shows that the root αj will be less stable if there are many roots αk close to it, in the sense that the distance |αj − αk| between them is smaller than |αj|. Example. For the root α1 = 1, the derivative is equal to 1/19! which is very small; this root is stable even for large changes in t. This is because all the other roots β are a long way from it, in the sense that |α1 − β| = 1, 2, 3, ..., 19 is larger than |α1| = 1. For example, even if t is as large as –10000000000, the root α1 only changes from 1 to about 0.99999991779380 (which is very close to the first order approximation 1 + t/19! ≈ 0.99999991779365). Similarly, the other small roots of Wilkinson's polynomial are insensitive to changes in t. Example. On the other hand, for the root α20 = 20, the derivative is equal to −20^19/19! which is huge (about 43000000), so this root is very sensitive to small changes in t. The other roots β are close to α20, in the sense that |β − α20| = 1, 2, 3, ..., 19 is less than |α20| = 20. For t = −2^−23 the first-order approximation 20 − t·20^19/19! = 25.137... to the perturbed root 20.84... is terrible; this is even more obvious for the root α19 where the perturbed root has a large imaginary part but the first-order approximation (and for that matter all higher-order approximations) are real. The reason for this discrepancy is that |t| ≈ 0.000000119 is greater than the radius of convergence of the power series mentioned above (which is about 0.0000000029, somewhat smaller than the value 0.00000001 given by the crude estimate) so the linearized theory does not apply. For a value such as t = 0.000000001 that is significantly smaller than this radius of convergence, the first-order approximation 19.9569... is reasonably close to the root 19.9509... At first sight the roots α1 = 1 and α20 = 20 of Wilkinson's polynomial appear to be similar, as they are on opposite ends of a symmetric line of roots, and have the same set of distances 1, 2, 3, ..., 19 from other roots. However the analysis above shows that this is grossly misleading: the root α20 = 20 is less stable than α1 = 1 (to small perturbations in the coefficient of x^19) by a factor of 20^19 = 5242880000000000000000000. Wilkinson's second example The second example considered by Wilkinson is w2(x) = (x − 2^−1)(x − 2^−2)⋯(x − 2^−20). The twenty zeros of this polynomial are in a geometric progression with common ratio 2, and hence the quotient |αj| / |αj − αk| cannot be large. Indeed, the zeros of w2 are quite stable to large relative changes in the coefficients.
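The ill-conditioning described above is easy to reproduce numerically. The following sketch is an illustration in double-precision NumPy, not part of Wilkinson's original analysis (note that merely forming the huge monomial coefficients in floating point already perturbs them slightly); it decreases the coefficient of x^19 by 2^−23, recomputes the roots, and evaluates the first-order sensitivity dαj/dt = −αj^19 / ∏_{k≠j}(αj − αk) for a few roots.

import numpy as np

true_roots = np.arange(1, 21)
coeffs = np.poly(true_roots)            # monomial coefficients of w(x), highest power first
perturbed = coeffs.copy()
perturbed[1] -= 2.0 ** -23              # coeffs[1] is the coefficient of x^19 (originally -210)

print("largest perturbed roots:")
print(np.sort_complex(np.roots(perturbed))[-6:])   # several roots acquire large imaginary parts

# First-order sensitivity of each root to a change t in the coefficient of x^19:
for j in (1, 15, 20):
    deriv = -float(j) ** 19 / np.prod([float(j - k) for k in range(1, 21) if k != j])
    print(f"root {j:2d}: d(alpha)/dt = {deriv:.3e}")

Run as-is, the sensitivity factor comes out near 8 × 10^−18 for the root at 1 and near −4.3 × 10^7 for the root at 20, matching the values 1/19! and −20^19/19! quoted above.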
The effect of the basis The expansion expresses the polynomial in a particular basis, namely that of the monomials. If the polynomial is expressed in another basis, then the problem of finding its roots may cease to be ill-conditioned. For example, in a Lagrange form, a small change in one (or several) coefficients need not change the roots too much. Indeed, the basis polynomials for interpolation at the points 0, 1, 2, ..., 20 are Every polynomial (of degree 20 or less) can be expressed in this basis: For Wilkinson's polynomial, we find Given the definition of the Lagrange basis polynomial , a change in the coefficient will produce no change in the roots of . However, a perturbation in the other coefficients (all equal to zero) will slightly change the roots. Therefore, Wilkinson's polynomial is well-conditioned in this basis. Notes References Wilkinson discussed "his" polynomial in J. H. Wilkinson (1959). The evaluation of the zeros of ill-conditioned polynomials. Part I. Numerische Mathematik 1:150–166. J. H. Wilkinson (1963). Rounding Errors in Algebraic Processes. Englewood Cliffs, New Jersey: Prentice Hall. It is mentioned in standard text books in numerical analysis, like F. S. Acton, Numerical methods that work, , p. 201. Other references: Ronald G. Mosier (July 1986). Root neighborhoods of a polynomial. Mathematics of Computation 47(175):265–273. J. H. Wilkinson (1984). The perfidious polynomial. Studies in Numerical Analysis, ed. by G. H. Golub, pp. 1–28. (Studies in Mathematics, vol. 24). Washington, D.C.: Mathematical Association of America. A high-precision numerical computation is presented in: Ray Buvel, Polynomials And Rational Functions, part of the RPN Calculator User Manual (for Python), retrieved on 29 July 2006. Numerical analysis Polynomials
Wilkinson's polynomial
[ "Mathematics" ]
2,047
[ "Polynomials", "Computational mathematics", "Mathematical relations", "Numerical analysis", "Approximations", "Algebra" ]
647,269
https://en.wikipedia.org/wiki/Variable%20number%20tandem%20repeat
A variable number tandem repeat (or VNTR) is a location in a genome where a short nucleotide sequence is organized as a tandem repeat. These can be found on many chromosomes, and often show variations in length (number of repeats) among individuals. Each variant acts as an inherited allele, allowing them to be used for personal or parental identification. Their analysis is useful in genetics and biology research, forensics, and DNA fingerprinting. Structure and allelic variation In the schematic above, the rectangular blocks represent each of the repeated DNA sequences at a particular VNTR location. The repeats are in tandem – i.e. they are clustered together and oriented in the same direction. Individual repeats can be removed from (or added to) the VNTR via recombination or replication errors, leading to alleles with different numbers of repeats. Flanking regions are segments of repetitive sequence (shown here as thin lines), allowing the VNTR blocks to be extracted with restriction enzymes and analyzed by RFLP, or amplified by the polymerase chain reaction (PCR) technique and their size determined by gel electrophoresis. Use in genetic analysis VNTRs were an important source of RFLP genetic markers used in linkage analysis (mapping) of diploid genomes. Now that many genomes have been sequenced, VNTRs have become essential to forensic crime investigations, via DNA fingerprinting and the CODIS database. When removed from surrounding DNA by the PCR or RFLP methods, and their size determined by gel electrophoresis or Southern blotting, they produce a pattern of bands unique to each individual. When tested with a group of independent VNTR markers, the likelihood of two unrelated individuals' having the same allelic pattern is extremely low. VNTR analysis is also being used to study genetic diversity and breeding patterns in populations of wild or domesticated animals. As such, VNTRs can be used to distinguish strains of bacterial pathogens. In this microbial forensics context, such assays are usually called Multiple Loci VNTR Analysis or MLVA. Inheritance In analyzing VNTR data, two basic genetic principles can be used: Identity Matching – both VNTR alleles from a specific location must match. If two samples are from the same individual, they must show the same allele pattern. Inheritance Matching – the VNTR alleles must follow the rules of inheritance. In matching an individual with his parents or children, a person must have an allele that matches one from each parent. If the relationship is more distant, such as a grandparent or sibling, then matches must be consistent with the degree of relatedness. Relationship to other types of repetitive DNA Repetitive DNA, representing over 40% of the human genome, is arranged in a bewildering array of patterns. Repeats were first identified by the extraction of Satellite DNA, which does not reveal how they are organized. The use of restriction enzymes showed that some repeat blocks were interspersed throughout the genome. DNA sequencing later showed that other repeats are clustered at specific locations, with tandem repeats being more common than inverted repeats (which may interfere with DNA replication). VNTRs are the class of clustered tandem repeats that exhibit allelic variation in their lengths. Classes VNTRs are a type of minisatellite in which the size of the repeat sequence is generally ten to one hundred base pairs. 
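As a toy illustration of the repeat counting and the identity- and inheritance-matching rules described above, the sketch below uses made-up sequences and allele values (the function names and data are hypothetical, not part of any real genotyping pipeline); alleles are represented simply by their repeat counts.

def count_tandem_repeats(sequence, unit):
    # Count consecutive copies of `unit` starting at its first occurrence in `sequence`.
    start = sequence.find(unit)
    if start == -1:
        return 0
    n = 0
    while sequence.startswith(unit, start + n * len(unit)):
        n += 1
    return n

def identity_match(genotype_a, genotype_b):
    # Identity matching: both alleles at the locus must agree (order does not matter).
    return sorted(genotype_a) == sorted(genotype_b)

def inheritance_match(child, mother, father):
    # Inheritance matching: one allele must be attributable to each parent.
    a, b = child
    return (a in mother and b in father) or (b in mother and a in father)

unit = "AGGCTTAGGC"                                   # hypothetical 10-bp repeat unit
locus = "GGTA" + unit * 7 + "CCAT"                    # hypothetical locus carrying 7 tandem copies
print(count_tandem_repeats(locus, unit))              # -> 7
print(identity_match((7, 11), (11, 7)))               # -> True (consistent with the same individual)
print(inheritance_match(child=(7, 11), mother=(7, 9), father=(11, 12)))  # -> True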
Minisatellites are a type of DNA tandem repeat sequence, meaning that the sequences repeat one after another without other sequences or nucleotides in between them. Minisatellites are characterized by a repeat sequence of about ten to one hundred nucleotides, and the number of times the sequence repeats varies from about five to fifty times. The sequences of minisatellites are larger than those of microsatellites, in which the repeat sequence is generally 1 to 6 nucleotides. Both types of repeat sequences are tandem; they are distinguished by the length of the repeat unit. VNTRs, therefore, because they have repeat sequences of ten to one hundred nucleotides in which every repeat is exactly the same, are considered minisatellites. However, while all VNTRs are minisatellites, not all minisatellites are VNTRs. VNTRs can vary in number of repeats from individual to individual, whereas some non-VNTR minisatellites have repeat sequences that repeat the same number of times in all individuals containing the tandem repeats in their genomes. See also AFLP MLVA Short tandem repeat Tandem repeat BioNumerics References External links Examples : VNTRs – info and animated example Databases : The Microorganisms Tandem Repeats Database The MLVAbank Short Tandem Repeats Database Tandem Repeats Database (TRDB) Search tools : TAPO: A combined method for the identification of tandem repeats in protein structures Tandem Repeats Finder Mreps STAR TRED TandemSWAN Microsatellite repeats finder JSTRING – Java Search for Tandem Repeats in genomes Phobos – a tandem repeat search tool for perfect and imperfect repeats – the maximum pattern size depends only on computational power Repetitive DNA sequences
Variable number tandem repeat
[ "Biology" ]
1,060
[ "Molecular genetics", "Repetitive DNA sequences" ]
647,287
https://en.wikipedia.org/wiki/Horseshoe%20Curve%20%28Pennsylvania%29
The Horseshoe Curve is a three-track railroad curve on Norfolk Southern Railway's Pittsburgh Line in Blair County, Pennsylvania. The curve is roughly long and in diameter. Completed in 1854 by the Pennsylvania Railroad as a way to reduce the westbound grade to the summit of the Allegheny Mountains, it replaced the time-consuming Allegheny Portage Railroad, which was the only other route across the mountains for large vehicles. The curve was later owned and used by three Pennsylvania Railroad successors: Penn Central, Conrail, and Norfolk Southern. Horseshoe Curve has long been a tourist attraction. A trackside observation park was completed in 1879. The park was renovated and a visitor center built in the early 1990s. The Railroaders Memorial Museum in Altoona manages the center, which has exhibits pertaining to the curve. The Horseshoe Curve was added to the National Register of Historic Places and designated as a National Historic Landmark in 1966. It became a National Historic Civil Engineering Landmark in 2004. Location and design Horseshoe Curve is west of Altoona, Pennsylvania, in Logan Township, Blair County. It sits at railroad milepost 242 on the Pittsburgh Line, which is the Norfolk Southern Railway Pittsburgh Division main line between Pittsburgh and Harrisburg, Pennsylvania. Horseshoe Curve bends around a dam and lake, the highest of three Altoona Water Authority reservoirs that supply water from the valley to the city. It spans two ravines formed by creeks: Kittanning Run, on the north side of the valley, and Glenwhite Run, on the south. The Blair County Veterans Memorial Highway (SR 4008) follows the valley west from Altoona and tunnels under the curve. Westbound trains climb a maximum grade of 1.85 percent for from Altoona to Gallitzin. Just west of the Gallitzin Tunnels, trains pass the summit of the Allegheny Mountains, then descend for to Johnstown on a grade of 1.1 percent or less. The overall grade of the curve was listed by the Pennsylvania Railroad as 1.45 percent; it is listed as 1.34 percent by Norfolk Southern. The curve is long and about across at its widest. For every , the tracks at the Horseshoe Curve bend 9 degrees 15 arc minutes, with the entire curve totaling 220 degrees. The rise of a westbound train through the curve can be described in several ways. One measurement is from the point where the rails north of the curve start to bow out to a point on the line directly south, across the original Kittanning Reservoir: across this north–south distance of , a train rises from above sea level to . Another measurement is from the point where the rails coming west out of Altoona make their first detour north to the curve, to a point across Lake Altoona where the rails begin their one-mile straight run south before turning west to the Gallitzin Tunnels; this measurement encompasses the entire Curve structure, including both reservoirs built in its bounds to protect the curve from flooding. Across this north-south distance of , a westbound train rises from to . This latter rise—133 vertical feet in 1,006 linear feet—is a 13.2% grade, completely unascendable by conventional railroads, which usually stick to grades of 2.2% or less. Each track consists of , welded rail. Before dieselization and the introduction of dynamic braking and rail oilers, the rails along the curve were transposed—left to right and vice versa—to equalize the wear on each rail from the flanges of passing steam locomotives and rail cars, thereby extending their life. 
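A quick back-of-the-envelope check of the figures above (an illustrative calculation only, using the 133-foot rise, the 1,006-foot straight-line distance and the 1.85 percent ruling grade quoted in this section) shows why the line must loop around the valley instead of climbing it directly.

rise_ft = 133.0          # vertical rise quoted above
straight_ft = 1006.0     # straight-line (north-south) distance across the curve
ruling_grade = 0.0185    # maximum westbound grade of 1.85 percent

print(f"grade if climbed directly: {rise_ft / straight_ft:.1%}")          # about 13.2%
print(f"track needed at 1.85 percent: {rise_ft / ruling_grade:,.0f} ft")  # roughly 7,200 ft of track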
History Origin In 1834, the Commonwealth of Pennsylvania built the Allegheny Portage Railroad across the Allegheny Mountains to connect Philadelphia and Pittsburgh, as part of the Main Line of Public Works. The Portage Railroad was a series of canals and inclined planes and remained in use until the mid-19th century. The Pennsylvania Railroad was incorporated in 1847 to build a railroad from Harrisburg to Pittsburgh, replacing the cumbersome Portage Railroad. Using surveys completed in 1842, the state's engineers recommended an route west from Lewistown that followed the ridges with a maximum grade of 0.852 percent. But the Chief Engineer for the Pennsylvania Railroad, John Edgar Thomson, chose a route on lower, flatter terrain along the Juniata River and accepted a steeper grade west of Altoona. The valley west of Altoona was split into two ravines by a mountain; surveys had already found a route with an acceptable grade east from Gallitzin to the south side of the valley, and the proposed Horseshoe Curve would allow the same grade to continue to Altoona. Construction Work on Horseshoe Curve began in 1850. It was done without heavy equipment, only men "with picks and shovels, horses and drags". Engineers built an earth fill over the first ravine encountered while ascending, formed by Kittanning Run, cut the point of the mountain between the ravines, and filled in the second ravine, formed by Glenwhite Run. The line between Altoona and Johnstown, including Horseshoe Curve, opened on February 15, 1854. The total cost was $2,495,000 or $80,225 per mile ($49,850 /km). In 1879, the remaining part of the mountain inside the curve was leveled to allow the construction of a park and observation area—the first built for viewing trains. As demand for train travel increased, a third track was added to the curve in 1898 and a fourth was added two years later. From around the 1860s to just before World War II, passengers could ride to the PRR's Kittanning Point station near the curve. Two branch railroads connected to the main line at Horseshoe Curve in the early 20th century; the Kittanning Run Railroad and the railroad owned by the Glen White Coal and Lumber Company followed their respective creeks to nearby coal mines. The Pennsylvania Railroad delivered empty hopper cars to the Kittanning Point station which the two railroads returned loaded with coal. In the early 1900s, locomotives could take on fuel and water at a coal trestle on a spur track across from the station. A reservoir was built at the apex of the Horseshoe Curve in 1887 for Altoona; a second reservoir, below the first, was finished in 1896. A third reservoir, Lake Altoona, was completed by 1913. A macadam road to the curve was opened in 1932 allowing access for visitors, and a gift shop was built in 1940. Horseshoe Curve was depicted in brochures, calendars and other promotional material; Pennsylvania Railroad stock certificates were printed with a vignette of it. The Pennsylvania pitted the scenery of Horseshoe Curve against rival New York Central Railroad's "Water Level Route" during the 1890s. A raised-relief, scale model of the curve was included as part of the Pennsylvania Railroad's exhibit at the 1893 World's Columbian Exposition in Chicago. Pennsylvania Railroad conductors were told to announce the Horseshoe Curve to daytime passengers—a tradition that continues aboard Amtrak trains. 
World War II and post-war During World War II, the PRR carried troops and materiel for the Allied war effort, and the curve was under armed guard. The military intelligence arm of Nazi Germany, the Abwehr, plotted to sabotage important industrial assets in the United States in a project code-named Operation Pastorius. In June 1942, four men were brought by submarine and landed on Long Island, planning to destroy such sites as the curve, Hell Gate Bridge, Alcoa aluminum factories and locks on the Ohio River. The would-be saboteurs were quickly apprehended by the Federal Bureau of Investigation after one, George John Dasch, turned himself in. All but Dasch and one other would-be saboteur were executed as spies and saboteurs. Train count peaked in the 1940s with over 50 passenger trains per day, along with many freight and military trains. Demand for train travel dropped greatly after World War II, as highway and air travel became popular. During the 1954 celebration of the centennial of the opening of Horseshoe Curve, a night photo was arranged by Sylvania Electric Products using 6,000 flashbulbs and of wiring to illuminate the area. The event also commemorated the 75th anniversary of the incandescent light bulb. Pennsylvania steam locomotive 1361 was placed at the park inside the Horseshoe Curve on June 8, 1957. It is one of 425 K4s-class engines: the principal passenger locomotives on the Pennsylvania Railroad that regularly plied the curve. The Horseshoe Curve was listed on the National Register of Historic Places and was designated a National Historic Landmark on November 13, 1966. The operation of the observation park was transferred to the city of Altoona the same year. The Pennsylvania Railroad was combined with the New York Central Railroad in 1968. The merger created Penn Central, which went bankrupt in 1970 and was taken over by the federal government in 1976, as part of the merger that created Conrail. The second track from the inside at the Horseshoe Curve was removed by Conrail in 1981. The K4s 1361 was removed from the curve for a restoration to working order in September 1985 and was replaced with the ex-Conrail EMD GP9 diesel-electric locomotive 7048 that was repainted into a Pennsylvania Railroad scheme. Starting in June 1990, the park at the Horseshoe Curve underwent a $5.8 million renovation funded by the Pennsylvania Department of Transportation and by the National Park Service through its "America's Industrial Heritage Project". The renovations were completed in April 1992 with the dedication of a new visitor center. In 1999, Conrail was divided between CSX Transportation and Norfolk Southern, with the Horseshoe Curve being acquired by the latter. The Horseshoe Curve was lit up again with fireworks and rail-borne searchlights during its sesquicentennial in 2004 in homage to the 1954 celebrations. It was designated a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in 2004. Current operations The curve remains busy as part of Norfolk Southern's Pittsburgh Line: , it was passed by 51 scheduled freight trains each day, not including locals and helper engines, which can double the number. Coupled to the rear of long trains, helper engines add power going up and help to brake coming down. For some years before 2020, Norfolk Southern used SD40Es as helpers; since then, EMD SD70ACU locomotives are used. In 2012, Norfolk Southern said annual traffic passing Horseshoe Curve was , including locomotives. 
Amtrak's Pennsylvanian between Pittsburgh and New York City rounds the curve once each way daily. Maximum speeds for trains at Horseshoe Curve are for freight and about for passenger trains. Trackside attractions The Railroaders Memorial Museum in Altoona manages a visitor center next to the curve. The center has historical artifacts and memorabilia relating to the curve and a raised-relief map of the Altoona–Johnstown area. Access to the curve is by a funicular or a 194-step stairway. The funicular is single-tracked, with the cars passing each other halfway up the slope; the cars are painted to resemble Pennsylvania Railroad passenger cars. A former "watchman's shanty" is in the park. Horseshoe Curve is popular with railfans; watchers can sometimes see three trains passing at once. In August 2012, the former Nickel Plate Road (NKP) steam locomotive No. 765 traversed Horseshoe Curve: the first steam locomotive to do so since 1977, while deadheading to and from Harrisburg as part of Norfolk Southern's 21st Century Steam program. NKP 765 returned to the curve in May 2013 with public excursion trains from Lewistown to Gallitzin. See also Altoona Curve, a local baseball team named after the railroad curve. List of funicular railways List of Historic Civil Engineering Landmarks List of National Historic Landmarks in Pennsylvania National Register of Historic Places listings in Blair County, Pennsylvania Raurimu Spiral Tehachapi Loop Notes References External links Pennsylvania Locomotives on Horseshoe Curve 1854 establishments in Pennsylvania Altoona, Pennsylvania Historic Civil Engineering Landmarks Museums in Blair County, Pennsylvania National Historic Landmarks in Pennsylvania National Register of Historic Places in Blair County, Pennsylvania Norfolk Southern Railway Pennsylvania Railroad Rail infrastructure in Pennsylvania Rail infrastructure on the National Register of Historic Places in Pennsylvania Railroad museums in Pennsylvania Railroad-related National Historic Landmarks Railway lines opened in 1854 Tourist attractions in Blair County, Pennsylvania Transportation buildings and structures in Blair County, Pennsylvania
Horseshoe Curve (Pennsylvania)
[ "Engineering" ]
2,489
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
648,008
https://en.wikipedia.org/wiki/Asymptotic%20freedom
In quantum field theory, asymptotic freedom is a property of some gauge theories that causes interactions between particles to become asymptotically weaker as the energy scale increases and the corresponding length scale decreases. (Alternatively, and perhaps contrarily, in applying an S-matrix, asymptotically free refers to free particles states in the distant past or the distant future.) Asymptotic freedom is a feature of quantum chromodynamics (QCD), the quantum field theory of the strong interaction between quarks and gluons, the fundamental constituents of nuclear matter. Quarks interact weakly at high energies, allowing perturbative calculations. At low energies, the interaction becomes strong, leading to the confinement of quarks and gluons within composite hadrons. The asymptotic freedom of QCD was discovered in 1973 by David Gross and Frank Wilczek, and independently by David Politzer in the same year. For this work all three shared the 2004 Nobel Prize in Physics. Discovery Asymptotic freedom in QCD was discovered in 1973 by David Gross and Frank Wilczek, and independently by David Politzer in the same year. The same phenomenon had previously been observed (in quantum electrodynamics with a charged vector field, by V.S. Vanyashin and M.V. Terent'ev in 1965; and Yang–Mills theory by Iosif Khriplovich in 1969 and Gerard 't Hooft in 1972), but its physical significance was not realized until the work of Gross, Wilczek and Politzer, which was recognized by the 2004 Nobel Prize in Physics. Experiments at the Stanford Linear Accelerator showed that inside protons, quarks behaved as if they were free. This was a great surprise, as many believed quarks to be tightly bound by the strong interaction, and so they should rapidly dissipate their motion by strong interaction radiation when they got violently accelerated, much like how electrons emit electromagnetic radiation when accelerated. The discovery was instrumental in "rehabilitating" quantum field theory. Prior to 1973, many theorists suspected that field theory was fundamentally inconsistent because the interactions become infinitely strong at short distances. This phenomenon is usually called a Landau pole, and it defines the smallest length scale that a theory can describe. This problem was discovered in field theories of interacting scalars and spinors, including quantum electrodynamics (QED), and Lehmann positivity led many to suspect that it is unavoidable. Asymptotically free theories become weak at short distances, there is no Landau pole, and these quantum field theories are believed to be completely consistent down to any length scale. Electroweak theory within the Standard Model is not asymptotically free. So a Landau pole exists in the Standard Model. With the Landau pole a problem arises when Higgs boson is being considered. Quantum triviality can be used to bound or predict parameters such as the Higgs boson mass. This leads to a predictable Higgs mass in asymptotic safety scenarios. In other scenarios, interactions are weak so that any inconsistency arises at distances shorter than the Planck length. Screening and antiscreening The variation in a physical coupling constant under changes of scale can be understood qualitatively as coming from the action of the field on virtual particles carrying the relevant charge. The Landau pole behavior of QED (related to quantum triviality) is a consequence of screening by virtual charged particle–antiparticle pairs, such as electron–positron pairs, in the vacuum. 
In the vicinity of a charge, the vacuum becomes polarized: virtual particles of opposing charge are attracted to the charge, and virtual particles of like charge are repelled. The net effect is to partially cancel out the field at any finite distance. Getting closer and closer to the central charge, one sees less and less of the effect of the vacuum, and the effective charge increases. In QCD the same thing happens with virtual quark-antiquark pairs; they tend to screen the color charge. However, QCD has an additional wrinkle: its force-carrying particles, the gluons, themselves carry color charge, and in a different manner. Each gluon carries both a color charge and an anti-color magnetic moment. The net effect of polarization of virtual gluons in the vacuum is not to screen the field but to augment it and change its color. This is sometimes called antiscreening (color paramagnetism). Getting closer to a quark diminishes the antiscreening effect of the surrounding virtual gluons, so the contribution of this effect would be to weaken the effective charge with decreasing distance. Since the virtual quarks and the virtual gluons contribute opposite effects, which effect wins out depends on the number of different kinds, or flavors, of quark. For standard QCD with three colors, as long as there are no more than 16 flavors of quark (not counting the antiquarks separately), antiscreening prevails and the theory is asymptotically free. In fact, there are only 6 known quark flavors. Calculating asymptotic freedom Asymptotic freedom can be derived by calculating the beta function describing the variation of the theory's coupling constant under the renormalization group. For sufficiently short distances or large exchanges of momentum (which probe short-distance behavior, roughly because of the inverse relationship between a quantum's momentum and De Broglie wavelength), an asymptotically free theory is amenable to perturbation theory calculations using Feynman diagrams. Such situations are therefore more theoretically tractable than the long-distance, strong-coupling behavior also often present in such theories, which is thought to produce confinement. Calculating the beta-function is a matter of evaluating Feynman diagrams contributing to the interaction of a quark emitting or absorbing a gluon. Essentially, the beta-function describes how the coupling constants vary as one scales the system . The calculation can be done using rescaling in position space or momentum space (momentum shell integration). In non-abelian gauge theories such as QCD, the existence of asymptotic freedom depends on the gauge group and number of flavors of interacting particles. To lowest nontrivial order, the beta-function in an SU(N) gauge theory with kinds of quark-like particle is where is the theory's equivalent of the fine-structure constant, in the units favored by particle physicists. If this function is negative, the theory is asymptotically free. For SU(3), one has and the requirement that gives Thus for SU(3), the color charge gauge group of QCD, the theory is asymptotically free if there are 16 or fewer flavors of quarks. Besides QCD, asymptotic freedom can also be seen in other systems like the nonlinear -model in 2 dimensions, which has a structure similar to the SU(N) invariant Yang–Mills theory in 4 dimensions. Finally, one can find theories that are asymptotically free and reduce to the full Standard Model of electromagnetic, weak and strong forces at low enough energies. 
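A minimal numerical sketch of the lowest-order result discussed above: for an SU(N) gauge theory with n_f quark flavors, the standard one-loop coefficient is b0 = 11N/3 − 2n_f/3, and the coupling runs as 1/α(Q) = 1/α(μ) + (b0/2π) ln(Q/μ). The script below uses the conventional reference value α_s(M_Z) ≈ 0.118 purely for illustration (it ignores flavor thresholds and higher loops) and shows the coupling shrinking at larger momentum transfer whenever b0 > 0, i.e. for 16 or fewer flavors when N = 3.

import math

def b0(N=3, nf=5):
    # One-loop beta-function coefficient; positive means asymptotically free.
    return 11.0 * N / 3.0 - 2.0 * nf / 3.0

def alpha_s(Q, alpha_ref=0.118, mu_ref=91.2, N=3, nf=5):
    # One-loop running: 1/alpha(Q) = 1/alpha(mu) + (b0 / 2 pi) * ln(Q / mu), Q and mu in GeV.
    return 1.0 / (1.0 / alpha_ref + b0(N, nf) / (2.0 * math.pi) * math.log(Q / mu_ref))

for Q in (10.0, 91.2, 1000.0, 10000.0):
    print(f"alpha_s({Q:7.1f} GeV) = {alpha_s(Q):.4f}")

print("b0 > 0 for nf = 16:", b0(nf=16) > 0, "  b0 > 0 for nf = 17:", b0(nf=17) > 0)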
See also Asymptotic safety Gluon field strength tensor Quantum triviality References Quantum field theory Quantum chromodynamics Renormalization group Gauge theories
Asymptotic freedom
[ "Physics" ]
1,529
[ "Quantum field theory", "Physical phenomena", "Critical phenomena", "Quantum mechanics", "Renormalization group", "Statistical mechanics" ]
648,326
https://en.wikipedia.org/wiki/Quantum%20Zeno%20effect
The quantum Zeno effect (also known as the Turing paradox) is a feature of quantum-mechanical systems allowing a particle's time evolution to be slowed down by measuring it frequently enough with respect to some chosen measurement setting. Sometimes this effect is interpreted as "a system cannot change while you are watching it". One can "freeze" the evolution of the system by measuring it frequently enough in its known initial state. The meaning of the term has since expanded, leading to a more technical definition, in which time evolution can be suppressed not only by measurement: the quantum Zeno effect is the suppression of unitary time evolution in quantum systems provided by a variety of sources: measurement, interactions with the environment, stochastic fields, among other factors. As an outgrowth of study of the quantum Zeno effect, it has become clear that applying a series of sufficiently strong and fast pulses with appropriate symmetry can also decouple a system from its decohering environment. The first rigorous and general derivation of the quantum Zeno effect was presented in 1974 by Antonio Degasperis, Luciano Fonda, and Giancarlo Ghirardi, although it had previously been described by Alan Turing. The comparison with Zeno's paradox is due to a 1977 article by Baidyanath Misra & E. C. George Sudarshan. The name comes by analogy to Zeno's arrow paradox, which states that because an arrow in flight is not seen to move during any single instant, it cannot possibly be moving at all. In the quantum Zeno effect an unstable state seems frozen – to not 'move' – due to a constant series of observations. According to the reduction postulate, each measurement causes the wavefunction to collapse to an eigenstate of the measurement basis. In the context of this effect, an observation can simply be the absorption of a particle, without the need of an observer in any conventional sense. However, there is controversy over the interpretation of the effect, sometimes referred to as the "measurement problem" in traversing the interface between microscopic and macroscopic objects. Another crucial problem related to the effect is strictly connected to the time–energy indeterminacy relation (part of the indeterminacy principle). If one wants to make the measurement process more and more frequent, one has to correspondingly decrease the time duration of the measurement itself. But the request that the measurement last only a very short time implies that the energy spread of the state in which reduction occurs becomes increasingly large. However, the deviations from the exponential decay law for small times is crucially related to the inverse of the energy spread, so that the region in which the deviations are appreciable shrinks when one makes the measurement process duration shorter and shorter. An explicit evaluation of these two competing requests shows that it is inappropriate, without taking into account this basic fact, to deal with the actual occurrence and emergence of Zeno's effect. Closely related (and sometimes not distinguished from the quantum Zeno effect) is the watchdog effect, in which the time evolution of a system is affected by its continuous coupling to the environment. Description Unstable quantum systems are predicted to exhibit a short-time deviation from the exponential decay law. This universal phenomenon has led to the prediction that frequent measurements during this nonexponential period could inhibit decay of the system, one form of the quantum Zeno effect. 
Subsequently, it was predicted that measurements applied more slowly could also enhance decay rates, a phenomenon known as the quantum anti-Zeno effect. In quantum mechanics, the interaction mentioned is called "measurement" because its result can be interpreted in terms of classical mechanics. Frequent measurement prohibits the transition. It can be a transition of a particle from one half-space to another (which could be used for an atomic mirror in an atomic nanoscope) as in the time-of-arrival problem, a transition of a photon in a waveguide from one mode to another, and it can be a transition of an atom from one quantum state to another. It can be a transition from the subspace without decoherent loss of a qubit to a state with a qubit lost in a quantum computer. In this sense, for the qubit correction, it is sufficient to determine whether the decoherence has already occurred or not. All these can be considered as applications of the Zeno effect. By its nature, the effect appears only in systems with distinguishable quantum states, and hence is inapplicable to classical phenomena and macroscopic bodies. The mathematician Robin Gandy recalled Turing's formulation of the quantum Zeno effect in a letter to fellow mathematician Max Newman, shortly after Turing's death: As a result of Turing's suggestion, the quantum Zeno effect is also sometimes known as the Turing paradox. The idea is implicit in the early work of John von Neumann on the mathematical foundations of quantum mechanics, and in particular the rule sometimes called the reduction postulate. It was later shown that the quantum Zeno effect of a single system is equivalent to the indetermination of the quantum state of a single system. Various realizations and general definition The treatment of the Zeno effect as a paradox is not limited to the processes of quantum decay. In general, the term Zeno effect is applied to various transitions, and sometimes these transitions may be very different from a mere "decay" (whether exponential or non-exponential). One realization refers to the observation of an object (Zeno's arrow, or any quantum particle) as it leaves some region of space. In the 20th century, the trapping (confinement) of a particle in some region by its observation outside the region was considered as nonsensical, indicating some non-completeness of quantum mechanics. Even as late as 2001, confinement by absorption was considered as a paradox. Later, similar effects of the suppression of Raman scattering was considered an expected effect, not a paradox at all. The absorption of a photon at some wavelength, the release of a photon (for example one that has escaped from some mode of a fiber), or even the relaxation of a particle as it enters some region, are all processes that can be interpreted as measurement. Such a measurement suppresses the transition, and is called the Zeno effect in the scientific literature. In order to cover all of these phenomena (including the original effect of suppression of quantum decay), the Zeno effect can be defined as a class of phenomena in which some transition is suppressed by an interaction – one that allows the interpretation of the resulting state in the terms 'transition did not yet happen' and 'transition has already occurred', or 'The proposition that the evolution of a quantum system is halted' if the state of the system is continuously measured by a macroscopic device to check whether the system is still in its initial state. 
Periodic measurement of a quantum system Consider a system in a state , which is the eigenstate of some measurement operator. Say the system under free time evolution will decay with a certain probability into state . If measurements are made periodically, with some finite interval between each one, at each measurement, the wave function collapses to an eigenstate of the measurement operator. Between the measurements, the system evolves away from this eigenstate into a superposition state of the states and . When the superposition state is measured, it will again collapse, either back into state as in the first measurement, or away into state . However, its probability of collapsing into state after a very short amount of time is proportional to , since probabilities are proportional to squared amplitudes, and amplitudes behave linearly. Thus, in the limit of a large number of short intervals, with a measurement at the end of every interval, the probability of making the transition to goes to zero. According to decoherence theory, the collapse of the wave function is not a discrete, instantaneous event. A "measurement" is equivalent to strongly coupling the quantum system to the noisy thermal environment for a brief period of time, and continuous strong coupling is equivalent to frequent "measurement". The time it takes for the wave function to "collapse" is related to the decoherence time of the system when coupled to the environment. The stronger the coupling is, and the shorter the decoherence time, the faster it will collapse. So in the decoherence picture, a perfect implementation of the quantum Zeno effect corresponds to the limit where a quantum system is continuously coupled to the environment, and where that coupling is infinitely strong, and where the "environment" is an infinitely large source of thermal randomness. Experiments and discussion Experimentally, strong suppression of the evolution of a quantum system due to environmental coupling has been observed in a number of microscopic systems. In 1989, David J. Wineland and his group at NIST observed the quantum Zeno effect for a two-level atomic system that was interrogated during its evolution. Approximately 5,000 ions were stored in a cylindrical Penning trap and laser-cooled to below 250 mK. A resonant RF pulse was applied, which, if applied alone, would cause the entire ground-state population to migrate into an excited state. After the pulse was applied, the ions were monitored for photons emitted due to relaxation. The ion trap was then regularly "measured" by applying a sequence of ultraviolet pulses during the RF pulse. As expected, the ultraviolet pulses suppressed the evolution of the system into the excited state. The results were in good agreement with theoretical models. In 2001, Mark G. Raizen and his group at the University of Texas at Austin observed the quantum Zeno effect for an unstable quantum system, as originally proposed by Sudarshan and Misra. They also observed an anti-Zeno effect. Ultracold sodium atoms were trapped in an accelerating optical lattice, and the loss due to tunneling was measured. The evolution was interrupted by reducing the acceleration, thereby stopping quantum tunneling. The group observed suppression or enhancement of the decay rate, depending on the regime of measurement. 
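The quadratic short-time behaviour described in the periodic-measurement argument above can be made concrete with a few lines of arithmetic. The sketch below is a textbook-style illustration, not a model of any particular experiment: a two-level system is driven so that, left unobserved for the full duration, it would be transferred to the other state with certainty; interrupting the drive with N equally spaced projective measurements keeps the initial state at each step with probability cos^2 of half the accumulated pulse area, and the overall survival probability approaches 1 as N grows.

import math

def survival_probability(n_measurements, total_pulse_area=math.pi):
    # A resonant pulse of total area pi would fully transfer the state if unobserved.
    # With n equally spaced projective measurements, each interval has area pi/n and the
    # system is found still in its initial state with probability cos^2(area/2) each time.
    theta = total_pulse_area / n_measurements
    return math.cos(theta / 2.0) ** (2 * n_measurements)

for n in (1, 2, 4, 16, 64, 256, 1024):
    print(f"N = {n:4d}   P(still in initial state) = {survival_probability(n):.4f}")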
In 2015, Mukund Vengalattore and his group at Cornell University demonstrated a quantum Zeno effect as the modulation of the rate of quantum tunnelling in an ultracold lattice gas by the intensity of light used to image the atoms. The quantum Zeno effect is used in commercial atomic magnetometers and proposed to be part of birds' magnetic compass sensory mechanism (magnetoreception). It is still an open question how closely one can approach the limit of an infinite number of interrogations due to the Heisenberg uncertainty involved in shorter measurement times. It has been shown, however, that measurements performed at a finite frequency can yield arbitrarily strong Zeno effects. In 2006, Streed et al. at MIT observed the dependence of the Zeno effect on measurement pulse characteristics. The interpretation of experiments in terms of the "Zeno effect" helps describe the origin of a phenomenon. Nevertheless, such an interpretation does not bring any principally new features not described with the Schrödinger equation of the quantum system. Even more, the detailed description of experiments with the "Zeno effect", especially at the limit of high frequency of measurements (high efficiency of suppression of transition, or high reflectivity of a ridged mirror) usually do not behave as expected for an idealized measurement. It was shown that the quantum Zeno effect persists in the many-worlds and relative-states interpretations of quantum mechanics. See also Einselection Interference (wave propagation) Measurement problem Observer effect (physics) Quantum decoherence Quantum Darwinism Wavefunction collapse Zeno's paradoxes References Further reading External links Zeno.qcl A computer program written in QCL which demonstrates the Quantum Zeno effect Quantum measurement Quantum mechanical entropy
Quantum Zeno effect
[ "Physics" ]
2,395
[ "Physical quantities", "Quantum mechanics", "Entropy", "Quantum measurement", "Quantum mechanical entropy" ]
649,115
https://en.wikipedia.org/wiki/E7%20%28mathematics%29
{{DISPLAYTITLE:E7 (mathematics)}} In mathematics, E7 is the name of several closely related Lie groups, linear algebraic groups or their Lie algebras e7, all of which have dimension 133; the same notation E7 is used for the corresponding root lattice, which has rank 7. The designation E7 comes from the Cartan–Killing classification of the complex simple Lie algebras, which fall into four infinite series labeled An, Bn, Cn, Dn, and five exceptional cases labeled E6, E7, E8, F4, and G2. The E7 algebra is thus one of the five exceptional cases. The fundamental group of the (adjoint) complex form, compact real form, or any algebraic version of E7 is the cyclic group Z/2Z, and its outer automorphism group is the trivial group. The dimension of its fundamental representation is 56. Real and complex forms There is a unique complex Lie algebra of type E7, corresponding to a complex group of complex dimension 133. The complex adjoint Lie group E7 of complex dimension 133 can be considered as a simple real Lie group of real dimension 266. This has fundamental group Z/2Z, has maximal compact subgroup the compact form (see below) of E7, and has an outer automorphism group of order 2 generated by complex conjugation. As well as the complex Lie group of type E7, there are four real forms of the Lie algebra, and correspondingly four real forms of the group with trivial center (all of which have an algebraic double cover, and three of which have further non-algebraic covers, giving further real forms), all of real dimension 133, as follows: The compact form (which is usually the one meant if no other information is given), which has fundamental group Z/2Z and has trivial outer automorphism group. The split form, EV (or E7(7)), which has maximal compact subgroup SU(8)/{±1}, fundamental group cyclic of order 4 and outer automorphism group of order 2. EVI (or E7(-5)), which has maximal compact subgroup SU(2)·SO(12)/(center), fundamental group non-cyclic of order 4 and trivial outer automorphism group. EVII (or E7(-25)), which has maximal compact subgroup SO(2)·E6/(center), infinite cyclic fundamental group and outer automorphism group of order 2. For a complete list of real forms of simple Lie algebras, see the list of simple Lie groups. The compact real form of E7 is the isometry group of the 64-dimensional exceptional compact Riemannian symmetric space EVI (in Cartan's classification). It is known informally as the "" because it can be built using an algebra that is the tensor product of the quaternions and the octonions, and is also known as a Rosenfeld projective plane, though it does not obey the usual axioms of a projective plane. This can be seen systematically using a construction known as the magic square, due to Hans Freudenthal and Jacques Tits. The Tits–Koecher construction produces forms of the E7 Lie algebra from Albert algebras, 27-dimensional exceptional Jordan algebras. E7 as an algebraic group By means of a Chevalley basis for the Lie algebra, one can define E7 as a linear algebraic group over the integers and, consequently, over any commutative ring and in particular over any field: this defines the so-called split (sometimes also known as "untwisted") adjoint form of E7. 
Over an algebraically closed field, this and its double cover are the only forms; however, over other fields, there are often many other forms, or "twists" of E7, which are classified in the general framework of Galois cohomology (over a perfect field k) by the set H1(k, Aut(E7)) which, because the Dynkin diagram of E7 (see below) has no automorphisms, coincides with H1(k, E7, ad). Over the field of real numbers, the real component of the identity of these algebraically twisted forms of E7 coincide with the three real Lie groups mentioned above, but with a subtlety concerning the fundamental group: all adjoint forms of E7 have fundamental group Z/2Z in the sense of algebraic geometry, meaning that they admit exactly one double cover; the further non-compact real Lie group forms of E7 are therefore not algebraic and admit no faithful finite-dimensional representations. Over finite fields, the Lang–Steinberg theorem implies that H1(k, E7) = 0, meaning that E7 has no twisted forms: see below. Algebra Dynkin diagram The Dynkin diagram for E7 is given by . Root system Even though the roots span a 7-dimensional space, it is more symmetric and convenient to represent them as vectors lying in a 7-dimensional subspace of an 8-dimensional vector space. The roots are all the 8×7 permutations of (1,−1,0,0,0,0,0,0) and all the permutations of (,,,,−,−,−,−) Note that the 7-dimensional subspace is the subspace where the sum of all the eight coordinates is zero. There are 126 roots. The simple roots are (0,−1,1,0,0,0,0,0) (0,0,−1,1,0,0,0,0) (0,0,0,−1,1,0,0,0) (0,0,0,0,−1,1,0,0) (0,0,0,0,0,−1,1,0) (0,0,0,0,0,0,−1,1) (,,,,−,−,−,−) They are listed so that their corresponding nodes in the Dynkin diagram are ordered from left to right (in the diagram depicted above) with the side node last. An alternative description An alternative (7-dimensional) description of the root system, which is useful in considering as a subgroup of E8, is the following: All permutations of (±1,±1,0,0,0,0,0) preserving the zero at the last entry, all of the following roots with an even number of + and the two following roots Thus the generators consist of a 66-dimensional so(12) subalgebra as well as 64 generators that transform as two self-conjugate Weyl spinors of spin(12) of opposite chirality, and their chirality generator, and two other generators of chiralities . Given the E7 Cartan matrix (below) and a Dynkin diagram node ordering of: one choice of simple roots is given by the rows of the following matrix: Weyl group The Weyl group of E7 is of order 2903040: it is the direct product of the cyclic group of order 2 and the unique simple group of order 1451520 (which can be described as PSp6(2) or PSΩ7(2)). Cartan matrix Important subalgebras and representations E7 has an SU(8) subalgebra, as is evident by noting that in the 8-dimensional description of the root system, the first group of roots are identical to the roots of SU(8) (with the same Cartan subalgebra as in the E7). In addition to the 133-dimensional adjoint representation, there is a 56-dimensional "vector" representation, to be found in the E8 adjoint representation. The characters of finite dimensional representations of the real and complex Lie algebras and Lie groups are all given by the Weyl character formula. 
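The 8-coordinate description of the root system given above is easy to verify by brute force. The sketch below assumes the standard form of that description, in which the second family of roots consists of all vectors with four entries equal to +1/2 and four equal to −1/2; it enumerates both families and checks that there are 126 roots, all lying in the hyperplane where the coordinates sum to zero and all of squared length 2.

from itertools import combinations

roots = set()

# First family: permutations of (1, -1, 0, 0, 0, 0, 0, 0), i.e. e_i - e_j with i != j (56 roots).
for i in range(8):
    for j in range(8):
        if i != j:
            v = [0.0] * 8
            v[i], v[j] = 1.0, -1.0
            roots.add(tuple(v))

# Second family (assumed standard form): four entries +1/2 and four entries -1/2 (70 roots).
for plus_positions in combinations(range(8), 4):
    v = [-0.5] * 8
    for p in plus_positions:
        v[p] = 0.5
    roots.add(tuple(v))

assert len(roots) == 126                                              # 56 + 70 roots in total
assert all(abs(sum(r)) < 1e-12 for r in roots)                        # coordinate sums are zero
assert all(abs(sum(x * x for x in r) - 2.0) < 1e-12 for r in roots)   # squared length 2
print(len(roots), "roots of E7 in the 8-dimensional description")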
The dimensions of the smallest irreducible representations are : 1, 56, 133, 912, 1463, 1539, 6480, 7371, 8645, 24320, 27664, 40755, 51072, 86184, 150822, 152152, 238602, 253935, 293930, 320112, 362880, 365750, 573440, 617253, 861840, 885248, 915705, 980343, 2273920, 2282280, 2785552, 3424256, 3635840... The underlined terms in the sequence above are the dimensions of those irreducible representations possessed by the adjoint form of E7 (equivalently, those whose weights belong to the root lattice of E7), whereas the full sequence gives the dimensions of the irreducible representations of the simply connected form of E7. There exist non-isomorphic irreducible representation of dimensions 1903725824, 16349520330, etc. The fundamental representations are those with dimensions 133, 8645, 365750, 27664, 1539, 56 and 912 (corresponding to the seven nodes in the Dynkin diagram in the order chosen for the Cartan matrix above, i.e., the nodes are read in the six-node chain first, with the last node being connected to the third). The embeddings of the maximal subgroups of E7 up to dimension 133 are shown to the right. E7 Polynomial Invariants E7 is the automorphism group of the following pair of polynomials in 56 non-commutative variables. We divide the variables into two groups of 28, (p, P) and (q, Q) where p and q are real variables and P and Q are 3×3 octonion hermitian matrices. Then the first invariant is the symplectic invariant of Sp(56, R): The second more complicated invariant is a symmetric quartic polynomial: Where and the binary circle operator is defined by . An alternative quartic polynomial invariant constructed by Cartan uses two anti-symmetric 8x8 matrices each with 28 components. Chevalley groups of type E7 The points over a finite field with q elements of the (split) algebraic group E7 (see above), whether of the adjoint (centerless) or simply connected form (its algebraic universal cover), give a finite Chevalley group. This is closely connected to the group written E7(q), however there is ambiguity in this notation, which can stand for several things: the finite group consisting of the points over Fq of the simply connected form of E7 (for clarity, this can be written E7,sc(q) and is known as the "universal" Chevalley group of type E7 over Fq), (rarely) the finite group consisting of the points over Fq of the adjoint form of E7 (for clarity, this can be written E7,ad(q), and is known as the "adjoint" Chevalley group of type E7 over Fq), or the finite group which is the image of the natural map from the former to the latter: this is what will be denoted by E7(q) in the following, as is most common in texts dealing with finite groups. From the finite group perspective, the relation between these three groups, which is quite analogous to that between SL(n, q), PGL(n, q) and PSL(n, q), can be summarized as follows: E7(q) is simple for any q, E7,sc(q) is its Schur cover, and the E7,ad(q) lies in its automorphism group; furthermore, when q is a power of 2, all three coincide, and otherwise (when q is odd), the Schur multiplier of E7(q) is 2 and E7(q) is of index 2 in E7,ad(q), which explains why E7,sc(q) and E7,ad(q) are often written as 2·E7(q) and E7(q)·2. From the algebraic group perspective, it is less common for E7(q) to refer to the finite simple group, because the latter is not in a natural way the set of points of an algebraic group over Fq unlike E7,sc(q) and E7,ad(q). 
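The dimensions of the seven fundamental representations quoted above can be recomputed from the root system alone via the Weyl dimension formula mentioned at the end of the previous passage. The sketch below is not from the article; it uses exact rational arithmetic in sympy together with the 8-coordinate roots and simple roots given earlier, and it should reproduce the set {56, 133, 912, 1539, 8645, 27664, 365750} (the order of the output depends only on how the nodes are labeled).

# Dimensions of the fundamental representations of E7 via the Weyl dimension
# formula, using the root description quoted earlier.  Illustrative sketch;
# requires sympy for exact rational arithmetic.
from itertools import combinations
from sympy import Matrix, Rational

half = Rational(1, 2)

roots = set()
for i in range(8):
    for j in range(8):
        if i != j:
            v = [0] * 8
            v[i], v[j] = 1, -1
            roots.add(tuple(v))
for plus in combinations(range(8), 4):
    roots.add(tuple(half if k in plus else -half for k in range(8)))
roots = [Matrix(r) for r in roots]

simple = [Matrix(r) for r in [
    (0, -1, 1, 0, 0, 0, 0, 0),
    (0, 0, -1, 1, 0, 0, 0, 0),
    (0, 0, 0, -1, 1, 0, 0, 0),
    (0, 0, 0, 0, -1, 1, 0, 0),
    (0, 0, 0, 0, 0, -1, 1, 0),
    (0, 0, 0, 0, 0, 0, -1, 1),
    (half, half, half, half, -half, -half, -half, -half),
]]

inner = lambda a, b: (a.T * b)[0]
gram = Matrix(7, 7, lambda i, j: inner(simple[i], simple[j]))

def simple_coords(v):
    """Coefficients of v in the basis of simple roots."""
    return gram.LUsolve(Matrix([inner(v, a) for a in simple]))

positive = [r for r in roots if all(c >= 0 for c in simple_coords(r))]
assert len(positive) == 63

rho = sum(positive, Matrix([0] * 8)) * half      # Weyl vector

# Fundamental weights: (omega_i, alpha_j) = delta_ij (all roots have norm^2 = 2),
# expressed in the span of the simple roots.
ginv = gram.inv()
fundamental = [sum((ginv[k, i] * simple[k] for k in range(7)), Matrix([0] * 8))
               for i in range(7)]

def weyl_dim(lam):
    d = Rational(1)
    for a in positive:
        d *= inner(lam + rho, a) / inner(rho, a)
    return d

print(sorted(weyl_dim(w) for w in fundamental))
# expected: [56, 133, 912, 1539, 8645, 27664, 365750] (cf. the list above)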
As mentioned above, E7(q) is simple for any q, and it constitutes one of the infinite families addressed by the classification of finite simple groups. Its number of elements is given by the formula (1/gcd(2, q−1)) q^63 (q^18 − 1)(q^14 − 1)(q^12 − 1)(q^10 − 1)(q^8 − 1)(q^6 − 1)(q^2 − 1). The order of E7,sc(q) or E7,ad(q) (both are equal) can be obtained by removing the dividing factor gcd(2, q−1). The Schur multiplier of E7(q) is gcd(2, q−1), and its outer automorphism group is the product of the diagonal automorphism group Z/gcd(2, q−1)Z (given by the action of E7,ad(q)) and the group of field automorphisms (i.e., cyclic of order f if q = p^f where p is prime). Importance in physics N = 8 supergravity in four dimensions, which is a dimensional reduction of eleven-dimensional supergravity, admits an E7 bosonic global symmetry and an SU(8) bosonic local symmetry. The fermions are in representations of SU(8), the gauge fields are in a representation of E7, and the scalars are in a representation of both (gravitons are singlets with respect to both). Physical states are in representations of the coset E7/SU(8). In string theory, E7 appears as a part of the gauge group of one of the (unstable and non-supersymmetric) versions of the heterotic string. It can also appear in the unbroken gauge group in six-dimensional compactifications of heterotic string theory, for instance on the four-dimensional surface K3. See also En (Lie algebra) ADE classification List of simple Lie groups Notes References John Baez, The Octonions, Section 4.5: E7, Bull. Amer. Math. Soc. 39 (2002), 145–205. Online HTML version at http://math.ucr.edu/home/baez/octonions/node18.html. E. Cremmer and B. Julia, The Supergravity Theory. 1. The Lagrangian, Phys. Lett. B 80:48, 1978. Online scanned version at http://ac.els-cdn.com/0370269378903039/1-s2.0-0370269378903039-main.pdf?_tid=79273f80-539d-11e4-a133-00000aab0f6c&acdnat=1413289833_5f3539a6365149b108ddcec889200964. Algebraic groups Lie groups Exceptional Lie algebras
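The order formula in the Chevalley-group discussion above is easy to evaluate in practice. The sketch below is not from the article; it assumes the standard invariant degrees 2, 6, 8, 10, 12, 14, 18 for E7 (a standard fact, not quoted in the text), whose product is the Weyl group order 2903040 quoted earlier and whose shifted sum gives the exponent 63.

# Order of the finite Chevalley groups of type E7 over F_q, assuming the
# standard invariant degrees of E7 (2, 6, 8, 10, 12, 14, 18).
from math import gcd, prod

DEGREES = (2, 6, 8, 10, 12, 14, 18)
N_POSITIVE_ROOTS = sum(d - 1 for d in DEGREES)     # = 63

assert prod(DEGREES) == 2903040                    # Weyl group order, as above

def order_e7_sc(q):
    """|E7,sc(q)| = |E7,ad(q)| = q^63 * prod(q^d - 1) over the degrees."""
    return q ** N_POSITIVE_ROOTS * prod(q ** d - 1 for d in DEGREES)

def order_e7_simple(q):
    """|E7(q)| = |E7,sc(q)| / gcd(2, q - 1)."""
    return order_e7_sc(q) // gcd(2, q - 1)

print(order_e7_simple(2))   # the simple group E7(2)
print(order_e7_simple(3))   # the simple group E7(3)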
E7 (mathematics)
[ "Mathematics" ]
3,207
[ "Lie groups", "Mathematical structures", "Algebraic structures" ]
649,711
https://en.wikipedia.org/wiki/Vector%20boson
In particle physics, a vector boson is a boson whose spin equals one. Vector bosons that are also elementary particles are gauge bosons, the force carriers of fundamental interactions. Some composite particles are vector bosons, for instance any vector meson (quark and antiquark). During the 1970s and 1980s, intermediate vector bosons (the W and Z bosons, which mediate the weak interaction) drew much attention in particle physics. A pseudovector boson is a vector boson that has even parity, whereas "regular" vector bosons have odd parity. There are no fundamental pseudovector bosons, but there are pseudovector mesons. In relation to the Higgs boson The W and Z particles interact with the Higgs boson as shown in the Feynman diagram. Explanation The name vector boson arises from quantum field theory. The component of such a particle's spin along any axis has the three eigenvalues −ħ, 0, and +ħ (where ħ is the reduced Planck constant), meaning that any measurement of its spin can only yield one of these values. (This is true for massive vector bosons; the situation differs for massless particles such as the photon, for reasons beyond the scope of this article. See Wigner's classification.) The space of spin states therefore is a discrete degree of freedom consisting of three states, the same as the number of components of a vector in three-dimensional space. Quantum superpositions of these states can be taken such that they transform under rotations just like the spatial components of a rotating vector (the so-called 3 representation of SU(2)). If the vector boson is taken to be the quantum of a field, the field is a vector field, hence the name. The boson part of the name arises from the spin-statistics relation, which requires that all integer spin particles be bosons. See also Scalar boson Maxwell's equations Proca action References Bosons Mesons Gauge theories Particle physics
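The statement about the three spin eigenvalues can be made concrete with the standard spin-1 matrices. The short numpy sketch below is not from the article and uses units with ħ = 1; it builds the usual 3×3 representation, checks that each spin component has eigenvalues −1, 0, +1, and verifies the su(2) commutation relation.

# Standard spin-1 matrices (units with hbar = 1); illustrative sketch.
import numpy as np

s = 1 / np.sqrt(2)
Sx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

# Each spin component has eigenvalues -1, 0, +1 (i.e. -hbar, 0, +hbar).
for S in (Sx, Sy, Sz):
    print(np.round(np.linalg.eigvalsh(S), 10))

# su(2) algebra: [Sx, Sy] = i Sz
comm = Sx @ Sy - Sy @ Sx
assert np.allclose(comm, 1j * Sz)

# Total spin: S^2 = s(s+1) * identity with s = 1
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz
assert np.allclose(S2, 2 * np.eye(3))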
Vector boson
[ "Physics" ]
418
[ "Bosons", "Subatomic particles", "Particle physics", "Matter" ]
649,743
https://en.wikipedia.org/wiki/Fundamental%20representation
In representation theory of Lie groups and Lie algebras, a fundamental representation is an irreducible finite-dimensional representation of a semisimple Lie group or Lie algebra whose highest weight is a fundamental weight. For example, the defining module of a classical Lie group is a fundamental representation. Any finite-dimensional irreducible representation of a semisimple Lie group or Lie algebra can be constructed from the fundamental representations by a procedure due to Élie Cartan. Thus in a certain sense, the fundamental representations are the elementary building blocks for arbitrary finite-dimensional representations. Examples In the case of the general linear group, all fundamental representations are exterior products of the defining module. In the case of the special unitary group SU(n), the n − 1 fundamental representations are the wedge products consisting of the alternating tensors, for k = 1, 2, ..., n − 1. The spin representation of the twofold cover of an odd orthogonal group, the odd spin group, and the two half-spin representations of the twofold cover of an even orthogonal group, the even spinor group, are fundamental representations that cannot be realized in the space of tensors. The adjoint representation of the simple Lie group of type E8 is a fundamental representation. Explanation The irreducible representations of a simply-connected compact Lie group are indexed by their highest weights. These weights are the lattice points in an orthant Q+ in the weight lattice of the Lie group consisting of the dominant integral weights. It can be proved that there exists a set of fundamental weights, indexed by the vertices of the Dynkin diagram, such that any dominant integral weight is a non-negative integer linear combination of the fundamental weights. The corresponding irreducible representations are the fundamental representations of the Lie group. From the expansion of a dominant weight in terms of the fundamental weights one can take a corresponding tensor product of the fundamental representations and extract one copy of the irreducible representation corresponding to that dominant weight. Other uses Outside of Lie theory, the term fundamental representation is sometimes loosely used to refer to a smallest-dimensional faithful representation, though this is also often called the standard or defining representation (a term referring more to the history, rather than having a well-defined mathematical meaning). References . Specific Lie groups Representation theory
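For the SU(n) example mentioned above, the dimensions of the fundamental representations are just binomial coefficients, because the k-th fundamental representation is the space of alternating k-tensors on the defining n-dimensional module. A tiny sketch (not from the article):

# Dimensions of the fundamental representations of SU(n): the k-th one is
# the k-th exterior power of the defining n-dimensional representation,
# so its dimension is C(n, k), for k = 1, ..., n-1.  Illustrative sketch.
from math import comb

def fundamental_dims_su(n):
    return [comb(n, k) for k in range(1, n)]

print(fundamental_dims_su(3))   # [3, 3]      : the 3 and its conjugate
print(fundamental_dims_su(4))   # [4, 6, 4]
print(fundamental_dims_su(6))   # [6, 15, 20, 15, 6]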
Fundamental representation
[ "Mathematics" ]
467
[ "Lie groups", "Mathematical structures", "Fields of abstract algebra", "Algebraic structures", "Representation theory" ]
649,976
https://en.wikipedia.org/wiki/Yuan-Cheng%20Fung
Yuan-Cheng "Bert" Fung (September 15, 1919 – December 15, 2019) was a Chinese-American bioengineer and writer. He is regarded as a founding figure of bioengineering, tissue engineering, and the "Founder of Modern Biomechanics". Biography Fung was born in Jiangsu Province, China in 1919. He earned a bachelor's degree in 1941 and a master's degree in 1943 from the National Central University (later renamed Nanjing University in mainland China and reinstated in Taiwan), and earned a Ph.D. from the California Institute of Technology in 1948. Fung was Professor Emeritus and Research Engineer at the University of California San Diego. He published prominent texts along with Pin Tong who was then at Hong Kong University of Science & Technology. Fung died at Jacobs Medical Center in San Diego, California, aged 100, on December 15, 2019. Fung was married to Luna Yu Hsien-Shih, a former mathematician and cofounder of the UC San Diego International Center, until her death in 2017. The couple raised two children. Research He is the author of numerous books including Foundations of Solid Mechanics, Continuum Mechanics, and a series of books on Biomechanics. He is also one of the principal founders of the Journal of Biomechanics and was a past chair of the ASME International Applied Mechanics Division. In 1972, Fung established the Biomechanics Symposium under the American Society of Mechanical Engineers. This biannual summer meeting, first held at the Georgia Institute of Technology, became the annual Summer Bioengineering Conference. Fung and colleagues were also the first to recognize the importance of residual stress on arterial mechanical behavior. Fung's Law Fung's famous exponential strain constitutive equation for preconditioned soft tissues is with quadratic forms of Green-Lagrange strains and , and material constants. is a strain energy function per volume unit, which is the mechanical strain energy for a given temperature. Materials that follow this law are known as Fung-elastic. Honors and awards Theodore von Karman Medal, 1976 Otto Laporte Award, 1977 Worcester Reed Warner Medal, 1984 Jean-Leonard-Marie Poiseuille Award, 1986 Timoshenko Medal, 1991 Lissner Award for Bioengineering, from ASME Borelli Medal, from ASB Landis Award, from Microcirculation Society Alza Award, from BMES Melville Medal, 1994 United States National Academy of Engineering Founders Award (NAE Founders Award), 1998 National Medal of Science, 2000 Fritz J. and Dolores H. Russ Prize, 2007 ("for the characterization and modeling of human tissue mechanics and function leading to prevention and mitigation of trauma.") Revelle Medal, from UC San Diego, 2016 Fung was elected to the United States National Academy of Sciences (1993), the National Academy of Engineering (1979), the Institute of Medicine (1991), the Academia Sinica (1968), and was a Foreign Member of the Chinese Academy of Sciences (1994 election). References External links Classical and Computational Solid Mechanics Profile at UCSD Y.C. Fung, Mechanics of Man, Acceptance Speech for the Timoshenko Medal. YC Fung Young Investigator Award Molecular & Cellular Biomechanics: In Honor of The 90th Birthday of Professor Yuan Cheng Fung 1919 births 2019 deaths 20th-century American biologists 20th-century American engineers 21st-century American biologists 21st-century American engineers American bioengineers American men centenarians American science writers American writers of Chinese descent Biologists from Jiangsu Biomechanics Beijing No. 
4 High School alumni Chongqing Nankai Secondary School alumni California Institute of Technology alumni California Institute of Technology faculty Chinese emigrants to the United States Educators from Changzhou Engineers from Jiangsu Foreign members of the Chinese Academy of Sciences Members of Academia Sinica Members of the National Academy of Medicine Members of the United States National Academy of Engineering Members of the United States National Academy of Sciences National Medal of Science laureates Nanjing University alumni National Central University alumni Scientists from California Scientists from Changzhou Tissue engineering University of California, San Diego faculty Writers from Changzhou
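The strain-energy expression in the Fung's-law paragraph above did not survive intact. A commonly quoted form of the model is W = ½[q + c(e^Q − 1)], with q and Q quadratic forms in the Green–Lagrange strains and c a material constant; that exact form is an assumption here rather than something preserved in the text. The sketch below uses a one-dimensional specialization, W(E) = (c/2)(e^{aE²} − 1), purely to illustrate the exponential stiffening that makes a material "Fung-elastic"; the parameter values are invented for illustration.

# One-dimensional illustration of a Fung-type exponential strain-energy law.
# W(E) = (c/2) * (exp(a * E**2) - 1), with E the Green-Lagrange strain.
# The stress is S = dW/dE = c * a * E * exp(a * E**2).
# Parameter values below are purely illustrative, not from the article.
import numpy as np

c = 10.0   # kPa, stress-like material constant (made up)
a = 12.0   # dimensionless exponent coefficient (made up)

def strain_energy(E):
    return 0.5 * c * (np.exp(a * E ** 2) - 1.0)

def stress(E):
    return c * a * E * np.exp(a * E ** 2)

for E in (0.02, 0.05, 0.10, 0.15):
    print(f"E = {E:4.2f}   W = {strain_energy(E):8.4f} kPa   S = {stress(E):8.3f} kPa")
# The rapid growth of S with E is the exponential stiffening characteristic
# of preconditioned soft tissue described above.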
Yuan-Cheng Fung
[ "Chemistry", "Engineering", "Biology" ]
852
[ "Biological engineering", "Cloning", "Chemical engineering", "Tissue engineering", "Medical technology" ]
11,237,888
https://en.wikipedia.org/wiki/Genome%20survey%20sequence
In the fields of bioinformatics and computational biology, genome survey sequences (GSS) are nucleotide sequences similar to expressed sequence tags (ESTs), the main difference being that most of them are genomic in origin rather than derived from mRNA. Genome survey sequences are typically generated and submitted to NCBI by labs performing genome sequencing and are used, amongst other things, as a framework for the mapping and sequencing of genome-size pieces included in the standard GenBank divisions. Contributions Genome survey sequencing is a way to map genome sequences that does not depend on mRNA. Current genome sequencing approaches are mostly high-throughput shotgun methods, and GSS is often used in the first step of sequencing. GSSs can provide an initial global view of a genome, including both coding and non-coding DNA, and unlike ESTs they contain repetitive sections of the genome. For the estimation of repetitive sequences, GSS plays an important role in the early assessment of a sequencing project, since these data can affect estimates of sequence coverage, library quality and the construction process. For example, in the survey of the dog genome, GSS data were used to estimate global parameters such as the neutral mutation rate and repeat content. GSS is also an effective way to characterize, rapidly and at large scale, the genomes of related species for which only few gene sequences or maps are available. Low-coverage GSS can generate abundant information about the gene content and putative regulatory elements of related species. Comparing these genes across related species can reveal relatively expanded or contracted gene families. Combined with physical clone coverage, researchers can navigate the genome easily and characterize specific genomic regions by more extensive sequencing. Limitation The limitation of genome survey sequences is that they lack long-range continuity because of their fragmentary nature, which makes it harder to infer gene and marker order. For example, when detecting repetitive sequences in GSS data it may not be possible to find all the repeats, since a repetitive region may be longer than the reads and therefore difficult to recognize. Types of data The GSS division contains (but is not limited to) the following types of data: Random "single pass read" genome survey sequences Random "single pass read" genome survey sequences are GSSs generated by a single sequencing pass of randomly selected clones. Single-pass sequencing allows rapid accumulation of genomic data, but with lower accuracy. Examples include RAPD, RFLP, AFLP and so on. Cosmid/BAC/YAC end sequences Cosmid/BAC/YAC end sequences use cosmid, bacterial artificial chromosome (BAC) or yeast artificial chromosome (YAC) clones to sequence the genome from the ends of the cloned inserts. These vectors behave like very-low-copy plasmids, sometimes present at only one copy per cell. To obtain enough DNA, a large volume of E. coli culture is therefore needed; 2.5–5 litres may be a reasonable amount. Cosmids, BACs and YACs can also carry larger cloned DNA fragments than vectors such as plasmids and phagemids. A larger insert is often helpful for organizing clones in a sequencing project. Eukaryotic proteins can be expressed, with posttranslational modification, using YACs. BACs cannot do that, but they represent human DNA much more reliably than YACs or cosmids.
Exon trapped genomic sequences Exon trapped sequence is used to identify genes in cloned DNA, and this is achieved by recognizing and trapping carrier containing exon sequence of DNA. Exon trapping has two main features: First, it is independent of availability of the RNA expressing target DNA. Second, isolated sequences can be derived directly from clone without knowing tissues expressing the gene which needs to be identified. During slicing, exon can be remained in mRNA and information carried by exon can be contained in the protein. Since fragment of DNA can be inserted into sequences, if an exon is inserted into intron, the transcript will be longer than usual and this transcript can be trapped by analysis. Alu PCR sequences Alu repetitive element is member of Short Interspersed Elements (SINE) in mammalian genome. There are about 300 to 500 thousand copies of Alu repetitive element in human genome, which means one Alu element exists in 4 to 6 kb averagely. Alu elements are distributed widely in mammalian genome, and repeatability is one of the characteristics, that is why it is called Alu repetitive element. By using special Alu sequence as target locus, specific human DNA can be obtained from clone of TAC, BAC, PAC or human-mouse cell hybrid. PCR is an approach used to clone a small piece of fragment of DNA. The fragment could be one gene or just a part of gene. PCR can only clone very small fragment of DNA, which generally does not exceed 10kbp. Alu PCR is a "DNA fingerprinting" technique. This approach is rapid and easy to use. It is obtained from analysis of many genomic loci flanked by Alu repetitive elements, which are non-autonomous retrotransposons present in high number of copies in primate genomes. Alu element can be used for genome fingerprinting based on PCR, which is also called Alu PCR. Transposon-tagged sequences There are several ways to analyze the function of a particular gene sequence, the most direct method is to replace it or cause a mutation and then to analyze the results and effects. There are three method are developed for this purpose: gene replacement, sense and anti-sense suppression, and insertional mutagenesis. Among these methods, insertional mutagenesis was proved to be very good and successful approach. At first, T-DNA was applied for insertional mutagenesis. However, using transposable element can bring more advantages. Transposable elements were first discovered by Barbara McClintock in maize plants. She identified the first transposable genetic element, which she called the Dissociation (Ds) locus. The size of transposable element is between 750 and 40000bp. Transposable element can be mainly classified as two classes: One class is very simple, called insertion sequence (IS), the other class is complicated, called transposon. Transposon has one or several characterized genes, which can be easily identified. IS has the gene of transposase. Transposon can be used as tag for a DNA with a know sequence. Transposon can appear at other locus through transcription or reverse transcription by the effect of nuclease. This appearance of transposon proved that genome is not statistical, but always changing the structure of itself. There are two advantages by using transposon tagging. First, if a transposon is inserted into a gene sequence, this insertion is single and intact. The intactness can make tagged sequence easily to molecular analysis. 
The other advantage is that, many transposons can be found eliminated from tagged gene sequence when transposase is analyzed. This provides confirmation that the inserted gene sequence was really tagged by transposon. Example of GSS file The following is an example of GSS file that can be submitted to GenBank: TYPE: GSS STATUS: New CONT_NAME: Sikela JM GSS#: Ayh00001 CLONE: HHC189 SOURCE: ATCC SOURCE_INHOST: 65128 OTHER_GSS: GSS00093, GSS000101 CITATION: Genomic sequences from Human brain tissue SEQ_PRIMER: M13 Forward P_END: 5' HIQUAL_START: 1 HIQUAL_STOP: 285 DNA_TYPE: Genomic CLASS: shotgun LIBRARY: Hippocampus, Stratagene (cat. #936205) PUBLIC: PUT_ID: Actin, gamma, skeletal COMMENT: SEQUENCE: AATCAGCCTGCAAGCAAAAGATAGGAATATTCACCTACAGTGGGCACCTCCTTAAGAAGCTG ATAGCTTGTTACACAGTAATTAGATTGAAGATAATGGACACGAAACATATTCCGGGATTAAA CATTCTTGTCAAGAAAGGGGGAGAGAAGTCTGTTGTGCAAGTTTCAAAGAAAAAGGGTACCA GCAAAAGTGATAATGATTTGAGGATTTCTGTCTCTAATTGGAGGATGATTCTCATGTAAGGT GCAAAAGTGATAATGATTTGAGGATTTCTGTCTCTAATTGGAGGATGATTCTCATGTAAGGT TGTTAGGAAATGGCAAAGTATTGATGATTGTGTGCTATGTGATTGGTGCTAGATACTTTAAC TGAGTATACGAGTGAAATACTTGAGACTCGTGTCACTT || References Bioinformatics Genomics
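The example GSS submission above is a simple "KEYWORD: value" flat file, with the sequence given after a SEQUENCE: line and the record terminated by "||". The small parser below is an illustrative sketch only, not an official NCBI/dbGSS tool, showing how such a record could be read into a dictionary.

# Minimal parser for a "KEYWORD: value" GSS flat-file record like the example
# above.  Illustrative sketch only, not an official NCBI/dbGSS tool.
def parse_gss(text):
    record, seq_lines, in_seq = {}, [], False
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line == "||":                       # record terminator
            break
        if in_seq:
            seq_lines.append(line)
            continue
        if line.startswith("SEQUENCE:"):
            in_seq = True
            rest = line[len("SEQUENCE:"):].strip()
            if rest:
                seq_lines.append(rest)
            continue
        key, _, value = line.partition(":")
        record[key.strip()] = value.strip()
    record["SEQUENCE"] = "".join(seq_lines)
    return record

example = """TYPE: GSS
STATUS: New
CLONE: HHC189
P_END: 5'
SEQUENCE:
AATCAGCCTGCAAGCAAAAGATAGGAATATTCACC
TGAGTATACGAGTGAAATACTTGAGACTCGTGTCACTT
||"""

rec = parse_gss(example)
print(rec["CLONE"], rec["P_END"], len(rec["SEQUENCE"]), "bases")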
Genome survey sequence
[ "Engineering", "Biology" ]
1,829
[ "Bioinformatics", "Biological engineering" ]
11,240,093
https://en.wikipedia.org/wiki/Intransitive%20game
An intransitive or non-transitive game is a zero-sum game in which pairwise competitions between the strategies contain a cycle. If strategy A beats strategy B, B beats C, and C beats A, then the binary relation "to beat" is intransitive, since transitivity would require that A beat C. The terms "transitive game" or "intransitive game" are not used in game theory. A prototypical example of an intransitive game is the game rock, paper, scissors. In probabilistic games like Penney's game, the violation of transitivity arises in a more subtle way, and is often presented as a probability paradox. Examples Rock, paper, scissors Penney's game Intransitive dice Fire Emblem, the video game franchise that popularized intransitive cycles in unit weapons: swords and magic beat axes and bows, axes and bows beat lances and knives, and lances and knives beat swords and magic See also Stochastic transitivity References Game theory game classes
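The rock-paper-scissors cycle can be checked mechanically. The sketch below is not from the article; it encodes the "beats" relation as a directed graph and reports whether it is intransitive, i.e. whether it contains a cycle such as A beats B, B beats C, C beats A.

# Detect intransitivity (a beating cycle) in a "beats" relation.  Sketch only.
from itertools import permutations

def has_beating_cycle(beats):
    """True if some triple A, B, C satisfies A beats B, B beats C, C beats A."""
    items = list(beats)
    return any(b in beats[a] and c in beats[b] and a in beats[c]
               for a, b, c in permutations(items, 3))

rock_paper_scissors = {
    "rock":     {"scissors"},
    "scissors": {"paper"},
    "paper":    {"rock"},
}
print(has_beating_cycle(rock_paper_scissors))   # True: the relation is intransitive

transitive_example = {"A": {"B", "C"}, "B": {"C"}, "C": set()}
print(has_beating_cycle(transitive_example))    # False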
Intransitive game
[ "Mathematics" ]
213
[ "Applied mathematics", "Game theory", "Game theory game classes", "Applied mathematics stubs" ]
11,240,666
https://en.wikipedia.org/wiki/Protein%20adulteration%20in%20China
In China, the adulteration and contamination of several food and feed ingredients with inexpensive melamine and other compounds, such as cyanuric acid, ammeline and ammelide, are common practice. These adulterants can be used to inflate the apparent protein content of products, so that inexpensive ingredients can pass for more expensive, concentrated proteins. Melamine by itself has not been thought to be very toxic to animals or humans except possibly in very high concentrations, but the combination of melamine and cyanuric acid has been implicated in kidney failure. Reports that cyanuric acid may be an independently and potentially widely used adulterant in China have heightened concerns for both animal and human health. Chinese protein export contamination was first identified after the wide recall of many brands of cat and dog food starting in March 2007 (the 2007 pet food recalls). The recalls in North America, Europe and South Africa came in response to reports of kidney failure in pets. Several Chinese companies sold products claimed to be wheat gluten, rice protein or corn gluten, but which proved to be wheat flour adulterated with melamine, cyanuric acid, and other contaminants. The Chinese government was slow to respond, denying that vegetable protein was exported from China and refusing to allow foreign food safety investigators to enter the country. Ultimately, the Chinese government acknowledged that contamination had occurred and arrested the managers of two protein manufacturers identified so far and took other measures to improve food safety and product quality. Reports of widespread adulteration of Chinese animal feed with melamine have raised the issue of melamine contamination in the human food supply both in China and abroad. On 27 April 2007, the U.S. Food and Drug Administration (FDA) subjected all vegetable proteins imported from China, intended for human or animal consumption, to detention without physical examination, including: wheat gluten, rice gluten, rice protein, rice protein concentrate, corn gluten, corn gluten meal, corn by-products, soy protein, soy gluten, proteins (includes amino acids and protein hydrolysates), and mung bean protein. In a teleconference with reporters on 1 May, officials from the FDA and U.S. Department of Agriculture said that between 2.5 and 3 million people in the United States had consumed chickens that had consumed feed containing contaminated vegetable protein from China. Reports that melamine has been added as a binder in animal feed manufactured in North America also raise the possibility that harmful melamine contamination might not be limited to China. In September 2008, Sanlu Group had to recall baby formula because it was contaminated with melamine. Around 294,000 babies in China became ill after drinking the milk; at least six babies died. As of July 2010, Chinese authorities were still reporting some seizures of melamine-contaminated dairy product in some provinces, though it was unclear whether these new contaminations constituted wholly new adulterations or were the result of illegal reuse of material from the 2008 adulterations. History The contaminated vegetable proteins were imported from China in 2006 and early 2007 and used as pet food ingredients. The process of identifying and accounting for the source of the contamination and for how the contaminant causes sickness was ongoing. The first recalls were announced by Menu Foods late on Friday, 16 March 2007 for cat and dog food products in the United States. 
By 30 March the United States began to ban imports of wheat gluten from China. The Chinese government responded on 4 April by categorically denying any connection to the North American food poisonings refusing to allow inspection of facilities suspected of producing contaminated products. However, on 6 April 2007, the Chinese government told the Associated Press they would investigate the source of the wheat gluten and by 23 April China gave permission to FDA investigators to enter the country. On 25 April Chinese authorities began to shut down and destroy the implicated factories and detain their managers. The following day, China's Foreign Ministry said it had banned the use of melamine in food products, admitting that products containing melamine had cleared customs while continuing to dispute the role of melamine in causing pet deaths. China also vowed to cooperate with U.S. investigators to find the "real cause" of pet deaths. The United States Senate held an oversight hearing on the matter by 12 April. The economic impact on the pet food market has been extensive, with Menu Foods losing roughly US$30 million alone from the recall. On 24 April 2007, for the first time, FDA officials said that melamine had been detected in feed given to animals raised for human consumption within the United States. As of 7 May 2007, United States food safety officials stated: "There is very low risk to human health from consuming meat from hogs and chickens known to have been fed animal feed supplemented with pet food scraps that contained melamine and melamine-related compounds" Investigations In the 2007 outbreak, as all three pet food ingredients containing melamine had been imported from China, investigators focused their inquiries there. Another concern was raised by allegations that one contract manufacturer of pet food had included contaminated ingredients from China without the knowledge or approval of the pet food marketers. Melamine had also been purposely added as a binder to fish feed manufactured in the United States from ingredients produced in Ohio. This adulteration has not been linked to any illness. The FDA issued a Warning Letter to Tembec, the manufacturer of the adulterated binding ingredients. In response, Tembec declared that, in addition to completing the recall of all products containing the adulterated binding ingredients, it would "discontinue manufacturing and marketing of [the products] as aquatic feed binder. Tembec's aquatic feed binder products were also used by another US company, Uniscope, to produce a binder (XtraBond) for livestock feeds. This binder and the feeds made from it were not recalled, nor was the meat of the livestock fed on these feeds. No fish or fish products were recalled as a result of having been raised on the adulterated feeds. In 2008, investigation of kidney problems in Chinese infants focused on domestic dairy suppliers in China. Melamine production and use in China Melamine is commonly produced from urea, mainly by either catalyzed gas-phase production or high pressure liquid-phase production, and is soluble in water. Melamine is used combined with formaldehyde to produce melamine resin, a very durable thermosetting plastic, and melamine foam, a polymeric cleaning product. The end products include counter-tops, fabrics, glues and flame retardants. Occasionally, melamine-formaldehyde resin is added to gluten for non-food purposes, such as adhesives or fabric printing. Melamine is also a byproduct of several pesticides, including cyromazine. 
The Food Safety and Inspection Service (FSIS) of the United States Department of Agriculture (USDA) provides a test method for analyzing cyromazine and melamine in animal tissues in its Chemistry Laboratory Guidebook which "contains test methods used by FSIS Laboratories to support the Agency's inspection program, ensuring that meat, poultry, and egg products are safe, wholesome and accurately labeled." In 1999, in a proposed rule published in the Federal Register regarding cyromazine residue, the United States Environmental Protection Agency (EPA) proposed "remov[ing] melamine, a metabolite of cyromazine from the tolerance expression since it is no longer considered a residue of concern." Melamine production in China has also been reported as using coal as raw material. This production has been described as also producing "melamine scrap" which is not "pure melamine but impure melamine scrap that is sold more cheaply as the waste product after melamine is produced by chemical and fertilizer factories here." Shandong Mingshui Great Chemical Group, the company reported by The New York Times as producing melamine from coal, produces and sells both urea and melamine but does not list melamine resin as a product. Melamine production in China has increased greatly in recent years and was described as in "serious surplus" in 2006. In the United States Geological Survey 2004 Minerals Survey Yearbook, in a report on worldwide nitrogen production, the author stated that "China continued to plan and construct new ammonia and urea plants using coal gasification technology." The off-gas in production contains large amounts of ammonia (see melamine synthesis). Therefore, melamine production is often integrated into urea production which uses ammonia as feedstock. Crystallization and washing of melamine generates a considerable amount of waste water, which is a pollutant if discharged directly into the environment. The waste water may be concentrated into a solid (1.5-5% of the weight) for easier disposal. The solid may contain approximately 70% melamine, 23% oxytriazines (ammeline, ammelide and cyanuric acid), 0.7% polycondensates (melem, melam and melon). In January 2009, China's Ministry of Industry and Information Technology promulgated draft production permit rules aiming to stem a melamine production glut. Melamine had been widely sold, including over the Internet, for around 10,000 yuan ($1,500) a tonne. The ministry also aimed to shrink the number of melamine producers by setting minimum production levels and strengthening controls on ingredients and waste. Suspicion of contamination in China Melamine manufacturing and the chemical processes in which melamine are used are completely unrelated to the manufacture or processing of food products such as wheat gluten. On 9 April the FDA stated that there is a "distinct possibility" that the food was intentionally contaminated. According to Senator Richard J. Durbin, one theory that investigators are exploring is whether melamine was added to fraudulently increase the measured protein content, which determines the value of the product. Some analysis methods for determining protein content actually measure the amount of nitrogen present, on the assumption that only protein in the sample contributes significantly to its nitrogen content. Melamine contains a very high proportion of nitrogen. According to Liu Laiting, a Chinese professor of animal sciences, melamine is also hard to detect in ordinary tests. 
Glutens Xuzhou Anying Biologic Technology Development Company (徐州安营生物技术开发有限公司), an agricultural products company based in Xuzhou, Jiangsu, China, which U.S. officials believe was the source of the melamine-contaminated gluten, are maintaining innocence and assert that they are cooperating with officials. The general manager for Xuzhou Anying has denied that his company exported goods and says that they are researching who might have exported their product. They note that per Chinese law, all exported wheat gluten is tested and that they were simply a middle man for local producers. However, a truck driver who has carried goods for Xuzhou Anying contradicted this, saying "they have a factory that makes wheat gluten." Officials in the USDA and FDA believe that Xuzhou Anying labeled its wheat gluten as "nonfood" and exported through a third party, Suzhou Textiles Silk Light & Industrial Products. The nonfood designation would allow the gluten to be shipped without inspection, however a spokesman for Suzhou Textiles has denied that the company exported any wheat gluten. There is evidence that Xuzhou Anying, despite being a food ingredient supplier, has sought out large quantities of melamine in the past. The New York Times has reported that as recently as 29 March 2007, representatives of Xuzhou Anying wrote, "Our company buys large quantities of melamine scrap" on a message board for the trading of industrial materials. Melamine may have been added to enhance the apparent protein content of the wheat gluten. However, the importer of the wheat gluten, ChemNutra, claims that they received from Xuzhou Anying results of analyses showing "no impurities or contamination." It has not yet been determined whether Xuzhou Anying products other than wheat gluten have been shipped to North America. The second Chinese supplier involved in shipping melamine-contaminated food ingredients, Binzhou Futian Biology Technology, has been working with importer Wilbur-Ellis since July 2006. Binzhou Futian supplies soy, corn and other proteins to the United States, Europe and Southeast Asia. Binzhou typically ships rice protein concentrate in white bags but on 11 April one bag was pink and had the word "melamine" stenciled on it. Binzhou explained to Wilbur-Ellis that the original bag had broken and a mislabeled, but new, bag had been used. The company only supplies food and feed ingredients. Stephen Sundlof, director of the FDA's Center for Veterinary Medicine, said that melamine turning up in exported Chinese wheat gluten, rice protein concentrate and corn gluten supports theories of intentional adulteration. "That will be one of the theories we will pursue when we get into the plants in China." On 29 April 2007 and 30 April 2007, the International Herald Tribune and The New York Times reported that some animal feed manufacturers in China admit to having used melamine scrap in animal feed for years. Said Ji Denghui, general manager of the Fujian Sanming Dinghui Chemical Company: “Many companies buy melamine scrap to make animal feed, such as fish feed. I don't know if there’s a regulation on it. Probably not. No law or regulation says 'don’t do it,' so everyone's doing it. The laws in China are like that, aren't they? If there’s no accident, there won’t be any regulation.” Such use of "melamine scrap", described as left over from processing of coal into melamine for use in creating plastic and fertilizer, was described as widespread. 
Melamine is said to have been chosen in order to inflate crude protein content measures and to avoid tests for other common and illegal ingredients, such as urea. As of 2 May 2007, officials of the USDA and FDA still do not know who manufactured the contaminated food or where the contamination took place. The Chinese government has said that Xuzhou Anying, for instance, purchased its products from 25 different manufacturers. On 8 May 2007, The International Herald Tribune reported that three Chinese chemical makers have said that animal feed producers often purchase, or seek to purchase, the chemical, cyanuric acid, from their factories to blend into animal feed to give the false appearance of a higher level of protein, suggesting another potentially dangerous way that melamine and cyanuric acid might combine in protein products. The same day, FDA officials revealed that the vegetable proteins were not only contaminated, but mislabeled. Both the wheat gluten and rice protein concentrate were actually wheat flour, a much cheaper product from which wheat gluten is extracted. The addition of nitrogen-rich compounds were necessary to make the flour test as if it were protein extract. Dairy On 11 September 2008, fresh reports of massive outbreak of melamine contamination found in China led to recall of infant formula products in China. Some Chinese reports said the manufacturer of the milk products might not have consciously added Melamine to their powdered milk, however they could have used a soy protein substitute to lower production costs, and the source of their soy substitute had melamine added to it. Many Chinese babies had developed kidney stones and other acute kidney problems in recent months across China, investigation led to the discovery of this contaminant. Some people were wondering how much melamine has already entered food products designated for adults without discovery. More worrying are claims reported in China that there are now new chemicals that can be added to food to lower production costs, and yet pass the tests for melamine and other related chemicals. Impact of this incident to dairy industry outside China is beginning to unravel. By the end of September 2008, the Chinese government said that 22 dairy companies, including Sanlu and export brands like Mengniu and Yili, had produced powdered baby formula that contained traces of melamine. Some dairy farmers interviewed in Hebei Province said it was an open secret that milk was adulterated. Some dairies routinely watered down milk to increase profits, then added other cheap ingredients so the milk could pass a protein test. "Before melamine, the dealers added rice porridge or starch into the milk to artificially boost the protein count, but that method was easily tested as fake, so they switched to melamine,” said Zhao Huibin, a dairy farmer near Shijiazhuang. Investigators say the adulteration was nothing short of a wholesale re-engineering of milk. Researchers established that workers at Sanlu and at a number of milk-collection depots were diluting milk with water; they added melamine to dupe a test for determining crude protein content. "Adulteration used to be simple. What they did was very high-tech", says Chen Junshi, co-chair of the Sino-U.S. workshop and a risk-assessment specialist at China's Center for Disease Control and Prevention. Investigators subsequently learned that the emulsifier used to suspend melamine also boosted apparent milk-fat content. 
Sanlu baby formula contained a whopping 2563 mg/kg of melamine, adding 1% of apparent crude protein content to the formula, where normal milk is 3.0% to 3.4% protein. Chen says a dean of a school of food science told him that it would take a university team 3 months to develop this kind of concoction. Investigators have concluded that as-yet-unidentified individuals cooked up a protocol for a premix, a solution normally designed to fortify foods with vitamins or other nutrients but, in this case, it was poisonous. Several milk-collecting companies were using the same premix, Chen says: "So someone with technical skill had to be training them." Non-protein nitrogen as a feed additive Ruminant animals can obtain protein from at least some forms of non-protein nitrogen (NPN) through fermentation by their rumen bacteria, hence NPN is often added to their diet to supplement protein. Nonruminants such as cats, dogs and pigs (and humans) cannot utilize NPN. NPN are given to ruminants in the form of pelleted urea, ammonium phosphate and/or biuret. Sometimes slightly polymerized special urea-formaldehyde resin or a mixture of urea and formaldehyde (both are also known as formaldehyde-treated urea) is used in place of urea, because the former provides a better control on the nitrogen release. This practice is carried out in China and other countries, such as Finland, India and France. Cyanuric acid has also been used as NPN. For example, Archer Daniels Midland manufactures an NPN supplement for cattle, which contains biuret, triuret, cyanuric acid and urea. FDA permits a certain amount of cyanuric acid to be present in some additives used in animal feed and also drinking water. Melamine use as NPN for cattle was described in a 1958 patent. In 1978, however, a study concluded that melamine "may not be an acceptable nonprotein N source for ruminants", because its hydrolysis in cattle is slower and less complete than other nitrogen sources such as cottonseed meal and urea. In China, it is known that ground urea-formaldehyde resin is a common adulterant in feed for non-ruminants. Domestically it is often sold under the euphemism "protein essence" (蛋白精) and is described as "one kind of new proteinnitrogen feed additive". However, urea-formaldehyde resin itself has been suggested as appropriate for use in feed for some non-ruminants in at least one UN FAO report, suggesting its use as a binder in feed pellets in aquaculture. There is at least one report of inexpensively priced rice protein concentrate (feed grade) containing non-protein nitrogen being marketed for use in non-ruminants dating back to 2005. In a news item on its website, Jiangyin Hetai Industrial Co., Ltd. warned its customers of low-priced "PSEUDO rice protein" for sale in the market by another unnamed supplier, noting that the contaminant could be detected by analyzing the isoelectric point. It is not clear from that report whether the contaminant in that case was melamine or some other non-protein nitrogen source or whether any contaminated rice protein concentrate made it into the food supply at that time. On 18 April 2007, an ad was posted on the trading website Alibaba.com selling "Esb protein powder" in Xuzhou Anying's name. The product is said to be protein in nature and suitable for livestock and poultry feed, yet claims a crude protein content of 160–300%. It also mentions in passing the product makes use of "NPN" which is an acronym for non-protein nitrogen. 
Similar ads were placed on other websites, some dated as early as 31 October 2005. Products with similar descriptions were also sold as "EM bacterium active protein forage" by Shandong Binzhou Xinpeng Biosciences Company and "HP protein powder" by Shandong Jinan Together Biologic Technology Development Company. Protein testing Proteins, unlike most other food components, contain nitrogen, making nitrogen measurement a common surrogate for protein content. The standard tests for crude protein content used in the food industry (Kjeldahl method and Dumas method are used for official purposes) measure total nitrogen. Accidental contamination and intentional adulteration of protein meals with non-protein nitrogen sources that inflate crude protein content measurements have been known to occur in the food industry for decades. To ensure food quality, purchasers of protein meals routinely conduct quality control tests designed to detect the most common non-protein nitrogen contaminants, such as urea and ammonium nitrate. At least one pet food manufacturer not involved in any recalls, The Honest Kitchen, has reacted to the news of melamine contamination by announcing that it would add melamine testing to the suite of quality control tests it already conducted on all ingredients it purchases. In at least one other segment of the food industry, the dairy industry, some countries (at least the U.S., Australia, France and Hungary), have adopted "true protein" measurement, as opposed to crude protein measurement, as the standard for payment and testing: "True protein is a measure of only the proteins in milk, whereas crude protein is a measure of all sources of nitrogen and includes nonprotein nitrogen, such as urea, which has no food value to humans. … Current milk-testing equipment measures peptide bonds, a direct measure of true protein." Measuring peptide bonds in grains has also been put into practice in several countries including Canada, the UK, Australia, Russia and Argentina where near-infrared reflectance (NIR) technology, a type of infrared spectroscopy is used. The Food and Agriculture Organization of the United Nations (FAO) recommends that only amino acid analysis be used to determine protein in, inter alia, foods used as the sole source of nourishment, such as infant formula, but also provides: "When data on amino acids analyses are not available, determination of protein based on total N content by Kjeldahl (AOAC, 2000) or similar method … is considered acceptable." Allegations of manufacturing and product tampering 26 April 2007 and 27 April 2007 recalls by Blue Buffalo, Diamond, Harmony Farms, and Natural Balance are claimed by all 4 brands to be due to unauthorized inclusion of rice protein by American Nutrition, Inc. (ANI), their manufacturer. This adds a new potential source of contamination and distrust, namely non-compliant contract manufacturers, beyond the original problematic Chinese ingredient suppliers. Diamond and Natural Balance refer to this as a "manufacturing deviation" by ANI. Blue Buffalo and Harmony Farms characterize this as "product tampering" by ANI. ANI's recall notice makes no comment on these allegations. Melamine adulteration and contamination in the U.S. On 31 May 2007, the International Herald Tribune reported that melamine has also been purposely added as a binder to fish and livestock feed manufactured in the United States and traced to suppliers in Ohio and Colorado. 
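The adulteration incentive described in the protein-testing discussion above comes down to simple arithmetic: nitrogen-based assays such as the Kjeldahl method multiply measured nitrogen by a conversion factor (commonly 6.25) to report "crude protein", and melamine is roughly two-thirds nitrogen by mass. The sketch below is not from the article; the 6.25 factor and the melamine formula C3H6N6 are standard assumed values, checked here against the Sanlu figure of 2563 mg/kg quoted earlier, which the text says added about 1% of apparent crude protein.

# Apparent "crude protein" contributed by melamine in nitrogen-based assays.
# Assumed standard values (not quoted in the article):
#   - Kjeldahl-style conversion: crude protein = total N x 6.25
#   - melamine C3H6N6: N mass fraction = 6*14.007 / 126.12, about 0.666
N_TO_PROTEIN = 6.25
MELAMINE_N_FRACTION = 6 * 14.007 / 126.12

def apparent_protein_from_melamine(melamine_g_per_kg):
    """Grams of apparent 'crude protein' per kg contributed by melamine."""
    nitrogen = melamine_g_per_kg * MELAMINE_N_FRACTION
    return nitrogen * N_TO_PROTEIN

# Check against the Sanlu figure quoted above: 2563 mg/kg of melamine.
boost = apparent_protein_from_melamine(2.563)        # g per kg
print(f"apparent crude protein boost: {boost:.1f} g/kg  (~{boost / 10:.2f}%)")
# Roughly 10.7 g/kg, i.e. about the "1% of apparent crude protein" stated
# above, against a normal milk protein content of 3.0-3.4%.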
In autumn 2008, the Food and Drug Administration detected traces of melamine in one top-selling brand of infant formula and traces of cyanuric acid in another brand. Separately, a third major formula maker said that in-house tests had detected trace levels of melamine in its infant formula. The three firms manufacture more than 90 percent of all infant formula produced in the United States. The FDA and other experts said the melamine contamination in U.S.-made formula had occurred unintentionally during the manufacturing process and were not a safety concern. Impact on human food supply In early 2007, U.S. officials publicly said that they do not believe melamine alone to be harmful to humans. However, there was too little data at that time to determine how it reacts with other substances, in particular, the combination of melamine with cyanuric acid, a similar chemical known to be found in the waste product of at least some methods of melamine production, and which combination some American and Canadian scientists have suggested may have led to the pet deaths through kidney failure. On 25 May 2007 in a US FDA/CSFAN Interim Melamine and Analogues Safety/Risk Assessment, the FDA stated: "While it is entirely possible that the analogues are more or less potent than the parent compound, melamine, we have no information that assesses the relative potency of the three analogues as compared to melamine; therefore, for the purpose of this interim assessment, we have made an assumption of equal potency. It has been hypothesized that melamine may interact synergistically with its three analogues, but no studies have been conducted that specifically test this hypothesis. Very preliminary work suggests that if it does occur, the formation of lattice crystals, particularly between melamine and cyanuric acid, takes place at very high dose levels and is a threshold and concentration dependent phenomenon that would not be relevant to low levels of exposure. Although still under investigation, it now appears that the combination of melamine and cyanuric acid has been linked to the acute renal failure in cats and dogs that have eaten the suspect pet foods...." In the United States, five potential vectors of impact on the human food supply have been identified. The first, which has already been acknowledged to have occurred by FDA and USDA officials, is via contaminated ingredients imported for use in pet foods and sold for use as salvage in animal feed which has been fed to some number of hogs and chickens, the meat from which has been processed and sold to some number of consumers: "There is very low risk to human health" in such cases involving pork and poultry. On 1 May 2007, the FDA and USDA stated that millions of chickens fed feed tainted with contaminated pet food had been consumed by an estimated 2.5 to 3 million people. The second potential vector is via contaminated vegetable proteins imported for intended use as animal feed, which has apparently been acknowledged to occur with regard to fish feed in Canada, while the third possible route is via contaminated vegetable proteins imported for intended use in human food products, and the FDA has issued an import alert subjecting all Chinese vegetable proteins to detention without examination. A fourth potential vector is referred to on 10 May 2007 FDA-USDA press conference, viz. incorporation of contaminated vegetable proteins into products intended for human use and subsequent importation. 
A fifth vector is acknowledged to have occurred on 30 May 2007 FDA/USDA press conference, whereby U.S. manufacturers of livestock and shrimp/fish feed have acknowledged adding melamine to their products as a binder. The original Xuzhou Anying wheat gluten was "human grade," as opposed to "feed grade," meaning that it could have been used to make food for humans such as bread or pasta. At least one contaminated batch was used to make food for humans, but the FDA quarantined it before any was sold. The FDA also notified the Centers For Disease Control and Prevention to watch for new patients admitted to hospitals with renal failure. As of April 2007, there were no observed increases in human illnesses, and little human food tested as contaminated. Reports of widespread melamine adulteration in Chinese animal feed have raised the possibility of wider melamine contamination in the human food supply in China and abroad. Despite the widely reported ban on melamine use in vegetable proteins in China, at least some chemical manufacturers continue to report selling it for use in animal feed and in products for human consumption. Said Li Xiuping, a manager at Henan Xinxiang Huaxing Chemical in Henan Province: "Our chemical products are mostly used for additives, not for animal feed. Melamine is mainly used in the chemical industry, but it can also be used in making cakes." In 2009, the World Health Organization (WHO) published a report on a December 2008 expert meeting held in conjunction with the FAO concluding, inter alia, that "a tolerable daily intake (TDI) of 0.2 mg/kg body weight for melamine was established. The TDI is applicable to the whole population, including infants." However, the experts also noted: "This TDI is applicable to exposure to melamine alone. … Available data indicate that simultaneous exposure to melamine and cyanuric acid is more toxic than exposures to each compound individually. Data are not adequate to allow the calculation of a health-based guidance value for this co-exposure." In the United States On 3 April 2007, The Boston Globe reported that tainted wheat gluten ended up in factories that produce food for human consumption. Then, on 19 April, federal U.S. officials said that they were investigating reports that Binzhou Futian rice protein had been used in hog feed, but declined to specify where. The California Department of Food and Agriculture placed American Hog Farm in Ceres, California under quarantine, after melamine was found in the urine of the hogs on the farm. According to California state officials, approximately 45 state residents consumed pork from hogs that had been fed melamine-contaminated feed. The FDA subsequently discovered that melamine was present in feed that had been given to hogs in California, New York, North Carolina, South Carolina, Utah, and possibly Ohio. In response, the FDA announced that, in addition to its existing practice of testing of wheat gluten and rice protein products for melamine, it would begin testing imported ingredients and finished products that contain cornmeal, corn gluten, rice bran and soy protein for the presence of melamine or cyanuric acid. The agency also subjected all vegetable proteins imported from China, intended for human or animal consumption, to detention without physical examination, beginning on 27 April. Finally, the FDA investigated domestic food manufacturers to ensure that no contaminated product was being used in foods intended for human use. 
On 28 April 2007, the USDA and the FDA held a joint press release, acknowledging that pork from hogs fed contaminated feed had entered the human food supply, but emphasizing that the risk of illness from eating such pork was "very low". On 30 April, they amended this statement to include poultry as well, after it was found that chickens in Indiana had been fed the contaminated feed. On 8 May, fish at several hatcheries in Oregon were also discovered to have consumed contaminated feed, but these fish were similarly not seen as a significant human health risk. Throughout April and May, the USDA investigated the potential human health risks of consuming the meat of animals that had eaten contaminated feed, and continued to hold press conferences discussing their latest findings. They consistently found that consuming pork and poultry from such sources did not pose a significant health risk, even after factoring in potential interactions between melamine and cyanuric acid. The Centers for Disease Control and Prevention also monitored hospitals and poison control centers during this period, and reported on 2 May 2007 that there had been no increase in reports of kidney disease. USDA ultimately cleared the affected swine for human consumption on 15 May 2007. After learning that infant formula from one firm in China was potentially contaminated with melamine, the FDA updated its risk assessment on 3 October 2008 (and again on 28 November 2008) to indicate that infants could be more sensitive than adults to melamine exposure. Human food supply outside of U.S. On 7 June 2007, the European Food Safety Authority (EFSA) issued a provisional statement, noting that they were investigation potential synergistic effects between melamine and cyanuric acid. However, by 21 June, the Health & Consumer Protection Directorate-General of the European Commission found that there was "no need to take restrictive measures" on livestock who had eaten contaminated feed, nor on food products derived from such animals. In 2008, the reports of contaminated powdered milk in China led to renewed examination of potential health risks. The EFSA issued a press release on 25 September 2008, noting that children who consumed above-average levels of milk products could potentially be at risk. A report from the Chinese Ministry of Health found that 294,000 infants in China had been affected by melamine-contaminated infant formula by the end of November 2008. More than 50,000 infants were hospitalized, and six deaths were confirmed, as a result of this contamination. Impact on human pharmaceutical supply In August 2009 the United States Food and Drug Administration advised pharmaceutical manufacturers that they should determine if they are using components possibly contaminated with melamine and test those components at risk, as well as make sure they get certifications from suppliers that at-risk components have been tested appropriately. A new guidance lists 27 components the agency considers to be at risk of melamine contamination based on its search of U.S. Pharmacopeia/National Formulary monographs and its Inactive Ingredient Database. The list — which includes adenine, ammonium salts, gelatin, guar gum, lactose, povidone and taurine — is not all-inclusive, the guidance says. 
"For the purpose of this guidance, we use the term at-risk component to mean those ingredients or raw materials that rely on a test for nitrogen content for their identity or purity or strength, and that contain nitrogen in amounts greater than 2.5 percent." Reaction In China Chinese government Once wheat gluten had been isolated as the source of the problems, federal investigators in the United States began to trace the gluten used in the foods. All of the gluten came from ChemNutra's Kansas City warehouse. ChemNutra said it had imported nearly 800 tonnes of wheat gluten from the Xuzhou Anying Biologic Technology Development Company of Xuzhou, Jiangsu, China between 29 November and 8 March. ChemNutra says the gluten came directly from China or from China through the Netherlands, and that the company had received no reports of contamination in the chemical analysis provided by Xuzhou Anying Biologic Technology Development Company. The products were shipped from the company's Kansas City warehouse to several pet food manufacturers and one distributor of pet food ingredients in the US and Canada, including the companies affected by the recall. Xuzhou Anying also exports carrots, garlic, ginger, corn protein powder, vegetables and feed. On 5 April 2007, several days after the United States halted all wheat gluten imports, the Chinese government categorically denied any connection to the North American food poisonings to The New York Times, claiming they had no record of exporting any agricultural products that could have tainted the recalled pet foods, including the wheat gluten that had been the focus of the investigation. The general manager of the Xuzhou Anying Biologic Technology Development Company also denied that they had exported any wheat gluten to North America. However, on 6 April, the Chinese government told the Associated Press they would investigate the source of the wheat gluten. Although the government refused to give details on the investigation, the Xinhua News Agency stated that "sampling and examination" of wheat gluten was under way across China, centering on the presence of melamine. Officials with the Office of the General Administration of Quality Supervision, Inspection and Quarantine, said that they will stay in touch with the U.S. Embassy in Beijing and that "further measures would be taken based on developments in the United States". The U.S. FDA requested to inspect facilities suspected of manufacturing contaminated products on 4 April; the Chinese government initially refused this request, before ultimately granting FDA investigators permission to enter the country on 23 April. On 25 April 2007, Chinese authorities shut down Binzhou Futian Biology Technology Co. Ltd., and detained its manager, Tian Feng. Feng denied responsibility, saying that he "didn't do anything wrong", and denying that he even knew what melamine was. The following day, China's Foreign Ministry said it has banned the use of melamine in food products, admitting that products containing melamine had cleared customs while continuing to dispute the role of melamine in causing pet deaths. China also vowed to cooperate with U.S. investigators to find the "real cause" of pet deaths. China provided a transcript of a 26 April press conference, indicating that an invitation to FDA investigators had been sent on 23 April but making no mention of banning melamine usage. 
On 3 May 2007, Chinese authorities detained Mao Lijun, general manager of the Xuzhou Anying Biologic Technology Development, one of the companies accused of exporting contaminated protein, on unspecified charges. On 29 May 2007, in actions not linked directly to the protein export scandal, Zheng Xiaoyu (郑筱萸), the former head of China's State Food and Drug Administration (SFDA), was convicted of personally approving unproven and unsafe medicines after taking bribes from eight pharmaceutical companies totaling more than 6.49 million RMB (approximately 850,000 US dollars). These fraudulent approvals were estimated to have resulted in hundreds of patient deaths; consequently, Zheng was sentenced to death. It was also discovered that, during his eight years as head of the SFDA, Zheng had personally ordered the approval of more than 150,000 new medicines; by contrast, the U.S. FDA approves approximately 140 new medications per year. Most of those 150,000 medicines were manufactured by the eight pharmaceutical companies that bribed Zheng; one such unsafe medication, produced by the now-defunct Anhui Hua Yuan (华源) Company, resulted in 14 patient deaths and hundreds becoming permanently disabled. Zheng's former deputy was also convicted as an accomplice and given a two-year delayed death sentence. After these convictions, it was announced that a new system for unsafe food recall would be implemented by the end of 2007. By the end of August 2007, Xinhua reported that China had instituted new product recall and customer notification systems. Further customer protection measures were introduced in response to the 2008 Chinese milk scandal. A Xinhua article from September 2008 lists the following information as "Lessons Learned" from the milk scandal: "Sanlu, the center of the scandal, provided a bad example of crisis management. When it was first exposed, Sanlu refused to take the blame and passed the buck to innocent dairy farmers, which ignited great anger nationwide. A further official investigation showed Sanlu had lied about its contaminated baby formula for months while thousands of infants got sick and at least three died. Sanlu didn't openly admit its products were toxic until Sept. 11. It eventually recalled baby formula manufactured on and before Aug. 6." In the United States Federal government All of the food recalls executed by companies in the United States and Canada were voluntary, i.e. not mandated by any government agency. In the United States, prior to the recall, the Food and Drug Administration did not keep pet foods under the same level of protection and safety assurance as food intended for human consumption. According to the FDA, the FDA's "regulation of pet food is similar to that for other animal feeds. The Federal Food, Drug, and Cosmetic Act (FFDCA) requires that pet foods, like human foods, be pure and wholesome, safe to eat, produced under sanitary conditions, contain no harmful substances, and be truthfully labeled." However, "there is no requirement that pet food products have premarket approval by FDA." Once the recall was announced, the Food and Drug Administration immediately began to mobilize resources to assist in the investigation. The FDA has dedicated each of its 20 district offices and three field laboratories to the investigation and more than "400 employees are involved in sample pet food collection, monitoring of recall effectiveness, and preparing consumer complaint reports." 
The FDA has activated its Emergency Operations Center, making sure the information on the poisoning gets to scientists and inspection teams. The agency "is also working with its regulatory partners in all 50 state agriculture and health agencies to inform them of the status of the investigative and analytical efforts." The FDA issued an alert to its field personnel that they should block import of wheat gluten from Xuzhou Anying Biologic Technology Development Company Ltd., and subject wheat gluten from China and the Netherlands to increased scrutiny. As a result of the contamination, consumers and pets' rights groups have called for the FDA to take a more active role in ensuring pet food safety. On 2 April 2007, People for the Ethical Treatment of Animals called for the resignation of the FDA's commissioner, Dr. Andrew von Eschenbach. Possibly in response to growing concern about ensuring the safety of the U.S. food supply, on 1 May 2007, Dr. von Eschenbach announced the creation of an Assistant Commissioner for Food Protection to advise on "strategic and substantive food safety and food defense matters." Dr. David Acheson will fill this role. According to Dr. von Eschenbach, "The protection of America's food supply and therefore the safety of Americans eating food of domestic or international origin is of utmost importance to me as a physician, and to the mission of this agency." U.S. Congress In the aftermath of the recall, there was a call from consumers for an investigation into Menu Foods' reaction to the poisonings, the federal government's stand on pet food safety and quality control, and the FDA's response to the recall. On 1 April 2007, Senator Dick Durbin (D – Illinois) called on the FDA to "account for weak links in the pet food inspection system." Earlier in the week, Representative Rosa DeLauro (D – Connecticut) asked for an analysis of the FDA's oversight of pet food manufacturing facilities and a report of actions taken since the recall. On 6 April 2007, Senator Durbin criticized the federal inspection process for both human and pet food and called for hearings on the matter. According to the Los Angeles Times, which interviewed Durbin on 8 April, Durbin said he would like to see the FDA set national standards and inspection rules for pet food manufacturing facilities, and to see "federal law changed to allow the FDA to order a recall of food intended for human or pet consumption rather than rely on companies to do it voluntarily." Durbin was working with Senator Herb Kohl (D – Wisconsin), the Chairman of the United States Senate Appropriations Subcommittee on Agriculture, Rural Development, Food and Drug Administration, and Related Agencies. Senator Kohl initiated hearings in the Senate Appropriations Subcommittee along with Senator Durbin and Senator Bob Bennett (R – Utah). Senator Robert Byrd (D – West Virginia), from the United States Senate Committee on Appropriations, was there as well. Witnesses included FDA officials. They looked into several areas: the delay in reporting by Menu Foods, the lack of federal inspections of pet food facilities, and incomplete reporting by the FDA since the start of the recall. During the hearing Senators Durbin and Byrd criticized the government's response during the recall. Durbin specifically criticized the lack of any regular inspection practices or quality control with regards to pet food safety. 
Senator Kohl criticized the FDA's communication to the public about recalled foods, noting that volunteer websites had more detailed and easier-to-access information about the extent of the problem and which specific foods are of concern than FDA's online resources which Kohl said was contradictory of itself at times, and which the FDA official giving testimony admitted to being difficult to navigate. On 18 April 2007, Senator Durbin and Representative DeLauro met with US FDA Commissioner von Eschenbach to discuss the additional rice protein recalls and learned that the Chinese government was blocking outside attempts to investigate the contamination. In response, they sent a letter to Zhou Wenzong, China's Ambassador to the United States saying in part that "contaminated batches of wheat gluten and rice protein responsible for these events were imported from China" and that "no level of melamine should be found in pet or human food" and asking for visas for inspectors from the United States. Public The protein export scandal inspired a significant amount of US media attention to Chinese food safety concerns, and increased unease about Chinese imports amongst the American public. A July 2007 Consumer Reports poll found that 92 percent of Americans favored "country of origin" labeling on meat products, while in a USA Today/Gallup poll, 74 percent of US respondents said they were "somewhat concerned" or "very concerned" about the safety of food imported from China. See also 2007 pet food recalls 2007 Chinese export recalls 2008 Chinese milk scandal Food safety in China References External links FDA Food Recall Page FDA Recall FAQ Food safety in China 2007 pet food recalls Scandals in China 2008 Chinese milk scandal Adulteration
Protein adulteration in China
[ "Chemistry" ]
9,412
[ "Adulteration", "Drug safety" ]
11,241,001
https://en.wikipedia.org/wiki/Quantum%20master%20equation
A quantum master equation is a generalization of the idea of a master equation. Rather than just a system of differential equations for a set of probabilities (which only constitutes the diagonal elements of a density matrix), quantum master equations are differential equations for the entire density matrix, including off-diagonal elements. A density matrix with only diagonal elements can be modeled as a classical random process, therefore such an "ordinary" master equation is considered classical. Off-diagonal elements represent quantum coherence which is a physical characteristic that is intrinsically quantum mechanical. A formally exact quantum master equation is the Nakajima–Zwanzig equation, which is in general as difficult to solve as the full quantum problem. The Redfield equation and Lindblad equation are examples of approximate Markovian quantum master equations. These equations are very easy to solve, but are not generally accurate. Some modern approximations based on quantum master equations, which show better agreement with exact numerical calculations in some cases, include the polaron transformed quantum master equation and the VPQME (variational polaron transformed quantum master equation). Numerically exact approaches to the kinds of problems to which master equations are usually applied include numerical Feynman integrals, quantum Monte Carlo, DMRG and NRG, MCTDH, and HEOM. See also Open quantum system Quantum dynamics Quantum coherence Differential equation Master equation Lindblad equation Nakajima–Zwanzig equation Feynman integral References Equations
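The distinction between propagating only diagonal probabilities and propagating the full density matrix can be made concrete with a small numerical sketch. The following is a minimal illustration, not taken from any reference implementation, of a Markovian quantum master equation of Lindblad form for a two-level system with a single decay channel; the Hamiltonian, decay rate, time step and plain Euler integrator are all illustrative assumptions.

```python
import numpy as np

# Two-level system: |e> = (1, 0), |g> = (0, 1)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)      # lowering operator |g><e|

omega, gamma = 1.0, 0.2                             # splitting and decay rate (assumed units)
H = 0.5 * omega * sz                                # system Hamiltonian (hbar = 1)
L = np.sqrt(gamma) * sm                             # single Lindblad (collapse) operator

def lindblad_rhs(rho):
    """d(rho)/dt = -i[H, rho] + L rho L^+ - (1/2){L^+ L, rho} for one collapse operator."""
    comm = -1j * (H @ rho - rho @ H)
    diss = L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return comm + diss

# Start in the excited state and integrate with a plain Euler step
rho = np.array([[1, 0], [0, 0]], dtype=complex)
dt, steps = 0.01, 1000
for _ in range(steps):
    rho = rho + dt * lindblad_rhs(rho)

# The excited-state population (a diagonal element) decays roughly as exp(-gamma * t)
print("population after t = 10:", rho[0, 0].real)   # ~ 0.135, close to exp(-2)
```

Off-diagonal (coherence) elements would evolve under the same right-hand side; a classical master equation, by contrast, would propagate only the two diagonal entries.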
Quantum master equation
[ "Mathematics" ]
301
[ "Mathematical objects", "Equations" ]
11,241,428
https://en.wikipedia.org/wiki/DAFNE
DAFNE or DAΦNE (Double Annular Φ Factory for Nice Experiments), is an electron-positron collider at the INFN Frascati National Laboratory in Frascati, Italy. It consists of 2 accelerator rings, both approximately 100 meters in length. Since 1999 it has been colliding electrons and positrons at a center of mass energy of 1.02 GeV to create phi mesons (φ). 85% of these decay into kaons (K), whose physics is the subject of most of the experiments at DAFNE. There have been five experiments at DAFNE: KLOE (K LOng Experiment), which has been studying CP violation in kaon decays and rare kaon decays since 2000. This is the largest of DAFNE experiments. It has been continued by the KLOE-2 experiment. FINUDA (FIsica NUcleare a DAFNE), studies the spectra and nonmesonic decays of hypernuclei containing lambda baryons (Λ). The hypernuclei are produced by negatively charged kaons () striking a thin target. DEAR (DAFNE Exotic Atoms Research experiment), determines scattering lengths in atoms made from a kaon and a proton or deuteron. DAFNE Light Laboratory (DAΦNE-L) consists of 3 lines of synchrotron radiation emitted by DAFNE, a fourth is under construction. SIDDHARTA (SIlicon Drift Detectors for Hadronic Atom Research by Timing Application), aims to improve the precision measurements of X-ray transitions in kaon atoms studied at DEAR. External links Homepage of the accelerator division of Frascati National Laboratory: public (Italian), technical References Particle physics facilities Particle experiments Research institutes in Italy Particle accelerators
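Because the collider is symmetric, the quoted centre-of-mass energy fixes the energy of each beam. The arithmetic check below uses the PDG value of the φ(1020) mass, which is a reference value added here for illustration rather than a figure from the text.

```python
m_phi = 1019.461      # phi(1020) mass in MeV/c^2 (PDG value, added for illustration)

# For a symmetric e+e- collider the centre-of-mass energy is twice the beam energy,
# so running on the phi resonance requires each ring to hold beams of about m_phi / 2.
beam_energy_mev = m_phi / 2.0
print(f"beam energy per ring ~ {beam_energy_mev:.1f} MeV, i.e. E_cm ~ {m_phi / 1000:.2f} GeV")
```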
DAFNE
[ "Physics" ]
371
[ "Particle physics stubs", "Particle physics" ]
11,242,909
https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20U3
In molecular biology, U3 snoRNA is a non-coding RNA found predominantly in the nucleolus. U3 has C/D box motifs that technically make it a member of the box C/D class of snoRNAs; however, unlike other C/D box snoRNAs, it has not been shown to direct 2'-O-methylation of other RNAs. Rather, U3 is thought to guide site-specific cleavage of ribosomal RNA (rRNA) during pre-rRNA processing. The box C/D element is a subset of the six short sequence elements found in all U3 snoRNAs, namely boxes A, A', B, C, C', and D. Secondary Structure The U3 snoRNA secondary structure is characterized by a small 5' domain (with boxes A and A'), and a larger 3' domain (with boxes B, C, C', and D), the two domains being linked by a single-stranded hinge. Boxes B and C form the B/C motif, which appears to be exclusive to U3 snoRNAs, and boxes C' and D form the C'/D motif. The latter is functionally similar to the C/D motifs found in other snoRNAs. The 5' domain and the hinge region act as a pre-rRNA-binding domain. The 3' domain has conserved protein-binding sites. Both the box B/C and box C'/D motifs are sufficient for nuclear retention of U3 snoRNA. The box C'/D motif is also necessary for nucleolar localization, stability and hyper-methylation of U3 snoRNA. Both box B/C and C'/D motifs are involved in specific protein interactions and are necessary for the rRNA processing functions of U3 snoRNA. Two potential mRNA binding motifs have been identified on U3 that base pair with the target sequences 5'-CUACCUCUCU-3' and 5'-CUCAGGAG-3'. mRNA targets bound by U3 appear to be involved in protein translation. Species-specific secondary structure models A secondary structure for S. cerevisiae, determined by chemical mapping of U3A RNA in a purified snoRNP, is available. A human structure model has also been proposed. Like yeast and humans, the protozoan Entamoeba histolytica, a primitive eukaryote, adopts the same conserved secondary structure of U3 snoRNA. Four consensus structures specific to metazoa, fungi, plants and basal eukaryotes have been proposed. See also Fibrillarin RCL1 RRP9 UTP6 UTP11L UTP14A UTP15 References External links uRNADB: U3 page (archive) The UMASS snoRNAdb entry for U3 The SGD entry for U3a The human snoRNAbase entry for U3 Small nuclear RNA
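Base pairing with the target sequences quoted above implies that the corresponding guide regions are, to a first approximation that ignores G-U wobble pairs, the reverse complements of those targets. The sketch below only illustrates that complementarity; it does not reproduce the actual U3 sequence.

```python
def rna_revcomp(seq):
    """Watson-Crick reverse complement of an RNA sequence (wobble pairing ignored)."""
    pair = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(pair[base] for base in reversed(seq))

# Target sequences quoted in the text
for target in ("CUACCUCUCU", "CUCAGGAG"):
    print(f"mRNA target 5'-{target}-3' pairs with a guide of sequence 5'-{rna_revcomp(target)}-3'")
```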
Small nucleolar RNA U3
[ "Chemistry" ]
614
[ "Molecular biology stubs", "Molecular biology" ]
11,243,193
https://en.wikipedia.org/wiki/Pepscan
Pepscan is a procedure for mapping and characterizing epitopes involving the synthesis of overlapping peptides and analysis of the peptides in enzyme-linked immunosorbent assays (ELISAs). The method is based on combinatorial chemistry and was pioneered by Mario Geysen and coworkers. Rob Meloen was one of Geysen's co-workers. He also played an important role in the development of numerous other new technologies, including vaccine and diagnostic product development for several viral diseases. From 1994 to 2010, Meloen was Professor of Special Appointment (Chair: Biomolecular Recognition) at Utrecht University. He was one of the co-founders of the company Pepscan (Lelystad, the Netherlands) and became Scientific Director (CSO). Pepscan is now part of the Biosynth Group. Twenty-five years later, the Pepscan methodology, evolved and modernized with the latest insights, is still an important part of Pepscan’s epitope mapping platform, which is instrumental in therapeutic antibody development. References Biochemistry methods Peptides Immunology
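The core of the Pepscan approach is a library of overlapping peptides that tile the antigen sequence, each of which is then probed by ELISA. The following sketch only generates such a tiling in software; the window length, step size and the sequence itself are illustrative assumptions rather than parameters from the published method.

```python
def overlapping_peptides(sequence, length=12, step=3):
    """Return the overlapping peptide windows that tile a protein sequence."""
    sequence = sequence.replace(" ", "").upper()
    return [sequence[i:i + length]
            for i in range(0, len(sequence) - length + 1, step)]

# Hypothetical antigen fragment, used only to exercise the function
antigen = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
for i, pep in enumerate(overlapping_peptides(antigen), start=1):
    print(f"peptide {i:02d}: {pep}")
```

Each printed window corresponds to one peptide that would be synthesized and tested for antibody binding.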
Pepscan
[ "Chemistry", "Biology" ]
228
[ "Biochemistry methods", "Biomolecules by chemical classification", "Immunology", "Molecular biology", "Biochemistry", "Peptides" ]
14,892,659
https://en.wikipedia.org/wiki/Stellar-wind%20bubble
A stellar-wind bubble is a cavity light-years across filled with hot gas blown into the interstellar medium by the high-velocity (several thousand km/s) stellar wind from a single massive star of type O or B. Weaker stellar winds also blow bubble structures, which are also called astrospheres. The heliosphere blown by the solar wind, within which all the major planets of the Solar System are embedded, is a small example of a stellar-wind bubble. Stellar-wind bubbles have a two-shock structure. The freely-expanding stellar wind hits an inner termination shock, where its kinetic energy is thermalized, producing 106 K, X-ray-emitting plasma. The hot, high-pressure, shocked wind expands, driving a shock into the surrounding interstellar gas. If the surrounding gas is dense enough (number densities or so), the swept-up gas radiatively cools far faster than the hot interior, forming a thin, relatively dense shell around the hot, shocked wind. See also Cosmic wind Stellar wind Solar wind Planetary wind Colliding-wind binary Pulsar wind nebula Galactic superwind Superwind References Stellar astronomy Galactic astronomy Interstellar media
Stellar-wind bubble
[ "Physics", "Astronomy" ]
243
[ "Interstellar media", "Outer space", "Plasma physics", "Galactic astronomy", "Astronomy stubs", "Stellar astronomy stubs", "Plasma physics stubs", "Astronomical sub-disciplines", "Stellar astronomy" ]
14,892,992
https://en.wikipedia.org/wiki/Lamb%20waves
Lamb waves propagate in solid plates or spheres. They are elastic waves whose particle motion lies in the plane that contains the direction of wave propagation and the direction perpendicular to the plate. In 1917, the English mathematician Horace Lamb published his classic analysis and description of acoustic waves of this type. Their properties turned out to be quite complex. An infinite medium supports just two wave modes traveling at unique velocities; but plates support two infinite sets of Lamb wave modes, whose velocities depend on the relationship between wavelength and plate thickness. Since the 1990s, the understanding and utilization of Lamb waves have advanced greatly, thanks to the rapid increase in the availability of computing power. Lamb's theoretical formulations have found substantial practical application, especially in the field of non-destructive testing. The term Rayleigh–Lamb waves embraces the Rayleigh wave, a type of wave that propagates along a single surface. Both Rayleigh and Lamb waves are constrained by the elastic properties of the surface(s) that guide them. Lamb's characteristic equations In general, elastic waves in solid materials are guided by the boundaries of the media in which they propagate. An approach to guided wave propagation, widely used in physical acoustics, is to seek sinusoidal solutions to the wave equation for linear elastic waves subject to boundary conditions representing the structural geometry. This is a classic eigenvalue problem. Waves in plates were among the first guided waves to be analyzed in this way. The analysis was developed and published in 1917 by Horace Lamb, a leader in the mathematical physics of his day. Lamb's equations were derived by setting up formalism for a solid plate having infinite extent in the x and y directions, and thickness d in the z direction. Sinusoidal solutions to the wave equation were postulated, having x- and z-displacements of the form

u_x = f(z) e^(i(ωt - kx))   (1)
u_z = g(z) e^(i(ωt - kx))   (2)

This form represents sinusoidal waves propagating in the x direction with wavelength 2π/k and frequency ω/2π. Displacement is a function of x, z, t only; there is no displacement in the y direction and no variation of any physical quantities in the y direction. The physical boundary condition for the free surfaces of the plate is that the component of stress in the z direction at z = +/- d/2 is zero. Applying these two conditions to the above-formalized solutions to the wave equation, a pair of characteristic equations can be found. These are:

tan(βd/2)/tan(αd/2) = -4αβk²/(k² - β²)²   (3)

for symmetric modes and

tan(βd/2)/tan(αd/2) = -(k² - β²)²/(4αβk²)   (4)

for asymmetric modes, where

α² = ω²/cl² - k²  and  β² = ω²/ct² - k².

Inherent in these equations is a relationship between the angular frequency ω and the wave number k. Numerical methods are used to find the phase velocity cp = fλ = ω/k, and the group velocity cg = dω/dk, as functions of d/λ or fd. cl and ct are the longitudinal wave and shear wave velocities respectively. The solution of these equations also reveals the precise form of the particle motion, which equations (1) and (2) represent in generic form only. It is found that equation (3) gives rise to a family of waves whose motion is symmetrical about the midplane of the plate (the plane z = 0), while equation (4) gives rise to a family of waves whose motion is antisymmetric about the midplane. Figure 1 illustrates a member of each family. Lamb’s characteristic equations were established for waves propagating in an infinite plate - a homogeneous, isotropic solid bounded by two parallel planes beyond which no wave energy can propagate. 
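For a given frequency-thickness product, the symmetric characteristic equation can be solved numerically for the phase velocity. The sketch below is one possible approach, not a reproduction of any particular dispersion-curve package; the steel wave speeds are the values quoted later in this article, and the bracketing-plus-filtering strategy is a simplification (a production code would use a pole-free form of the residual).

```python
import numpy as np
from scipy.optimize import brentq

cl, ct = 5890.0, 3260.0        # longitudinal and shear speeds for steel (m/s), as quoted below

def symmetric_residual(cp, fd):
    """Residual of the symmetric Rayleigh-Lamb equation at phase velocity cp (m/s),
    for a frequency-thickness product fd (Hz*m). Complex arithmetic covers the
    regime where alpha or beta becomes imaginary."""
    wd = 2.0 * np.pi * fd                      # omega * d
    kd = wd / cp                               # k * d
    alpha_d = np.sqrt(complex((wd / cl) ** 2 - kd ** 2))
    beta_d = np.sqrt(complex((wd / ct) ** 2 - kd ** 2))
    lhs = np.tan(beta_d / 2.0) / np.tan(alpha_d / 2.0)
    rhs = -4.0 * alpha_d * beta_d * kd ** 2 / (kd ** 2 - beta_d ** 2) ** 2
    return (lhs - rhs).real

fd = 1.0e6 * 1.0e-3                            # 1 MHz * 1 mm, expressed in Hz*m
grid = np.linspace(1500.0, 5800.0, 2000)
res = [symmetric_residual(c, fd) for c in grid]
for lo, hi, r_lo, r_hi in zip(grid, grid[1:], res, res[1:]):
    if r_lo * r_hi < 0.0:
        root = brentq(symmetric_residual, lo, hi, args=(fd,))
        if abs(symmetric_residual(root, fd)) < 1e-3:   # discard sign flips at tan() poles
            print(f"symmetric-mode phase velocity ~ {root:.0f} m/s at fd = 1 MHz*mm")
```

The antisymmetric equation is handled the same way with the right-hand side inverted.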
In formulating his problem, Lamb confined the components of particle motion to the direction of the plate normal (z-direction) and the direction of wave propagation (x-direction). By definition, Lamb waves have no particle motion in the y-direction. Motion in the y-direction in plates is found in the so-called SH or shear-horizontal wave modes. These have no motion in the x- or z-directions, and are thus complementary to the Lamb wave modes. These two are the only wave types which can propagate with straight, infinite wave fronts in a plate as defined above. Velocity dispersion inherent in the characteristic equations Lamb waves exhibit velocity dispersion; that is, their velocity of propagation c depends on the frequency (or wavelength), as well as on the elastic constants and density of the material. This phenomenon is central to the study and understanding of wave behavior in plates. Physically, the key parameter is the ratio of plate thickness d to wavelength . This ratio determines the effective stiffness of the plate and hence the velocity of the wave. In technological applications, a more practical parameter readily derived from this is used, namely the product of thickness and frequency: The relationship between velocity and frequency (or wavelength) is inherent in the characteristic equations. In the case of the plate, these equations are not simple and their solution requires numerical methods. This was an intractable problem until the advent of the digital computer forty years after Lamb's original work. The publication of computer-generated "dispersion curves" by Viktorov in the former Soviet Union, Firestone followed by Worlton in the United States, and eventually many others brought Lamb wave theory into the realm of practical applicability. The free "Dispersion Calculator" (DC) software allows computation of dispersion diagrams for isotropic plates and multilayered anisotropic specimens. Experimental waveforms observed in plates can be understood by interpretation with reference to the dispersion curves. Dispersion curves - graphs that show relationships between wave velocity, wavelength and frequency in dispersive systems - can be presented in various forms. The form that gives the greatest insight into the underlying physics has (angular frequency) on the y-axis and k (wave number) on the x-axis. The form used by Viktorov, that brought Lamb waves into practical use, has wave velocity on the y-axis and , the thickness/wavelength ratio, on the x-axis. The most practical form of all, for which credit is due to J. and H. Krautkrämer as well as to Floyd Firestone (who, incidentally, coined the phrase "Lamb waves") has wave velocity on the y-axis and fd, the frequency-thickness product, on the x-axis. Lamb's characteristic equations indicate the existence of two entire families of sinusoidal wave modes in infinite plates of width . This stands in contrast with the situation in unbounded media where there are just two wave modes, the longitudinal wave and the transverse or shear wave. As in Rayleigh waves which propagate along single free surfaces, the particle motion in Lamb waves is elliptical with its x and z components depending on the depth within the plate. In one family of modes, the motion is symmetrical about the midthickness plane. In the other family it is antisymmetric. The phenomenon of velocity dispersion leads to a rich variety of experimentally observable waveforms when acoustic waves propagate in plates. 
It is the group velocity cg, not the above-mentioned phase velocity c or cp, that determines the modulations seen in the observed waveform. The appearance of the waveforms depends critically on the frequency range selected for observation. The flexural and extensional modes are relatively easy to recognize and this has been advocated as a technique of nondestructive testing. The zero-order modes The symmetrical and antisymmetric zero-order modes deserve special attention. These modes have "nascent frequencies" of zero. Thus they are the only modes that exist over the entire frequency spectrum from zero to indefinitely high frequencies. In the low frequency range (i.e. when the wavelength is greater than the plate thickness) these modes are often called the “extensional mode” and the “flexural mode" respectively, terms that describe the nature of the motion and the elastic stiffnesses that govern the velocities of propagation. The elliptical particle motion is mainly in the plane of the plate for the symmetrical, extensional mode and perpendicular to the plane of the plate for the antisymmetric, flexural mode. These characteristics change at higher frequencies. These two modes are the most important because (a) they exist at all frequencies and (b) in most practical situations they carry more energy than the higher-order modes. The zero-order symmetrical mode (designated S0) travels at the "plate velocity" in the low-frequency regime where it is properly called the "extensional mode". In this regime, the plate stretches in the direction of propagation and contracts correspondingly in the thickness direction. As the frequency increases and the wavelength becomes comparable with the plate thickness, curving of the plate starts to have a significant influence on its effective stiffness. The phase velocity drops smoothly while the group velocity drops somewhat precipitously towards a minimum. At higher frequencies yet, both the phase velocity and the group velocity converge towards the Rayleigh wave velocity - the phase velocity from above, and the group velocity from below. In the low-frequency limit for the extensional mode, the z- and x-components of the surface displacement are in quadrature and the ratio of their amplitudes is given by: where is Poisson's ratio. The zero-order antisymmetric mode (designated A0) is highly dispersive in the low frequency regime where it is properly called the "flexural mode" or the "bending mode". For very low frequencies (very thin plates) the phase and group velocities are both proportional to the square root of the frequency; the group velocity is twice the phase velocity. This simple relationship is a consequence of the stiffness/thickness relationship for thin plates in bending. At higher frequencies where the wavelength is no longer much greater than the plate thickness, these relationships break down. The phase velocity rises less and less quickly and converges towards the Rayleigh wave velocity in the high frequency limit. The group velocity passes through a maximum, a little faster than the shear wave velocity, when the wavelength is approximately equal to the plate thickness. It then converges, from above, to the Rayleigh wave velocity in the high frequency limit. In experiments that allow both extensional and flexural modes to be excited and detected, the extensional mode often appears as a higher-velocity, lower-amplitude precursor to the flexural mode. The flexural mode is the more easily excited of the two and often carries most of the energy. 
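The low-frequency limits described above follow from elementary plate theory and can be evaluated directly. In the sketch below the aluminium properties, the plate thickness and the frequency are illustrative assumptions; the two formulas are the standard thin-plate (plane-stress and Kirchhoff bending) results rather than expressions taken from this article.

```python
import math

# Assumed aluminium plate (illustrative values only)
E, nu, rho, h = 70e9, 0.33, 2700.0, 1.0e-3      # Pa, -, kg/m^3, m
f = 10e3                                         # frequency in Hz
omega = 2 * math.pi * f

# Low-frequency S0 ("extensional") limit: the plate velocity
c_plate = math.sqrt(E / (rho * (1 - nu ** 2)))

# Low-frequency A0 ("flexural") limit from thin-plate theory:
# omega = k^2 * sqrt(D / (rho*h)), so cp = sqrt(omega) * (D/(rho*h))**0.25 and cg = 2*cp
D = E * h ** 3 / (12 * (1 - nu ** 2))            # bending stiffness
cp_flex = math.sqrt(omega) * (D / (rho * h)) ** 0.25
cg_flex = 2 * cp_flex

print(f"extensional (plate) velocity : {c_plate:8.0f} m/s")
print(f"flexural phase velocity      : {cp_flex:8.0f} m/s at {f / 1e3:.0f} kHz")
print(f"flexural group velocity      : {cg_flex:8.0f} m/s (twice the phase velocity)")
```

At 10 kHz in a 1 mm plate the flexural wavelength is still much greater than the thickness, so the thin-plate limits quoted above apply.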
The higher-order modes As the frequency is raised, the higher-order wave modes make their appearance in addition to the zero-order modes. Each higher-order mode is “born” at a resonant frequency of the plate, and exists only above that frequency. For example, in a inch (19mm) thick steel plate at a frequency of 200 kHz, the first four Lamb wave modes are present, and at 300 kHz, the first six. The first few higher-order modes can be distinctly observed under favorable experimental conditions. Under less than favorable conditions they overlap and can not be distinguished. The higher-order Lamb modes are characterized by nodal planes within the plate, parallel to the plate surfaces. Each of these modes exists only above a certain frequency which can be called its "nascent frequency". There is no upper frequency limit for any of the modes. The nascent frequencies can be pictured as the resonant frequencies for longitudinal or shear waves propagating perpendicular to the plane of the plate, i.e. where n is any positive integer. Here c can be either the longitudinal wave velocity or the shear wave velocity, and for each resulting set of resonances the corresponding Lamb wave modes are alternately symmetrical and antisymmetric. The interplay of these two sets results in a pattern of nascent frequencies that at first glance seems irregular. For example, in a 3/4 inch (19mm) thick steel plate having longitudinal and shear velocities of 5890 m/s and 3260 m/s respectively, the nascent frequencies of the antisymmetric modes A1 and A2 are 86 kHz and 310 kHz respectively, while the nascent frequencies of the symmetric modes S1, S2 and S3 are 155 kHz, 172 kHz and 343 kHz respectively. At its nascent frequency, each of these modes has an infinite phase velocity and a group velocity of zero. In the high frequency limit, the phase and group velocities of all these modes converge to the shear wave velocity. Because of these convergences, the Rayleigh and shear velocities (which are very close to one another) are of major importance in thick plates. Simply stated in terms of the material of greatest engineering significance, most of the high-frequency wave energy that propagates long distances in steel plates is traveling at 3000–3300 m/s. Particle motion in the Lamb wave modes is in general elliptical, having components both perpendicular to and parallel to the plane of the plate. These components are in quadrature, i.e. they have a 90° phase difference. The relative magnitude of the components is a function of frequency. For certain frequencies-thickness products, the amplitude of one component passes through zero so that the motion is entirely perpendicular or parallel to the plane of the plate. For particles on the plate surface, these conditions occur when the Lamb wave phase velocity is ct or for symmetric modes only cl, respectively. These directionality considerations are important when considering the radiation of acoustic energy from plates into adjacent fluids. The particle motion is also entirely perpendicular or entirely parallel to the plane of the plate, at a mode's nascent frequency. Close to the nascent frequencies of modes corresponding to longitudinal-wave resonances of the plate, their particle motion will be almost entirely perpendicular to the plane of the plate; and near the shear-wave resonances, parallel. J. and H. Krautkrämer have pointed out that Lamb waves can be conceived as a system of longitudinal and shear waves propagating at suitable angles across and along the plate. 
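The nascent (cut-off) frequencies quoted for the 19 mm steel plate can be checked directly from the thickness-resonance formula above; the short sketch below does the arithmetic without attempting to label individual modes as symmetric or antisymmetric.

```python
# Check of the quoted nascent frequencies for a 3/4 inch (19 mm) steel plate,
# using the thickness-resonance formula f_n = n * c / (2 * d) from the text.
d = 0.019                    # plate thickness in metres
cl, ct = 5890.0, 3260.0      # longitudinal and shear speeds in m/s (values from the text)

for label, c in (("longitudinal", cl), ("shear", ct)):
    freqs_khz = [n * c / (2.0 * d) / 1e3 for n in range(1, 5)]
    print(f"{label:12s} resonances:", ", ".join(f"{f:.0f} kHz" for f in freqs_khz))
# The shear list gives 86, 172, 257 and 343 kHz and the longitudinal list 155, 310, 465
# and 620 kHz, reproducing the 86, 155, 172, 310 and 343 kHz values quoted above.
```

In the Krautkrämer picture just described, it is these criss-crossing longitudinal and shear partial waves that set up the guided modes.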
These waves reflect and mode-convert and combine to produce a sustained, coherent wave pattern. For this coherent wave pattern to be formed, the plate thickness has to be just right relative to the angles of propagation and wavelengths of the underlying longitudinal and shear waves; this requirement leads to the velocity dispersion relationships. Lamb waves with cylindrical symmetry; plate waves from point sources While Lamb's analysis assumed a straight wavefront, it has been shown that the same characteristic equations apply to cylindrical plate waves (i.e. waves propagating outwards from a line source, the line lying perpendicular to the plate). The difference is that whereas the "carrier" for the straight wavefront is a sinusoid, the "carrier" for the axisymmetric wave is a Bessel function. The Bessel function takes care of the singularity at the source, then converges towards sinusoidal behavior at great distances. These cylindrical waves are the eigenfunctions from which the plate's response to point disturbances can be composed. Thus a plate's response to a point disturbance can be expressed as a combination of Lamb waves, plus evanescent terms in the near field. The overall result can be loosely visualized as a pattern of circular wavefronts, like ripples from a stone dropped into a pond but changing more profoundly in form as they progress outwards. Lamb wave theory relates only to motion in the (r,z) direction; transverse motion is a different topic. Guided Lamb waves This phrase is quite often encountered in non-destructive testing. "Guided Lamb Waves" can be defined as Lamb-like waves that are guided by the finite dimensions of real test objects. To add the prefix "guided" to the phrase "Lamb wave" is thus to recognize that Lamb's infinite plate is, in reality, nowhere to be found. In reality we deal with finite plates, or plates wrapped into cylindrical pipes or vessels, or plates cut into thin strips, etc. Lamb wave theory often gives a very good account of much of the wave behavior of such structures. It will not give a perfect account, and that is why the phrase "Guided Lamb Waves" is more practically relevant than "Lamb Waves". One question is how the velocities and mode shapes of the Lamb-like waves will be influenced by the real geometry of the part. For example, the velocity of a Lamb-like wave in a thin cylinder will depend slightly on the radius of the cylinder and on whether the wave is traveling along the axis or round the circumference. Another question is what completely different acoustical behaviors and wave modes may be present in the real geometry of the part. For example, a cylindrical pipe has flexural modes associated with bodily movement of the whole pipe, quite different from the Lamb-like flexural mode of the pipe wall. Lamb waves in ultrasonic testing The purpose of ultrasonic testing is usually to find and characterize individual flaws in the object being tested. Such flaws are detected when they reflect or scatter the impinging wave and the reflected or scattered wave reaches the search unit with sufficient amplitude. Traditionally, ultrasonic testing has been conducted with waves whose wavelength is very much shorter than the dimension of the part being inspected. In this high-frequency-regime, the ultrasonic inspector uses waves that approximate to the infinite-medium longitudinal and shear wave modes, zig-zagging to and from across the thickness of the plate. 
Although the lamb wave pioneers worked on non-destructive testing applications and drew attention to the theory, widespread use did not come about until the 1990s when computer programs for calculating dispersion curves and relating them to experimentally observable signals became much more widely available. These computational tools, along with a more widespread understanding of the nature of Lamb waves, made it possible to devise techniques for nondestructive testing using wavelengths that are comparable with or greater than the thickness of the plate. At these longer wavelengths the attenuation of the wave is less so that flaws can be detected at greater distances. A major challenge and skill in the use of Lamb waves for ultrasonic testing is the generation of specific modes at specific frequencies that will propagate well and give clean return "echoes". This requires careful control of the excitation. Techniques for this include the use of comb transducers, wedges, waves from liquid media and electromagnetic acoustic transducers (EMAT's). Lamb waves in acousto-ultrasonic testing Acousto-ultrasonic testing differs from ultrasonic testing in that it was conceived as a means of assessing damage (and other material attributes) distributed over substantial areas, rather than characterizing flaws individually. Lamb waves are well suited to this concept, because they irradiate the whole plate thickness and propagate substantial distances with consistent patterns of motion. Lamb waves in acoustic emission testing Acoustic emission uses much lower frequencies than traditional ultrasonic testing, and the sensor is typically expected to detect active flaws at distances up to several meters. A large fraction of the structures customarily testing with acoustic emission are fabricated from steel plate - tanks, pressure vessels, pipes and so on. Lamb wave theory is, therefore, the prime theory for explaining the signal forms and propagation velocities that are observed when conducting acoustic emission testing. The analysis of Acoustic Emission signals via guided wave theory is referred to as Modal Acoustic Emission (MAE). Substantial improvements in the accuracy of source location (a major technique of AE testing) can be achieved through good understanding and skillful utilization of the Lamb wave body of knowledge. Ultrasonic and acoustic emission testing contrasted An arbitrary mechanical excitation applied to a plate will generate a multiplicity of Lamb waves carrying energy across a range of frequencies. Such is the case for the acoustic emission wave. In acoustic emission testing, the challenge is to recognize the multiple Lamb wave components in the received waveform and to interpret them in terms of source motion. This contrasts with the situation in ultrasonic testing, where the first challenge is to generate a single, well-controlled Lamb wave mode at a single frequency. But even in ultrasonic testing, mode conversion takes place when the generated Lamb wave interacts with flaws, so the interpretation of reflected signals compounded from multiple modes becomes a means of flaw characterization. See also Acoustics Acoustic wave Wave equation Waveguide Waveguide (acoustics) Waveguide (electromagnetism) References Rose, J.L.; "Ultrasonic Waves in Solid Media," Cambridge University Press, 1999. 
External links Modes of Sound Wave Propagation at NDT Resource Center Lamb wave in Nondestructive Testing Encyclopedia Lamb Wave Analysis of Acousto-Ultrasonic Signals in Plate by Liu Zhenqing: an article which includes the complete Lamb wave equations. Acoustics Wave mechanics Nondestructive testing Elasticity (physics)
Lamb waves
[ "Physics", "Materials_science" ]
4,262
[ "Physical phenomena", "Elasticity (physics)", "Deformation (mechanics)", "Classical mechanics", "Acoustics", "Waves", "Wave mechanics", "Materials testing", "Nondestructive testing", "Physical properties" ]
14,895,001
https://en.wikipedia.org/wiki/Yuremamine
Yuremamine is a phytoindole alkaloid which was isolated from the bark of Mimosa tenuiflora in 2005, and erroneously assigned a pyrrolo[1,2-a]indole structure that was thought to represent a new class of indole alkaloids. However, in 2015, the bioinspired total synthesis of yuremamine revealed its structure to be a flavonoid derivative. It was also noted in the original isolation of yuremamine that the alkaloid occurs naturally as a purple solid, but total synthesis revealed that yuremamine as a free base is colorless, and the formation of a trifluoroacetate salt during HPLC purification is what led to the purple appearance. References Alkaloids found in Fabaceae Dimethylamino compounds Pyrogallols Secondary alcohols Tryptamine alkaloids
Yuremamine
[ "Chemistry" ]
184
[ "Tryptamine alkaloids", "Alkaloids by chemical classification" ]
14,896,215
https://en.wikipedia.org/wiki/Maximum%20allowable%20operating%20pressure
Maximum Allowable Operating Pressure (MAOP) is a pressure limit set, usually by a government body, which applies to compressed gas pressure vessels, pipelines, and storage tanks. For pipelines, this value is derived from Barlow's Formula, which takes into account wall thickness, diameter, allowable stress (which is a function of the material used), and a safety factor. The MAOP is less than the MAWP (maximum allowable working pressure). MAWP is defined as the maximum pressure based on the design codes that the weakest component of a pressure vessel can handle. Commonly standard wall thickness components are used in fabricating pressurized equipment, and hence are able to withstand pressures above their design pressure. The MAWP is the pressure stamped on the pressure equipment, and the pressure that must not be exceeded in operation. Design pressure is the pressure a pressurized item is designed to, and is higher than any expected operating pressures. Due to the availability of standard wall thickness materials, many components will have a MAWP higher than the required design pressure. For pressure vessels, all pressures are defined as being at highest point of the unit in the operating position, and do not include static head pressure. The equipment designer needs to account for the higher pressures occurring at some components due to static head pressure. Relief valves are set at the design pressure of the pressurized item and sized to prevent the item under pressure from being over-pressurized. Depending on the design code that the pressurized item is designed, an over-pressure allowance can be used when sizing the relief valve. This is +10% for PD 5500, and ASME Section VIII div 1 & 2 (with an additional +10% allowance in ASME Section VIII for a fire relief case). ASME has different criteria for steam boilers. Maximum expected operating pressure (MEOP) is the highest expected operating pressure, which is synonymous with maximum operating pressure (MOP). See also Massachusetts gas explosions - a series of gas-related explosions and fires caused by gas pipelines that had exceeded their MAOP References Fluid dynamics Pressure vessels
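As a rough illustration of how Barlow's formula enters an MAOP calculation, the sketch below computes a derated pressure for a hypothetical line pipe. The pipe grade, dimensions and the 0.72 design factor are illustrative assumptions; actual codes apply additional joint, temperature and location derating factors.

```python
def barlow_maop(smys_psi, wall_in, diameter_in, design_factor=0.72):
    """Pressure rating from Barlow's formula, P = 2*S*t/D, reduced by a design (safety) factor.
    Real pipeline codes add further joint and temperature derating factors."""
    return 2.0 * smys_psi * wall_in * design_factor / diameter_in

# Illustrative numbers only: X52 line pipe (SMYS = 52,000 psi), 0.375 in wall, 16 in OD
print(f"MAOP ~ {barlow_maop(52_000, 0.375, 16.0):.0f} psi")
```

With these assumed numbers the formula yields roughly 1,750 psi, which would then be compared against the MAWP of the weakest connected component.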
Maximum allowable operating pressure
[ "Physics", "Chemistry", "Engineering" ]
433
[ "Structural engineering", "Chemical equipment", "Chemical engineering", "Physical systems", "Hydraulics", "Piping", "Pressure vessels", "Fluid dynamics" ]
14,902,155
https://en.wikipedia.org/wiki/Hematopoietic%20growth%20factor
Hematopoietic growth factor is a group of glycoproteins that causes blood cells to grow and mature (Haematopoiesis). "A group of at least seven substances involved in the production of blood cells, including several interleukins and erythropoietin." External links Hematopoietic growth factor entry in the public domain NCI Dictionary of Cancer Terms References Blood cells Growth factors
Hematopoietic growth factor
[ "Chemistry" ]
92
[ "Growth factors", "Signal transduction" ]
14,902,305
https://en.wikipedia.org/wiki/Sequon
A sequon is a sequence of consecutive amino acids in a protein that can serve as the attachment site to a polysaccharide, frequently an N-linked-Glycan. The polysaccharide is linked to the protein via the nitrogen atom in the side chain of asparagine (Asn). The sequon for N-glycosylation is either Asn-X-Ser or Asn-X-Thr, where X is any amino acid except proline, Ser denoting serine and Thr threonine. Occasionally, other amino acids can take the place of Ser and Thr, such as in the leukocyte surface protein (CD69), where the amino acid sequence Asn-X-Cys is an acceptable sequon for the addition of N-linked glycans. References Peptide sequences Glycoproteins
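Because the sequon is a simple pattern, candidate N-glycosylation sites can be located with a one-line regular expression. The protein fragment below is hypothetical and serves only to show that an Asn-Pro-Thr triplet is correctly excluded.

```python
import re

# N-X-S/T sequon, where X is any amino acid except proline
SEQUON = re.compile(r"N[^P][ST]")

def find_sequons(protein):
    """Return (1-based position, triplet) pairs for candidate N-glycosylation sites."""
    return [(m.start() + 1, m.group()) for m in SEQUON.finditer(protein.upper())]

# Hypothetical fragment: the N-P-T starting at position 8 is not reported
print(find_sequons("MKNVSAPNPTQLNGSWRKNITE"))   # [(3, 'NVS'), (13, 'NGS'), (19, 'NIT')]
```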
Sequon
[ "Chemistry" ]
191
[ "Glycoproteins", "Glycobiology" ]
4,343,822
https://en.wikipedia.org/wiki/OPLS
The OPLS (Optimized Potentials for Liquid Simulations) force field was developed by Prof. William L. Jorgensen at Purdue University and later at Yale University, and is being further developed commercially by Schrödinger, Inc. Functional form The functional form of the OPLS force field is very similar to that of AMBER:

E = Σ_bonds Kr (r - req)^2 + Σ_angles Kθ (θ - θeq)^2 + Σ_dihedrals [ (V1/2)(1 + cos φ) + (V2/2)(1 - cos 2φ) + (V3/2)(1 + cos 3φ) ] + Σ_(i<j) fij [ qi qj e^2/rij + 4 εij ( (σij/rij)^12 - (σij/rij)^6 ) ]

with the combining rules σij = (σii σjj)^(1/2) and εij = (εii εjj)^(1/2). Intramolecular nonbonded interactions are counted only for atoms three or more bonds apart; 1,4 interactions are scaled down by the "fudge factor" fij = 0.5, otherwise fij = 1. All the interaction sites are centered on the atoms; there are no "lone pairs". Parameterization Several sets of OPLS parameters have been published. There is OPLS-ua (united atom), which includes hydrogen atoms next to carbon implicitly in the carbon parameters, and can be used to save simulation time. OPLS-aa (all atom) includes every atom explicitly. Later publications include parameters for other specific functional groups and types of molecules such as carbohydrates. OPLS simulations in aqueous solution typically use the TIP4P or TIP3P water model. A distinctive feature of the OPLS parameters is that they were optimized to fit experimental properties of liquids, such as density and heat of vaporization, in addition to fitting gas-phase torsional profiles. Implementation The reference implementations of the OPLS force field are the BOSS and MCPRO programs developed by Jorgensen. Other packages such as TINKER, GROMACS, PCMODEL, Abalone, LAMMPS, Desmond and NAMD also implement OPLS force fields. References Force fields (chemistry) Molecular dynamics Computational chemistry Structural bioinformatics
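The nonbonded part of the functional form can be evaluated for a single atom pair in a few lines. The sketch below is illustrative only: the Coulomb conversion constant is the value conventionally used when energies are in kcal/mol and distances in angstroms, and the Lennard-Jones parameters only loosely resemble a united-atom CH2 site.

```python
import math

COULOMB = 332.06   # kcal*A/(mol*e^2), the conversion constant conventionally used

def opls_pair_energy(q1, q2, sig1, sig2, eps1, eps2, r, one_four=False):
    """OPLS-style nonbonded energy (kcal/mol) of one atom pair at separation r (Angstrom).
    Geometric combining rules; 1-4 pairs are scaled by the fudge factor of 0.5."""
    sigma = math.sqrt(sig1 * sig2)
    eps = math.sqrt(eps1 * eps2)
    fudge = 0.5 if one_four else 1.0
    lj = 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    coulomb = COULOMB * q1 * q2 / r
    return fudge * (lj + coulomb)

# Two neutral sites with parameters loosely resembling united-atom CH2 groups
print(opls_pair_energy(0.0, 0.0, 3.9, 3.9, 0.118, 0.118, r=4.4))
```

Near the Lennard-Jones minimum (r about 2^(1/6) times sigma) the pair energy returned is close to the negative of the well depth, as expected.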
OPLS
[ "Physics", "Chemistry", "Biology" ]
342
[ "Molecular physics", "Theoretical chemistry stubs", "Computational physics", "Bioinformatics", "Molecular dynamics", "Computational chemistry", "Computational chemistry stubs", "Theoretical chemistry", "Structural biology", "Structural bioinformatics", "Physical chemistry stubs", "Force fields...
4,350,392
https://en.wikipedia.org/wiki/Guarding%20of%20Machinery%20Convention%2C%201963
Guarding of Machinery Convention, 1963 is an International Labour Organization Convention. It was established in 1963, with the preamble stating: Ratifications As of 2013, the convention has been ratified by 52 states. External links Text. Ratifications. International Labour Organization conventions Treaties concluded in 1963 Treaties entered into force in 1965 Occupational safety and health treaties Machinery Treaties of Algeria Treaties of Azerbaijan Treaties of the Byelorussian Soviet Socialist Republic Treaties of Bosnia and Herzegovina Treaties of Brazil Treaties of the Central African Republic Treaties of the Republic of the Congo Treaties of Croatia Treaties of Cyprus Treaties of the Democratic Republic of the Congo (1964–1971) Treaties of Denmark Treaties of the Dominican Republic Treaties of Ecuador Treaties of Finland Treaties of Ghana Treaties of Guatemala Treaties of Guinea Treaties of Ba'athist Iraq Treaties of Italy Treaties of Japan Treaties of Jordan Treaties of Kuwait Treaties of Kyrgyzstan Treaties of Latvia Treaties of Luxembourg Treaties of Madagascar Treaties of Malaysia Treaties of Malta Treaties of Moldova Treaties of Montenegro Treaties of Morocco Treaties of Nicaragua Treaties of Niger Treaties of Norway Treaties of Panama Treaties of Paraguay Treaties of the Polish People's Republic Treaties of the Soviet Union Treaties of San Marino Treaties of Serbia and Montenegro Treaties of Yugoslavia Treaties of Sierra Leone Treaties of Slovenia Treaties of Francoist Spain Treaties of Switzerland Treaties of Sweden Treaties of Syria Treaties of Tajikistan Treaties of North Macedonia Treaties of Tunisia Treaties of Turkey Treaties of the Ukrainian Soviet Socialist Republic Treaties of the United Kingdom 1963 in labor relations
Guarding of Machinery Convention, 1963
[ "Physics", "Technology", "Engineering" ]
286
[ "Physical systems", "Machines", "Machinery", "Mechanical engineering" ]
7,541,497
https://en.wikipedia.org/wiki/Locally%20simply%20connected%20space
In mathematics, a locally simply connected space is a topological space that admits a basis of simply connected sets. Every locally simply connected space is also locally path-connected and locally connected. The circle is an example of a locally simply connected space which is not simply connected. The Hawaiian earring is a space which is neither locally simply connected nor simply connected. The cone on the Hawaiian earring is contractible and therefore simply connected, but still not locally simply connected. All topological manifolds and CW complexes are locally simply connected. In fact, these satisfy the much stronger property of being locally contractible. A strictly weaker condition is that of being semi-locally simply connected. Both locally simply connected spaces and simply connected spaces are semi-locally simply connected, but neither converse holds. References Properties of topological spaces
Locally simply connected space
[ "Mathematics" ]
161
[ "Properties of topological spaces", "Space (mathematics)", "Topology stubs", "Topological spaces", "Topology" ]
7,541,505
https://en.wikipedia.org/wiki/Frank%E2%80%93Caro%20process
The Frank–Caro process, also called cyanamide process, is the nitrogen fixation reaction of calcium carbide with nitrogen gas in a reactor vessel at about 1,000 °C. The reaction is exothermic and self-sustaining once the reaction temperature is reached. Originally the reaction took place in large steel cylinders with an electrical resistance element providing initial heat to start the reaction. Modern production uses rotating ovens. The synthesis produces a solid mixture of calcium cyanamide (CaCN2), also known as nitrolime, and carbon. CaC2 + N2 → CaCN2 + C History The Frank–Caro process was the first commercial process that was used worldwide to fix atmospheric nitrogen. The product was used as fertilizer and commercially known as Lime-Nitrogen. Nitrolim or Kalkstickstoff in German. The method was developed by the German chemists Adolph Frank and Nikodem Caro between 1895 and 1899. In its first decades, the world market for inorganic fertilizer was dominated by factories utilizing the cyanamide process. Production facilities The first full-scale factories were established in 1905 in Piano d´Orta (Italy) and Westeregeln (Germany). From 1908 the Frank–Caro process was used at North Western Cyanamide Company at Odda, Norway. With an annual production capacity of 12,000 ton from 1909, the factory at Odda was by far the largest in the world. At this time, first phase factories were established in Briançon (France), Martigny (Switzerland), Bromberg (Prussia/Poland) and Knapsack (Germany). The cyanamide factory at Odda ceased operation in 2002. It is still intact and is a Norwegian candidate to the UNESCO World Heritage List. Haber process In the 1920s the more energy-efficient Haber process gradually took over in the nitrogen fertilizer production, but Frank-Caro process has continued to produce a useful chemical feedstock. In 1945 the production of calcium cyanamide reached a peak of an estimated 1.5 million tons a year. Patent German patent nr. DE 88363 (1895) See also Odda process Birkeland–Eyde process Haber–Bosch process Linde–Frank–Caro process, a method to produce hydrogen from water gas References External links Guide to the Papers of Adolf Frank (1834–1916) Cyanamides Fertilizers Chemical processes
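An idealised mass balance for the reaction above shows roughly how much carbide and nitrogen the process consumes per tonne of product. The figures assume complete conversion and pure feeds, which real rotating-oven practice does not achieve; the molar masses are standard values.

```python
# Molar masses in g/mol (standard values)
M = {"CaC2": 64.10, "N2": 28.01, "CaCN2": 80.10, "C": 12.01}

# Idealised mass balance for CaC2 + N2 -> CaCN2 + C, assuming complete conversion
kmol_product = 1000.0 / M["CaCN2"]                 # kmol of CaCN2 per tonne of product
print(f"carbide feed    : {kmol_product * M['CaC2']:.0f} kg per tonne CaCN2")
print(f"nitrogen feed   : {kmol_product * M['N2']:.0f} kg per tonne CaCN2")
print(f"carbon byproduct: {kmol_product * M['C']:.0f} kg per tonne CaCN2")
```

Roughly 800 kg of calcium carbide and 350 kg of nitrogen per tonne of calcium cyanamide, with about 150 kg of carbon left mixed into the product.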
Frank–Caro process
[ "Chemistry" ]
506
[ "Fertilizers", "Functional groups", "Chemical processes", "Soil chemistry", "Cyanamides", "nan", "Chemical process engineering" ]
7,541,598
https://en.wikipedia.org/wiki/Cryoseism
A cryoseism, ice quake or frost quake, is a seismic event caused by a sudden cracking action in frozen soil or rock saturated with water or ice, or by stresses generated at frozen lakes. As water drains into the ground, it may eventually freeze and expand under colder temperatures, putting stress on its surroundings. This stress builds up until relieved explosively in the form of a cryoseism. The requirements for a cryoseism to occur are numerous, so accurate predictions are not entirely possible; the hazard may nevertheless be a factor in structural design and engineering when constructing in an area historically known for such events. A link between global warming and the frequency of cryoseisms has been the subject of speculation. Effects Cryoseisms are often mistaken for minor intraplate earthquakes. Initial indications may appear similar to those of an earthquake with tremors, vibrations, ground cracking and related noises, such as thundering or booming sounds. Cryoseisms can, however, be distinguished from earthquakes through meteorological and geological conditions. Cryoseisms can have an intensity of up to VI on the Modified Mercalli Scale. Furthermore, cryoseisms often exhibit high intensity in a very localized area, in the immediate proximity of the epicenter, as compared to the widespread effects of an earthquake. Due to lower-frequency vibrations of cryoseisms, some seismic monitoring stations may not record their occurrence. Cryoseisms release less energy than most tectonic events. Since cryoseisms occur at the ground surface they can cause effects right at the site, enough to jar people awake. Some reports have indicated the presence of "distant flashing lights" before or during a cryoseism, possibly because of electrical changes when rocks are compressed. Cracks and fissures may also appear as surface areas contract and split apart from the cold. These occurrences, ranging from superficial to moderate in depth, may be a few centimeters to several kilometers long, with either single or multiple linear fractures and vertical or lateral displacement possible. Occurrences Glacial cryoseisms A glacial cryoseism or glacial ice quake is a non-tectonic seismic event of the glacial cryosphere. A large variety of seismogenic glacial processes arising from internal, ocean calving, or basal processes have been identified and studied. Very large calving events in Greenland and Antarctica have been observed to generate seismic events of magnitude 5 or larger. Extremely large icebergs can also generate seismic signals that are observable at distances up to thousands of kilometers when they collide or grind across the ocean floor. Basal glacial motion can be enhanced by water accumulation underneath a glacier sourced from surface or basal ice melt. Hydraulic pressure of subglacial water can reduce the friction at the bed, allowing the glacier to suddenly shift and generate seismic waves. This type of cryoseism can be very brief, or may last for many minutes. Location United States Geocryological processes were identified as a possible cause of tremors as early as 1818. In the United States, such events have been reported throughout the Midwestern, Northern and Northeastern United States. Canada Cryoseisms also occur in Canada, especially along the Great Lakes/St. Lawrence corridor, where winter temperatures can shift very rapidly. They have surfaced in Ontario, Quebec, Alberta and the Maritime Provinces. 
Other places Glacier-related cryoseism phenomena have been reported in Alaska, Greenland, Iceland (Grímsvötn), Finland, Ross Island, and the Antarctic Prince Charles Mountains. Precursors There are four main precursors for a frost quake cryoseism event to occur: A region must be susceptible to cold air masses The ground must undergo saturation from thaw or liquid precipitation prior to an intruding cold air mass Most frost quakes are associated with minor snow cover on the ground without a significant amount of snow to insulate the ground (i.e., less than ) A rapid temperature drop from approximately freezing to near or below , which ordinarily occurs on a timescale of 16 to 48 hours. Cryoseisms typically occur when temperatures rapidly decrease from above freezing to subzero, and are most likely to occur between midnight and dawn (during the coldest part of the night). However, due to the permanent nature of glacial ice, glacier-related cryoseisms may also occur in the warmer months of summer. In general, cryoseisms may occur 3 to 4 hours after significant changes in temperature. Perennial or seasonal frost conditions involved with cryoseisms limit these events to temperate climates that experience seasonal variation with subzero winters. Additionally, the ground must be saturated with water, which can be caused by snowmelt, rain, sleet or flooding. Geologically, areas of permeable materials like sand or gravel, which are susceptible to frost action, are likelier candidates for cryoseisms. Following large cryoseisms, little to no seismic activity will be detected for several hours, indicating that accumulated stress has been relieved. See also Cryosphere Glacial earthquake Glacial lake outburst flood References External links Google Maps-based reporting website Geological hazards Snow or ice weather phenomena Weather hazards Seismology
Cryoseism
[ "Physics" ]
1,019
[ "Weather", "Physical phenomena", "Weather hazards" ]
3,191,803
https://en.wikipedia.org/wiki/Light%20field%20camera
A light field camera, also known as a plenoptic camera, is a camera that captures information about the light field emanating from a scene; that is, the intensity of light in a scene, and also the precise direction that the light rays are traveling in space. This contrasts with conventional cameras, which record only light intensity at various wavelengths. One type uses an array of micro-lenses placed in front of an otherwise conventional image sensor to sense intensity, color, and directional information. Multi-camera arrays are another type. A holographic image is a type of film-based light field image. History Early research The first light field camera was proposed by Gabriel Lippmann in 1908. He called his concept "integral photography". Lippmann's experimental results included crude integral photographs made by using a plastic sheet embossed with a regular array of microlenses, or by partially embedding small glass beads, closely packed in a random pattern, into the surface of the photographic emulsion. In 1992, Adelson and Wang proposed a design that reduced the correspondence problem in stereo matching. To achieve this, an array of microlenses is placed at the focal plane of the camera main lens. The image sensor is positioned slightly behind the microlenses. Using such images, the displacement of image parts that are not in focus can be analyzed and depth information can be extracted. Standard plenoptic camera The "standard plenoptic camera" is a mathematical model used by researchers to compare designs. By definition it has microlenses placed one focal length away from the image plane of a sensor. In 2004, a team at Stanford University Computer Graphics Laboratory used a 16-megapixel camera to demonstrate that pictures can be refocused after they are taken. The system used a 90,000-microlens array, yielding a resolution of 90 kilopixels. Research has shown that its maximum baseline is confined to the main lens entrance pupil size which is small relative to stereoscopic setups. This implies that the "standard plenoptic camera" may be intended for close-range applications as it exhibits increased depth resolution at distances that can be metrically predicted based on the camera's parameters. Focused plenoptic camera Lumsdaine and Georgiev described a design in which the microlens array can be positioned before or behind the focal plane of the main lens. This modification samples the light field in a way that trades angular resolution for higher spatial resolution. With this design, images can be refocused with a much higher spatial resolution than images from a standard plenoptic camera. However, the lower angular resolution can introduce aliasing artifacts. Coded aperture camera A design that used a low-cost printed film mask instead of a microlens array was proposed in 2007. This design reduces the chromatic aberrations and loss of boundary pixels seen in microlens arrays, and allows greater spatial resolution. However, the mask-based design reduces the amount of light that reaches the image sensor, reducing brightness. Features Features include: Variable depth of field and "refocusing": Lytro's "Focus Spread" feature allows the depth of field (depth of focus) of a 2 dimensional representation of a Lytro image to be adjusted after a picture has been taken. Instead of setting the focus at a particular distance, "Focus Spread" allows more of a 2D image to be in focus. In some cases this may be the entire 2D image field. 
Users also are able to "refocus" 2D images at particular distances for artistic effects. The Illum allows the "refocus-able" and "Focus Spreadable" range to be selected using the optical focus and zoom rings on the lens. The Illum also features "focus bracketing" to extend the refocusable range by capturing 3 or 5 consecutive images at different depths. Speed: Because there is less need to focus the lens before taking a picture, a light field camera can capture images more quickly than conventional point-and-shoot digital cameras. This is an advantage in sports photography, for example, where many pictures are lost because the camera’s auto-focus system cannot precisely track a fast-moving subject. Low-light sensitivity: The ability to adjust focus in post-processing allows the use of larger apertures than are feasible on conventional cameras, thus enabling photography in low-light environments. 3D views: Since a plenoptic camera records depth information, 3D views can be constructed in software from a single plenoptic image capture. 3D views are different from solely stereo images in this case. Stereo images may also be constructed. Metalens array In 2022, NIST announced a device with a focal range of to . The device employed a 39x39-element titanium dioxide metalens array. Each metalens is either right- or left-circularly polarized to create a different focal length. Each metalens was rectangular in shape. The light is routed separately through the shorter and longer sides of the rectangle, producing two focal points in the image. Differences among the metalenses were corrected algorithmically. Manufacturers Products Lytro was founded by Stanford University Computer Graphics Laboratory alumnus Ren Ng to commercialize the light field camera he developed as a graduate student. Lytro's light field sensor uses an array of micro-lenses placed in front of an otherwise conventional image sensor to sense intensity, color, and directional information. Software then uses this data to create displayable 2D or 3D images. Lytro trades maximum 2D resolution, at a given distance, for enhanced resolution at other distances. Users can convert the Lytro camera's proprietary image into a regular 2D image file, at any desired focal distance. The maximum Illum 2D resolution is 2450 × 1634 (4.0 megapixels), and the 3D light field resolution is 40 "megarays". The first-generation Lytro camera has a maximum 2D resolution of 1080 × 1080 pixels (roughly 1.2 megapixels). Lytro ceased operations in March 2018. Raytrix has offered several models of plenoptic cameras for industrial and scientific applications since 2010, with resolution starting from 1 megapixel. d'Optron and Rebellion Photonics offer plenoptic cameras, specializing in microscopy and gas leak detection, respectively. Prototypes Stanford University Computer Graphics Laboratory developed a prototype light field microscope using a microlens array similar to the one used in their light field camera. The prototype is built around a Nikon Eclipse transmitted light microscope/wide-field fluorescence microscope and standard CCD cameras. Light field capture is obtained by a module containing a microlens array and other optical components placed in the light path between the objective lens and camera, with the final multifocused image rendered using deconvolution. A later prototype added a light field illumination system consisting of a video projector (allowing computational control of illumination) and a second microlens array in the illumination light path of the microscope. 
The addition of a light field illumination system both allowed for additional types of illumination (such as oblique illumination and quasi-dark-field) and correction for optical aberrations. The Adobe light field camera is a prototype 100-megapixel camera that takes a three-dimensional photo of the scene in focus using 19 uniquely configured lenses. Each lens takes a 5.2-megapixel photo of the scene. Each image can be focused later in any way. CAFADIS is a plenoptic camera developed by University of La Laguna (Spain). CAFADIS stands (in Spanish) for phase-distance camera, since it can be used for distance and optical wavefront estimation. From a single shot it can produce images focused at different distances, depth maps, all-in-focus images and stereo pairs. A similar optical design can be used in adaptive optics in astrophysics. Mitsubishi Electric Research Laboratories's (MERL) light field camera is based on the principle of optical heterodyning and uses a printed film (mask) placed close to the sensor. Any hand-held camera can be converted into a light field camera using this technology by simply inserting a low-cost film on top of the sensor. A mask-based design avoids the problem of loss of resolution, since a high-resolution photo can be generated for the focused parts of the scene. Pelican Imaging has thin multi-camera array systems intended for consumer electronics. Pelican's systems use from 4 to 16 closely spaced micro-cameras instead of a micro-lens array image sensor. Nokia invested in Pelican Imaging to produce a plenoptic camera system with 16-lens array that was expected to be implemented in Nokia smartphones in 2014. Pelican moved to designing supplementary cameras that add depth-sensing capabilities to a device's main camera, rather than stand-alone array cameras. A collaboration between University of Bedfordshire and ARRI resulted in a custom-made plenoptic camera with a ray model for the validation of light-field geometries and real object distances. In November 2021 the German-based company K|Lens announced the first light field lens available for any standard lens mount on Kickstarter. The project was canceled in January of 2022. The modification of standard digital cameras requires little more than suitable sheets of micro-lens material, hence a number of hobbyists have produced cameras whose images can be processed to give either selective depth of field or direction information. Applications In a 2017 study, researchers observed that incorporation of light field photographed images into an online anatomy module did not result in better learning outcomes compared to an identical module with traditional photographs of dissected cadavers. Plenoptic cameras are good for imaging fast-moving objects that outstrip autofocus capabilities, and for imaging objects where autofocus is not practical such as with security cameras. A recording from a security camera based upon plenoptic technology could be used to produce an accurate 3D model of a subject. Software Lytro Desktop is a cross-platform application to render light field photographs taken by Lytro cameras. It remains closed source and is not maintained since Google’s acquisition of Lytro. Several open-source tools have been released meanwhile. A Matlab tool for Lytro-type camera processing can be found. PlenoptiCam is a GUI-based application considering Lytro's and custom-built plenoptic cameras with cross-platform compatibility and the source code being made available online. 
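The "refocus after capture" feature described above is commonly implemented by shift-and-add rendering of the decoded sub-aperture views. The following is a minimal sketch of that idea, assuming the raw plenoptic capture has already been decoded into a grid of sub-aperture images; the array layout, function name and the alpha refocus parameter are illustrative assumptions, not the API of any particular tool mentioned here.

```python
# Minimal shift-and-add synthetic refocusing sketch (illustrative, not a
# specific product's pipeline). Input: sub-aperture views L[u, v, y, x].
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(subapertures: np.ndarray, alpha: float) -> np.ndarray:
    """Render a 2D image focused at a virtual depth controlled by alpha.

    subapertures has shape (U, V, H, W); alpha = 1.0 keeps the original
    focal plane, other values shift each view before averaging.
    """
    U, V, H, W = subapertures.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0      # centre of the aperture grid
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Each view is translated in proportion to its distance from the
            # aperture centre; summing the shifted views brings one depth into focus.
            dy = (1.0 - 1.0 / alpha) * (u - cu)
            dx = (1.0 - 1.0 / alpha) * (v - cv)
            out += nd_shift(subapertures[u, v], (dy, dx), order=1, mode="nearest")
    return out / (U * V)

# Example: a synthetic 5x5 grid of 64x64 views, refocused slightly.
views = np.random.rand(5, 5, 64, 64)
image = refocus(views, alpha=1.2)
```

Sweeping alpha over a range turns a single capture into a focal stack, which is what makes post-capture depth-of-field adjustment possible.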
See also Angle-sensitive pixel Bokeh Compound eye Femto-photography Integral imaging Light-in-flight imaging Photo finish Streak camera Strip photography References External links Article by Ren Ng of Stanford (now at Lytro) Say Sayonara to Blurry Pics. Wired. Fourier slice photography Light Field Microscopy video by Stanford Computer Graphics Laboratory. IEEE Spectrum article May 2012 Lightfield photography revolutionizes imaging, with sample images and diagrams of operation, retrieved 2012 May 11 www.plenoptic.info Website explaining the plenoptic camera with animations. Cameras by type Microscopes Optical devices
Light field camera
[ "Chemistry", "Materials_science", "Technology", "Engineering" ]
2,262
[ "Glass engineering and science", "Optical devices", "Measuring instruments", "Microscopes", "Microscopy" ]
3,192,041
https://en.wikipedia.org/wiki/Radiophobia
Radiophobia is an irrational or excessive fear of ionizing radiation, leading to overestimating the health risks of radiation compared to other risks. It can impede rational decision-making and contribute to counter-productive behavior and policies. Radiophobia is primarily a social phenomenon as opposed to a purely psychological dynamic. The term is also used to describe the opposition to the use of nuclear technology (i.e. nuclear power) arising from concerns disproportionately greater than actual risks would merit. Early use The term was used in a paper entitled "Radio-phobia and radio-mania" presented by Dr Albert Soiland of Los Angeles in 1903. In the 1920s, the term was used to describe people who were afraid of radio broadcasting and receiving technology. In 1931, radiophobia was referred to in The Salt Lake Tribune as a "fear of loudspeakers", an affliction that Joan Crawford was reported as suffering. The term "radiophobia" was also printed in Australian newspapers in the 1930s and 1940s, assuming a similar meaning. The 1949 poem by Margarent Mercia Baker entitled "Radiophobia" laments the intrusion of advertising into radio broadcasts. The term remained in use with its original association with radios and radio broadcasting during the 1940s and 1950s. During the 1950s and 1960s, the Science Service associated the term with fear of gamma radiation and the medical use of x-rays. A Science Service article published in several American newspapers proposed that "radiophobia" could be attributed to the publication of information regarding the "genetic hazards" of exposure to ionising radiation by the National Academy of Sciences in 1956. In a newspaper column published in 1970, Dr Harold Pettit MD wrote:"A healthy respect for the hazards of radiation is desirable. When atomic testing began in the early 1950s, these hazards were grossly exaggerated, producing a new psychological disorder which has been called "radiophobia" or "nuclear neurosis". Castle Bravo and its influence on public perception On March 1, 1954, the operation Castle Bravo, testing a first-of-its-kind experimental thermonuclear Shrimp device, overshot its predicted TNT equivalent yield of 4–6 Mt and instead produced 15 Mt. This produced an unanticipated amount of Bikini snow or visible particles of nuclear fallout, which caught in its plume the Japanese fishing boat the Daigo Fukuryū Maru or Lucky Dragon outside the initially predicted ~5 Mt fallout area cordoned off for Castle Bravo. Approximately 2 weeks after the test and fallout exposure, the 23-member fishing crew began to fall ill with acute radiation sickness, largely brought on by beta burns caused by the direct contact their bare hands had scooping the Bikini snow into bags. Kuboyama Aikichi, the boat's chief radioman, died 7 months later, on September 23, 1954. It was later estimated that about a hundred fishing boats were contaminated to some degree by fallout from the test. Inhabitants of the Marshall Islands were also exposed to fallout, and a number of islands had to be evacuated. This incident, due to the era of secrecy around nuclear weapons, created widespread fear of uncontrolled and unpredictable nuclear weapons, and also of radioactively contaminated fish affecting the Japanese food supply. 
With the publication of Joseph Rotblat's findings that the contamination caused by the fallout from the Castle Bravo test was nearly a thousand times greater than that stated officially, outcry in Japan reached such a level that the incident was dubbed by some as "a second Hiroshima". To prevent the subsequent strong anti-nuclear movement from turning into an anti-American movement, the Japanese and U.S. governments agreed on compensation of 2 million dollars for the contaminated fishery, with the surviving 22 crew men receiving about ¥2 million each ($5,556 in 1954). The surviving crew members, and their families, would later experience prejudice and discrimination, as local people thought that radiation was contagious. In popular culture The Castle Bravo test and the new fears of radioactive fallout inspired a new direction in art and cinema. The Godzilla films, beginning with Ishirō Honda's landmark 1954 film Gojira, are strong metaphors for post-war radiophobia. The opening scene of Gojira echoes the story of the Daigo Fukuryū Maru, from the initial distant flash of light to survivors being found with radiation burns. Although he found the special effects unconvincing, Roger Ebert stated that the film was "an important one" and "properly decoded, was the Fahrenheit 9/11 of its time." A year after the Castle Bravo test, Akira Kurosawa examined one person's unreasoning terror of radiation and nuclear war in his 1955 film I Live in Fear. At the end of the film, the foundry worker who lives in fear has been declared incompetent by his family, but the possible partial validity of his fears has transferred over to his doctor. Nevil Shute's 1957 novel On the Beach depicts a future just six years later, based on the premise that a nuclear war has released so much radioactive fallout that all life in the Northern Hemisphere has been killed. The novel is set in Australia, which, along with the rest of the Southern Hemisphere, awaits a similar and inevitable fate. Helen Caldicott describes reading the novel in adolescence as 'a formative event' in her becoming part of the anti-nuclear movement. Radiophobia and Chernobyl In the former Soviet Union, many patients with negligible radioactive exposure after the Chernobyl disaster displayed extreme anxiety about low level radiation exposure; they developed many psychosomatic problems, with an increase in fatalistic alcoholism also being observed. As Japanese health and radiation specialist Shunichi Yamashita noted: The term "radiation phobia syndrome" was introduced in 1987 by L. A. Ilyin and O. A. Pavlovsky in their report "Radiological consequences of the Chernobyl accident in the Soviet Union and measures taken to mitigate their impact". The author of Chernobyl Poems Lyubov Sirota wrote in her poem "Radiophobia": Is this only—a fear of radiation? Perhaps rather—a fear of wars? Perhaps—the dread of betrayal, Cowardice, stupidity, lawlessness? The term has been criticized by Adolph Kharash, Science Director at the Moscow State University: It treats the normal impulse to self-protection, natural to everything living, your moral suffering, your anguish and your concern about the fate of your children, relatives and friends, and your own physical suffering and sickness as a result of delirium, of pathological perversion. However, the psychological phobia of radiation in sufferers may not coincide with an actual life-threatening exposure to an individual or their children. 
Radiophobia refers only to a display of anxiety disproportionate to the actual quantity of radiation one is exposed to, with, in many cases, radiation exposure values equal to, or not much higher than, what individuals are naturally exposed to every day from background radiation. Anxiety following a response to an actual life-threatening level of exposure to radiation is not considered to be radiophobia, nor misplaced anxiety, but a normal, appropriate response. Marvin Goldman is an American doctor who provided commentary to newspapers claiming that radiophobia had taken a larger toll than the fallout itself had. Chernobyl abortions Following the accident, journalists mistrusted many medical professionals (such as the spokesman from the UK National Radiological Protection Board), and in turn encouraged the public to mistrust them. Throughout the European continent, in nations where abortion is legal, many requests for induced abortions of otherwise normal pregnancies were made out of fears of radiation from Chernobyl, including an excess number of abortions of healthy human fetuses in Denmark in the months following the accident. In Greece, following the accident, there were panic and false rumors, which led to many obstetricians initially thinking it prudent to interrupt otherwise wanted pregnancies or being unable to resist requests from worried pregnant mothers over fears of radiation; within a few weeks misconceptions within the medical profession were largely cleared up, although worries persisted in the general population. Although it was determined that the effective dose to Greeks would not exceed 1 mSv (0.1 rem), a dose much lower than that which could induce embryonic abnormalities or other non-stochastic effects, there was an observed excess of 2,500 otherwise wanted pregnancies being terminated, probably out of fear in the mother of some kind of perceived radiation risk. A "slightly" higher than expected number of induced abortions by request occurred in Italy, where, upon initial request, "a week of reflection" followed by a 2 to 3 week "health system" delay usually occurs before the procedure. Radiophobia and health effects The term "radiophobia" is also sometimes used in the arguments against proponents of the conservative LNT concept (Linear no-threshold response model for ionizing radiation) of radiation security proposed by the U.S. National Council on Radiation Protection and Measurements (NCRP) in 1949. The "no-threshold" position effectively assumes, from data extrapolated from the atomic bombings on Hiroshima and Nagasaki, that even negligible doses of radiation increase one's risk of cancer linearly as the exposure increases from a value of 0 up to high dose rates. The LNT model therefore suggests that radiation exposure from naturally occurring background radiation may be harmful. There is no biological evidence, and only weak statistical evidence, that doses below 100 mSv have any biological effect. After the Fukushima disaster, the German news magazine Der Spiegel reported that Japanese residents were suffering from radiophobia. British medical scientist Geraldine Thomas has also attributed the suffering of the Japanese to radiophobia in interviews and formal presentations. Four years after the event, The New York Times reported that "about 1,600 people died from the stress of the evacuation". 
The forced evacuation of 154,000 people "was not justified by the relatively moderate radiation levels", but was ordered because "the government basically panicked". At the same time that part of the public fears radiation, some commercial products are also promoted on the basis of their radioactive content, such as "negative ion" bracelets or radon spas. Radiophobia and industrial and healthcare use Radiation, most commonly in the form of X-rays, is used frequently in society in order to produce positive outcomes. The primary uses of radiation in healthcare are in radiographic examination and procedures, and radiotherapy in the treatment of cancerous conditions. Radiophobia can be a fear which patients experience before and after either of these procedures; it is therefore the responsibility of the healthcare professional at the time, often a radiographer or radiation therapist, to reassure the patients about the stochastic and deterministic effects of radiation on human physiology. Advising patients and other irradiated persons of the various radiation protection measures that are enforced, including the use of lead-rubber aprons, dosimetry and Automatic Exposure Control (AEC), is a common method of informing and reassuring radiophobia sufferers. Similarly, in industrial radiography, there is the possibility of persons experiencing radiophobia when they are near industrial radiographic equipment. See also Electromagnetic hypersensitivity Atomic Age Background radiation Backscatter X-ray Chernobyl: Consequences of the Catastrophe for People and the Environment Dirty bomb Electromagnetic radiation and health Fear mongering Nuclear power debate References External links Environmental phobias Radiation
Radiophobia
[ "Physics", "Chemistry" ]
2,362
[ "Transport phenomena", "Waves", "Physical phenomena", "Radiation" ]
3,192,214
https://en.wikipedia.org/wiki/Cycloheptene
Cycloheptene is a 7-membered cycloalkene with a flash point of −6.7 °C. It is a raw material in organic chemistry and a monomer in polymer synthesis. Cycloheptene can exist as either the cis- or the trans-isomer. trans-Cycloheptene With cycloheptene, the cis-isomer is always assumed but the trans-isomer does also exist. One procedure for the organic synthesis of trans-cycloheptene is by singlet photosensitization of cis-cycloheptene with methyl benzoate and ultraviolet light at −35 °C. The double bond in the trans isomer is very strained. The directly attached atoms on a simple alkene are all coplanar. In trans-cycloheptene, however, the size of the ring makes it impossible for the alkene and its two attached carbons to have this geometry because the remaining three carbons could not reach far enough to close the ring (see also Bredt's rule). There would have to be unusually large angles (angle strain), unusually long bond-lengths, or the atoms of the alkane-like loop would collide with the alkene part (steric strain). Part of the strain is relieved by pyramidalization of each alkene carbon and their rotation relative to each other. The pyramidalization angle is estimated at 37° (compared to an angle of 0° for an atom with normal trigonal–planar geometry) and the p-orbital misalignment is 30.1°. Because the barrier for rotation of the double bond in ethylene is approximately 65 kcal/mol (270 kJ/mol) and can only be lowered by the estimated strain energy of 30 kcal/mol (125 kJ/mol) present in the trans-isomer, trans-cycloheptene should be a stable molecule just as its homologue trans-cyclooctene. In fact, it is not: unless the temperature is kept very low, rapid isomerization to the cis-isomer takes place. The trans-cycloheptene isomerization mechanism is not simple alkene-bond rotation, but rather an alternative lower energy pathway. Based on the experimentally observed second order reaction kinetics for isomerization, two trans-cycloheptene molecules in the proposed pathway first form a diradical dimer. The two heptane radical rings then untwist to an unstrained conformation, and finally the dimer fragments back into two cis-cycloheptene molecules. Note that the photoisomerization of maleic acid to fumaric acid with bromine is also bimolecular. References External links MSDS Cycloalkenes Monomers Seven-membered rings
Cycloheptene
[ "Chemistry", "Materials_science" ]
604
[ "Monomers", "Polymer chemistry" ]
3,192,875
https://en.wikipedia.org/wiki/Pearson%E2%80%93Anson%20effect
The Pearson–Anson effect, discovered in 1922 by Stephen Oswald Pearson and Horatio Saint George Anson, is the phenomenon of an oscillating electric voltage produced by a neon bulb connected across a capacitor, when a direct current is applied through a resistor. This circuit, now called the Pearson-Anson oscillator, neon lamp oscillator, or sawtooth oscillator, is one of the simplest types of relaxation oscillator. It generates a sawtooth output waveform. It has been used in low frequency applications such as blinking warning lights, stroboscopes, tone generators in electronic organs and other electronic music circuits, and in time base generators and deflection circuits of early cathode-ray tube oscilloscopes. Since the development of microelectronics, these simple negative resistance oscillators have been superseded in many applications by more flexible semiconductor relaxation oscillators such as the 555 timer IC. Neon bulb as a switching device A neon bulb, often used as an indicator lamp in appliances, consists of a glass bulb containing two electrodes, separated by an inert gas such as neon at low pressure. Its nonlinear current-voltage characteristics (diagram below) allow it to function as a switching device. When a voltage is applied across the electrodes, the gas conducts almost no electric current until a threshold voltage is reached (point b), called the firing or breakdown voltage, Vb. At this voltage electrons in the gas are accelerated to a high enough speed to knock other electrons off gas atoms, which go on to knock off more electrons in a chain reaction. The gas in the bulb ionizes, starting a glow discharge, and its resistance drops to a low value. In its conducting state the current through the bulb is limited only by the external circuit. The voltage across the bulb drops to a lower voltage called the maintaining voltage Vm. The bulb will continue to conduct current until the applied voltage drops below the extinction voltage Ve (point d), which is usually close to the maintaining voltage. Below this voltage, the current provides insufficient energy to keep the gas ionized, so the bulb switches back to its high resistance, nonconductive state (point a). The bulb's "turn on" voltage Vb is higher than its "turn off" voltage Ve. This property, called hysteresis, allows the bulb to function as an oscillator. Hysteresis is due to the bulb's negative resistance, the fall in voltage with increasing current after breakdown, which is a property of all gas-discharge lamps. Up until the 1960s sawtooth oscillators were also built with thyratrons. These were gas-filled triode electron tubes. These worked somewhat similarly to neon bulbs: the tube would not conduct until the cathode-to-anode voltage reached a breakdown voltage. The advantage of the thyratron was that the breakdown voltage could be controlled by the voltage on the grid. This allowed the frequency of the oscillation to be changed electronically. Thyratron oscillators were used as time base generators in oscilloscopes. Operation In the Pearson-Anson oscillator circuit (top) a capacitor C is connected across the neon bulb N. The capacitor is continuously charged by current through the resistor R until the bulb conducts, discharging it again, after which it charges up again. The detailed cycle is illustrated by the hysteresis loop abcd on the current-voltage diagram at right: When the supply voltage is turned on, the neon bulb is in its high resistance condition and acts like an open circuit. 
The current through the resistor begins to charge the capacitor and its voltage begins to rise toward the supply voltage. When the voltage across the capacitor reaches b, the breakdown voltage of the bulb Vb, the bulb turns on and its resistance drops to a low value. The charge on the capacitor discharges rapidly through the bulb in a momentary pulse of current (c). When the voltage drops to the extinction voltage Ve of the bulb (d), the bulb turns off and the current through it drops to a low level (a). The current through the resistor begins charging the capacitor up again, and the cycle repeats. The circuit thus functions as a low-frequency relaxation oscillator, the capacitor voltage oscillating between the breakdown and extinction voltages of the bulb in a sawtooth wave. The period is proportional to the time constant RC. The neon lamp produces a brief flash of light each time it conducts, so the circuit can also be used as a "flasher" circuit. The dual function of the lamp as both light source and switching device gives the circuit a lower parts count and cost than many alternative flasher circuits. Conditions for oscillation The supply voltage VS must be greater than the bulb breakdown voltage Vb or the bulb can never conduct. Most small neon lamps have breakdown voltages between 80 and 150 volts, so they can operate on 120 Vrms mains voltage, which has a peak voltage of about 170 V. If the supply voltage is close to the breakdown voltage, the capacitor voltage will be in the "tail" of its exponential curve by the time it reaches Vb, so the frequency will depend sensitively on the breakdown threshold and supply voltage levels, causing variations in frequency. Therefore, the supply voltage is usually made significantly higher than the bulb firing voltage. This also makes the charging more linear, and the sawtooth wave more triangular. The resistor R must also be within a certain range of values for the circuit to oscillate. This is illustrated by the load line (blue) on the IV graph. The slope of the load line is equal to R. The possible DC operating points of the circuit are at the intersection of the load line and the neon lamp's IV curve (black) In order for the circuit to be unstable and oscillate, the load line must intersect the IV curve in its negative resistance region, between b and d, where the voltage declines with increasing current. This is defined by the shaded region on the diagram. If the load line crosses the IV curve where it has positive resistance, outside the shaded region, this represents a stable operating point, so the circuit will not oscillate: If R is too large, of the same order as the "off" leakage resistance of the bulb, the load line will cross the IV curve between the origin and b. In this region, the current through R from the supply is so low that the leakage current through the bulb bleeds it off, so the capacitor voltage never reaches Vb and the bulb never fires. The leakage resistance of most neon bulbs is greater than 100MΩ, so this is not a serious limitation. If R is too small, the load line will cross the IV curve between c and d. In this region the current through R is too large; once the bulb has turned on, the current through R will be large enough to keep it conducting without current from the capacitor, and the voltage across the bulb will never fall to Ve so the bulb will never turn off. Small neon bulbs will typically oscillate with values of R between 500kΩ and 20MΩ. 
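The charge and discharge cycle just described fixes the oscillation period, which the Frequency section below derives from the RC-charging behavior of the capacitor. The following is a minimal sketch of that calculation, assuming the standard exponential RC-charging expression and the idealization that the bulb discharges the capacitor instantaneously; the component values in the example are illustrative, not taken from the article.

```python
# Sketch of the relaxation-oscillator period, assuming ideal RC charging toward
# the supply voltage Vs, firing at the breakdown voltage Vb, and an instantaneous
# reset to the extinction voltage Ve. Example component values are illustrative.
import math

def pearson_anson_period(R: float, C: float, Vs: float, Vb: float, Ve: float) -> float:
    """Return the oscillation period in seconds."""
    if not (Vs > Vb > Ve):
        raise ValueError("Oscillation requires Vs > Vb > Ve.")
    # Time for the capacitor to charge from Ve up to Vb along v(t) = Vs*(1 - exp(-t/RC)).
    return R * C * math.log((Vs - Ve) / (Vs - Vb))

# Example: 1 Mohm, 100 nF, 170 V supply, 90 V breakdown, 60 V extinction.
T = pearson_anson_period(R=1e6, C=100e-9, Vs=170.0, Vb=90.0, Ve=60.0)
print(f"period ~ {T*1e3:.1f} ms, frequency ~ {1/T:.1f} Hz")
```

With these illustrative values the period comes out near 32 ms (roughly 31 Hz), which is consistent with the low-frequency applications described above.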
If C is not small, it may be necessary to add a resistor in series with the neon bulb, to limit current through it to prevent damage when the capacitor discharges. This will increase the discharge time and decrease the frequency slightly, but its effect will be negligible at low frequencies. Frequency The period of oscillation can be calculated from the breakdown and extinction voltage thresholds of the lamp used. During the charging period, the bulb has high resistance and can be considered an open circuit, so the rest of the oscillator constitutes an RC circuit with the capacitor voltage approaching VS exponentially, with time constant RC. If v(t) is the output voltage across the capacitor Solving for the time Although the first period is longer than the others because the voltage starts from zero, the voltage waveforms of subsequent periods are identical to the first between Ve and Vb. So the period T is the interval between the time when the voltage reaches Ve, and the time when the voltage reaches Vb This formula is only valid for oscillation frequencies up to about 200 Hz; above this various time delays cause the actual frequency to be lower than this. Due to the time required to ionize and deionize the gas, neon lamps are slow switching devices, and the neon lamp oscillator is limited to a top frequency of about 20 kHz. The breakdown and extinction voltages of neon lamps may vary between similar parts; manufacturers usually specify only wide ranges for these parameters. So if a precise frequency is desired the circuit must be adjusted by trial and error. The thresholds also change with temperature, so the frequency of neon lamp oscillators is not particularly stable. Forced oscillations and chaotic behavior Like other relaxation oscillators, the neon bulb oscillator has poor frequency stability, but it can be synchronized (entrained) to an external periodic voltage applied in series with the neon bulb. Even if the external frequency is different from the natural frequency of the oscillator, the peaks of the applied signal can exceed the breakdown threshold of the bulb, discharging the capacitor prematurely, so that the period of the oscillator becomes locked to the applied signal. Interesting behavior can result from varying the amplitude and frequency of the external voltage. For instance, the oscillator may produce an oscillating voltage whose frequency is a submultiple of the external frequency. This phenomenon is known as "submultiplication" or "demultiplication", and was first observed in 1927 by Balthasar van der Pol and his collaborator Jan van der Mark. In some cases the ratio of the external frequency to the frequency of the oscillation observed in the circuit may be a rational number, or even an irrational one (the latter case is known as the "quasiperiodic" regime). When the periodic and quasiperiodic regimes overlap, the behavior of the circuit may become aperiodic, meaning that the pattern of the oscillations never repeats. This aperiodicity correspond to the behavior of the circuit becoming chaotic (see chaos theory). The forced neon bulb oscillator was the first system in which chaotic behavior was observed. Van der Pol and van der Mark wrote, concerning their experiments with demultiplication, that Any periodic oscillation would have produced a musical tone; only aperiodic, chaotic oscillations would produce an "irregular noise". 
This is thought to have been the first observation of chaos, although van der Pol and van der Mark didn't realize its significance at the time. See also Relaxation oscillator Schmitt trigger 555 timer Negative resistance Notes References S. O. Pearson and H. St. G. Anson, Demonstration of Some Electrical Properties of Neon-filled Lamps, Proceedings of the Physical Society of London, vol.34, no. 1 (December 1921), pp. 175–176 S. O. Pearson and H. St. G. Anson, The Neon Tube as a Means of Producing Intermittent Currents, Proceedings of the Physical Society of London, vol. 34, no. 1 (December 1921), pp. 204–212 Analog circuits Electronic oscillators
Pearson–Anson effect
[ "Engineering" ]
2,378
[ "Analog circuits", "Electronic engineering" ]
3,193,758
https://en.wikipedia.org/wiki/Pumping%20lemma%20for%20context-free%20languages
In computer science, in particular in formal language theory, the pumping lemma for context-free languages, also known as the Bar-Hillel lemma, is a lemma that gives a property shared by all context-free languages and generalizes the pumping lemma for regular languages. The pumping lemma can be used to construct a refutation by contradiction that a specific language is not context-free. Conversely, the pumping lemma does not suffice to guarantee that a language is context-free; there are other necessary conditions, such as Ogden's lemma, or the Interchange lemma. Formal statement If a language is context-free, then there exists some integer (called a "pumping length") such that every string in that has a length of or more symbols (i.e. with ) can be written as with substrings and , such that 1. , 2. , and 3. for all . Below is a formal expression of the Pumping Lemma. Informal statement and explanation The pumping lemma for context-free languages (called just "the pumping lemma" for the rest of this article) describes a property that all context-free languages are guaranteed to have. The property is a property of all strings in the language that are of length at least , where is a constant—called the pumping length—that varies between context-free languages. Say is a string of length at least that is in the language. The pumping lemma states that can be split into five substrings, , where is non-empty and the length of is at most , such that repeating and the same number of times () in produces a string that is still in the language. It is often useful to repeat zero times, which removes and from the string. This process of "pumping up" with additional copies of and is what gives the pumping lemma its name. Finite languages (which are regular and hence context-free) obey the pumping lemma trivially by having equal to the maximum string length in plus one. As there are no strings of this length the pumping lemma is not violated. Usage of the lemma The pumping lemma is often used to prove that a given language is non-context-free, by showing that arbitrarily long strings are in that cannot be "pumped" without producing strings outside . For example, if is infinite but does not contain an (infinite) arithmetic progression, then is not context-free. In particular, neither the prime numbers nor the square numbers are context-free. For example, the language can be shown to be non-context-free by using the pumping lemma in a proof by contradiction. First, assume that is context free. By the pumping lemma, there exists an integer which is the pumping length of language . Consider the string in . The pumping lemma tells us that can be written in the form , where , and are substrings, such that , , and for every integer . By the choice of and the fact that , it is easily seen that the substring can contain no more than two distinct symbols. That is, we have one of five possibilities for : for some . for some and with for some . for some and with . for some . For each case, it is easily verified that does not contain equal numbers of each letter for any . Thus, does not have the form . This contradicts the definition of . Therefore, our initial assumption that is context free must be false. In 1960, Scheinberg proved that is not context-free using a precursor of the pumping lemma. While the pumping lemma is often a useful tool to prove that a given language is not context-free, it does not give a complete characterization of the context-free languages. 
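To make the argument above concrete, the following sketch brute-forces every decomposition s = uvwxy of the string a^p b^p c^p that satisfies the lemma's conditions, for a small illustrative pumping length, and checks that each one can be pumped to a string outside the language. The value p = 4 and the helper names are illustrative choices, not part of the lemma itself.

```python
# Brute-force illustration of the a^n b^n c^n argument: for an illustrative
# pumping length p, every decomposition s = uvwxy with |vwx| <= p and |vx| >= 1
# can be pumped (here with n = 0 or n = 2) to a string outside the language.
def in_language(s: str) -> bool:
    """Membership test for L = { a^n b^n c^n : n >= 1 }."""
    n = len(s) // 3
    return n >= 1 and s == "a" * n + "b" * n + "c" * n

def decompositions(s: str, p: int):
    """Yield all (u, v, w, x, y) with s = uvwxy, |vwx| <= p and |vx| >= 1."""
    for i in range(len(s) + 1):                 # u = s[:i]
        for j in range(i, len(s) + 1):          # v = s[i:j]
            for k in range(j, len(s) + 1):      # w = s[j:k]
                for l in range(k, len(s) + 1):  # x = s[k:l]
                    if l - i <= p and (j - i) + (l - k) >= 1:
                        yield s[:i], s[i:j], s[j:k], s[k:l], s[l:]

def pump(u, v, w, x, y, n):
    return u + v * n + w + x * n + y

p = 4                         # illustrative pumping length
s = "a" * p + "b" * p + "c" * p
assert all(
    any(not in_language(pump(u, v, w, x, y, n)) for n in (0, 2))
    for (u, v, w, x, y) in decompositions(s, p)
)
print("every valid decomposition of", s, "can be pumped out of the language")
```

Since no decomposition survives pumping, no pumping length can work for this language, which is exactly the contradiction used in the proof.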
If a language does not satisfy the condition given by the pumping lemma, we have established that it is not context-free. On the other hand, there are languages that are not context-free, but still satisfy the condition given by the pumping lemma, for example for with e.g. j≥1 choose to consist only of bs, for choose to consist only of as; in both cases all pumped strings are still in L. References — Reprinted in: Section 1.4: Nonregular Languages, pp. 77–83. Section 2.3: Non-context-free Languages, pp. 115–119. Formal languages Lemmas
Pumping lemma for context-free languages
[ "Mathematics" ]
909
[ "Formal languages", "Mathematical logic", "Mathematical problems", "Mathematical theorems", "Lemmas" ]
3,193,851
https://en.wikipedia.org/wiki/Spectral%20band
Spectral bands are regions of a given spectrum, having a specific range of wavelengths or frequencies. Most often, the term refers to electromagnetic bands, regions of the electromagnetic spectrum. More generally, spectral bands may also be found in the spectra of other types of signals, e.g., a noise spectrum. A frequency band is an interval in the frequency domain, limited by a lower frequency and an upper frequency. For example, it may refer to a radio band, such as wireless communication standards set by the International Telecommunication Union. In nuclear physics, spectral bands refer to the electromagnetic emission of polyatomic systems, including condensed materials, large molecules, etc. Each spectral line corresponds to the difference between two energy levels of an atom. In molecules these levels can split. When the number of atoms is large, one gets a continuum of energy levels, the so-called "spectral bands". They are often labeled in the same way as the monatomic lines. The bands may overlap. In general, the energy spectrum can be given by a density function, describing the number of energy levels of the quantum system for a given interval. Spectral bands have constant density, and when the bands overlap, the corresponding densities are added. Band spectrum is the name given to a group of lines that are closely spaced and arranged in a regular sequence so that they appear to form a band. It is a colored band, separated by dark spaces on the two sides and arranged in a regular sequence. Within one band there are various sharp and broader colored lines, which are closer together on one side and more widely spaced on the other. The intensity in each band falls off sharply from a definite limit on one side and fades off indistinctly on the other side. In a complete band spectrum, there are a number of lines in each band. This spectrum is produced when the emitting substance is in the molecular state; band spectra are therefore also called molecular spectra. They are emitted, for example, by molecules in a vacuum tube or in a carbon-arc core containing a metallic salt. The band spectrum is the combination of many different spectral lines, resulting from molecular vibrational, rotational, and electronic transitions. Spectroscopy studies spectral bands for astronomy and other purposes. Many systems are characterized by the spectral band to which they respond. For example: Musical instruments produce different ranges of notes within the hearing range. The electromagnetic spectrum can be divided into many different ranges such as visible light, infrared or ultraviolet radiation, radio waves, X-rays and so on, and each of these ranges can in turn be divided into smaller ranges. A radio communications signal must occupy a range of frequencies carrying most of its energy, called its bandwidth. A frequency band may represent one communication channel or be subdivided into many. Allocation of radio frequency ranges to different uses is a major function of radio spectrum allocation. See also References Electromagnetic spectrum Spectroscopy Spectrum (physical sciences)
Spectral band
[ "Physics", "Chemistry" ]
563
[ "Physical phenomena", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Electromagnetic spectrum", "Waves", "Spectroscopy" ]
3,194,204
https://en.wikipedia.org/wiki/Charcot%E2%80%93Leyden%20crystals
Charcot–Leyden crystals are microscopic crystals composed of eosinophil protein galectin-10 found in people who have allergic diseases such as asthma or parasitic infections such as parasitic pneumonia or ascariasis. Appearance Charcot–Leyden crystals are composed of an eosinophilic lysophospholipase binding protein called Galectin-10. They vary in size and may be as large as 50 μm in length. Charcot–Leyden crystals are slender and pointed at both ends, consisting of a pair of hexagonal pyramids joined at their bases. Normally colorless, they are stained purplish-red by trichrome. Clinical significance They are indicative of a disease involving eosinophilic inflammation or proliferation, such as is found in allergic reactions (asthma, bronchitis, allergic rhinitis and rhinosinusitis) and parasitic infections such as Entamoeba histolytica, Necator americanus, and Ancylostoma duodenale. Charcot–Leyden crystals are often seen pathologically in patients with bronchial asthma. History Friedrich Albert von Zenker was the first to notice these crystals, doing so in 1851, after which they were described jointly by Jean-Martin Charcot and Charles-Philippe Robin in 1853, then in 1872 by Ernst Viktor von Leyden. See also Curschmann's Spirals References External links Tulane Lung pathology Charcot Leyden crystals at UDEL Scientists solve a century-old mystery to treat asthma and airway inflammation Pathology
Charcot–Leyden crystals
[ "Biology" ]
320
[ "Pathology" ]
3,195,803
https://en.wikipedia.org/wiki/Nearly%20free%20electron%20model
In solid-state physics, the nearly free electron model (or NFE model and quasi-free electron model) is a quantum mechanical model of physical properties of electrons that can move almost freely through the crystal lattice of a solid. The model is closely related to the more conceptual empty lattice approximation. The model enables understanding and calculation of the electronic band structures, especially of metals. This model is an immediate improvement of the free electron model, in which the metal was considered as a non-interacting electron gas and the ions were neglected completely. Mathematical formulation The nearly free electron model is a modification of the free-electron gas model which includes a weak periodic perturbation meant to model the interaction between the conduction electrons and the ions in a crystalline solid. This model, like the free-electron model, does not take into account electron–electron interactions; that is, the independent electron approximation is still in effect. As shown by Bloch's theorem, introducing a periodic potential into the Schrödinger equation results in a wave function of the form where the function has the same periodicity as the lattice: (where is a lattice translation vector.) Because it is a nearly free electron approximation we can assume that where denotes the volume of states of fixed radius (as described in Gibbs paradox). A solution of this form can be plugged into the Schrödinger equation, resulting in the central equation: where is the total energy, and the kinetic energy is characterized by which, after dividing by , reduces to if we assume that is almost constant and The reciprocal parameters and are the Fourier coefficients of the wave function and the screened potential energy , respectively: The vectors are the reciprocal lattice vectors, and the discrete values of are determined by the boundary conditions of the lattice under consideration. Before doing the perturbation analysis, let us first consider the base case to which the perturbation is applied. Here, the base case is , and therefore all the Fourier coefficients of the potential are also zero. In this case the central equation reduces to the form This identity means that for each , one of the two following cases must hold: , If is a non-degenerate energy level, then the second case occurs for only one value of , while for the remaining , the Fourier expansion coefficient is zero. In this case, the standard free electron gas result is retrieved: If is a degenerate energy level, there will be a set of lattice vectors with . Then there will be independent plane wave solutions of which any linear combination is also a solution: Now let be nonzero and small. Non-degenerate and degenerate perturbation theory, respectively, can be applied in these two cases to solve for the Fourier coefficients of the wavefunction (correct to first order in ) and the energy eigenvalue (correct to second order in ). An important result of this derivation is that there is no first-order shift in the energy in the case of no degeneracy, while there is in the case of degeneracy (and near-degeneracy), implying that the latter case is more important in this analysis. Particularly, at the Brillouin zone boundary (or, equivalently, at any point on a Bragg plane), one finds a twofold energy degeneracy that results in a shift in energy given by: . This energy gap between Brillouin zones is known as the band gap, with a magnitude of . 
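The degenerate case just described can be illustrated numerically. The sketch below uses the standard two-band result for a 1D lattice, in which the free-electron parabola is split at the Brillouin zone boundary by the Fourier component of the weak periodic potential, opening a gap of twice that component's magnitude; the lattice constant, the potential strength and the choice of units are arbitrary illustrative assumptions, not values from the text.

```python
# Numerical sketch of the two-band nearly-free-electron result for a 1D lattice:
# coupling the plane waves k and k - G through the Fourier component U_G of the
# weak periodic potential opens a gap of 2|U_G| at the Brillouin zone boundary.
# Units and numerical values below are arbitrary and purely illustrative.
import numpy as np

hbar2_over_2m = 1.0       # work in units where hbar^2 / (2m) = 1
a = 1.0                   # illustrative lattice constant
G = 2.0 * np.pi / a       # shortest reciprocal lattice vector
U_G = 0.2                 # illustrative Fourier component of the potential

def free(k):
    """Free-electron kinetic energy."""
    return hbar2_over_2m * k**2

def two_band(k):
    """Lower and upper bands from coupling the plane waves k and k - G."""
    e1, e2 = free(k), free(k - G)
    avg, half_diff = 0.5 * (e1 + e2), 0.5 * (e1 - e2)
    root = np.sqrt(half_diff**2 + U_G**2)
    return avg - root, avg + root

k_boundary = G / 2.0                       # Brillouin zone boundary
lower, upper = two_band(k_boundary)
print(f"gap at zone boundary = {upper - lower:.3f}  (expected 2|U_G| = {2 * U_G:.3f})")
```

Evaluating two_band over a range of k also reproduces the familiar picture: far from the boundary the bands follow the free-electron parabola, while near the boundary they repel each other by the band gap.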
Results Introducing this weak perturbation has significant effects on the solution to the Schrödinger equation, most significantly resulting in a band gap between wave vectors in different Brillouin zones. Justifications In this model, the assumption is made that the interaction between the conduction electrons and the ion cores can be modeled through the use of a "weak" perturbing potential. This may seem like a severe approximation, for the Coulomb attraction between these two particles of opposite charge can be quite significant at short distances. It can be partially justified, however, by noting two important properties of the quantum mechanical system: The force between the ions and the electrons is greatest at very small distances. However, the conduction electrons are not "allowed" to get this close to the ion cores due to the Pauli exclusion principle: the orbitals closest to the ion core are already occupied by the core electrons. Therefore, the conduction electrons never get close enough to the ion cores to feel their full force. Furthermore, the core electrons shield the ion charge magnitude "seen" by the conduction electrons. The result is an effective nuclear charge experienced by the conduction electrons which is significantly reduced from the actual nuclear charge. See also Empty lattice approximation Electronic band structure Tight binding model Bloch's theorem Kronig–Penney model References Electronic band structures Quantum models
Nearly free electron model
[ "Physics", "Chemistry", "Materials_science" ]
978
[ "Electron", "Quantum mechanics", "Quantum models", "Electronic band structures", "Condensed matter physics" ]
3,196,393
https://en.wikipedia.org/wiki/Niobium%E2%80%93tin
Niobium–tin is an intermetallic compound of niobium (Nb) and tin (Sn), used industrially as a type-II superconductor. This intermetallic compound has a simple structure: A3B. It is more expensive than niobium–titanium (NbTi), but remains superconducting up to a magnetic flux density of , compared to a limit of roughly 15 T for NbTi. Nb3Sn was discovered to be a superconductor in 1954. The material's ability to support high currents and magnetic fields was discovered in 1961 and started the era of large-scale applications of superconductivity. The critical temperature is . Application temperatures are commonly around , the boiling point of liquid helium at atmospheric pressure. In April 2008 a record non-copper current density was claimed of 2,643 A mm−2 at 12 T and 4.2 K. History Nb3Sn was discovered to be a superconductor in 1954, one year after the discovery of V3Si, the first example of an A3B superconductor. In 1961 it was discovered that niobium–tin still exhibits superconductivity at large currents and strong magnetic fields, thus becoming the first known material to support the high currents and fields necessary for making useful high-power magnets and electric power machinery. Notable uses The central solenoid and toroidal field superconducting magnets for the planned experimental ITER fusion reactor use niobium–tin as a superconductor. The central solenoid coil will produce a field of . The toroidal field coils will operate at a maximum field of 11.8 T. Estimated use is of Nb3Sn strands and 250 metric tonnes of NbTi strands. At the Large Hadron Collider at CERN, extra-strong quadrupole magnets (for focussing beams) made with niobium–tin are being installed in key points of the accelerator between late 2018 and early 2020. Niobium tin had been proposed in 1986 as an alternative to niobium–titanium, since it allowed coolants less complex than superfluid helium, but this was not pursued in order to avoid delays while competing with the then-planned US-led Superconducting Super Collider. Composite wire Mechanically, Nb3Sn is extremely brittle and thus cannot be easily drawn into a wire, which is necessary for winding superconducting magnets. To overcome this, wire manufacturers typically draw down composite wires containing ductile precursors. The "internal tin" process includes separate alloys of Nb, Cu and Sn. The "bronze" process contains Nb in a copper–tin bronze matrix. With both processes the strand is typically drawn to final size and coiled into a solenoid or cable before heat treatment. It is only during heat treatment that the Sn reacts with the Nb to form the brittle, superconducting niobium–tin compound. The powder-in-tube process is also used. The high field section of modern NMR magnets are composed of niobium–tin wire. Strain effects Inside a magnet the wires are subjected to high Lorentz forces as well as thermal stresses during cooling. Any strain in the niobium tin causes a decrease in the superconducting performance of the material, and can cause the brittle material to fracture. Because of this, the wires need to be as stiff as possible. The Young's modulus of niobium tin is around 140 GPa at room temperature. However, the stiffness drops down to as low as 50 GPa when the material is cooled below . Engineers must therefore find ways of improving the strength of the material. Strengthening fibers are often incorporated in the composite niobium tin wires to increase their stiffness. 
Common strengthening materials include Inconel, stainless steel, molybdenum, and tantalum because of their high stiffness at cryogenic temperatures. Since the thermal expansion coefficients of the matrix, fiber, and niobium–tin are all different, significant amounts of strain can be generated after the wire is annealed and cooled all the way down to operating temperatures. This strain is referred to as the pre-strain in the wire. Since any strain in the niobium–tin generally decreases the superconducting performance of the material, a proper combination of materials must be used to minimize this value.

The pre-strain in a composite wire can be calculated by a formula in which εm is the pre-strain; ΔL/Lc and ΔL/Lf are the changes in length due to thermal expansion of the niobium–tin conduit and the strengthening fiber respectively; Vc, Vf, Vcu, and Vbz are the volume fractions of conduit, fiber, copper, and bronze; σcu,y and σbz,y are the yield stresses of copper and bronze; and Ec and Ef are the Young's moduli of the conduit and the fiber. Since the copper and bronze of the matrix deform plastically during cooldown, they apply a constant stress equal to their yield stress. The conduit and fiber, however, deform elastically by design. Commercial superconductors manufactured by the bronze process generally have a pre-strain value around 0.2% to 0.4%.

The so-called strain effect causes a reduction in the superconducting properties of many materials, including niobium–tin. The critical strain, the maximum allowable strain above which superconductivity is lost, is given by a formula in which εc is the critical strain, εco is a material-dependent parameter equal to 1.5% in tension (−1.8% in compression) for niobium–tin, B is the applied magnetic field, and Bc2m is the maximum upper critical field of the material. Strain in the niobium–tin causes tetragonal distortions in the crystal lattice, which changes the electron–phonon interaction spectrum. This is equivalent to an increase in disorder in the A15 crystal structure. At high enough strain, around 1%, the niobium–tin conduit will develop fractures and the current-carrying capability of the wire will be irreversibly damaged. In most circumstances, except for high-field conditions, the niobium–tin conduit will fracture before the critical strain is reached. (A numerical sketch of this strain budget is given at the end of this entry.)

Developments and future uses
Hafnium or zirconium added to niobium–tin increases the maximum current density in a magnetic field. This may allow it to be used at 16 tesla for CERN's planned Future Circular Collider.

See also
Niobium–titanium, more ductile than Nb-Sn

References

External links
European Advanced Superconductors
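The numerical sketch referred to in the strain-effects section above combines the quantities quoted there: a pre-strain of 0.3% (mid-range for a bronze-process wire) and εco = 1.5% in tension. The square-root dependence of the critical strain on B/Bc2m and the value Bc2m ≈ 25 T are assumptions made solely so that numbers can be produced; they are not taken from the text, and the exact scaling used in the literature may differ.

    # Illustrative strain budget for a Nb3Sn composite wire.
    # From the text above: pre-strain of a bronze-process wire ~0.2-0.4%,
    # eps_co = 1.5% in tension.  Assumptions made only for this sketch:
    # a square-root dependence of critical strain on B/Bc2m, and Bc2m ~ 25 T.
    import math

    def critical_strain(B, eps_co=0.015, Bc2m=25.0):
        """Assumed scaling: the allowable strain shrinks as B approaches Bc2m."""
        return eps_co * math.sqrt(1.0 - B / Bc2m)

    pre_strain = 0.003            # 0.3%, treated here simply as a magnitude
    for B in (0.0, 12.0, 20.0):   # applied field in tesla
        eps_c = critical_strain(B)
        print(f"B = {B:4.1f} T: critical strain ~ {eps_c:.4f}, "
              f"margin over pre-strain ~ {eps_c - pre_strain:.4f}")

Whatever the exact functional form, the qualitative point matches the text: the higher the applied field, the smaller the strain the wire can tolerate before its superconducting performance degrades.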
Niobium–tin
[ "Physics", "Chemistry", "Materials_science" ]
1,430
[ "Inorganic compounds", "Metallurgy", "Superconductivity", "Alloys", "Intermetallics", "Condensed matter physics", "Superconductors" ]
16,576,182
https://en.wikipedia.org/wiki/U%20Aquarii
U Aquarii, abbreviated U Aqr, is a variable star in the equatorial constellation of Aquarius. It is invisible to the naked eye, having an apparent visual magnitude that ranges from 10.6 down to as low as 15.9. Based on parallax measurements, the distance to this star is approximately . In 1990, W. A. Lawson and associates provided a distance estimate of based on the assumption of a bolometric magnitude of −5 (see the sketch at the end of this entry). It appears to lie several kiloparsecs below the galactic plane, and thus may belong to an old stellar population.

Christian Heinrich Friedrich Peters discovered that U Aquarii is a variable star based on observations made from 1875 to 1878. Its variable star designation was published in Annie Jump Cannon's Second catalogue of variable stars in 1907, at which time the class of variable star it belonged to was still unknown.

The stellar classification of this star is C-Hd, and it is classified as an R Coronae Borealis variable. It is a carbon star with a hydrogen-deficient spectrum that also shows evidence of s-process elements, including overabundances of strontium and yttrium, but no barium. This combination of properties is exceptionally rare; only one other example had been found as of 2012. The elemental abundances are explained as the result of a single neutron exposure event, which is difficult to reconcile with a conjecture that this may be a post-AGB-type star. In 1999, U Aqr was proposed to be a Thorne–Żytkow object, instead of being a simple R Coronae Borealis variable.

References
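The Lawson distance estimate mentioned above follows from comparing an assumed absolute (bolometric) magnitude with the observed brightness through the standard distance modulus m − M = 5 log10(d / 10 pc). The Python sketch below only illustrates that relation: it plugs the visual magnitude at maximum (about 10.6, quoted above) directly into the formula, ignoring the bolometric correction and interstellar extinction that a real determination would require, so its output is not the published figure.

    # Illustrative use of the distance modulus m - M = 5*log10(d / 10 pc).
    # M = -5 is the assumed bolometric magnitude quoted above.  Using the visual
    # magnitude at maximum (~10.6) in place of a bolometrically corrected,
    # extinction-corrected apparent magnitude is a simplification made only for
    # this sketch, so the result is not the published distance estimate.

    def distance_pc(apparent_mag, absolute_mag):
        """Distance in parsecs implied by the distance modulus."""
        return 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

    d = distance_pc(apparent_mag=10.6, absolute_mag=-5.0)
    print(f"rough distance: {d:.0f} pc (~{d / 1000.0:.1f} kpc)")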
U Aquarii
[ "Astronomy" ]
363
[ "Constellations", "Aquarius (constellation)" ]