id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
64,324,611 | https://en.wikipedia.org/wiki/Gauge%20theory%20%28mathematics%29 | In mathematics, and especially differential geometry and mathematical physics, gauge theory is the general study of connections on vector bundles, principal bundles, and fibre bundles. Gauge theory in mathematics should not be confused with the closely related concept of a gauge theory in physics, which is a field theory that admits gauge symmetry. In mathematics, theory means a mathematical theory, encapsulating the general study of a collection of concepts or phenomena, whereas in the physical sense a gauge theory is a mathematical model of some natural phenomenon.
Gauge theory in mathematics is typically concerned with the study of gauge-theoretic equations. These are differential equations involving connections on vector bundles or principal bundles, or involving sections of vector bundles, and so there are strong links between gauge theory and geometric analysis. These equations are often physically meaningful, corresponding to important concepts in quantum field theory or string theory, but also have important mathematical significance. For example, the Yang–Mills equations are a system of partial differential equations for a connection on a principal bundle, and in physics solutions to these equations correspond to vacuum solutions to the equations of motion for a classical field theory, particles known as instantons.
Gauge theory has found uses in constructing new invariants of smooth manifolds, in constructing exotic geometric structures such as hyperkähler manifolds, and in giving alternative descriptions of important structures in algebraic geometry such as moduli spaces of vector bundles and coherent sheaves.
History
Gauge theory has its origins as far back as the formulation of Maxwell's equations describing classical electromagnetism, which may be phrased as a gauge theory with structure group the circle group. Work of Paul Dirac on magnetic monopoles and relativistic quantum mechanics encouraged the idea that bundles and connections were the correct way of phrasing many problems in quantum mechanics. Gauge theory in mathematical physics arose as a significant field of study with the seminal work of Robert Mills and Chen-Ning Yang on so-called Yang–Mills gauge theory, which is now the fundamental model that underpins the standard model of particle physics.
The mathematical investigation of gauge theory has its origins in the work of Michael Atiyah, Isadore Singer, and Nigel Hitchin on the self-duality equations on a Riemannian manifold in four dimensions. In this work the moduli space of self-dual connections (instantons) on Euclidean space was studied, and shown to be of dimension $8k-3$, where $k$ is a positive integer parameter. This linked up with the discovery by physicists of BPST instantons, vacuum solutions to the Yang–Mills equations in four dimensions with $k=1$. Such instantons are defined by a choice of 5 parameters, the centre $x \in \mathbb{R}^4$ and scale $\lambda > 0$, corresponding to the $5$-dimensional moduli space.
Around the same time Atiyah and Richard Ward discovered links between solutions to the self-duality equations and algebraic bundles over the complex projective space $\mathbb{CP}^3$. Another significant early discovery was the development of the ADHM construction by Atiyah, Vladimir Drinfeld, Hitchin, and Yuri Manin. This construction allowed for the solution of the anti-self-duality equations on Euclidean space $\mathbb{R}^4$ from purely linear algebraic data.
Significant breakthroughs encouraging the development of mathematical gauge theory occurred in the early 1980s. At this time the important work of Atiyah and Raoul Bott about the Yang–Mills equations over Riemann surfaces showed that gauge theoretic problems could give rise to interesting geometric structures, spurring the development of infinite-dimensional moment maps, equivariant Morse theory, and relations between gauge theory and algebraic geometry. Important analytical tools in geometric analysis were developed at this time by Karen Uhlenbeck, who studied the analytical properties of connections and curvature proving important compactness results. The most significant advancements in the field occurred due to the work of Simon Donaldson and Edward Witten.
Donaldson used a combination of algebraic geometry and geometric analysis techniques to construct new invariants of four manifolds, now known as Donaldson invariants. With these invariants, novel results such as the existence of topological manifolds admitting no smooth structures, or the existence of many distinct smooth structures on the Euclidean space $\mathbb{R}^4$, could be proved. For this work Donaldson was awarded the Fields Medal in 1986.
Witten similarly observed the power of gauge theory to describe topological invariants, by relating quantities arising from Chern–Simons theory in three dimensions to the Jones polynomial, an invariant of knots. This work and the discovery of Donaldson invariants, as well as novel work of Andreas Floer on Floer homology, inspired the study of topological quantum field theory.
After the discovery of the power of gauge theory to define invariants of manifolds, the field of mathematical gauge theory expanded in popularity. Further invariants were discovered, such as Seiberg–Witten invariants and Vafa–Witten invariants. Strong links to algebraic geometry were realised by the work of Donaldson, Uhlenbeck, and Shing-Tung Yau on the Kobayashi–Hitchin correspondence relating Yang–Mills connections to stable vector bundles. Work of Nigel Hitchin and Carlos Simpson on Higgs bundles demonstrated that moduli spaces arising out of gauge theory could have exotic geometric structures such as that of hyperkähler manifolds, as well as links to integrable systems through the Hitchin system. Links to string theory and mirror symmetry were realised, where gauge theory is essential to phrasing the homological mirror symmetry conjecture and the AdS/CFT correspondence.
Fundamental objects of interest
The fundamental objects of interest in gauge theory are connections on vector bundles and principal bundles. In this section we briefly recall these constructions, and refer to the main articles on them for details. The structures described here are standard within the differential geometry literature, and an introduction to the topic from a gauge-theoretic perspective can be found in the book of Donaldson and Peter Kronheimer.
Principal bundles
The central objects of study in gauge theory are principal bundles and vector bundles. The choice of which to study is essentially arbitrary, as one may pass between them, but principal bundles are the natural objects from the physical perspective to describe gauge fields, and mathematically they more elegantly encode the corresponding theory of connections and curvature for vector bundles associated to them.
A principal bundle with structure group $G$, or a principal $G$-bundle, consists of a quintuple $(P, X, \pi, G, \rho)$ where $\pi : P \to X$ is a smooth fibre bundle with fibre space isomorphic to a Lie group $G$, and $\rho$ represents a free and transitive right group action of $G$ on $P$ which preserves the fibres, in the sense that $\pi(pg) = \pi(p)$ for all $p \in P$, for all $g \in G$. Here $P$ is the total space, and $X$ the base space. Using the right group action, for each $x \in X$ and any choice of $p \in P_x$, the map $g \mapsto pg$ defines a diffeomorphism between the fibre over $x$ and the Lie group $G$ as smooth manifolds. Note however there is no natural way of equipping the fibres of $P$ with the structure of Lie groups, as there is no natural choice of identity element $p \in P_x$ for every $x \in X$.
The simplest examples of principal bundles are given when $G = \mathrm{U}(1)$ is the circle group. In this case the principal bundle has dimension $\dim P = n + 1$, where $n = \dim X$. Another natural example occurs when $P$ is the frame bundle of the tangent bundle of the manifold $X$, or more generally the frame bundle of a vector bundle over $X$. In this case the fibre of $P$ is given by the general linear group $\mathrm{GL}(r, \mathbb{R})$, where $r$ is the rank of the vector bundle.
Since a principal bundle is a fibre bundle, it locally has the structure of a product. That is, there exists an open covering $\{U_\alpha\}$ of $X$ and diffeomorphisms $\varphi_\alpha : P|_{U_\alpha} \to U_\alpha \times G$ commuting with the projections to $U_\alpha$, such that the transition functions $g_{\alpha\beta} : U_\alpha \cap U_\beta \to G$ defined by $\varphi_\alpha \circ \varphi_\beta^{-1}(x, g) = (x, g_{\alpha\beta}(x) g)$ satisfy the cocycle condition
$g_{\alpha\beta}(x) \, g_{\beta\gamma}(x) = g_{\alpha\gamma}(x)$
on any triple overlap $U_\alpha \cap U_\beta \cap U_\gamma$. In order to define a principal bundle it is enough to specify such a choice of transition functions. The bundle is then defined by gluing trivial bundles $U_\alpha \times G$ along the intersections using the transition functions. The cocycle condition ensures precisely that this defines an equivalence relation on the disjoint union $\bigsqcup_\alpha U_\alpha \times G$, and therefore that the quotient space is well-defined. This is known as the fibre bundle construction theorem, and the same process works for any fibre bundle described by transition functions, not just principal bundles or vector bundles.
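As an illustration, the cocycle condition can be checked numerically in the simplest non-trivial case. The sketch below is a hypothetical two-chart example (the degree-one circle bundle over $\mathbb{CP}^1 \cong S^2$, with transition function $z \mapsto z/|z|$ on the overlap), not an example taken from the text above:

```python
import numpy as np

# Two charts U_0, U_1 cover CP^1, overlapping on C \ {0} with coordinate z.
# The degree-one principal U(1)-bundle has transition function g_01(z) = z/|z|.
def g01(z):
    return z / abs(z)       # transition function on U_0 ∩ U_1, valued in U(1)

def g10(z):
    return abs(z) / z       # the inverse transition function

# With only two charts, the cocycle condition reduces to g_01 * g_10 = 1
# at every point of the overlap (all triple overlaps involve a repeated index).
for z in [1 + 1j, -2 + 0.5j, 0.3 - 4j]:
    assert np.isclose(g01(z) * g10(z), 1.0)
print("cocycle condition verified on sample points of the overlap")
```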
Notice that a choice of local section $s_\alpha : U_\alpha \to P$ satisfying $\pi \circ s_\alpha = \operatorname{Id}_{U_\alpha}$ is an equivalent method of specifying a local trivialisation map. Namely, one can define $\varphi_\alpha(p) = (\pi(p), g_p)$, where $g_p \in G$ is the unique group element such that $p = s_\alpha(\pi(p)) \cdot g_p$.
Vector bundles
A vector bundle is a triple $(E, X, \pi)$, where $\pi : E \to X$ is a fibre bundle with fibre given by a vector space $\mathbb{K}^r$, where $\mathbb{K}$ is a field. The number $r$ is the rank of the vector bundle. Again one has a local description of a vector bundle in terms of a trivialising open cover. If $\{U_\alpha\}$ is such a cover, then under the isomorphism
$\varphi_\alpha : E|_{U_\alpha} \to U_\alpha \times \mathbb{K}^r$
one obtains distinguished local sections of $E$ corresponding to the coordinate basis vectors of $\mathbb{K}^r$, denoted $e_1, \dots, e_r$. These are defined by the equation
$e_i(x) = \varphi_\alpha^{-1}(x, \hat{e}_i),$
where $\hat{e}_i$ denotes the $i$-th standard basis vector of $\mathbb{K}^r$.
To specify a trivialisation it is therefore equivalent to give a collection of local sections which are everywhere linearly independent, and use this expression to define the corresponding isomorphism. Such a collection of local sections is called a frame.
Similarly to principal bundles, one obtains transition functions $g_{\alpha\beta} : U_\alpha \cap U_\beta \to \mathrm{GL}(r, \mathbb{K})$ for a vector bundle, defined by
$\varphi_\alpha \circ \varphi_\beta^{-1}(x, v) = (x, g_{\alpha\beta}(x) v).$
If one takes these transition functions and uses them to construct the local trivialisation for a principal bundle with fibre equal to the structure group $\mathrm{GL}(r, \mathbb{K})$, one obtains exactly the frame bundle of $E$, a principal $\mathrm{GL}(r, \mathbb{K})$-bundle.
Associated bundles
Given a principal $G$-bundle $P$ and a representation $\rho$ of $G$ on a vector space $V$, one can construct an associated vector bundle $E = P \times_\rho V$ with fibre the vector space $V$. To define this vector bundle, one considers the right action on the product $P \times V$ defined by $(p, v) \cdot g = (pg, \rho(g^{-1}) v)$ and defines $E$ as the quotient space with respect to this action.
In terms of transition functions the associated bundle can be understood more simply. If the principal bundle $P$ has transition functions $g_{\alpha\beta}$ with respect to a local trivialisation $\{U_\alpha\}$, then one constructs the associated vector bundle using the transition functions $\rho(g_{\alpha\beta})$.
The associated bundle construction can be performed for any fibre space $F$, not just a vector space, provided $\rho : G \to \operatorname{Diff}(F)$ is a group homomorphism. One key example is the capital A adjoint bundle $\operatorname{Ad}(P)$ with fibre $G$, constructed using the group homomorphism $\operatorname{Ad} : G \to \operatorname{Aut}(G)$ defined by conjugation, $\operatorname{Ad}(g)(h) = g h g^{-1}$. Note that despite having fibre $G$, the adjoint bundle $\operatorname{Ad}(P)$ is neither a principal bundle, nor isomorphic as a fibre bundle to $P$ itself. For example, if $G$ is Abelian, then the conjugation action is trivial and $\operatorname{Ad}(P)$ will be the trivial $G$-fibre bundle over $X$ regardless of whether or not $P$ is trivial as a fibre bundle. Another key example is the lowercase a adjoint bundle $\operatorname{ad}(P) = P \times_{\operatorname{ad}} \mathfrak{g}$, constructed using the adjoint representation $\operatorname{ad} : G \to \mathrm{GL}(\mathfrak{g})$, where $\mathfrak{g}$ is the Lie algebra of $G$.
Gauge transformations
A gauge transformation of a vector bundle or principal bundle is an automorphism of this object. For a principal bundle, a gauge transformation consists of a diffeomorphism $\varphi : P \to P$ commuting with the projection operator $\pi$ and the right action $\rho$. For a vector bundle a gauge transformation is similarly defined by a diffeomorphism $\varphi : E \to E$ commuting with the projection operator $\pi$ which is a linear isomorphism of vector spaces on each fibre.
The gauge transformations (of $P$ or $E$) form a group under composition, called the gauge group, typically denoted $\mathcal{G}$. This group can be characterised as the space of global sections of the adjoint bundle, $\mathcal{G} = \Gamma(\operatorname{Ad}(P))$, or in the case of a vector bundle, $\mathcal{G} = \Gamma(\operatorname{Ad}(\operatorname{Fr}(E)))$, where $\operatorname{Fr}(E)$ denotes the frame bundle.
One can also define a local gauge transformation as a local bundle isomorphism over a trivialising open subset $U_\alpha$. This can be uniquely specified as a map $g_\alpha : U_\alpha \to G$ (taking $G = \mathrm{GL}(r, \mathbb{K})$ in the case of vector bundles), where the induced bundle isomorphism is defined with respect to the local trivialisation by
$\varphi(x, h) = (x, g_\alpha(x) h),$
and similarly for vector bundles.
Notice that given two local trivialisations of a principal bundle over the same open subset $U_\alpha$, the transition function between them is precisely a local gauge transformation $g_\alpha : U_\alpha \to G$. That is, local gauge transformations are changes of local trivialisation for principal bundles or vector bundles.
Connections on principal bundles
A connection on a principal bundle is a method of connecting nearby fibres so as to capture the notion of a section being constant or horizontal. Since the fibres of an abstract principal bundle are not naturally identified with each other, or indeed with the fibre space itself, there is no canonical way of specifying which sections are constant. A choice of local trivialisation leads to one possible choice, where if $P$ is trivial over a set $U$, then a local section could be said to be horizontal if it is constant with respect to this trivialisation, in the sense that $\varphi(s(x)) = (x, g)$ for all $x \in U$ and one fixed $g \in G$. In particular a trivial principal bundle comes equipped with a trivial connection.
In general a connection is given by a choice of horizontal subspaces $H_p \subset T_p P$ of the tangent spaces at every point $p \in P$, such that at every point one has $T_p P = H_p \oplus V_p$, where $V = \ker(d\pi)$ is the vertical bundle. These horizontal subspaces must be compatible with the principal bundle structure by requiring that the horizontal distribution $H$ is invariant under the right group action: $H_{pg} = d(R_g)(H_p)$, where $R_g$ denotes right multiplication by $g$. A section $s$ is said to be horizontal if $T(s(X)) \subset H$, where $X$ is identified with its image $s(X)$ inside $P$, which is a submanifold of $P$ with tangent bundle $T(s(X))$. Given a vector field $v$ on $X$, there is a unique horizontal lift $v^\#$. The curvature of the connection $H$ is given by the two-form $F \in \Omega^2(X; \operatorname{ad}(P))$ defined by
$F(u, v) = [u^\#, v^\#] - [u, v]^\#,$
where $[\cdot, \cdot]$ is the Lie bracket of vector fields. Since the vertical bundle $V$ consists of the tangent spaces to the fibres of $P$ and these fibres are isomorphic to the Lie group $G$ whose tangent bundle is canonically identified with $G \times \mathfrak{g}$, there is a unique Lie algebra-valued two-form $F \in \Omega^2(P; \mathfrak{g})$ corresponding to the curvature. From the perspective of the Frobenius integrability theorem, the curvature measures precisely the extent to which the horizontal distribution $H$ fails to be integrable, and therefore the extent to which $X$ fails to embed inside $P$ as a horizontal submanifold locally.
The choice of horizontal subspaces may be equivalently expressed by a projection operator $v : TP \to V$ which is equivariant in the correct sense, called the connection one-form. For a horizontal distribution $H$, this is defined by $v(X_H + X_V) = X_V$, where $X = X_H + X_V$ denotes the decomposition of a tangent vector with respect to the direct sum decomposition $TP = H \oplus V$. Due to the equivariance, this projection one-form may be taken to be Lie algebra-valued, giving some $\omega \in \Omega^1(P; \mathfrak{g})$.
A local trivialisation for $P$ is equivalently given by a local section $s_\alpha : U_\alpha \to P$, and the connection one-form and curvature can be pulled back along this smooth map. This gives the local connection one-form $A_\alpha = s_\alpha^* \omega$, which takes values in the adjoint bundle $\operatorname{ad}(P)$. Cartan's structure equation says that the curvature may be expressed in terms of the local one-form by the expression
$F_\alpha = d A_\alpha + \frac{1}{2} [A_\alpha \wedge A_\alpha],$
where we use the Lie bracket on the Lie algebra bundle $\operatorname{ad}(P)$, which is identified with $U_\alpha \times \mathfrak{g}$ on the local trivialisation $U_\alpha$.
Under a local gauge transformation $g_\alpha : U_\alpha \to G$, so that the local section changes by $s_\alpha' = s_\alpha \cdot g_\alpha$, the local connection one-form transforms by the expression
$A_\alpha' = \operatorname{Ad}(g_\alpha^{-1}) \circ A_\alpha + g_\alpha^* \mu,$
where $\mu$ denotes the Maurer–Cartan form of the Lie group $G$. In the case where $G$ is a matrix Lie group, one has the simpler expression
$A_\alpha' = g_\alpha^{-1} A_\alpha g_\alpha + g_\alpha^{-1} \, d g_\alpha.$
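From this transformation rule one can check directly, in the matrix group case, that the curvature transforms covariantly; the following is a standard computation, sketched here with $g = g_\alpha$ and the local formula $F_A = dA + A \wedge A$:

```latex
% Using A' = g^{-1} A g + g^{-1} dg and d(g^{-1}) = -g^{-1}(dg)g^{-1}:
\begin{aligned}
F_{A'} &= dA' + A' \wedge A' \\
  &= \bigl( g^{-1}(dA)g - g^{-1}dg\,g^{-1} \wedge A\,g
     - g^{-1} A \wedge dg - g^{-1}dg\,g^{-1} \wedge dg \bigr) \\
  &\quad + \bigl( g^{-1} A \wedge A\,g + g^{-1} A \wedge dg
     + g^{-1}dg\,g^{-1} \wedge A\,g + g^{-1}dg\,g^{-1} \wedge dg \bigr) \\
  &= g^{-1} (dA + A \wedge A)\, g = g^{-1} F_A\, g,
\end{aligned}
% so the curvature is a well-defined two-form with values in the adjoint bundle.
```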
Connections on vector bundles
A connection on a vector bundle may be specified similarly to the case for principal bundles above, known as an Ehresmann connection. However vector bundle connections admit a more powerful description in terms of a differential operator. A connection on a vector bundle $E$ is a choice of $\mathbb{K}$-linear differential operator
$\nabla : \Gamma(E) \to \Gamma(E \otimes T^* X)$
such that
$\nabla(f s) = f \nabla s + s \otimes d f$
for all $f \in C^\infty(X)$ and sections $s \in \Gamma(E)$. The covariant derivative of a section $s$ in the direction of a vector field $v$ is defined by
$\nabla_v s = \langle \nabla s, v \rangle,$
where on the right we use the natural pairing between $T^* X$ and the vector field $v$. This is a new section of the vector bundle $E$, thought of as the derivative of $s$ in the direction of $v$. The operator $\nabla_v$ is the covariant derivative operator in the direction of $v$. The curvature of $\nabla$ is given by the operator $F_\nabla \in \Omega^2(X; \operatorname{End}(E))$ with values in the endomorphism bundle, defined by
$F_\nabla(u, v) = \nabla_u \nabla_v - \nabla_v \nabla_u - \nabla_{[u, v]}.$
In a local trivialisation the exterior derivative $d$ acts as a trivial connection (corresponding in the principal bundle picture to the trivial connection discussed above). Namely for a local frame $e_1, \dots, e_r$ one defines
$d(s^i e_i) = d s^i \otimes e_i,$
where here we have used Einstein notation for a local section $s = s^i e_i$.
Any two connections $\nabla_1, \nabla_2$ differ by an $\operatorname{End}(E)$-valued one-form $A$. To see this, observe that the difference $A = \nabla_1 - \nabla_2$ is $C^\infty(X)$-linear:
$A(f s) = \nabla_1(f s) - \nabla_2(f s) = f \nabla_1 s + s \otimes d f - f \nabla_2 s - s \otimes d f = f A(s).$
In particular since every vector bundle admits a connection (using partitions of unity and the local trivial connections), the set of connections on a vector bundle has the structure of an infinite-dimensional affine space modelled on the vector space $\Omega^1(X; \operatorname{End}(E))$ of endomorphism-valued one-forms. This space is commonly denoted $\mathcal{A}$.
Applying this observation locally, every connection over a trivialising subset $U_\alpha$ differs from the trivial connection $d$ by some local connection one-form $A_\alpha \in \Omega^1(U_\alpha; \operatorname{End}(E))$, with the property that $\nabla = d + A_\alpha$ on $U_\alpha$. In terms of this local connection form, the curvature may be written as
$F_\nabla = d A_\alpha + A_\alpha \wedge A_\alpha,$
where the wedge product occurs on the one-form component, and one composes endomorphisms on the endomorphism component. To link back to the theory of principal bundles, notice that $d A_\alpha + A_\alpha \wedge A_\alpha = d A_\alpha + \frac{1}{2}[A_\alpha \wedge A_\alpha]$, where on the right we now perform wedge of one-forms and commutator of endomorphisms.
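As a concrete check of the local formula, the sketch below (hypothetical components, abelian structure group $\mathrm{U}(1)$, where the $A_\alpha \wedge A_\alpha$ term vanishes) computes $F = dA$ on $\mathbb{R}^2$ symbolically and verifies that it is unchanged under a gauge transformation $A \mapsto A + d\lambda$:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# sample components of a connection one-form A = A_x dx + A_y dy on R^2
Ax = -y / (1 + x**2 + y**2)
Ay = x / (1 + x**2 + y**2)

def curvature(Ax, Ay):
    # F = dA = (d_x A_y - d_y A_x) dx ∧ dy; return the coefficient
    return sp.simplify(sp.diff(Ay, x) - sp.diff(Ax, y))

lam = sp.sin(x * y)                                   # gauge parameter λ
Ax2, Ay2 = Ax + sp.diff(lam, x), Ay + sp.diff(lam, y)

# the curvature is invariant because d(dλ) = 0
assert sp.simplify(curvature(Ax, Ay) - curvature(Ax2, Ay2)) == 0
print("F =", curvature(Ax, Ay), "dx∧dy (unchanged by the gauge transformation)")
```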
Under a gauge transformation $u$ of the vector bundle $E$, a connection $\nabla$ transforms into a connection $u \cdot \nabla$ by the conjugation $u \cdot \nabla = u \circ \nabla \circ u^{-1}$. The difference is $u \cdot \nabla - \nabla = -(\nabla u) u^{-1}$, where here $\nabla$ is acting on the endomorphisms of $E$. Under a local gauge transformation $g_\alpha$ one obtains the same expression
$A_\alpha' = g_\alpha^{-1} A_\alpha g_\alpha + g_\alpha^{-1} \, d g_\alpha$
as in the case of principal bundles.
Induced connections
A connection on a principal bundle induces connections on associated vector bundles. One way to see this is in terms of the local connection forms described above. Namely, if a principal bundle connection has local connection forms $A_\alpha$, and $\rho : G \to \mathrm{GL}(V)$ is a representation of $G$ defining an associated vector bundle $E = P \times_\rho V$, then the induced local connection one-forms are defined by
$A_\alpha^E = \rho_*(A_\alpha).$
Here $\rho_* = d\rho : \mathfrak{g} \to \mathfrak{gl}(V)$ is the induced Lie algebra homomorphism from $\rho$, and we use the fact that this map induces a homomorphism of vector bundles $\rho_* : \operatorname{ad}(P) \to \operatorname{End}(E)$.
The induced curvature can be simply defined by
$F^E_\alpha = \rho_*(F_\alpha).$
Here one sees how the local expressions for curvature are related for principal bundles and vector bundles, as the Lie bracket on the Lie algebra $\mathfrak{g}$ is sent to the commutator of endomorphisms of $V$ under the Lie algebra homomorphism $\rho_*$.
Space of connections
The central object of study in mathematical gauge theory is the space of connections on a vector bundle or principal bundle. This is an infinite-dimensional affine space $\mathcal{A}$ modelled on the vector space $\Omega^1(X; \operatorname{ad}(P))$ (or $\Omega^1(X; \operatorname{End}(E))$ in the case of vector bundles). Two connections $A_1, A_2$ are said to be gauge equivalent if there exists a gauge transformation $u$ such that $A_2 = u \cdot A_1$. Gauge theory is concerned with gauge equivalence classes of connections. In some sense gauge theory is therefore concerned with the properties of the quotient space $\mathcal{A} / \mathcal{G}$, which is in general neither a Hausdorff space nor a smooth manifold.
Many interesting properties of the base manifold $X$ can be encoded in the geometry and topology of moduli spaces of connections on principal bundles and vector bundles over $X$. Invariants of $X$, such as Donaldson invariants or Seiberg–Witten invariants, can be obtained by computing numerical quantities derived from moduli spaces of connections over $X$. The most famous application of this idea is Donaldson's theorem, which uses the moduli space of Yang–Mills connections on a principal $\mathrm{SU}(2)$-bundle over a simply connected four-manifold to study its intersection form. For this work Donaldson was awarded a Fields Medal.
Notational conventions
There are various notational conventions used for connections on vector bundles and principal bundles which will be summarised here.
The letter $A$ is the most common symbol used to represent a connection on a vector bundle or principal bundle. It comes from the fact that if one chooses a fixed reference connection $\nabla_0$ among all connections, then any other connection may be written $\nabla = \nabla_0 + A$ for some unique one-form $A$. It also comes from the use of $A$ to denote the local form of the connection on a vector bundle, which subsequently comes from the electromagnetic potential $A$ in physics. Sometimes the symbol $\omega$ is also used to refer to the connection form, usually on a principal bundle, and usually in this case $\omega$ refers to the global connection one-form on the total space of the principal bundle, rather than the corresponding local connection forms. This convention is usually avoided in the mathematical literature as it often clashes with the use of $\omega$ for a Kähler form when the underlying manifold is a Kähler manifold.
The symbol $\nabla$ is most commonly used to represent a connection on a vector bundle as a differential operator, and in that sense is used interchangeably with the letter $A$. It is also used to refer to the covariant derivative operators $\nabla_v$. Alternative notation for the connection operator and covariant derivative operators is $\nabla^A$, to emphasize the dependence on the choice of $A$, or $d_A$ or $D_A$.
The operator $d_A$ most commonly refers to the exterior covariant derivative of a connection $A$ (and so is sometimes written $d_\nabla$ for a connection $\nabla$). Since the exterior covariant derivative in degree 0 is the same as the regular covariant derivative, the connection or covariant derivative itself is often denoted $d_A$ instead of $\nabla$.
The symbol $F$ or $F_A$ is most commonly used to refer to the curvature of a connection. When the connection is referred to by $\nabla$, the curvature is referred to by $F_\nabla$ rather than $F_A$. Other conventions involve $R$ or $\Omega$, by analogy with the Riemannian curvature tensor in Riemannian geometry, which is denoted by $R$.
The letter $H$ is often used to denote a principal bundle connection or Ehresmann connection when emphasis is to be placed on the horizontal distribution $H \subset TP$. In this case the vertical projection operator corresponding to $H$ (the connection one-form on $P$) is usually denoted $\omega$, or $v$. Using this convention the curvature is sometimes denoted $F_H$ to emphasize the dependence, and $F_H$ may refer to either the curvature operator on the total space $P$, or the curvature on the base $X$.
The Lie algebra adjoint bundle is usually denoted $\operatorname{ad}(P)$, and the Lie group adjoint bundle by $\operatorname{Ad}(P)$. This disagrees with the convention in the theory of Lie groups, where $\operatorname{Ad}$ refers to the representation of $G$ on $\mathfrak{g}$, and $\operatorname{ad}$ refers to the Lie algebra representation of $\mathfrak{g}$ on itself by the Lie bracket. In Lie group theory the conjugation action (which defines the bundle $\operatorname{Ad}(P)$) is often denoted by $\Psi$.
Dictionary of mathematical and physical terminology
The mathematical and physical fields of gauge theory involve the study of the same objects, but use different terminology to describe them. The example below demonstrates how some of these terms relate to each other.
As a demonstration of this dictionary, consider an interacting term of an electron-positron particle field and the electromagnetic field in the Lagrangian of quantum electrodynamics:
$\mathcal{L} = \bar{\psi} (i \gamma^\mu D_\mu - m) \psi - \frac{1}{4} F_{\mu\nu} F^{\mu\nu}.$
Mathematically this might be rewritten
$\mathcal{L} = \langle \psi, \slashed{D}_A \psi \rangle + \| F_A \|^2,$
where $A$ is a connection on a principal $\mathrm{U}(1)$-bundle $P$, $\psi$ is a section of an associated spinor bundle and $\slashed{D}_A$ is the induced Dirac operator of the induced covariant derivative $\nabla_A$ on this associated bundle. The first term is an interacting term in the Lagrangian between the spinor field $\psi$ (the field representing the electron-positron) and the gauge field $A$ (representing the electromagnetic field). The second term is the regular Yang–Mills functional which describes the basic non-interacting properties of the electromagnetic field (the connection $A$). The term of the form $\nabla_A \psi$ is an example of what in physics is called minimal coupling, that is, the simplest possible interaction between a matter field $\psi$ and a gauge field $A$.
Yang–Mills theory
The predominant theory that occurs in mathematical gauge theory is Yang–Mills theory. This theory involves the study of connections which are critical points of the Yang–Mills functional defined by
$\operatorname{YM}(A) = \int_X \| F_A \|^2 \, d\mathrm{vol}_g,$
where $(X, g)$ is an oriented Riemannian manifold with $d\mathrm{vol}_g$ the Riemannian volume form and $\|\cdot\|$ an $L^2$-norm on the adjoint bundle $\operatorname{ad}(P)$. This functional is the square of the $L^2$-norm of the curvature of the connection $A$, so connections which are critical points of this function are those with curvature as small as possible (or higher local minima of $\operatorname{YM}$).
These critical points are characterised as solutions of the associated Euler–Lagrange equations, the Yang–Mills equations
$d_A \star F_A = 0,$
where $d_A$ is the induced exterior covariant derivative of $\nabla_A$ on $\operatorname{ad}(P)$ and $\star$ is the Hodge star operator. Such solutions are called Yang–Mills connections and are of significant geometric interest.
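A sketch of the variational computation behind this characterisation (signs and inner products depend on conventions not fixed in the text above):

```latex
% First variation of YM along A_t = A + t a, with a ∈ Ω¹(X; ad(P)),
% using F_{A + ta} = F_A + t d_A a + t² a ∧ a:
\frac{d}{dt}\Big|_{t=0} \operatorname{YM}(A + t a)
  = 2 \int_X \langle d_A a, F_A \rangle \, d\mathrm{vol}_g
  = 2 \int_X \langle a, d_A^* F_A \rangle \, d\mathrm{vol}_g ,
% which vanishes for every a exactly when d_A^* F_A = 0; applying the
% Hodge star turns this into the equation d_A ⋆ F_A = 0 quoted above.
```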
The Bianchi identity asserts that for any connection, $d_A F_A = 0$. By analogy, for differential forms a harmonic form $\alpha$ is characterised by the condition
$d\alpha = 0, \qquad d \star \alpha = 0.$
If one defined a harmonic connection by the condition that
$d_A F_A = 0, \qquad d_A \star F_A = 0,$
then the study of Yang–Mills connections is similar in nature to that of harmonic forms. Hodge theory provides a unique harmonic representative of every de Rham cohomology class $[\alpha]$. Replacing a cohomology class by a gauge orbit $\{u \cdot A \mid u \in \mathcal{G}\}$, the study of Yang–Mills connections can be seen as trying to find unique representatives for each orbit in the quotient space $\mathcal{A}/\mathcal{G}$ of connections modulo gauge transformations.
Self-duality and anti-self-duality equations
In dimension four the Hodge star operator sends two-forms to two-forms, $\star : \Omega^2 \to \Omega^2$, and squares to the identity operator, $\star^2 = \operatorname{Id}$. Thus the Hodge star operating on two-forms has eigenvalues $\pm 1$, and the two-forms on an oriented Riemannian four-manifold split as a direct sum
$\Omega^2 = \Omega^2_+ \oplus \Omega^2_-$
into the self-dual and anti-self-dual two-forms, given by the $+1$ and $-1$ eigenspaces of the Hodge star operator respectively. That is, $\alpha \in \Omega^2$ is self-dual if $\star\alpha = \alpha$, and anti-self-dual if $\star\alpha = -\alpha$, and every differential two-form admits a splitting $\alpha = \alpha_+ + \alpha_-$ into self-dual and anti-self-dual parts.
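This decomposition can be made concrete in coordinates. The sketch below builds the Hodge star on $\Lambda^2 \mathbb{R}^4$ (flat Euclidean metric and standard orientation, an illustrative choice) as a $6 \times 6$ matrix and exhibits the two three-dimensional eigenspaces:

```python
import numpy as np

# Hodge star on 2-forms of Euclidean R^4 in the ordered basis
# (e12, e13, e14, e23, e24, e34), with orientation e1∧e2∧e3∧e4.
star = np.zeros((6, 6))
star[5, 0] = star[0, 5] = 1.0    # *e12 =  e34,  *e34 =  e12
star[4, 1] = star[1, 4] = -1.0   # *e13 = -e24,  *e24 = -e13
star[3, 2] = star[2, 3] = 1.0    # *e14 =  e23,  *e23 =  e14

assert np.allclose(star @ star, np.eye(6))   # star squares to the identity

print(np.linalg.eigvalsh(star))  # three -1's and three +1's: Λ² = Λ²₋ ⊕ Λ²₊

# any 2-form splits into self-dual and anti-self-dual parts
alpha = np.random.randn(6)
alpha_plus = (alpha + star @ alpha) / 2
alpha_minus = (alpha - star @ alpha) / 2
assert np.allclose(star @ alpha_plus, alpha_plus)
assert np.allclose(star @ alpha_minus, -alpha_minus)
```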
If the curvature $F_A$ of a connection on a principal bundle over a four-manifold is self-dual or anti-self-dual, then by the Bianchi identity $d_A \star F_A = \pm d_A F_A = 0$, so the connection is automatically a Yang–Mills connection. The equation
$\star F_A = \pm F_A$
is a first order partial differential equation for the connection $A$, and therefore is simpler to study than the full second order Yang–Mills equation. The equation $\star F_A = F_A$ is called the self-duality equation, and the equation $\star F_A = -F_A$ is called the anti-self-duality equation, and solutions to these equations are self-dual connections or anti-self-dual connections respectively.
Dimensional reduction
One way to derive new and interesting gauge-theoretic equations is to apply the process of dimensional reduction to the Yang–Mills equations. This process involves taking the Yang–Mills equations over a manifold (usually taken to be the Euclidean space $\mathbb{R}^4$), and imposing that the solutions of the equations be invariant under a group of translational or other symmetries. Through this process the Yang–Mills equations lead to the Bogomolny equations describing monopoles on $\mathbb{R}^3$, Hitchin's equations describing Higgs bundles on Riemann surfaces, and the Nahm equations on real intervals, by imposing symmetry under translations in one, two, and three directions respectively.
Gauge theory in one and two dimensions
Here the Yang–Mills equations are discussed in the case when the base manifold is of low dimension. In this setting the equations simplify dramatically due to the fact that in dimension one there are no two-forms, and in dimension two the Hodge star operator on two-forms acts as $\star : \Omega^2 \to \Omega^0$.
Yang–Mills theory
One may study the Yang–Mills equations directly on a manifold of dimension two. The theory of Yang–Mills equations when the base manifold is a compact Riemann surface was carried out by Michael Atiyah and Raoul Bott. In this case the moduli space of Yang–Mills connections over a complex vector bundle admits various rich interpretations, and the theory serves as the simplest case to understand the equations in higher dimensions. The Yang–Mills equations in this case become
$\star F_A = \lambda(E) \operatorname{Id}$
for some topological constant $\lambda(E)$ depending on $E$. Such connections are called projectively flat, and in the case where the vector bundle is topologically trivial (so $\lambda(E) = 0$) they are precisely the flat connections.
When the rank and degree of the vector bundle are coprime, the moduli space of Yang–Mills connections is smooth and has a natural structure of a symplectic manifold. Atiyah and Bott observed that since the Yang–Mills connections are projectively flat, their holonomy gives projective unitary representations of the fundamental group of the surface, so that this space has an equivalent description as a moduli space of projective unitary representations of the fundamental group of the Riemann surface, a character variety. The theorem of Narasimhan and Seshadri gives an alternative description of this space of representations as the moduli space of stable holomorphic vector bundles which are smoothly isomorphic to $E$. Through this isomorphism the moduli space of Yang–Mills connections gains a complex structure, which interacts with the symplectic structure of Atiyah and Bott to make it a compact Kähler manifold.
Simon Donaldson gave an alternative proof of the theorem of Narasimhan and Seshadri that directly passed from Yang–Mills connections to stable holomorphic structures. Atiyah and Bott used this rephrasing of the problem to illuminate the intimate relationship between the extremal Yang–Mills connections and the stability of the vector bundles, interpreting the curvature map $A \mapsto F_A$ as an infinite-dimensional moment map for the action of the gauge group $\mathcal{G}$. This observation phrases the Narasimhan–Seshadri theorem as a kind of infinite-dimensional version of the Kempf–Ness theorem from geometric invariant theory, relating critical points of the norm squared of the moment map (in this case Yang–Mills connections) to stable points on the corresponding algebraic quotient (in this case stable holomorphic vector bundles). This idea has been subsequently very influential in gauge theory and complex geometry since its introduction.
Nahm equations
The Nahm equations, introduced by Werner Nahm, are obtained as the dimensional reduction of the anti-self-duality equations in four dimensions to one dimension, by imposing translational invariance in three directions. Concretely, one requires that the connection form does not depend on the coordinates $x_1, x_2, x_3$. In this setting the Nahm equations become a system of equations on an interval for four matrices $T_0, T_1, T_2, T_3$ satisfying the triple of equations
$\frac{d T_i}{d t} + [T_0, T_i] + \frac{1}{2} \sum_{j,k} \epsilon_{ijk} [T_j, T_k] = 0, \qquad i = 1, 2, 3.$
It was shown by Nahm that the solutions to these equations (which can be obtained fairly easily as they are a system of ordinary differential equations) can be used to construct solutions to the Bogomolny equations, which describe monopoles on $\mathbb{R}^3$. Nigel Hitchin showed that solutions to the Bogomolny equations could be used to construct solutions to the Nahm equations, showing solutions to the two problems were equivalent. Donaldson further showed that solutions to the Nahm equations are equivalent to rational maps of degree $k$ from the complex projective line $\mathbb{CP}^1$ to itself, where $k$ is the charge of the corresponding magnetic monopole.
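Because the reduced system is a set of ordinary differential equations, it can be integrated directly. The sketch below (in the gauge $T_0 = 0$, with illustrative su(2) initial data, not an example from the text) steps the flow numerically:

```python
import numpy as np

def comm(a, b):
    return a @ b - b @ a

# in the gauge T_0 = 0, and with the sign convention displayed above,
# the Nahm equations reduce to dT_1/dt = -[T_2, T_3] and cyclic permutations
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
T = [0.3j * s1, 0.5j * s2, 0.7j * s3]   # anti-Hermitian initial matrices

dt = 1e-4
for _ in range(1000):                    # explicit Euler steps along the flow
    dT = [-comm(T[1], T[2]), -comm(T[2], T[0]), -comm(T[0], T[1])]
    T = [Ti + dt * dTi for Ti, dTi in zip(T, dT)]

# the right-hand sides are commutators, so traces are conserved (zero here)
print([abs(np.trace(Ti)) < 1e-10 for Ti in T])
```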
The moduli space of solutions to the Nahm equations has the structure of a hyperkähler manifold.
Hitchin's equations and Higgs bundles
Hitchin's equations, introduced by Nigel Hitchin, are obtained as the dimensional reduction of the self-duality equations in four dimensions to two dimensions by imposing translation invariance in two directions. In this setting the two extra connection form components can be combined into a single complex-valued endomorphism $\Phi$, and when phrased in this way the equations become conformally invariant and therefore are natural to study on a compact Riemann surface rather than $\mathbb{R}^2$. Hitchin's equations state that for a pair $(A, \Phi)$ on a complex vector bundle $E \to \Sigma$ where $\Phi \in \Omega^{1,0}(\operatorname{End}(E))$, that
$F_A + [\Phi, \Phi^*] = 0, \qquad \bar\partial_A \Phi = 0,$
where $\bar\partial_A$ is the $(0,1)$-component of $d_A$. Solutions of Hitchin's equations are called Hitchin pairs.
Whereas solutions to the Yang–Mills equations on a compact Riemann surface correspond to projective unitary representations of the surface group, Hitchin showed that solutions to Hitchin's equations correspond to projective complex representations of the surface group. The moduli space of Hitchin pairs naturally has (when the rank and degree of the bundle are coprime) the structure of a Kähler manifold. Through an analogue of Atiyah and Bott's observation about the Yang–Mills equations, Hitchin showed that Hitchin pairs correspond to so-called stable Higgs bundles, where a Higgs bundle is a pair $(E, \Phi)$ in which $E$ is a holomorphic vector bundle and $\Phi$ is a holomorphic endomorphism of $E$ with values in the canonical bundle $K_\Sigma$ of the Riemann surface $\Sigma$. This is shown through an infinite-dimensional moment map construction, and this moduli space of Higgs bundles also has a complex structure, which is different to that coming from the Hitchin pairs, leading to two complex structures on the moduli space of Higgs bundles. These combine to give a third making this moduli space a hyperkähler manifold.
Hitchin's work was subsequently vastly generalised by Carlos Simpson, and the correspondence between solutions to Hitchin's equations and Higgs bundles over an arbitrary Kähler manifold is known as the nonabelian Hodge theorem.
Gauge theory in three dimensions
Monopoles
The dimensional reduction of the Yang–Mills equations to three dimensions by imposing translational invariance in one direction gives rise to the Bogomolny equations for a pair $(A, \Phi)$, where $\Phi : \mathbb{R}^3 \to \mathfrak{g}$ is a family of matrices (the Higgs field). The equations are
$F_A = \star d_A \Phi.$
When the principal bundle has structure group the circle group, solutions to the Bogomolny equations model the Dirac monopole describing a magnetic monopole in classical electromagnetism. The work of Nahm and Hitchin shows that when the structure group is the special unitary group $\mathrm{SU}(2)$, solutions to the monopole equations correspond to solutions to the Nahm equations, and by work of Donaldson these further correspond to rational maps from $\mathbb{CP}^1$ to itself of degree $k$, where $k$ is the charge of the monopole. This charge is defined as the limit
$k = \lim_{R \to \infty} \frac{1}{4\pi} \int_{S_R^2} \langle F_A, \Phi \rangle$
of the integral of the pairing $\langle F_A, \Phi \rangle$ over spheres $S_R^2$ in $\mathbb{R}^3$ of increasing radius $R$.
Chern–Simons theory
Chern–Simons theory in 3 dimensions is a topological quantum field theory with an action functional proportional to the integral of the Chern–Simons form, a three-form defined by
$\operatorname{Tr}\left( A \wedge d A + \frac{2}{3} A \wedge A \wedge A \right).$
Classical solutions to the Euler–Lagrange equations of the Chern–Simons functional on a closed 3-manifold $X$ correspond to flat connections on the principal $G$-bundle $P$. However, when $X$ has a boundary the situation becomes more complicated. Chern–Simons theory was used by Edward Witten to express the Jones polynomial, a knot invariant, in terms of the vacuum expectation value of a Wilson loop in Chern–Simons theory on the three-sphere $S^3$. This was a stark demonstration of the power of gauge theoretic problems to provide new insight in topology, and was one of the first instances of a topological quantum field theory.
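The correspondence with flat connections follows from a short variational computation, sketched here for a matrix group with the trace pairing (conventions and normalisations vary):

```latex
% Vary A by a ∈ Ω¹(X; 𝔤) on a closed 3-manifold X; integrating by parts
% with Stokes' theorem produces no boundary terms since ∂X = ∅:
\frac{d}{dt}\Big|_{t=0} \int_X \operatorname{Tr}\!\Bigl( (A + t a) \wedge d(A + t a)
   + \tfrac{2}{3} (A + t a)^{\wedge 3} \Bigr)
 = 2 \int_X \operatorname{Tr}\bigl( a \wedge (dA + A \wedge A) \bigr)
 = 2 \int_X \operatorname{Tr}( a \wedge F_A ),
% which vanishes for all a precisely when F_A = 0, i.e. A is flat.
```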
In the quantization of the classical Chern–Simons theory, one studies the induced flat or projectively flat connections on the principal bundle restricted to surfaces inside the 3-manifold. The classical state spaces corresponding to each surface are precisely the moduli spaces of Yang–Mills equations studied by Atiyah and Bott. The geometric quantization of these spaces was achieved by Nigel Hitchin and Axelrod–Della Pietra–Witten independently, and in the case where the structure group is complex, the configuration space is the moduli space of Higgs bundles and its quantization was achieved by Witten.
Floer homology
Andreas Floer introduced a type of homology on 3-manifolds defined in analogy with Morse homology in finite dimensions. In this homology theory, the Morse function is the Chern–Simons functional on the space of connections on an $\mathrm{SU}(2)$ principal bundle over the 3-manifold $X$. The critical points are the flat connections, and the flow lines are defined to be the Yang–Mills instantons on $X \times \mathbb{R}$ that restrict to the critical flat connections on the two boundary components. This leads to instanton Floer homology. The Atiyah–Floer conjecture asserts that instanton Floer homology agrees with the Lagrangian intersection Floer homology of the moduli space of flat connections on the surface $\Sigma$ defining a Heegaard splitting of $X$, which is symplectic due to the observations of Atiyah and Bott.
In analogy with instanton Floer homology one may define Seiberg–Witten Floer homology where instantons are replaced with solutions of the Seiberg–Witten equations. By work of Clifford Taubes this is known to be isomorphic to embedded contact homology and subsequently Heegaard Floer homology.
Gauge theory in four dimensions
Gauge theory has been most intensively studied in four dimensions. Here the mathematical study of gauge theory overlaps significantly with its physical origins, as the standard model of particle physics can be thought of as a quantum field theory on a four-dimensional spacetime. The study of gauge theory problems in four dimensions naturally leads to the study of topological quantum field theory. Such theories are physical gauge theories that are insensitive to changes in the Riemannian metric of the underlying four-manifold, and therefore can be used to define topological (or smooth structure) invariants of the manifold.
Anti-self-duality equations
In four dimensions the Yang–Mills equations admit a simplification to the first order anti-self-duality equations $\star F_A = -F_A$ for a connection $A$ on a principal bundle over an oriented Riemannian four-manifold $X$. These solutions to the Yang–Mills equations represent the absolute minima of the Yang–Mills functional, and the higher critical points correspond to the solutions that do not arise from anti-self-dual connections. The moduli space of solutions to the anti-self-duality equations, $\mathcal{M}$, allows one to derive useful invariants about the underlying four-manifold.
This theory is most effective in the case where $X$ is simply connected. For example, in this case Donaldson's theorem asserts that if the four-manifold has negative-definite intersection form, and if the principal bundle has structure group the special unitary group $\mathrm{SU}(2)$ and second Chern class $c_2(P) = 1$, then the moduli space $\mathcal{M}$ is five-dimensional and gives a cobordism between $X$ itself and a disjoint union of copies of $\mathbb{CP}^2$ with its orientation reversed. This implies that the intersection form of such a four-manifold is diagonalisable. There are examples of simply connected topological four-manifolds with non-diagonalisable intersection form, such as the E8 manifold, so Donaldson's theorem implies the existence of topological four-manifolds with no smooth structure. This is in stark contrast with two or three dimensions, in which topological structures and smooth structures are equivalent: any topological manifold of dimension less than or equal to 3 has a unique smooth structure on it.
Similar techniques were used by Clifford Taubes and Donaldson to show that Euclidean space $\mathbb{R}^4$ admits uncountably many distinct smooth structures. This is in stark contrast to any dimension other than four, where Euclidean space has a unique smooth structure.
An extension of these ideas leads to Donaldson theory, which constructs further invariants of smooth four-manifolds out of the moduli spaces of connections over them. These invariants are obtained by evaluating cohomology classes on the moduli space against a fundamental class, which exists due to analytical work showing the orientability and compactness of the moduli space by Karen Uhlenbeck, Taubes, and Donaldson.
When the four-manifold is a Kähler manifold or algebraic surface and the principal bundle has vanishing first Chern class, the anti-self-duality equations are equivalent to the Hermitian Yang–Mills equations on the complex manifold . The Kobayashi–Hitchin correspondence proven for algebraic surfaces by Donaldson, and in general by Uhlenbeck and Yau, asserts that solutions to the HYM equations correspond to stable holomorphic vector bundles. This work gave an alternate algebraic description of the moduli space and its compactification, because the moduli space of semistable holomorphic vector bundles over a complex manifold is a projective variety, and therefore compact. This indicates one way of compactifying the moduli space of connections is to add in connections corresponding to semi-stable vector bundles, so-called almost Hermitian Yang–Mills connections.
Seiberg–Witten equations
During their investigation of supersymmetry in four dimensions, Edward Witten and Nathan Seiberg uncovered a system of equations now called the Seiberg–Witten equations, for a connection $A$ and spinor field $\Phi$. In this case the four-manifold must admit a Spin$^{\mathbb{C}}$ structure, which defines a principal Spin$^{\mathbb{C}}$ bundle with determinant line bundle $L$, and an associated spinor bundle $W^+$. The connection $A$ is on $L$, and the spinor field $\Phi \in \Gamma(W^+)$. The Seiberg–Witten equations are given by
$F_A^+ = \sigma(\Phi), \qquad D_A \Phi = 0,$
where $\sigma(\Phi)$ is a natural quadratic form in $\Phi$ and $D_A$ is the induced Dirac operator.
Solutions to the Seiberg–Witten equations are called monopoles. The moduli space of solutions to the Seiberg–Witten equations, $\mathcal{M}_{\mathfrak{s}}$, where $\mathfrak{s}$ denotes the choice of Spin$^{\mathbb{C}}$ structure, is used to derive the Seiberg–Witten invariants. The Seiberg–Witten equations have an advantage over the anti-self-duality equations, in that the equations themselves may be perturbed slightly to give the moduli space of solutions better properties. To do this, an arbitrary self-dual two-form is added on to the first equation. For generic choices of metric on the underlying four-manifold, and choice of perturbing two-form, the moduli space of solutions is a compact smooth manifold. In good circumstances (when the manifold is of simple type), this moduli space is zero-dimensional: a finite collection of points. The Seiberg–Witten invariant in this case is simply the number of points in the moduli space. The Seiberg–Witten invariants can be used to prove many of the same results as Donaldson invariants, but often with easier proofs which apply in more generality.
Gauge theory in higher dimensions
Hermitian Yang–Mills equations
A particular class of Yang–Mills connections can be studied over Kähler manifolds or Hermitian manifolds. The Hermitian Yang–Mills equations generalise the anti-self-duality equations occurring in four-dimensional Yang–Mills theory to holomorphic vector bundles over Hermitian complex manifolds in any dimension. Suppose $E$ is a holomorphic vector bundle over a compact Kähler manifold $(X, \omega)$, and $A$ is a Hermitian connection on $E$ with respect to some Hermitian metric $h$. The Hermitian Yang–Mills equations are
$F_A^{0,2} = 0, \qquad \Lambda_\omega F_A = \lambda(E) \operatorname{Id}_E,$
where $\lambda(E)$ is a topological constant depending on $E$. These may be viewed either as an equation for the Hermitian connection $A$ or for the corresponding Hermitian metric $h$ with associated Chern connection $A$. In four dimensions the HYM equations are equivalent to the ASD equations. In two dimensions the HYM equations correspond to the Yang–Mills equations considered by Atiyah and Bott. The Kobayashi–Hitchin correspondence asserts that solutions of the HYM equations are in correspondence with polystable holomorphic vector bundles. In the case of compact Riemann surfaces this is the theorem of Narasimhan and Seshadri as proven by Donaldson. For algebraic surfaces it was proven by Donaldson, and in general it was proven by Karen Uhlenbeck and Shing-Tung Yau. This theorem is generalised in the nonabelian Hodge theorem by Simpson, and is in fact a special case of it where the Higgs field of a Higgs bundle is set to zero.
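For context, the stability notion appearing in this correspondence is slope (Mumford–Takemoto) stability; the standard definition, not spelled out in the text above, reads:

```latex
% For a holomorphic vector bundle E over a compact Kähler n-fold (X, ω),
% define the ω-degree and the slope
\deg_\omega(E) = \int_X c_1(E) \wedge \omega^{\,n-1},
\qquad
\mu(E) = \frac{\deg_\omega(E)}{\operatorname{rk}(E)}.
% E is slope stable if μ(F) < μ(E) for every coherent subsheaf F ⊂ E with
% 0 < rk(F) < rk(E), and polystable if it is a direct sum of stable
% bundles all of the same slope μ(E).
```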
Exceptional holonomy instantons
The effectiveness of solutions of the Yang–Mills equations in defining invariants of four-manifolds has led to interest that they may help distinguish between exceptional holonomy manifolds such as G2 manifolds in dimension 7 and Spin(7) manifolds in dimension 8, as well as related structures such as Calabi–Yau 6-manifolds and nearly Kähler manifolds.
String theory
New gauge-theoretic problems arise out of superstring theory models. In such models the universe is 10-dimensional, consisting of four dimensions of regular spacetime and a 6-dimensional Calabi–Yau manifold. In such theories the fields which act on strings live on bundles over these higher dimensional spaces, and one is interested in gauge-theoretic problems relating to them. For example, the limit of the natural field theories in superstring theory as the string radius approaches zero (the so-called large volume limit) on a Calabi–Yau 6-fold is given by the Hermitian Yang–Mills equations on this manifold. Moving away from the large volume limit one obtains the deformed Hermitian Yang–Mills equation, which describes the equations of motion for a D-brane in the B-model of superstring theory. Mirror symmetry predicts that solutions to these equations should correspond to special Lagrangian submanifolds of the mirror dual Calabi–Yau.
See also
Gauge theory
Introduction to gauge theory
Gauge group (mathematics)
Gauge symmetry (mathematics)
Yang–Mills theory
Yang–Mills equations
References
Differential geometry
Mathematical physics | Gauge theory (mathematics) | [
"Physics",
"Mathematics"
] | 8,831 | [
"Applied mathematics",
"Theoretical physics",
"Mathematical physics"
] |
64,325,880 | https://en.wikipedia.org/wiki/Galilei-covariant%20tensor%20formulation | The Galilei-covariant tensor formulation is a method for treating non-relativistic physics using the extended Galilei group as the representation group of the theory. It is constructed in the light cone of a five-dimensional manifold.
Takahashi et al., in 1988, began a study of Galilean symmetry, where an explicitly covariant non-relativistic field theory could be developed. The theory is constructed in the light cone of a (4,1) Minkowski space. Previously, in 1985, Duval et al. constructed a similar tensor formulation in the context of Newton–Cartan theory. Some other authors also have developed a similar Galilean tensor formalism.
Galilean manifold
The Galilei transformations are
$\mathbf{x}' = R\mathbf{x} + \mathbf{v} t + \mathbf{a}, \qquad t' = t + b,$
where $R$ stands for the three-dimensional Euclidean rotations, $\mathbf{v}$ is the relative velocity determining Galilean boosts, $\mathbf{a}$ stands for spatial translations and $b$, for time translations. Consider a free particle of mass $m$; the mass shell relation is given by $2mE - \mathbf{p}^2 = 0$.
We can then define a 5-vector,
$p_\mu = (p_1, p_2, p_3, p_4, p_5) = (\mathbf{p}, m, E),$
with $\mu = 1, \dots, 5$.
Thus, we can define a scalar product of the type
$(A \mid B) = A_\mu B_\nu g^{\mu\nu} = \mathbf{A} \cdot \mathbf{B} - A_4 B_5 - A_5 B_4,$
where
$g^{\mu\nu} = \begin{pmatrix} \mathbf{1}_{3 \times 3} & 0 & 0 \\ 0 & 0 & -1 \\ 0 & -1 & 0 \end{pmatrix}$
is the metric of the space-time, and $(p \mid p) = \mathbf{p}^2 - 2mE = 0$ recovers the mass shell relation.
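A short numerical sketch of this construction (illustrative values, units suppressed), checking that the 5-momentum of a free particle is null with respect to the Galilean metric:

```python
import numpy as np

# 5D Galilean metric in light-cone form: spatial block δ_ij plus an
# off-diagonal pairing of the 4th and 5th directions
g = np.zeros((5, 5))
g[:3, :3] = np.eye(3)
g[3, 4] = g[4, 3] = -1.0

m = 2.0
p = np.array([0.3, -1.2, 0.5])          # spatial momentum
E = p @ p / (2 * m)                     # non-relativistic kinetic energy
p5 = np.concatenate([p, [m, E]])        # the 5-vector (p, m, E)

# (p5 | p5) = p² - 2mE = 0 is the Galilean mass shell relation
print(p5 @ g @ p5)                      # ≈ 0
```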
Extended Galilei algebra
A five-dimensional Poincaré algebra leaves the metric $g^{\mu\nu}$ invariant.
We can write the generators as
$P_\mu = -i \partial_\mu, \qquad J_{\mu\nu} = x_\mu P_\nu - x_\nu P_\mu.$
The non-vanishing commutation relations will then be rewritten as
$[J_{\mu\nu}, P_\rho] = i (g_{\mu\rho} P_\nu - g_{\nu\rho} P_\mu), \qquad [J_{\mu\nu}, J_{\rho\sigma}] = i (g_{\mu\rho} J_{\nu\sigma} - g_{\nu\rho} J_{\mu\sigma} - g_{\mu\sigma} J_{\nu\rho} + g_{\nu\sigma} J_{\mu\rho}).$
An important Lie subalgebra is spanned by the generators $H$, $P_i$, $K_i$ and $J_{ij}$:
$H = P_5$ is the generator of time translations (Hamiltonian), $P_i$ is the generator of spatial translations (momentum operator), $K_i = J_{i4}$ is the generator of Galilean boosts, and $J_{ij}$ stands for a generator of rotations (angular momentum operator). The generator $M = P_4$ is a Casimir invariant and $P_\mu P^\mu$ is an additional Casimir invariant. This algebra is isomorphic to the extended Galilean algebra in (3+1) dimensions with $M = P_4$, the central charge, interpreted as mass, and $H = P_5$.
The third Casimir invariant is given by $W_\mu W^\mu$, where $W_\mu$ is a 5-dimensional analog of the Pauli–Lubanski pseudovector.
Bargmann structures
In 1985 Duval, Burdet and Kunzle showed that four-dimensional Newton–Cartan theory of gravitation can be reformulated as Kaluza–Klein reduction of five-dimensional Einstein gravity along a null-like direction. The metric used is the same as the Galilean metric but with all positive entries:
$g^{\mu\nu} = \begin{pmatrix} \mathbf{1}_{3 \times 3} & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}.$
This lifting is considered to be useful for non-relativistic holographic models. Gravitational models in this framework have been shown to reproduce the perihelion precession of Mercury.
See also
Galilean group
Representation theory of the Galilean group
Lorentz group
Poincaré group
Pauli–Lubanski pseudovector
References
Rotational symmetry
Quantum mechanics
Representation theory of Lie groups | Galilei-covariant tensor formulation | [
"Physics"
] | 549 | [
"Theoretical physics",
"Quantum mechanics",
"Symmetry",
"Rotational symmetry"
] |
65,804,492 | https://en.wikipedia.org/wiki/Pressure%20bag%20moulding | Pressure bag moulding is a process for moulding reinforced plastics. This process is related to vacuum bag molding.
Procedure
A solid female mold is used along with a flexible male mold. The reinforcement is placed inside the female mold with just enough resin to permit the fabric to stick in place (wet lay-up). A measured amount of resin is then brushed liberally throughout the mold, and the mold is clamped to a machine that includes the flexible male mold. Then, the flexible male membrane is inflated with heated compressed air or possibly steam. The female mold can also be heated. Excess resin is forced out along with trapped air. Due to the lower cost of unskilled labor, this method is used extensively in the production of composite helmets. For a helmet bag moulding machine, cycle times vary from 20 to 45 minutes, but if the molds are heated, the finished shells require no further curing.
References
Composite materials
Composite material fabrication techniques | Pressure bag moulding | [
"Physics"
] | 202 | [
"Materials",
"Composite materials",
"Matter"
] |
65,804,516 | https://en.wikipedia.org/wiki/Autoclave%20moulding | Autoclave moulding is an advanced composite manufacturing process.
Procedure
It is a process that uses a two-sided mould set that forms both surfaces of the panel. On the upper side is a flexible membrane made from silicone or an extruded polymer film such as nylon and on the lower side is a rigid mould. Reinforcement materials can be placed manually or robotically. They involve continuous fibre forms fashioned into textile constructions. Usually, they are pre-impregnated with the resin in the form of prepreg fabrics or unidirectional tapes. In some situations, a film of resin is placed upon the lower mould, and dry reinforcement is placed above. The upper mould is installed, and the vacuum is applied to the mould cavity. The assembly is placed into an autoclave. This process is generally performed at both elevated pressure and elevated temperature. The use of elevated pressure facilitates a high fibre volume fraction and low void content for maximum structural efficiency.
References
Composite materials
Composite material fabrication techniques | Autoclave moulding | [
"Physics"
] | 208 | [
"Materials",
"Composite materials",
"Matter"
] |
65,804,540 | https://en.wikipedia.org/wiki/Resin%20transfer%20moulding | Resin transfer moulding (RTM) is a process for producing high performance composite components.
Procedure
It is a process using a rigid two-sided mould set that forms both surfaces of the panel. Usually, the mould is formed from aluminum or steel, but sometimes composite molds are used. The two sides fit together to make a mould cavity. The distinctive feature of resin transfer moulding is that the reinforcement materials are placed into this cavity, and before the introduction of the matrix material, the mould set is closed. Resin transfer moulding involves numerous varieties which differ in the mechanics of how the resin is introduced to the reinforcement in the mould cavity. These variations include everything from the RTM methods used in out-of-autoclave composite manufacturing for high-tech aerospace components to vacuum infusion (for resin infusion see also boat building) to vacuum assisted resin transfer moulding (VARTM). This method can be done at either ambient or elevated temperature and is suitable for manufacturing high-performance composite components in medium volumes (1,000s to 10,000s of parts).
References
Composite materials
Composite material fabrication techniques | Resin transfer moulding | [
"Physics"
] | 234 | [
"Materials",
"Composite materials",
"Matter"
] |
65,810,090 | https://en.wikipedia.org/wiki/NGC%204848 | NGC 4848 is a barred spiral galaxy in the constellation Coma Berenices. It is circa 340 million light-years from Earth, which, given its apparent dimensions, means that NGC 4848 is about 170,000 light-years across. It was discovered by Heinrich d'Arrest on April 21, 1865. It is considered part of the Coma Cluster, lying in the cluster's northwest part. The galaxy has been stripped of its gas as it passed through the cluster.
Characteristics
NGC 4848 is a spiral galaxy viewed nearly edge-on that is classified as SBab by de Vaucouleurs. Its nucleus is active, and it has been categorised as an HII region. A number of bright HII regions form a ring around the nucleus with a radius of 5–10 arcseconds. The star formation rate is estimated to be 9 solar masses per year based on the H-alpha, ultraviolet, infrared and radio luminosity.
The galaxy's distribution of hydrogen gas is asymmetrical and forms a tail pointing away from the cluster center. The tail has projected dimensions of 62.5 by 18.5 kpc. The tail was probably formed as a result of ram pressure as the galaxy passed through the Coma Cluster and its intergalactic medium at a speed of about 1,330 km/s, starting 200 million years ago according to Fossati et al., while a previous study indicated a timeline of 400 million years. The lost hydrogen is estimated to comprise two thirds of the original hydrogen content of the galaxy. A few star-forming regions, probably HII regions, are in the tail.
A dwarf galaxy may cross the disk of NGC 4848; however, its mass is too low to be a source of the hydrogen tail.
See also
NGC 4921, a spiral galaxy in the Coma Cluster that has lost its hydrogen
References
External links
Barred spiral galaxies
Coma Cluster
Coma Berenices
4848
08082
44405
Galaxies discovered in 1865
Astronomical objects discovered in 1865
Discoveries by Heinrich Louis d'Arrest
+05-31-039 | NGC 4848 | [
"Astronomy"
] | 425 | [
"Coma Berenices",
"Constellations"
] |
62,020,761 | https://en.wikipedia.org/wiki/Microcracks%20in%20rock | Microcracks in rock, also known as microfractures and cracks, are spaces in rock with a longest dimension of at most about 1000 μm and the other two dimensions on the order of 10 μm. In general, the ratio of width to length of microcracks is between 10−3 and 10−5.
Due to their scale, microcracks are observed using a microscope to obtain their basic characteristics. Microcrack formation provides insights into the strength and deformation behavior of rocks. Experimental and numerical results both play an important role in studying microcracks, especially their kinematics and dynamics. Microcracks in rock have been studied to understand geologic problems such as the early stage of earthquakes and fault formation. In engineering, microcracks in rock have been linked to underground engineering problems, such as deep geological repositories.
Types
In general, microcracks in rock can be subdivided into four groups:
Grain boundary cracks: microcracks are along the grain boundary.
Intragranular cracks: microcracks are within a grain. In addition, intragranular cracks along a cleavage plane are cleavage cracks.
Intergranular cracks: microcracks are along the boundaries of two or more grains.
Transgranular cracks: microcracks are across the grains or are across the grains from a grain boundary. They are the most abundant in rock specimens in the experiment.
Characteristics
The characteristics of microcracks are orientation, length, width, aspect ratio, number, and density. Attempts have been made to describe these characteristics with mathematical functions. For example, the distribution of microcrack lengths away from a fault has been described by lognormal or exponential distributions.
Orientation
The orientations of microcracks are random in unstressed rock. Once a rock has been stressed, the microcracks will have a trend of orientations more or less parallel to the maximum applied stress or the fault strike. For example, the average orientation of microcracks of stressed Westerly granite is 30° to the fault strike.
Length, width, and aspect ratio
In a thin section, the observed length and width may not necessarily be the true length and width of a microcrack in three dimensions. The aspect ratio is the ratio of width to length. It is generally 10−3 to 10−5. The crack length increases with increasing maximum applied stress, resulting in a decrease in the aspect ratio.
Number and density
Density of microcracks can be either the number of microcracks per unit area or per grain or the microcrack length per unit area. Densities of microcracks near a fault are dramatically high, but they decrease rapidly within a few mineral grains away from a fault.
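For illustration, the sketch below (hypothetical thin-section data) computes both density measures named above:

```python
# hypothetical microcrack traces measured in a thin section
crack_lengths_um = [120.0, 45.0, 300.0, 80.0, 15.0]   # lengths in μm
area_mm2 = 2.5                                        # imaged area in mm²

area_um2 = area_mm2 * 1e6                             # convert mm² to μm²
number_density = len(crack_lengths_um) / area_um2     # cracks per μm²
length_density = sum(crack_lengths_um) / area_um2     # μm of crack per μm²

print(f"number density: {number_density:.2e} cracks/μm²")
print(f"length density: {length_density:.2e} μm⁻¹")
```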
Formation mechanism
Microcracks in rock can be induced by the applied stress or temperature.
Mechanically induced
A microcrack is formed when the stresses exceed the local strength of grains. The strength of materials is the ability to resist an applied load so that failure will not occur. The intrinsic properties of rock such as mineralogical heterogeneity give diverse types of mechanically induced microcracking. The following mechanisms have strong correlations to the locations that allow stress concentration in grain-scale.
Twin induced microcracking: stresses are concentrated at twin lamella.
Kink band and deformation lamellae associated microcracking: kink bands and deformation lamella can become a zone for stored strain energy to be concentrated.
Cleavage separations: cleavage planes are the weaknesses in crystals. Therefore, stresses are likely to be concentrated on these weakness planes first.
Microcracking from stress concentrations at grain boundaries: the contacts between grain boundaries provide space for stresses to be concentrated, especially tensile stresses.
Microcracking from stress concentrations around cavities: pre-existing cracks and pores within a grain allow stress concentration. This kind of stress concentration depends on the orientation and geometry of these pre-existing microcavities, as well as the mechanical properties of the surrounding material.
Elastic mismatch induced microcracking: each mineral type has its own elastic properties. When two distinct minerals are in good contact along their boundaries, the applied stress will pull the stiffer mineral's boundary away from the contact. Therefore, the microcracks formed in the stiffer mineral are extensional cracks.
Grain translations and rotations: in crystalline rock, sliding along grain boundaries can be induced by deviatoric stresses, resulting in grain boundary cracks. In clastic rock, the grains may be rotated by neighboring grains, forming cracks in the cement or along the grain boundary.
Thermally induced
Thermally induced microcracking refers to microcrack formation due to thermal effects. Heating or cooling causes thermal expansion or contraction of grains, respectively. Minerals with different thermo-elastic properties react differently to cooling or heating, resulting in microcrack formation. In addition, thermal gradients at internal boundaries of grains may allow stress concentration, thus forming microcracks.
Evolution
The evolution of microcracks has been studied through experiments. When force is applied to a rock sample, microcracks initially form randomly in space. They then become more and more localized and intense with continued loading, a phenomenon called crack localization. A theory of failure helps to explain the evolution of microcracks with increased loading:
The formation of microcracks starts at pre-existing microcracks.
The newly formed microcracks grow in size individually.
The number of growing microcracks also increases.
The growing microcracks start to interact as more and more cracks form and grow.
The growth of the microcracks suddenly becomes intense and localized, leading to macroscopic failure.
After failure, the overall microcrack density is high near the fault and decreases rapidly away from it. In addition, the density of transgranular cracks increases near the fault, whereas the density of grain boundary cracks is lower. Connecting locally dense crack regions, crack arrays, and grain boundaries eventually forms a macrocrack.
Before a fault forms, there is a fracture process zone (FPZ), a region of microcracks near the tip of a rock failure. It is associated with crack localization and related to energy dissipation. The size of a fracture process zone is related to the specimen size: the larger the specimen, the larger the fracture process zone. This relationship no longer holds once the specimen exceeds a certain size.
The heterogeneity of rock makes microcracking behavior much more complicated than that of simpler materials. Nevertheless, several factors controlling microcracking behavior have been identified and studied:
Rock type and composition: rock types can be classified into crystalline rocks, including igneous and metamorphic rocks, and sedimentary rocks, including clastic and chemical sedimentary rocks. For example, many studies show that the quartz content of a rock has a great impact on the number of microcracks.
Pre-existing weaknesses: features already present in the rock, for example cleavage planes of minerals, pores, and cracks.
Stress state: the state of stress that the rock experiences.
Recovery
In addition to forming, microcracks in rock can recover, either by microcrack closure or by microcrack healing. Microcrack recovery directly causes a decrease in the permeability of the rock.
Microcrack closure
Closure can be caused either by an increase in the applied stress or by a decrease in the effective stress. For example, microcracks perpendicular to the maximum stress direction will close. In nature, however, different parts of a microcrack can have different orientations, resulting in incomplete closure in which some parts of the microcrack are closed while others remain open.
Microcrack healing
Healing is driven by the transport of chemical fluids in microcracks. For example, healing of microcracks in quartz is activated by temperature and becomes rapid above 400 °C. The rate of healing also depends on crack size: the smaller the cracks, the faster the healing.
Influence
Microcracks affect the properties of rock including stiffness, strength, elastic modulus, permeability, fracture toughness, and elastic wave velocity.
Methodology to study microcracks
Studies of microcracks focus on the distributions of their characteristics and on microcracking behavior. Many experiments on microcracks in rock have been conducted over the past decades, and numerical studies have also become widely used in recent years because of advances in technology. The results of these studies have been compared with natural conditions.
Experimental study
Experimental studies analyze rock specimens that have been subjected to applied stress in the laboratory. There are two popular methods for studying microcracks. Observation of thin sections under a microscope yields the distributions of microcrack lengths, widths and aspect ratios, numbers and densities, as well as orientations. The other method uses acoustic emission to detect and monitor microcrack growth. Experimental results can help scientists develop numerical models, such as simulations of fracture pattern growth.
Many experiments on rock fracture mechanisms have been carried out in the laboratory, but they may have different requirements for specimen configuration and loading scheme. These are two important factors controlling microcracking behavior, such as microcrack development.
Specimen configuration
Specimen configuration refers to the dimensions of a specimen and of its man-made crack. Rock samples are usually obtained from rock cores. Therefore, cylinder, chevron-bend, and semi-circular-bend (SCB) shapes are the common specimen shapes used in experimental studies. For example, a semi-circular bend specimen has a man-made crack, called a notch, which is used to control the morphology of the rock fracture. Two notch types can be introduced: a straight-through notch or a chevron notch. A straight-notched semi-circular-bend (SNSCB) specimen has a flat-ended notch, whereas a chevron-notched semi-circular-bend (CNSCB) specimen has a V-shaped opening to the air.
Loading scheme
In fracture mechanics, there are three types of loading modes that enable a crack to propagate: mode I (opening), mode II (in-plane shear), and mode III (out-of-plane shear). These loading modes can be achieved by the designed loading scheme. Mode I fractures are the most common microcracks in rock in nature.
Acoustic emission
An acoustic emission (AE) is a high-frequency elastic wave. It is generated by microcrack formation and is correlated with rapid microcrack growth. Acoustic emission sensors are attached to the surface of the specimen and collect the signals generated during microcrack formation. The data can be used to describe microcrack behavior. Note that one detected acoustic emission event does not necessarily correspond to the formation of a single microcrack.
The types of data collected from acoustic emission sensors are:
Acoustic emission count and acoustic emission count rate: the acoustic emission count is the number of acoustic emission events detected, whereas the acoustic emission count rate is the acoustic emission count per unit time (a small numerical sketch follows this list).
Acoustic emission waveform: an acoustic emission waveform includes the delay time, threshold level, trigger time, duration, and maximum amplitude.
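A small numerical sketch of the count and count rate (the event times here are hypothetical, chosen only for illustration):

```python
import numpy as np

# Hypothetical acoustic emission (AE) event times, in seconds, during loading.
event_times = np.array([0.4, 1.1, 1.3, 2.0, 2.2, 2.3, 2.8, 2.9])

ae_count = len(event_times)  # total AE count over the whole test
# AE count rate: number of events per 1-second window.
rate, edges = np.histogram(event_times, bins=np.arange(0.0, 4.0, 1.0))
print(ae_count)  # 8
print(rate)      # [1 2 5]: the rate rises as loading proceeds
```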
These two types of data imply the following information:
Event counting: counts of acoustic emission events over time can be compared with measured quantities, such as stress and strain.
Source location: the source location of an acoustic emission event can be obtained from multiple measurements of the waveform of the same event at different sensors.
Energy release and the Gutenberg–Richter relation: the relation originally describes the relationship between the magnitudes of earthquakes and their numbers, but it can also describe acoustic emission energies when enough sensors are used (see the fitting sketch after this list).
Source mechanism: if the polarity of the initial P-wave motion has been recorded at several sensors, the source mechanism can be analyzed from a fault plane solution.
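A minimal sketch of the Gutenberg–Richter fit mentioned above, log10 N(≥M) = a − bM, where N(≥M) is the number of events of magnitude at least M. The magnitudes below are synthetic, drawn only for illustration; real analyses would use recorded AE amplitudes:

```python
import numpy as np

# Synthetic magnitudes drawn so that the true b-value is 1:
# if N(>=M) ~ exp(-lam*M), then log10 N(>=M) = a - (lam/ln 10)*M, so b = lam/ln 10.
rng = np.random.default_rng(0)
mags = rng.exponential(scale=1.0 / np.log(10.0), size=2000)

thresholds = np.arange(0.0, 2.0, 0.1)
counts = np.array([(mags >= m).sum() for m in thresholds])

# The slope of log10 N(>=M) against M is -b.
slope, intercept = np.polyfit(thresholds, np.log10(counts), 1)
print(f"fitted b-value: {-slope:.2f}")  # close to the true value of 1
```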
Limitation
Observation of microcracks under the microscope: microcracks are sometimes difficult to distinguish. For example, it is difficult to distinguish intergranular cracks from intragranular cracks, or to tell whether a feature is a single transgranular crack or multiple connected intragranular cracks. Also, the length of an intergranular crack may include the lengths of grain boundary cracks. The lengths and widths of microcracks are recorded from a two-dimensional perspective, which may not fully reflect their true dimensions.
Variation of experimental results with specimen configuration and loading scheme: there are several specimen configurations and loading schemes, and the fracture properties of the same rock, including microcrack behavior, can vary with the configuration and loading scheme used. The most suitable specimen configuration and loading scheme are still under debate.
Numerical study
Numerical studies are used to help understand complicated rock mechanics problems. Four types of models used in modelling microcracks in rock are particle-based, block-based, grain-based, and node-based models. Since grain-based models can represent all types of microcrack, they are well suited to understanding microcracking behavior.
Geological implication
Experimental studies of microcracks provide insights into faulting and microcrack formation in nature. Microcrack studies combined with cathodoluminescence (CL) and fluid-inclusion studies can reconstruct the growth of fractures from microcracks. The population of microcracks is useful for distinguishing whether a detachment is landslide-related or tectonic in origin. The fracture process zone (FPZ) can be used to understand the permeability of fault zones, which controls fluid flow. Therefore, microcracks can be useful for assessing the stress history or fluid-movement history of rock. Acoustic emission from microcrack growth may also help in understanding earthquakes.
Implications of underground engineering problems
Microcracks can affect the thermal and transport properties of rock. Studies of microcracks in rock provide important insights into underground engineering problems as follows:
Deep geological repository
A deep geological repository is an underground repository for the disposal of radioactive waste, such as spent nuclear fuel. It is located at a depth of hundreds of metres in a stable rock mass. Deep geological repositories exist around the world, for example in the United States (WIPP) and Finland (Olkiluoto).
Geothermal reservoir
A geothermal reservoir is one of the three components of a geothermal system and acts as the energy source. It is a porous and permeable rock mass in which convection of trapped hot water and steam, and recharge of the heat supply, can occur. The ideal geothermal reservoir is a highly permeable, fractured rock matrix.
Hydrocarbon reservoir
A hydrocarbon reservoir is an underground reservoir that keeps hydrocarbons trapped inside. Reservoir rocks have high porosity and permeability while the surrounding rocks that act as barriers have low permeability. Therefore, hydrocarbons that exist as liquid and/or gas can only stay in the reservoir rocks.
Underground storage of CO2
Underground storage of CO2 is a way to remove CO2 from the atmosphere. A storage site is composed of porous rocks surrounded by nonporous rocks so that it can trap the CO2 for a long time. A depleted oil and gas reservoir is one example used for underground storage.
See also
Fracture (geology)
Fracture mechanics
Acoustic emission
References
Petrology
Rock mechanics
Fracture mechanics | Microcracks in rock | [
"Materials_science",
"Engineering"
] | 3,160 | [
"Structural engineering",
"Materials degradation",
"Materials science",
"Fracture mechanics"
] |
62,023,816 | https://en.wikipedia.org/wiki/Cornelius%20de%20Vos | Cornelius de Vos or de Vois or Devosse (fl. 1565-1585), was a Dutch or Flemish mine entrepreneur and mineral prospector working in England and Scotland. He was said to have been a "picture-maker" or portrait artist. De Vos is known for gold mining in Scotland and founding saltworks at Newhaven near Edinburgh.
Career
In 1558 Cornelius de Vos was in London, and married Helen, the widow of a butcher, Nicholas Howe, and John Gylmyne. He was recorded as a member of the French church in Farringdon in 1568.
De Vos was granted rights to mine and make copperas and alum in England on the Isle of Wight and in Devon by letters patent in 1564, and pursued mining concessions in Ireland. According to his rival for Irish mining rights, William Humfrey, Cornelius de Vos obtained patents for mine drainage methods previously granted to Burchard Kranich. He worked for James Blount, 6th Baron Mountjoy at Canford Cliffs in Dorset, with little success.
Searching for Scottish gold
Cornelius de Vos was a shareholder in the English Company of Mines Royal. He went prospecting for gold in Crawford Muir in Scotland in 1566. There was already competition: Mary, Queen of Scots and Lord Darnley had granted a concession to three Edinburgh burgesses, James Carmichael the warden of the mint, Master James Lyndsay, and Andrew Stevenson, while the mint-master John Acheson and John Aslowan were already working in Wanlockhead and Glengonnar.
In October 1566 Cornelius de Vos arrived in Keswick in Cumbria with an English and a Scottish partner (whose names are unknown). He brought a sample of sand in a napkin from the Scottish gold fields, found by a woman worker washing for gold, a "mayde of Scotlande". The German miners at Keswick tested the sample and told him the sand was rich in gold. The supervisor at Keswick, Thomas Thurland, noted this as suspicious activity, possibly against his own or the Company's interests, and reported it to William Cecil. Thurland also wrote to Queen Elizabeth in alarmist terms about "secret practices with merchant strangers and by some foreign princes to have of the Scottish queen (Mary, Queen of Scots) the mines in Crawford Moor nigh adjoining to your majesty's west borders", mines he hoped to work himself. Thurland was in a partnership with a German miner, Daniel Houghstetter or Hechstetter, between 1565 and 1577, with 24 investors.
The Company of Mines Royal tried to get an interest in Scottish gold mining and panning from Mary, Queen of Scots. Meanwhile, Cornelius de Vos and his business partners, two London merchants Anthony Hickman and John Achillay, gained a permit to work salt at Newhaven from Mary and the Earl of Bothwell shortly after their marriage in May 1567. These salt works were revived by Eustachius Roche in 1592.
De Vos was awarded a traditional 19-year "tack" of the gold mines by Regent Moray in 1568. Cornelius appeared before the Privy Council of Scotland on 4 March 1568 to register his exclusive contract to work all the gold and silver mines in Scotland. He was obliged to start work before June 1569. If any lead, tin, or copper was found he was to extract it and pay the profits to the Scottish crown. For every hundred ounces of native gold or silver he was to pay eight ounces to the treasury, and four ounces for any metal that needed to be refined. He set up his own joint-stock company to recover the gold. Cornelius, however, still lacked knowledge of chemistry and mineralogy and, as reported by George Nedham, again had to send one of his workers, a Dutch miner called Rennius, to Daniel Hechstetter at Keswick to assay samples of sand.
Digging at Crawford Moor continued, but Regent Morton was unhappy with the contract. In June 1574 Morton went to Crawford Moor in person to see the workings and set miners to work. Cornelius de Vos approached the English ambassador Henry Killigrew in August 1574 with a message for William Cecil about the mines, presumably seeking investment and sponsorship. On 7 February 1575 Morton lent £500 to Cornelius de Vos and his three German or "Almain" partners, Abraham Peterson, Johnne Kelliner, and Helias Clutene.
In June 1575 Morton wrote to James MacGill of Nether Rankeillour, who was now Lord Clerk Register and who had witnessed the 1568 contract. He described the terms of his contract as "captious and doubtful in many points and nothing to the king's profit". Soon after, the mining concession was granted to one of de Vos's partners, Abraham Peterson, in February 1576.
In 1580, although he had lost his political power, Morton received gold which was coined to the value of £678, possibly connected with mining. The goldmining concession was given to Thomas Foulis in 1594.
In London his relationship with Margriete van der Eertbrugghe came under scrutiny by the Dutch Church in October 1570. In 1573 he is known to have written letters to the Mayor of London, Lionel Duckett, and others via his cousin Arnold. As he is linked with the painter Arnold van Bronckorst in Stephen Atkinson's story, it has been suggested that this Arnold was the same person.
A "Cornelis Clewtinge de Vos" , Dutchman, was buried at St Nicholas Acons in London on 11 December 1586, who was perhaps this mining entrepreneur. The name "Clewtinge" seems to be the surname of Helias Clutene, the partner of Cornelius in 1575. Mine entrepreneurs in Scotland of the next generation included George Douglas of Parkhead, George Bowes, and Bevis Bulmer.
Stephen Atkinson's account of Cornelius de Vos in Scotland
In 1619 an English gold prospector, Stephen Atkinson, wrote a kind of historical prospectus for gold mining in Scotland. This includes the story of "Master Cornelius" or "Cornelius Devosse". Atkinson described Cornelius Devosse as "a most cunning picture maker, and excellent in art for the trial of mineral and mineral stones", although the archival record of his activity shows that he lacked lapidary or chemical knowledge, and no other source mentions him as a portrait painter. According to Atkinson, the painter Nicholas Hilliard invested in the Scottish gold mine with another painter, Arnold Bronckhorst. The historian Elizabeth Goldring dates Hilliard's involvement to the years 1573 or 1574.
Atkinson's narrative seems in part based on hearsay but he describes using a "book of record" of Cornelius de Vos' mining operation at Crawford Moor and a record of the works of George Bowes. This seems to have been an account of wages. Atkinson says that he himself had worked with Daniel Hechstetter, the miner who Cornelius de Vos had consulted at Keswick. He wrote that Nicholas Hilliard, then still alive, would confirm that he also lost money in the venture.
Atkinson states that Cornelius de Vos went into Scotland with a recommendation from Elizabeth I of England, was given permission to prospect and found rich ore, which describes the events of October 1566. He gives the names of four partners in the enterprise; the Earl of Morton, "Robert Bellenden (Secretary of Scotland)" perhaps intending John Bellenden (Lord Justice Clerk), Abraham Peterson a Dutch man residing in Edinburgh, and James Reid an Edinburgh burgess.
Cornelius de Vos and his partners raised capital and he was given a commission by Regent Moray, (in March 1568). Atkinson says that Cornelius had 120 men at work and employed men and women, "lads and lasses, idle men and women", who had been begging before. Most of the gold was bought by the Scottish mint for coins. The mines were apparently worked by sub-contractors. Atkinson mentions a Scottish workman John Gibson of Crawford town who worked at "Glengaber Water" (Glengonnar), who he claims to have met, and another Dutch miner, Abraham Grey, who he found in the records. Grey, known as "Grey Beard", worked at Wanlockhead, (and sometimes said to be the same person as Abraham Peterson). Regent Morton had a basin made of Wanlockhead gold and presented it to the king of France, apparently to advertise Scotland's mineral wealth.
Atkinson takes up the subject of Cornelius de Vos again, as a story from the reign of Elizabeth, "some forty years past", after describing his own recent personal involvement with Scottish gold, Hilderston silver, and John Murray of the Bedchamber. In this version, a young Cornelius persuaded his friend the painter Nicholas Hilliard to join his Scottish goldmining venture. Hilliard sent his associate, Arnold Bronckhorst, a painter and mineralogist, into Scotland. Hilliard's efforts and influence secured a patent for Cornelius de Vos, (perhaps meaning a letter of recommendation from Elizabeth to Morton). Atkinson then describes Cornelius de Vos exporting gold ore for assay. Arnold Bronckhorst was intended to be the agent for selling the gold to Scottish mint in Edinburgh but failed to secure a contract. He was, however, appointed to be the royal portrait painter.
Bronckhorst was officially appointed as royal painter in Scotland in 1581, a few years after the goldmining events Atkinson described. However, portraits made during the years of Morton's regency have been attributed to him. Several of the individuals named by Atkinson appear in the record. Abraham Peterson, the partner and successor of Cornelius de Vos, was a Dutch or Flemish metal worker or artist, as well as a mining entrepreneur, who worked in the Scottish mint and designed coins for Regent Morton, including placks and bawbees in April 1576. James Reid, who Atkinson identified as a partner of Cornelius de Vos, stood security with James Skathowie of the Canongate for the £500 loan from Regent Morton in 1575. Cornelius never repaid this loan, and after Morton was executed in 1581, Reid and Skathowie's heirs were liable to repay the money to the Earl of Lennox.
References
External links
Contract to Cornelius de Vos and partners to make salt, signed by Mary, Queen of Scots and her husband James Hepburn, Duke of Orkney
BBC News: Mary Queen of Scots documents found at Museum of Edinburgh
Mining engineers
Gold mines in Scotland
Flemish metallurgists | Cornelius de Vos | [
"Chemistry",
"Engineering"
] | 2,191 | [
"Metallurgists",
"Mining engineering",
"Flemish metallurgists",
"Mining engineers"
] |
54,414,184 | https://en.wikipedia.org/wiki/Muller%E2%80%93Schupp%20theorem | In mathematics, the Muller–Schupp theorem states that a finitely generated group G has context-free word problem if and only if G is virtually free. The theorem was proved by David Muller and Paul Schupp in 1983.
Word problem for groups
Let G be a finitely generated group with a finite marked generating set X, that is, a set X together with a map π: X → G such that the subset π(X) generates G. Let Σ = X ⊔ X⁻¹ be the group alphabet and let Σ* be the free monoid on Σ; that is, Σ* is the set of all words (including the empty word) over the alphabet Σ.
The map π extends to a surjective monoid homomorphism, still denoted by π, from Σ* onto G.
The word problem W(G, X) of G with respect to X is defined as W(G, X) = {w ∈ Σ* : π(w) = e}, where e is the identity element of G.
That is, if G is given by a presentation ⟨X | R⟩ with X finite, then W(G, X) consists of all words over the alphabet Σ that are equal to e in G.
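As a concrete illustration (not part of the theorem's statement), the word problem of a free group can be decided with a single stack performing free reduction; this is essentially a deterministic pushdown automaton, which is one way to see that free groups, the simplest virtually free groups, have context-free word problem. The uppercase-for-inverse encoding below is an assumed convention:

```python
def in_word_problem_of_free_group(word):
    """Return True iff `word` freely reduces to the empty word.
    Generators are lowercase letters; their inverses are the
    corresponding uppercase letters (e.g. 'A' denotes a^-1)."""
    stack = []
    for letter in word:
        if stack and stack[-1] == letter.swapcase():
            stack.pop()          # cancel an adjacent pair x x^-1
        else:
            stack.append(letter)
    return not stack             # empty stack <=> word represents e

assert in_word_problem_of_free_group("aAbB")       # a a^-1 b b^-1 = e
assert in_word_problem_of_free_group("abBA")       # a b b^-1 a^-1 = e
assert not in_word_problem_of_free_group("abAB")   # commutator [a, b] != e
```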
Virtually free groups
A group G is said to be virtually free if there exists a subgroup of finite index H in G such that H is isomorphic to a free group. If G is a finitely generated virtually free group and H is a free subgroup of finite index in G then H itself is finitely generated, so that H is free of finite rank.
The trivial group is viewed as the free group of rank 0, and thus all finite groups are virtually free.
A basic result in Bass–Serre theory says that a finitely generated group G is virtually free if and only if G splits as the fundamental group of a finite graph of finite groups.
Precise statement of the Muller–Schupp theorem
The modern formulation of the Muller–Schupp theorem is as follows:
Let G be a finitely generated group with a finite marked generating set X. Then G is virtually free if and only if W(G, X) is a context-free language.
Sketch of the proof
The exposition in this section follows the original 1983 proof of Muller and Schupp.
Suppose G is a finitely generated group with a finite generating set X such that the word problem W(G, X) is a context-free language. One first observes that every finitely generated subgroup H of G is finitely presentable and that for every finite marked generating set Y of H the word problem W(H, Y) is also context-free. In particular, for a finitely generated group the property of having context-free word problem does not depend on the choice of a finite marked generating set for the group, and such a group is finitely presentable.
Muller and Schupp then show, using the context-free grammar for the language W(G, X), that the Cayley graph Γ of G with respect to X is K-triangulable for some integer K > 0. This means that every closed path in Γ can be, by adding several "diagonals", decomposed into triangles in such a way that the label of every triangle is a relation in G of length at most K over X.
They then use this triangulability property of the Cayley graph to show that either G is a finite group, or G has more than one end. Hence, by a theorem of Stallings, either G is finite or G splits nontrivially as an amalgamated free product G = A ∗_C B or an HNN extension G = A∗_C where C is a finite group. Then A and B are again finitely generated groups with context-free word problem, and one can apply the entire preceding argument to them.
Since G is finitely presentable and therefore accessible, the process of iterating this argument eventually terminates with finite groups, and produces a decomposition of G as the fundamental group of a finite graph-of-groups with finite vertex and edge groups. By a basic result of Bass–Serre theory it then follows that G is virtually free.
The converse direction of the Muller–Schupp theorem is more straightforward. If G is a finitely generated virtually free group, then G admits a finite index normal subgroup N such that N is a finite rank free group. Muller and Schupp use this fact to directly verify that G has context-free word problem.
Remarks and further developments
The Muller–Schupp theorem is a far-reaching generalization of a 1971 theorem of Anisimov, which states that for a finitely generated group G with a finite marked generating set X the word problem W(G, X) is a regular language if and only if the group G is finite.
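To make Anisimov's criterion concrete (an illustration, not taken from the original paper): when G is finite, a deterministic finite automaton whose states are the group elements recognizes W(G, X), since reading a letter simply multiplies the current state by that generator. A sketch for the cyclic group Z/3Z:

```python
# States are the elements {0, 1, 2} of Z/3Z; 'a' is the generator,
# 'A' its inverse.  Reading a letter multiplies (here: adds) in the group.
DELTA = {(s, 'a'): (s + 1) % 3 for s in range(3)}
DELTA.update({(s, 'A'): (s - 1) % 3 for s in range(3)})

def accepts(word):
    state = 0                     # start at the identity element
    for letter in word:
        state = DELTA[(state, letter)]
    return state == 0             # accept iff the word equals e in Z/3Z

assert accepts("aaa")             # a^3 = e
assert accepts("aA")              # a a^-1 = e
assert not accepts("aa")          # a^2 != e
```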
At the time the 1983 paper of Muller and Schupp was published, the accessibility of finitely presented groups had not yet been established. Therefore, the original formulation of the Muller–Schupp theorem said that a finitely generated group is virtually free if and only if it is accessible and has context-free word problem. A 1985 paper of Dunwoody proved that all finitely presented groups are accessible. Since finitely generated groups with context-free word problem are finitely presentable, Dunwoody's result together with the original Muller–Schupp theorem implies that a finitely generated group is virtually free if and only if it has context-free word problem (which is the modern formulation of the Muller–Schupp theorem).
A 1983 paper of Linnell established accessibility of finitely generated groups where the orders of finite subgroups are bounded. It was later observed that Linnell's result together with the original Muller–Schupp theorem was sufficient to derive the modern statement of the Muller–Schupp theorem, without having to use Dunwoody's result.
In the case of torsion-free groups, the situation is simplified, as the accessibility results are not needed and one instead uses Grushko's theorem about the rank of a free product. In this setting, as noted in the original Muller and Schupp paper, the Muller–Schupp theorem says that a finitely generated torsion-free group has context-free word problem if and only if that group is free.
In a subsequent related paper, Muller and Schupp proved that a "finitely generated" graph Γ has finitely many end isomorphism types if and only if Γ is the transition graph of a push-down automaton. As a consequence, they showed that the monadic theory of a "context-free" graph (such as the Cayley graph of a virtually free group) is decidable, generalizing a classic result of Rabin for binary trees. Later Kuske and Lohrey proved that virtually free groups are the only finitely generated groups whose Cayley graphs have decidable monadic theory.
Bridson and Gilman applied the Muller–Schupp theorem to show that a finitely generated group admits a "broom-like" combing if and only if that group is virtually free.
Sénizergues used the Muller–Schupp theorem to show that the isomorphism problem for finitely generated virtually free groups is primitive recursive.
Gilman, Hermiller, Holt and Rees used the Muller–Schupp theorem to prove that a finitely generated group G is virtually free if and only if there exist a finite generating set X for G and a finite set of length-reducing rewrite rules over X whose application reduces any word to a geodesic word.
Ceccherini-Silberstein and Woess consider the setting of a finitely generated group G with a finite generating set X, and a subgroup K of G such that the set of all words over the alphabet Σ representing elements of K is a context-free language.
Generalizing the setting of the Muller–Schupp theorem, Brough studied groups with poly-context-free word problem, that is where the word problem is the intersection of finitely many context-free languages. Poly-context-free groups include all finitely generated groups commensurable with groups embeddable in a direct product of finitely many free groups, and Brough conjectured that every poly-context-free group arises in this way. Ceccherini-Silberstein, Coornaert, Fiorenzi, Schupp, and Touikan introduced the notion of a multipass automaton, which are nondeterministic automata accepting precisely all the finite intersections of context-free languages. They also obtained results providing significant evidence in favor of the above conjecture of Brough.
Nyberg-Brodda generalised the Muller–Schupp theorem from groups to "special monoids", a class of semigroups containing, but strictly larger than, the class of groups, characterising the special monoids with context-free word problem as precisely those whose maximal subgroup is virtually free.
Subsequent to the 1983 paper of Muller and Schupp, several authors obtained alternate or simplified proofs of the Muller–Schupp theorem.
See also
Infinite tree automaton
Word problem (mathematics)
Formal language
References
External links
Context-free groups and their structure trees, expository talk by Armin Weiß
Geometric group theory
Formal languages | Muller–Schupp theorem | [
"Physics",
"Mathematics"
] | 1,829 | [
"Geometric group theory",
"Group actions",
"Formal languages",
"Mathematical logic",
"Symmetry"
] |
54,414,446 | https://en.wikipedia.org/wiki/Graph%20matching | Graph matching is the problem of finding a similarity between graphs.
Graphs are commonly used to encode structural information in many fields, including computer vision and pattern recognition, and graph matching is an important tool in these areas, where it is commonly assumed that the comparison is between a data graph and a model graph.
The case of exact graph matching is known as the graph isomorphism problem. The problem of exactly matching a graph to a part of another graph is called the subgraph isomorphism problem.
Inexact graph matching refers to matching problems in which exact matching is impossible, e.g., when the numbers of vertices in the two graphs are different. In this case it is required to find the best possible match. For example, in image recognition applications, image segmentation typically produces data graphs whose numbers of vertices are much larger than those of the model graphs they are expected to match against. In the case of attributed graphs, even if the numbers of vertices and edges are the same, the matching may still be only inexact.
Two categories of search methods are those based on identifying possible and impossible pairings of vertices between the two graphs, and those that formulate graph matching as an optimization problem. Graph edit distance is one of the similarity measures suggested for graph matching. This class of algorithms is called error-tolerant graph matching.
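As a minimal sketch of exact versus inexact matching (assuming the networkx library; the graphs here are purely illustrative):

```python
import networkx as nx

# Exact matching: the graph isomorphism problem.
g1 = nx.cycle_graph(4)
g2 = nx.relabel_nodes(nx.cycle_graph(4), {0: "a", 1: "b", 2: "c", 3: "d"})
print(nx.is_isomorphic(g1, g2))        # True: same structure, new labels

# Inexact matching: graph edit distance as a similarity measure.
g3 = nx.path_graph(4)                  # 4 vertices, 3 edges
print(nx.graph_edit_distance(g1, g3))  # 1.0: delete one edge of the cycle
```

Note that computing the graph edit distance is NP-hard in general, so exact computations like the one above are only feasible for small graphs.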
See also
String matching
Pattern matching
References
Computational problems in graph theory | Graph matching | [
"Mathematics",
"Technology"
] | 287 | [
"Computational problems in graph theory",
"Computational mathematics",
"Graph theory",
"Computational problems",
"Computer science stubs",
"Computer science",
"Mathematical relations",
"Computing stubs",
"Mathematical problems"
] |
54,416,793 | https://en.wikipedia.org/wiki/Fiber%20network%20mechanics | Fiber network mechanics is a subject within physics and mechanics that deals with the deformation of networks made by the connection of slender fibers. Fiber networks are used to model the mechanics of fibrous materials such as biopolymer networks and paper products. Depending on the mechanical behavior of individual filaments, the networks may be composed of mechanical elements such as Hookean springs, Euler-Bernoulli beams, and worm-like chains. The field of fiber network mechanics is closely related to the mechanical analysis of frame structures, granular materials, critical phenomena, and lattice dynamics.
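A minimal sketch of the Hookean-spring idealization mentioned above (the geometry, spring constant, rest length, and relaxation scheme are hypothetical, not drawn from any cited model): each fiber stores elastic energy (k/2)(|xi − xj| − L0)², and letting the nodes follow the elastic forces drives the network toward mechanical equilibrium.

```python
import numpy as np

# Three nodes joined by three fibers, each modelled as a Hookean spring.
nodes = np.array([[0.0, 0.0], [1.5, 0.0], [0.5, 1.2]])
fibers = [(0, 1), (1, 2), (0, 2)]
k, L0 = 1.0, 1.0   # spring constant and rest length of every fiber

def forces(x):
    f = np.zeros_like(x)
    for i, j in fibers:
        d = x[j] - x[i]
        r = np.linalg.norm(d)
        pull = k * (r - L0) * d / r   # Hookean force along the fiber axis
        f[i] += pull
        f[j] -= pull
    return f

for _ in range(2000):                 # crude gradient-descent relaxation
    nodes += 0.01 * forces(nodes)

for i, j in fibers:                   # edge lengths approach L0
    print(i, j, np.linalg.norm(nodes[j] - nodes[i]))
```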
References
Biophysics
Solid mechanics | Fiber network mechanics | [
"Physics",
"Biology"
] | 123 | [
"Solid mechanics",
"Applied and interdisciplinary physics",
"Biophysics",
"Mechanics"
] |
54,421,490 | https://en.wikipedia.org/wiki/Radiative%20levitation | Radiative levitation is the name given to a phenomenon that causes the spectroscopically derived abundance of heavy elements in the photospheres of hot stars to be very much higher than solar abundance or than the expected bulk abundance; for example, the spectrum of the star Feige 86 has gold and platinum abundances three to ten thousand times higher than solar norms.
The mechanism is that heavier elements have large photon absorption cross-sections when partially ionized (see opacity), so efficiently absorb photons from the radiation coming from the core of the star, and some of the energy of the photons gets converted to outward momentum, effectively 'kicking' the heavy atom towards the photosphere. The effect is strong enough that very hot white dwarfs are significantly less bright in the EUV and X-ray bands than would be expected from a black-body model.
The countervailing process is gravitational settling: in very high gravitational fields, diffusion even in a hot atmosphere is overcome to the point that heavier elements sink unobservably to the bottom while lighter elements settle on top.
See also
Chemically peculiar star
References
Stellar phenomena | Radiative levitation | [
"Physics"
] | 242 | [
"Physical phenomena",
"Stellar phenomena"
] |
54,423,179 | https://en.wikipedia.org/wiki/Trait%C3%A9%20de%20m%C3%A9canique%20c%C3%A9leste | Traité de mécanique céleste () is a five-volume treatise on celestial mechanics written by Pierre-Simon Laplace and published from 1798 to 1825 with a second edition in 1829. In 1842, the government of Louis Philippe gave a grant of 40,000 francs for a 7-volume national edition of the Oeuvres de Laplace (1843–1847); the Traité de mécanique céleste with its four supplements occupies the first 5 volumes.
Tome I. (1798)
Livre I. Des lois générales de l'équilibre et du mouvement
Chap. I. De l'équilibre et de la composition des forces qui agissent sur un point matériel
Chap. II. Du mouvement d'un point matériel
Chap. III. De l'équilibre d'un système de corps
Chap. IV. De l'équilibre des fluides
Chap. V. Principes généraux du mouvement d'un système de corps
Chap. VI. Des lois du mouvement d'un système de corps, dans toutes les relations mathématiquement possibles entre la force et la vitesse
Chap. VII. Des mouvemens d'un corps solide de figure quelconque
Chap. VIII. Du mouvement des fluides
Livre II. De la loi pesanteur universelle, et du mouvement des centres de gravité des corps célestes
Tome II. (1798)
Livre III. De la figure des corps céleste
Livre IV. Des oscillations de la mer et de l'atmosphère
Livre V. Des mouvemens des corps célestes, autour de leurs propre centres de gravité
Tome III. (1802)
Livre VI. Théorie particulières des mouvemens célestes
Livre VII. Théorie de la lune
Tome IV. (1805)
Livre VIII. Théorie des satellites de Jupiter, de Saturne et d'Uranus
Livre IX. Théorie des comètes
Livre X. Sur différens points relatifs au système du monde
This book contains a discussion of continued fractions and a computation of the complementary error function in terms of what came to be called the Laplace continued fraction, 1/(1+q/(1+2q/(1+3q/(...)))).
Tome V. (1825)
Livre XI. De la figure et de la rotation de la terre
Livre XII. De l'attraction et de la répulsion des sphères, et des lois de l'equilibre et du mouvement des fluides élastiques
Livre XIII. Des oscillations des fluides qui recouvrent les planètes
Livre XIV. Des mouvemens des corps célestes autour de leurs centres de gravité
Livre XV. Du mouvement des planètes et des comètes
Livre XVI. Du mouvement des satellites
English translations
During the early nineteenth century at least five English translations of Mécanique Céleste were published. In 1814 the Reverend John Toplis prepared a translation of Book 1 entitled The Mechanics of Laplace. Translated with Notes and Additions. In 1821 Thomas Young anonymously published a further translation into English of the first book; beyond just translating from French to English he claimed in the preface to have translated the style of mathematics:

The translator flatters himself, however, that he has not expressed the author's meaning in English words alone, but that he has rendered it perfectly intelligible to any person, who is conversant with the English mathematicians of the old school only, and that his book will serve as a connecting link between the geometrical and algebraical modes of representation.

The Reverend Henry Harte, a fellow at Trinity College, Dublin translated the entire first volume of Mécanique Céleste, with Book 1 published in 1822 and Book 2 published separately in 1827. Similarly to Bowditch (see below), Harte felt that Laplace's exposition was too brief, making his work difficult to understand:

... it may be safely asserted, that the chief obstacle to a more general knowledge of the work, arises from the summary manner in which the Author passes over the intermediate steps in several of his most interesting investigations.
Bowditch's translation
The famous American mathematician Nathaniel Bowditch translated the first four volumes of the Traité de mécanique céleste but not the fifth volume; however, Bowditch did make use of relevant portions of the fifth volume in his extensive commentaries for the first four volumes.
Somerville's translation
In 1826, it was still felt by Henry Brougham, president of the Society for the Diffusion of Useful Knowledge, that the British reader was lacking a readable translation of Mécanique Céleste. He thus approached Mary Somerville, who began to prepare a translation which would "explain to the unlearned the sort of thing it is - the plan, the vast merit, the wonderful truths unfolded or methodized - and the calculus by which all this is accomplished". In 1830, John Herschel wrote to Somerville and enclosed a copy of Bowditch's 1828 translation of Volume 1 which Herschel had just received. Undeterred, Somerville decided to continue with the preparation of her own work as she felt the two translations differed in their aims; whereas Bowditch's contained an overwhelming number of footnotes to explain each mathematical step, Somerville instead wished to state and demonstrate the results as clearly as possible.
A year later, in 1831, Somerville's translation was published under the title Mechanism of the Heavens. It received great critical acclaim, with complimentary reviews appearing in the Quarterly Review, the Edinburgh Review, and the Monthly Notices of the Royal Astronomical Society.
References
External links
Translation by Nathaniel Bowditch
Volume I, 1829
Volume II, 1832
Volume III, 1834
Volume IV, 1839 with a memoir of the translator by his son
Historical physics publications
Physics books
Mathematics books
1798 non-fiction books
French books
Celestial mechanics | Traité de mécanique céleste | [
"Physics"
] | 1,262 | [
"Celestial mechanics",
"Classical mechanics",
"Astrophysics"
] |
54,423,957 | https://en.wikipedia.org/wiki/Load%20path%20analysis | Load path analysis is a technique of mechanical and structural engineering used to determine the path of maximum stress in a non-uniform load-bearing member in response to an applied load. Load path analysis can be used to minimize the material needed in the load-bearing member to support the design load.
Load path analysis may be performed using the concept of a load transfer index, U*. In a structure, the main portion of the load is transferred through the stiffest route. The U* index represents the internal stiffness of every point within the structure. Consequently, the line connecting the highest U* values is the main load path; in other words, the main load path is the ridge line of the U* distribution (contour). This method of analysis has been verified in physical experimentation.
Load path calculation using U* index
The U* index theory has been validated through two different physical experiments.
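As a rough illustration of the ridge-line idea (a synthetic, hypothetical U* field and a naive crest-following walk; in practice the U* values come from finite element analysis):

```python
import numpy as np

# Synthetic U* field on a grid: stiffness decays from the loaded edge
# (column 0) toward the support, with a crest along the middle row.
ny, nx = 20, 40
y, x = np.mgrid[0:ny, 0:nx]
u_star = np.exp(-((y - ny / 2.0) ** 2) / 20.0) * (1.0 - x / float(nx))

# Trace the ridge line: start at the loaded edge and, column by column,
# step to the neighbouring row with the largest U* value.
row = int(np.argmax(u_star[:, 0]))
ridge = []
for col in range(nx):
    lo, hi = max(0, row - 1), min(ny, row + 2)
    row = lo + int(np.argmax(u_star[lo:hi, col]))
    ridge.append((row, col))
print(ridge[:5])  # the traced path follows the row of maximum U*
```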
Since the U* index predicts the load paths based on structural stiffness, it is not affected by stress concentration problems. Load transfer analysis using the U* index is a new design paradigm for vehicle structural design. It has been applied in design analysis and optimization by automotive manufacturers such as Honda and Nissan.
In the image to the right, a structural member with a central hole is placed under a bearing load. Figure (a) shows the U* distribution and the resultant load paths, while figure (b) shows the von Mises stress distribution. As can be seen from figure (b), higher stresses are observed in the vicinity of the hole. However, it is unreasonable to conclude that the main load passes through the area of stress concentration, because the hole (which has no material) is not important for carrying the load. Stress concentration caused by structural singularities such as a hole or a notch makes load transfer analysis more difficult.
References
Mechanical engineering | Load path analysis | [
"Physics",
"Engineering"
] | 446 | [
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
44,380,573 | https://en.wikipedia.org/wiki/Muscle%20tissue%20engineering | Muscle tissue engineering is a subset of the general field of tissue engineering, which studies the combined use of cells and scaffolds to design therapeutic tissue implants. Within the clinical setting, muscle tissue engineering involves the culturing of cells from the patient's own body or from a donor, development of muscle tissue with or without the use of scaffolds, then the insertion of functional muscle tissue into the patient's body. Ideally, this implantation results in full regeneration of function and aesthetics within the patient's body. Outside the clinical setting, muscle tissue engineering is involved in drug screening, hybrid mechanical muscle actuators, robotic devices, and the development of engineered meat as a new food source.
Innovations within the field of muscle tissue engineering seek to repair and replace defective muscle tissue, thus restoring normal function. The practice begins with harvesting and isolating muscle cells from a donor site, then culturing those cells in media. The cultured cells form cell sheets and finally muscle bundles, which are implanted into the patient.
Overview
Muscle is a naturally aligned organ, with individual muscle fibers packed together into larger units called muscle fascicles. The uniaxial alignment of muscle fibers allows them to simultaneously contract in the same direction and properly propagate force on the bones via the tendons. Approximately 45% of the human body is composed of muscle tissue, and this tissue can be classified into three different groups: skeletal muscle, cardiac muscle, and smooth muscle. Muscle plays a role in structure, stability, and movement in mammalian bodies. The basic unit for a muscle is a muscle fiber, which is made up of myofilaments actin and myosin. This muscle fiber contains sarcomeres which generate the force required for contraction.
A major focus of muscle tissue engineering is to create constructs with the functionality of native muscle and ability to contract. To this end, alignment of the tissue engineered construct is extremely important. It has been shown that cells grown on substrates with alignment cues form more robust muscle fibers. Several other design criteria considered in muscle tissue engineering include the scaffold porosity, stiffness, biocompatibility, and degradation timeline. Substrate stiffness should ideally be in the myogenic range, which has been shown to be 10-15 kPa.
The purpose of muscle tissue engineering is to reconstruct functional muscular tissue that has been lost through traumatic injury, tumor ablation, or functional damage caused by myopathies. Until now, the only method used to restore muscular tissue function and aesthetics has been free tissue transfer. Full function is typically not restored, however, and the procedure results in donor site morbidity and volume deficiency. The success of tissue engineering in the regeneration of skin, cartilage, and bone suggests that similar success can be found in engineering muscular tissue. Early innovations in the field yielded in vitro cell culturing and regeneration of muscle tissue for implantation in the body, but advances in recent years have shown that there may be potential for in vivo muscle tissue engineering using scaffolding.
Etymology
The term muscle tissue engineering, denoting a subset of the much larger discipline of tissue engineering, was first coined in 1988, when Herman Vandenburgh, a surgeon, cultured avian myotubes in collagen-coated culture plates. This started a new era of in vitro tissue engineering. The idea was officially adopted in 1988 in Vandenburgh's publication titled Maintenance of Highly Contractile Tissue-Cultured Avian Skeletal Myotubes in Collagen Gel. In 1989, the same group determined that mechanical stimulation of myoblasts in vitro facilitates engineered skeletal muscle growth.
History
19th Century
A rudimentary understanding of muscle tissue began to develop as early as 1835, when embryonic myogenesis was first described. In the 1860s, it was shown that muscle is capable of regeneration and an experimental regeneration was conducted to better understand the specific method by which this was done in vivo. Following this discovery, muscle generation and degeneration in man were described for the first time. Researchers consequently assessed several aspects of muscle regeneration in vivo, including "the continuous or discontinuous regeneration depending on tissue type" to increase functional understanding of the phenomena. It was not until the 1960s, however, that researchers determined what components were required for muscle regeneration.
20th Century
In 1957, it was determined via DNA content that myoblasts proliferate, but myonuclei do not. Following this discovery, satellite cells were experimentally identified by Mauro and Katz as stem cells which sit on the surface of the myofibre and have the capability to differentiate into muscle cells. Satellite cells provide myoblasts for growth, differentiation, and repair of muscle tissue. Muscle tissue engineering officially began as a discipline in 1988, when Herman Vandenburgh cultured avian myotubes in collagen-coated culture plates. Following this development, it was found in 1989 that mechanical stimulation of myoblasts in vitro facilitates engineered skeletal muscle growth. Most of the modern innovations in the field of muscle tissue engineering have come in the 21st century.
21st Century
Between 2000 and 2010, the effects of volumetric muscle loss (VML) were assessed as they pertain to muscle tissue engineering. VML can be caused by a variety of injuries or diseases, including general trauma, postoperative damage, cancer ablation, congenital defects, and degenerative myopathy. Although muscle contains a stem cell population, called satellite cells, capable of regenerating small muscle injuries, the muscle damage in VML is so extensive that it overwhelms muscle's natural regenerative capabilities. Currently, VML is treated with an autologous muscle flap or graft, but there are various problems associated with this procedure. Donor site morbidity, lack of donor tissue, and inadequate vascularization all limit the ability of doctors to adequately treat VML. The field of muscle tissue engineering attempts to address this problem through the design of a functional muscle construct that can be used to treat the damaged muscle instead of harvesting an autologous muscle flap from elsewhere on the patient's body.
Research conducted between 2000 and 2010 informed the conclusion that functional analysis of a tissue engineered muscle construct is important to illustrate its potential to help regenerate muscle. A variety of assays are generally used to evaluate a tissue engineered muscle construct including immunohistochemistry, RT-PCR, electrical stimulation and resulting peak-to-peak voltage, scanning electron microscope imaging, and in vivo response.
The most recent advances in the field include cultured meat, biorobotic systems, and biohybrid implants for regenerative medicine or disease modeling.
Examples
Most current advances in muscle tissue engineering involve skeletal muscle, so most of the examples below concern skeletal muscle engineering and regeneration; a few examples of smooth muscle and cardiac muscle tissue engineering are also included.
Skeletal Muscle Tissue Engineering (SMTE)
Avian myotubes: highly contractile skeletal myotubes cultured and differentiated in vitro on collagen-coated culture plates
Cultured Meat (CM): cultured, cell based, lab grown, in vitro, clean meat obtained through cellular agriculture
Human Bio-Artificial Muscle (BAM): formed through a seven day, in vitro tissue engineering procedure in which human myoblasts fuse and differentiate into aligned myofibres in an extracellular matrix; these constructs are used for intramuscular drug injection to replace pre- or non-clinical injection models and complement animal studies
Myoblast transfer in the treatment of Duchenne's Muscular Dystrophy (DMD): an in vivo technique to replace dystrophin, a skeletal muscle protein which is deficient in patients with DMD; myoblasts fuse with muscle fibers and contribute their nuclei which then replace deficient gene products in the host nuclei
Autologous hematopoietic stem cell transplantation (AHSCT) as a method for treating Multiple Sclerosis (MS): an in vivo technique for treating MS in which the immune system is destroyed and then reconstituted with hematopoietic stem cells; it has been shown to reduce the effects of MS for 4-5 years in 70-80% of patients
Volumetric muscle loss repair using Muscle Derived Stem Cells (MDSCs): an in situ technique for muscle loss repair in which patients have suffered from trauma or combat injuries; MDSCs cast in an in situ fibrin gel were capable of forming new myofibres that became engrafted in a muscle defect that was created by a partial-thickness wedge resection in the tibialis anterior muscle of laboratory mice
Development of skeletal muscle organoids to model neuromuscular disorders and muscular dystrophies: an in vitro technique in which human pluripotent stem cells (hPSCs) are differentiated into functional 3D human skeletal muscle organoids (hSkMOs); hPSCs were guided towards the paraxial mesodermal lineage, which then gives rise to myogenic progenitor cells and myoblasts, in well plates with no scaffold; the organoids were round, uniformly sized, and exhibited homogeneous morphology upon full development, and were shown to successfully model muscle development and regeneration
Bioprinted Tibialis Anterior (TA) Muscle in Rats: an in vitro technique in which bioengineered skeletal muscle tissue composed of human primary muscle progenitor cells (hMPCs) was fabricated; upon implantation, the bioprinted tissue reached 82% functional recovery in rodent models of the TA muscle
Smooth Muscle Tissue Engineering
Autologous MDSC Injections to Treat Urinary Incontinence: an in vivo injection technique for pure stress incontinence in female subjects in which defective muscle cells were replaced with stem cells that would differentiate to become functioning smooth muscle cells in the urinary sphincter
Vascular Smooth Muscle regeneration using induced pluripotent stem cells (iPSCs): an in vitro technique in which iPSCs were differentiated into proliferative smooth muscle cells using a nanofibrous scaffold.
Formation of coiled three-dimensional (3D) cellular constructs containing smooth muscle-like cells differentiated from dedifferentiated fat (DFAT) cells: an in vitro technique for controlling the 3D organization of smooth muscle cells in which DFAT cells are suspended in a mixture of extracellular proteins with optimized stiffness so that they differentiate into smooth muscle-like cells with specific 3D orientation; a muscle tissue engineered construct for a smooth muscle cell precursor
Cardiac Muscle Tissue Engineering
Intracoronary Administration of Bone Marrow-Derived Progenitor Cells: an in vivo technique in which progenitor cells derived from bone marrow are administered into an infarct artery to differentiate into functional cardiac cells and recover contractile function after an acute, ST-elevation myocardial infarction, thus preventing adverse remodeling of the left ventricle.
Human Cardiac Organoids: an in vitro, scaffold-free technique for producing a functioning cardiac organoid; cardiac spheroids made from a mixed cell population derived from human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs), cultured on gelatin-coated well plates without a scaffold, resulted in the generation of a functioning cardiac organoid
Methods
Muscle tissue engineering methods are consistently categorized across the literature into three groups: in situ, in vivo, and in vitro muscle tissue engineering. Each of these categories is assessed below, with detail on the specific practices used in each one.
In Situ
“In situ” is a latin phrase whose literal translation is “on site.” It is a term that has been used in the English language since the mid-eighteenth century to describe something that is in its original place or position. In the context of muscle tissue engineering, in situ tissue engineering involves the introduction and implantation of an acellular scaffold into the site of injury or degenerated tissue. The goal of in situ muscle tissue engineering is to encourage host cell recruitment, natural scaffold formation, and proliferation and differentiation of host cells. The main idea which in situ muscle tissue engineering is based on is the self-healing, regenerative properties of the mammalian body. The primary method for in situ muscle tissue engineering is described in the following section:
As described in Biomaterials for In Situ Tissue Regeneration: A Review (Abdulghani & Mitchell, 2019), in situ muscle tissue engineering requires very specific biomaterials which have the capability to recruit stem cells or progenitor cells to the site of the muscle defect, thus allowing regeneration of tissue without implantation of seed cells. The key to a successful scaffold is appropriate properties (i.e., biocompatibility, mechanical strength, elasticity, biodegradability) and the correct shape and volume for the specific muscle defect in which it is implanted. This scaffold should effectively mimic the cellular response of the host tissue, and Mann et al. have found that polyethylene glycol-based hydrogels are very successful as in situ biomaterial scaffolds because they are chemically modified to be degraded by biological enzymes, thus encouraging cell migration and proliferation. Beyond polyethylene glycol-based hydrogels, synthetic biomaterials such as PLA and PCL are successful in situ scaffolds as they can be fully customized to each specific patient. These materials' stiffness, degradation, and porosity properties are tailored to the degenerated tissue's topology, volume, and cell type so as to provide the optimal environment for host cell migration and proliferation.
In situ engineering promotes natural regeneration of damaged tissue by effectively mimicking the mammalian body's own wound healing response. The use of both biological and synthetic biomaterials as scaffolds promotes host cell migration and proliferation directly to the defect site, thus decreasing the amount of time required for muscle tissue regeneration. Furthermore, in situ engineering effectively bypasses the risk of implant rejection by the immune system due to the biodegradable qualities in each scaffold.
In Vivo
"In vivo" is a latin phrase whose literal translation is "in a living thing." This term is used in the English language to describe a process which occurs inside of a living organism. In the realm of muscle tissue engineering, this term applies to the seeding of cells into a biomaterial scaffold immediately prior to implantation. The goal of in vivo muscle tissue engineering is to create a cell-seeded scaffold that once implanted into the wound site will preserve cell efficacy. In vivo methods provide a greater amount of control over cell phenotype, mechanical properties, and functionality of the tissue construct.
As described in Skeletal Muscle Tissue Engineering: Biomaterials-Based Strategies for the Treatment of Volumetric Muscle Loss (Carnes & Pins, 2020), in vivo muscle tissue engineering builds on the concept of in situ engineering by not only implanting a biomaterial scaffold with specific mechanical and chemical properties, but also seeding the scaffold with the specific cell type needed for regeneration of the tissue. Reid et al. describe common scaffolds utilized in the in vivo muscle tissue engineering process. These scaffolds include hydrogels infused with hyaluronic acid (HA), gelatin silk fibroin, and chitosan, as these materials promote muscle cell migration and proliferation. For example, chitosan, a biodegradable and renewable material derived from chitin, has unique mechanical properties which support smooth muscle cell differentiation and retention in the tissue regeneration site. When this scaffold is further functionalized with Arginine-Glycine-Aspartic Acid (RGD), it provides a better growth environment for smooth muscle cells. Another scaffold commonly used is decellularized extracellular matrix (ECM) tissue, as it is fully biocompatible, biodegradable, and contains all of the necessary protein binding sites for full functional recovery and integration of muscle tissue. Once seeded with cells, this material becomes an optimal environment for cell proliferation and integration with existing tissue, as it effectively mimics the environment in which tissue naturally regenerates in the mammalian body.
The in vivo muscle tissue engineering technique provides the wound healing process with a "head start" in development, as the body no longer needs to recruit host cells to begin regeneration. This approach also bypasses the need for cell manipulation prior to implantation, thus ensuring that the cells maintain all of their mechanical and functional properties.
In Vitro
"In vitro" is a latin phrase whose literal translation is "within the glass." This term is used in the English language to describe a process which occurs outside of a living organism. Within the context of muscle tissue engineering, the term "in vitro" applies to the seeding of cells into a biomaterial scaffold with growth factors and nutrients, then culturing these constructs until a functional construct, such as myofibres, is developed. These developed constructs are then implanted into the wound site with the expectation that they will continue to proliferate and integrate into host muscle tissue. The goal of in vitro muscle tissue engineering is to increase the functionality of the tissue before it is ever implanted into the body, thus increasing mechanical properties and potential to thrive in the host body.
Abdulghani & Mitchell describe in vitro muscle tissue engineering as a concept which utilizes the same basic strategies as in vivo tissue engineering. The difference between the two methods, however, is the development of a fully functional tissue engineered muscle graft (TEMG) that occurs in the in vitro technique. In vitro muscle tissue engineering includes the seeding of cells onto a biomaterial scaffold, but goes a step further by adding growth factors and biochemical and biophysical cues to promote cell growth, proliferation, differentiation, and finally regeneration into a functional muscle tissue construct. Typically, in vitro scaffolds contain specific surface features which guide the direction of cell proliferation. They are usually fibrous with aligned pores, as these features encourage cell adhesion during regeneration. Beyond the types of scaffolds used, an especially important aspect of this technique is the electrical and mechanical stimulation which mimics the natural regeneration environment and encourages the expansion of intracellular communication pathways. Before TEMGs are introduced into the wound defect, they must be vascularized to promote proper integration with the host tissue. To achieve vascularization, researchers typically seed a scaffold with multiple cell types in order to develop both muscle tissue and vascular pathways. This process prevents rejection of the TEMG upon implantation, as it is able to effectively thrive in the host tissue environment. There is always a risk of immune rejection when implanting fully developed tissue, though, so this method of tissue regeneration is the most closely monitored post-implantation.
The in vitro muscle tissue engineering technique is used to create muscle tissue with more successful functional and mechanical properties. According to Carnes & Pins in Skeletal Muscle Tissue Engineering: Biomaterials-Based Strategies for the Treatment of Volumetric Muscle Loss, this approach develops a microenvironment that is more conducive to enhancing tissue regeneration upon implantation, thus restoring full functionality to patients.
Future Work
Current muscle tissue engineering trends lead towards the development of skeletal muscle regeneration techniques over smooth muscle or cardiac muscle regeneration. A current trend found throughout literature is the treatment of Volumetric Muscle Loss (VML) using muscle tissue engineering techniques. VML is the result of abrupt loss of skeletal muscle due to surgical resection, trauma, or combat injuries. It has been observed that tissue grafts, the current treatment plan, do not restore full functionality or aesthetic integrity to the site of injury. Muscle tissue engineering offers an optimistic possibility for patients, as in situ, in vivo, and in vitro techniques have been proven to restore functionality to muscle tissue in the wound site. Methods being explored include acellular scaffold implantation, cell-seeded scaffold implantation, and in vitro fabrication of muscle grafts. Preliminary data from each of these methods promises a solution for patients suffering from VML.
Beyond specific technological advances in the field of muscle tissue engineering, researchers are working to establish a connection with the larger umbrella that is tissue engineering.
References
Wikipedia Student Program
Tissue engineering | Muscle tissue engineering | [
"Chemistry",
"Engineering",
"Biology"
] | 4,178 | [
"Biological engineering",
"Cloning",
"Chemical engineering",
"Tissue engineering",
"Medical technology"
] |
44,381,435 | https://en.wikipedia.org/wiki/China%20Dark%20Matter%20Experiment | The China Dark Matter Experiment (CDEX) is a search for dark matter WIMP particles at the China Jinping Underground Laboratory. CDEX was the first experiment to be hosted at CJPL, beginning construction of its shield in June 2010, the same month that laboratory construction was completed, and before CJPL's official opening on 12 December.
CDEX uses a p-type point-contact germanium detector surrounded by NaI(Tl) crystals, similar to the CoGeNT experiment. The CDEX-0 prototype was used to develop the current CDEX-1 detector, which has a detector mass of roughly 1 kg. Future plans include scaling up to CDEX-10 and CDEX-1T.
CDEX-1 obtained its first low-mass results in 2013 and published limits on WIMP masses of 6–20 GeV in 2014.
References
Experiments for dark matter search
Physics experiments | China Dark Matter Experiment | [
"Physics"
] | 180 | [
"Dark matter",
"Physics experiments",
"Unsolved problems in physics",
"Experiments for dark matter search",
"Experimental physics",
"Particle physics",
"Particle physics stubs"
] |
44,381,774 | https://en.wikipedia.org/wiki/Gamma-Re%20Transition%20Model | Gamma-Re (γ-Re) transition model is a two equation model used in Computational Fluid Dynamics (CFD) to modify turbulent transport equations to simulate laminar, laminar-to-turbulent and turbulence states in a fluid flow. The Gamma-Re model does not intend to model the physics of the problem but attempts to fit a wide range of experiments and transition methods into its formulation. The transition model calculated an intermittency factor that creates (or extinguishes) turbulence by slowly introducing turbulent production at the laminar-to-turbulent transition location.
Principle
The goal of developing the gamma-Re (γ-Reθ) transition model was to develop a transition model based on local variables which could be easily implemented into modern CFD codes with unstructured grids and massively parallel execution. The majority of earlier transition models, such as the e^N method, need to know the structure of the boundary layer and require integration along it; both concepts are hard to implement in three dimensions along many subdivisions of a grid. Another key insight in the formulation of this model is that the Reynolds vorticity number can be related to the Reynolds transition onset number, so there is a local way to determine the transition location. The gamma-Re transition model has two equations and is based on the two-equation turbulence models in the context of turbulence modeling. This way both local and global trends can be modelled. The intermittency, or gamma, determines the percentage of time the flow is turbulent (0 = fully laminar, 1 = fully turbulent). The intermittency acts on the production term of the turbulent kinetic energy transport equation in the SST model to simulate laminar/turbulent flows.
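The role of the intermittency can be sketched in a few lines of code. This is a minimal illustration, not the full Langtry–Menter formulation: the transport equations for γ and the transition momentum-thickness Reynolds number are omitted, and the production value and γ values are assumed numbers chosen for demonstration.

```python
def effective_production(P_k, gamma):
    """Scale the turbulent-kinetic-energy production term of the SST model
    by the intermittency gamma: 0 suppresses production (laminar flow),
    1 recovers the full turbulent production."""
    assert 0.0 <= gamma <= 1.0
    return gamma * P_k

# Upstream of transition gamma stays small, so turbulence cannot grow;
# past the transition-onset location gamma ramps up toward 1.
for gamma in (0.02, 0.5, 1.0):
    print(f"gamma = {gamma:4.2f} -> effective P_k = {effective_production(250.0, gamma):6.1f}")
```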
Standard Gamma-Theta model
For intermittency
For Transition Momentum Thickness Reynolds Number
Modification to SST Turbulence Model
Applications
The model was found appropriate for the prediction of an expanding swirl flow.
Other models
Following are some other models which are commonly employed:
e^N method
low-Reynolds-number models
References
Computational fluid dynamics
Scientific models | Gamma-Re Transition Model | [
"Physics",
"Chemistry"
] | 401 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
44,390,229 | https://en.wikipedia.org/wiki/Circuit%20Scribe | Circuit Scribe is a ball-point pen containing silver conductive ink one can use to draw circuits instantly on flexible substrates like paper. Circuit Scribe made its way onto Kickstarter (an online site where people can fund projects) on November 19, 2013, with its goal of raising $85,000 for the manufacturing of the first batch of pens. By December 31, 2013, Circuit Scribe was able to raise a total of $674,425 with 12,277 'backers' or donors.
Similarly to drawing a picture, users can use a Circuit Scribe pen to draw lines on a simple piece of paper. They can then attach special electrical components to the drawn lines, which allows electrical current to run through the components. This replaces the use of breadboards and wires.
Development
A team of researchers in Electroninks Incorporated, a startup company located at Research Park of the University of Illinois at Urbana-Champaign, created a water-based, non-toxic conductive ink that was noted as the Invention of the Month by Popular Science. The team began by developing a prototype using pens from a different company and replacing the ink with their special silver ink. Once completed, they started a Kickstarter campaign to earn funding for a mass production of the final form of the pens.
Team
The research team consists of S. Brett Walker, Jennifer A. Lewis, Michael Bell, Analisa Russo, and Nancy Beardsley. Walker is the CEO of Electroninks and a co-founder along with Lewis, Bell, and the director of product development, Russo. Bell is also the chief operating officer, while Beardsley provides technical support and user experience.
Prototype
The prototype pens are hand-cleaned Sakura Gelly Roll Metallic pens. The ink is replaced with the researchers' silver conductive ink. In order to have the right amount of ink flow to make smooth lines, the ink is precisely tuned.
Ink
The ink is created by placing an aqueous solution of silver nitrate into a flask of water combined with polyacrylic acid (PAA) and diethanolamine (DEA), the capping agent and reducing agent, respectively. After about twenty hours, the silver nitrate is reduced, forming particles with a diameter of about 5 nanometers. In order to enlarge the particles to an average diameter of about 400 nanometers, the flask is placed on a heated sonicator, a device that produces high-intensity ultrasound. Once cooled, the solution is poured into a larger flask and the thick precipitate (an insoluble solid) that forms is scraped out. From there, ethanol is added to coagulate the particles, causing them to clump together. Most of the supernatant, the liquid lying above the layer of precipitate, is then poured off so the remaining liquid can be centrifuged, separating the particles. After centrifugation, the particles are placed back in water and forced through a syringe filter to remove unwanted particles from the solution. Next, hydroxyethyl cellulose (HEC) is added as a binder and the entire mixture is homogenized. The solvents are allowed to evaporate until the ink has the desired viscosity, or thickness.
Once the ink is created, a roller ball pen is dismantled and cleaned so the ink can be placed inside using a flat-tip spatula. After replacing the roller ball tip, a couple of blasts of compressed air are shot from the back end to force the ink into the tip. The outer cover of the pen is replaced, and the prototype of the Circuit Scribe is created. From there, the team launched its Kickstarter campaign.
Kickstarter Campaign
Circuit Scribe launched its campaign on Kickstarter to receive funding and included a list of pledge tiers in which people could donate a certain amount and receive a corresponding reward:
Pledge $5+: STEM Education Workbook
Pledge $20+: Circuit Scribe
Pledge $25+: Early Bird Basic Kit
Pledge $30+: Basic Kit
Pledge $35+: Early Bird Basic Kit + Book
Pledge $40+: Basic Kit + Book
Pledge $45+: Early Bird Maker Kit
Pledge $50+: Maker Kit
Pledge $90+: Gift Pack
Pledge $100+: Developer Kit
Pledge $175+: Circuit Scribe Bundle
Pledge $190+: Early Bird Classroom Kit
Pledge $200+: Classroom Kit
Pledge $500+: Component Designer
Pledge $5,000+: Electroninks Show & Tell
They also included stretch goals which include:
$250,000: Circuit Scribe Edu Platform & STEM Outreach
$650,000: Magnetic Sheet for Kit Activity Books & Maker Notebooks
$1,000,000: Resistor Pen
Modules
Circuit Scribe can be used to draw circuits that connect different types of modules, or individual components, such as:
Power
USB Power Adapter: Allows user to power the drawn circuits with either a USB port or a wall outlet.
9V Battery Adapter: Supplies a nine-volt power to the circuits.
Input
SPST Switch: An on/off switch that allows users to control the electrical circuit.
DPDT Switch: Two switches that direct the flow of current through the circuit.
Light Sensor: Shines light on the phototransistor to control an output.
Potentiometer 10k Ohm: A knob that controls the dimness, volume, and speed of a circuit.
Connect
2-Pin Adapter: Allows user to connect resistors, capacitors, or sensors to circuits.
NPN Transistor: An electrical amplifier that converts small signals into large currents.
Blinker: Blinks output components on and off at adjustable rates.
DIY Boards: Allows user to solder 2, 4, 6, or 8 pin components to the board.
Connector Cables: Connects the paper circuit to DIY hardware platform.
Output
Bi-LED: Two LEDs in one that can flip directions to change the color.
Buzzer: Vibrates in response to the voltage.
Motor: Rotates with an applied voltage.
RGB LED: A red, blue, and green LED.
Uses
The Circuit Scribe allows the user to draw electrical circuits in any shape with its silver ink. With this aspect and its ability to connect different types of modules, it is possible to produce simple designs like an Arduino, an open-source electronic platform based on hardware and software.
Arduino
Circuit Scribe allows users to create a paper Arduino (or a 'paperduino'), which is demonstrated by the research team. The team first found the schematics on the Arduino website and modified them so that they would work on a pen plotter. With a few modifications, they arranged the components and traces so that the board could be printed in a single layer. The trace widths are set to 0.6 millimeters to match the width of the pen traces, with a minimum spacing of 0.1 millimeters. The pen plotter only prints lines and does not fill the patterns, so they designed large pads out of concentric circles and built up the pads for the components with some extra line features. This allows for stronger conductivity. It is important to put chips close together to minimize the line resistance between them, but not so close that it is difficult to place the components. They used components from the 1206 package, which are a bit larger than the original components from the Arduino. Before exporting the layout, they deselected every layer of the file except the top layer of traces. After exporting the file in .dxf format, both the wire width and the fill area options were deselected and the files were saved. Finally, the team measured the size of the board layout and dragged it onto a new sheet in Silhouette Studio. From there, the vertical height was adjusted to 2.945 inches and the speed was set to 1 in order to lay down the most ink when printed. The team went on to place components like resistors, capacitors, and LEDs on the printed silver ink. Components can be attached using tweezers and super glue, but can be reinforced using conductive epoxy.
Extras
Resistor Pen
As posted on their Kickstarter campaign, the creators of Circuit Scribe planned to develop resistor pens one could use to draw resistors the way one uses the Circuit Scribe to draw circuits. Although their stretch goal of $1,000,000 was not met by the campaign deadline, the team still managed to create them.
References
Electrical engineering
Pens | Circuit Scribe | [
"Engineering"
] | 1,745 | [
"Electrical engineering"
] |
44,390,248 | https://en.wikipedia.org/wiki/Nanoelectromechanical%20relay | A nanoelectromechanical (NEM) relay is an electrically actuated switch that is built on the nanometer scale using semiconductor fabrication techniques. They are designed to operate in replacement of, or in conjunction with, traditional semiconductor logic. While the mechanical nature of NEM relays makes them switch much slower than solid-state relays, they have many advantageous properties, such as zero current leakage and low power consumption, which make them potentially useful in next generation computing.
A typical NEM relay requires a potential on the order of tens of volts in order to "pull in", and uncoated contacts can have contact resistances on the order of gigaohms. Coating contact surfaces with platinum can reduce the achievable contact resistance to as low as 3 kΩ. Compared to transistors, NEM relays switch relatively slowly, on the order of nanoseconds.
Operation
A NEM relay can be fabricated in two-, three-, or four-terminal configurations. A three-terminal relay is composed of a source (input), a drain (output), and a gate (actuation terminal). Attached to the source is a cantilevered beam that can be bent into contact with the drain in order to make an electrical connection. When a significant voltage differential is applied between the beam and the gate, and the electrostatic force overcomes the elastic force of the beam enough to bend it into contact with the drain, the device "pulls in" and forms an electrical connection. In the off position, the source and drain are separated by an air gap. This physical separation allows NEM relays to have zero current leakage and very sharp on/off transitions.
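To make the pull-in mechanism concrete, the sketch below estimates the pull-in voltage using the classic lumped spring/parallel-plate actuator result V_PI = sqrt(8 k g0³ / (27 ε0 A)). The stiffness, gap, and electrode area are assumed illustrative values, not parameters of any published device.

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m

def pull_in_voltage(k, g0, area):
    """Pull-in voltage of a lumped spring/parallel-plate electrostatic actuator.

    k: effective beam stiffness (N/m), g0: actuation gap (m), area: electrode area (m^2).
    """
    return math.sqrt(8.0 * k * g0**3 / (27.0 * EPS0 * area))

# Assumed nanoscale geometry: 10 N/m beam, 60 nm gap, 0.1 um x 1 um electrode.
print(f"V_pull-in ~ {pull_in_voltage(10.0, 60e-9, 0.1e-6 * 1e-6):.0f} V")
```

With these assumed numbers the estimate comes out to roughly 27 V, consistent with the "tens of volts" quoted above.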
The nonlinear nature of the electric field, and adhesion between the beam and drain cause the device to "pull out" and lose connection at a lower voltage than the voltage at which it pulls in. This hysteresis effect means there is a voltage between the pull in voltage, and the pull out voltage that will not change the state of the relay, no matter what its initial state is. This property is very useful in applications where information needs to be stored in the circuit, such as in static random-access memory.
Fabrication
NEM relays are usually fabricated using surface micromachining techniques typical of microelectromechanical systems (MEMS). Laterally actuated relays are constructed by first depositing two or more layers of material on a silicon wafer. The upper structural layer is photolithographically patterned in order to form isolated blocks of the uppermost material. The layer below is then selectively etched away, leaving thin structures, such as the relay's beam, cantilevered above the wafer, and free to bend laterally. A common set of materials used in this process is polysilicon as the upper structural layer, and silicon dioxide as the sacrificial lower layer.
NEM relays can be fabricated using a back end of line compatible process, allowing them to be built on top of CMOS. This property allows NEM relays to be used to significantly reduce the area of certain circuits. For example, a CMOS-NEM relay hybrid inverter occupies 0.03 μm2, one-third the area of a 45 nm CMOS inverter.
History
The first switch made using silicon micro-machining techniques was fabricated in 1978. Those switches were made using bulk micromachining processes and electroplating. In the 1980s, surface micromachining techniques were developed and the technology was applied to the fabrication of switches, allowing for smaller, more efficient relays.
A major early application of MEMS relays was switching radio frequency signals, for which solid-state relays had poor performance. The switching time for these early relays was above 1 μs. By shrinking dimensions below one micrometer and moving into the nanoscale, MEMS switches have achieved switching times in the range of hundreds of nanoseconds.
Applications
Mechanical computing
Due to transistor leakage, there is a limit to the theoretical efficiency of CMOS logic. This efficiency barrier ultimately prevents continued increases in computing power in power-constrained applications. While NEM relays have significant switching delays, their small size and fast switching speed compared to other relays mean that mechanical computing utilizing NEM relays could prove a viable replacement for typical CMOS-based integrated circuits and break this CMOS efficiency barrier.
A NEM relay switches mechanically about 1000 times slower than a solid-state transistor switches electrically. While this makes using NEM relays for computing a significant challenge, their low resistance would allow many NEM relays to be chained together and switch all at once, performing a single large calculation. On the other hand, transistor logic has to be implemented in small cycles of calculations, because the transistors' high resistance does not allow many of them to be chained together while maintaining signal integrity. Therefore, it would be possible to create a mechanical computer using NEM relays that operates at a much lower clock speed than CMOS logic, but performs larger, more complex calculations during each cycle. This would allow NEM relay-based logic to perform to standards comparable to current CMOS logic.
There are many applications, such as in the automotive, aerospace, or geothermal exploration businesses, in which it would be beneficial to have a microcontroller that could operate at very high temperatures. However, at high temperatures, semiconductors used in typical microcontrollers begin to fail as the electrical properties of the materials they are made of degrade, and the transistors no longer function. NEM relays do not rely on the electrical properties of materials to actuate, so a mechanical computer utilizing NEM relays would be able to operate in such conditions. NEM relays have been successfully tested at up to 500 °C, but could theoretically withstand much higher temperatures.
Field-programmable gate arrays
The zero leakage current, low energy usage, and ability to be layered on top of CMOS make NEM relays a promising candidate for use as routing switches in field-programmable gate arrays (FPGAs). An FPGA utilizing a NEM relay to replace each routing switch and its corresponding static random-access memory block could allow for a significant reduction in programming delay, power leakage, and chip area compared to a typical 22 nm CMOS-based FPGA. This area reduction mainly comes from the fact that the NEM relay routing layer can be built on top of the CMOS layer of the FPGA.
See also
Nanoelectromechanical systems
References
Relays
Microelectronic and microelectromechanical systems
Nanoelectronics | Nanoelectromechanical relay | [
"Materials_science",
"Engineering"
] | 1,380 | [
"Microtechnology",
"Materials science",
"Nanoelectronics",
"Nanotechnology",
"Microelectronic and microelectromechanical systems"
] |
42,944,052 | https://en.wikipedia.org/wiki/Causal%20fermion%20systems | The theory of causal fermion systems is an approach to describe fundamental physics. It provides a unification of the weak, the strong and the electromagnetic forces with gravity at the level of classical field theory. Moreover, it gives quantum mechanics as a limiting case and has revealed close connections to quantum field theory. Therefore, it is a candidate for a unified physical theory.
Instead of introducing physical objects on a preexisting spacetime manifold, the general concept is to derive spacetime as well as all the objects therein as secondary objects from the structures of an underlying causal fermion system. This concept also makes it possible to generalize notions of differential geometry to the non-smooth setting. In particular, one can describe situations when spacetime no longer has a manifold structure on the microscopic scale (like a spacetime lattice or other discrete or continuous structures on the Planck scale). As a result, the theory of causal fermion systems is a proposal for quantum geometry and an approach to quantum gravity.
Causal fermion systems were introduced by Felix Finster and collaborators.
Motivation and physical concept
The physical starting point is the fact that the Dirac equation in Minkowski space has solutions of negative energy which are usually associated to the Dirac sea. Taking the concept seriously that the states of the Dirac sea form an integral part of the physical system, one finds that many structures (like the causal and metric structures as well as the bosonic fields) can be recovered from the wave functions of the sea states. This leads to the idea that the wave functions of all occupied states (including the sea states) should be regarded as the basic physical objects, and that all structures in spacetime arise as a result of the collective interaction of the sea states with each other and with the additional particles and "holes" in the sea. Implementing this picture mathematically leads to the framework of causal fermion systems.
More precisely, the correspondence between the above physical situation and the mathematical framework is obtained as follows. All occupied states span a Hilbert space of wave functions in Minkowski space . The observable information on the distribution of the wave functions in spacetime is encoded in the local correlation operators which in an orthonormal basis have the matrix representation
(where is the adjoint spinor).
In order to make the wave functions into the basic physical objects, one considers the set {F(x) : x ∈ M} as a set of linear operators on an abstract Hilbert space. The structures of Minkowski space are all disregarded, except for the volume measure d⁴x, which is transformed to a corresponding measure ρ on the linear operators (the "universal measure"). The resulting structures, namely a Hilbert space together with a measure on the linear operators thereon, are the basic ingredients of a causal fermion system.
The above construction can also be carried out in more general spacetimes. Moreover, taking the abstract definition as the starting point, causal fermion systems allow for the description of generalized "quantum spacetimes." The physical picture is that one causal fermion system describes a spacetime together with all structures and objects therein (like the causal and the metric structures, wave functions and quantum fields). In order to single out the physically admissible causal fermion systems, one must formulate physical equations. In analogy to the Lagrangian formulation of classical field theory, the physical equations for causal fermion systems are formulated via a variational principle, the so-called causal action principle. Since one works with different basic objects, the causal action principle has a novel mathematical structure where one minimizes a positive action under variations of the universal measure. The connection to conventional physical equations is obtained in a certain limiting case (the continuum limit) in which the interaction can be described effectively by gauge fields coupled to particles and antiparticles, whereas the Dirac sea is no longer apparent.
General mathematical setting
In this section the mathematical framework of causal fermion systems is introduced.
Definition
A causal fermion system of spin dimension n ∈ ℕ is a triple (H, F, ρ) where
H is a complex Hilbert space.
F is the set of all self-adjoint linear operators of finite rank on H which (counting multiplicities) have at most n positive and at most n negative eigenvalues.
ρ is a measure on F.
The measure ρ is referred to as the universal measure.
As will be outlined below, this definition is rich enough to encode analogs of the mathematical structures needed to formulate physical theories. In particular, a causal fermion system gives rise to a spacetime together with additional structures that generalize objects like spinors, the metric and curvature. Moreover, it comprises quantum objects like wave functions and a fermionic Fock state.
The causal action principle
Inspired by the Lagrangian formulation of classical field theory, the dynamics on a causal fermion system is described by a variational principle defined as follows.
Given a Hilbert space H and the spin dimension n, the set F is defined as above. Then for any x, y ∈ F, the product xy is an operator of rank at most 2n. It is not necessarily self-adjoint because in general (xy)* = yx ≠ xy. We denote the non-trivial eigenvalues of the operator xy (counting algebraic multiplicities) by
λ_1^{xy}, …, λ_{2n}^{xy}.
Moreover, the spectral weight |A| of an operator A is defined as the sum of the absolute values of its non-trivial eigenvalues; in particular,
|xy| = Σ_i |λ_i^{xy}| and |(xy)²| = Σ_i |λ_i^{xy}|².
The Lagrangian is introduced by
L(x,y) = |(xy)²| − (1/2n) |xy|².
The causal action is defined by
S(ρ) = ∬_{F×F} L(x,y) dρ(x) dρ(y).
The causal action principle is to minimize S under variations of ρ within the class of (positive) Borel measures under the following constraints:
Boundedness constraint: ∬_{F×F} |xy|² dρ(x) dρ(y) ≤ C for some positive constant C.
Trace constraint: ∫_F tr(x) dρ(x) is kept fixed.
The total volume ρ(F) is preserved.
Here on F one considers the topology induced by the sup-norm on the bounded linear operators on H.
The constraints prevent trivial minimizers and ensure existence, provided that H is finite-dimensional.
This variational principle also makes sense in the case that the total volume ρ(F) is infinite, if one considers variations of bounded variation which leave the total volume unchanged.
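For illustration, the causal Lagrangian can be evaluated numerically in a finite-dimensional toy example. The sketch below uses the equivalent eigenvalue form L(x,y) = (1/4n) Σ_{i,j} (|λ_i| − |λ_j|)² of the formula above; the random rank-2n operators are purely illustrative and are not minimizers of the causal action.

```python
import numpy as np

def nontrivial_eigenvalues(x, y, n):
    """Return the 2n largest-modulus eigenvalues of xy (its rank is at most 2n)."""
    lam = np.linalg.eigvals(x @ y)
    return lam[np.argsort(-np.abs(lam))][: 2 * n]

def causal_lagrangian(x, y, n):
    """L(x,y) = (1/4n) * sum_{i,j} (|l_i| - |l_j|)^2 over the eigenvalues of xy."""
    a = np.abs(nontrivial_eigenvalues(x, y, n))
    return np.sum((a[:, None] - a[None, :]) ** 2) / (4 * n)

def random_point(dim, n, rng):
    """A self-adjoint rank-2n operator with n positive and n negative eigenvalues."""
    v = rng.standard_normal((dim, 2 * n))
    signs = np.diag([1.0] * n + [-1.0] * n)
    return v @ signs @ v.T

rng = np.random.default_rng(0)
x, y = random_point(6, 1, rng), random_point(6, 1, rng)
print("L(x,y) =", causal_lagrangian(x, y, n=1))
```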
Inherent structures
In contemporary physical theories, the word spacetime refers to a Lorentzian manifold (M, g). This means that spacetime is a set of points enriched by topological and geometric structures. In the context of causal fermion systems, spacetime does not need to have a manifold structure. Instead, spacetime is a set of operators on a Hilbert space (a subset of F). This implies additional inherent structures that correspond to and generalize usual objects on a spacetime manifold.
For a causal fermion system (H, F, ρ), we define spacetime M as the support of the universal measure,
M := supp ρ.
With the topology induced by F, spacetime M is a topological space.
Causal structure
For x, y ∈ M, we denote the non-trivial eigenvalues of the operator xy (counting algebraic multiplicities) by λ_1^{xy}, …, λ_{2n}^{xy}.
The points x and y are defined to be spacelike separated if all the λ_j^{xy} have the same absolute value. They are timelike separated if the λ_j^{xy} do not all have the same absolute value and are all real. In all other cases, the points x and y are lightlike separated.
This notion of causality fits together with the "causality" of the above causal action in the sense that if two spacetime points are space-like separated, then the Lagrangian vanishes. This corresponds to the physical notion of causality that spatially separated spacetime points do not interact. This causal structure is the reason for the notion "causal" in causal fermion system and causal action.
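Under the same toy setup as in the previous sketch, the causal relation of two spacetime points can be read off the eigenvalues of xy. The numerical tolerance below is an implementation assumption.

```python
import numpy as np

def causal_relation(x, y, n, tol=1e-9):
    """Spacelike / timelike / lightlike separation of x, y per the definition above."""
    lam = np.linalg.eigvals(x @ y)
    lam = lam[np.argsort(-np.abs(lam))][: 2 * n]   # the 2n non-trivial eigenvalues
    if np.ptp(np.abs(lam)) < tol:                  # all have the same absolute value
        return "spacelike"
    if np.all(np.abs(lam.imag) < tol):             # all eigenvalues real
        return "timelike"
    return "lightlike"

# e.g. causal_relation(x, y, n=1) with the operators from the previous sketch
```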
Let π_x denote the orthogonal projection on the subspace x(H) ⊂ H. Then the sign of a functional constructed from x, y and these projections
distinguishes the future from the past. In contrast to the structure of a partially ordered set, the relation "lies in the future of" is in general not transitive. But it is transitive on the macroscopic scale in typical examples.
Spinors and wave functions
For every x ∈ M the spin space is defined by S_x := x(H); it is a subspace of H of dimension at most 2n. The spin scalar product ⟨·|·⟩_x defined by
⟨u|v⟩_x := −⟨u | x v⟩_H (for u, v ∈ S_x)
is an indefinite inner product on S_x of signature (p, q) with p, q ≤ n.
A wave function ψ is a mapping
ψ : M → H with ψ(x) ∈ S_x for all x ∈ M.
On wave functions for which the norm |||·||| defined by
|||ψ|||² := ∫_M ⟨ψ(x) | |x| ψ(x)⟩_H dρ(x)
is finite (where |x| is the absolute value of the symmetric operator x), one can define the inner product
⟨ψ|φ⟩ := ∫_M ⟨ψ(x)|φ(x)⟩_x dρ(x).
Together with the topology induced by the norm |||·|||, one obtains a Krein space.
To any vector u ∈ H we can associate the wave function
ψ^u(x) := π_x u
(where π_x : H → S_x is again the orthogonal projection to the spin space).
This gives rise to a distinguished family of wave functions, referred to as the
wave functions of the occupied states.
The fermionic projector
The kernel of the fermionic projector P(x,y) is defined by
P(x,y) := π_x y|_{S_y} : S_y → S_x
(where π_x is again the orthogonal projection on the spin space S_x,
and |_{S_y} denotes the restriction to S_y). The fermionic projector P is the operator
(P ψ)(x) := ∫_M P(x,y) ψ(y) dρ(y),
which has the dense domain of definition given by all vectors ψ ∈ K for which P ψ is a wave function of finite norm.
As a consequence of the causal action principle, the kernel of the fermionic projector has additional normalization properties which justify the name projector.
Connection and curvature
Being an operator from one spin space to another, the kernel of the fermionic projector gives relations between different spacetime points. This fact can be used to introduce a spin connection
D_{x,y} : S_y → S_x.
The basic idea is to take a polar decomposition of P(x,y). The construction becomes more involved by the fact that the spin connection should induce a corresponding metric connection
∇_{x,y} : T_y → T_x, where the tangent space T_x is a specific subspace of the linear operators on S_x endowed with a Lorentzian metric.
The spin curvature is defined as the holonomy of the spin connection,
R(x,y,z) := D_{x,y} D_{y,z} D_{z,x} : S_x → S_x.
Similarly, the metric connection gives rise to metric curvature. These geometric structures give rise to a proposal for a quantum geometry.
The Euler–Lagrange equations and the linearized field equations
A minimizer of the causal action satisfies corresponding Euler–Lagrange equations. They state that a function ℓ, formed from the integral of the Lagrangian with two Lagrange parameters κ and s, vanishes and is minimal on the support of ρ.
For the analysis, it is convenient to introduce jets 𝔲 = (a, u) consisting of a real-valued function a on M and a vector field u on F along M, and to denote the combination of multiplication and directional derivative by ∇_𝔲. Then the Euler–Lagrange equations imply that the weak Euler–Lagrange equations
∇_𝔲 ℓ|_M = 0
hold for any test jet 𝔲.
Families of solutions of the Euler–Lagrange equations are generated infinitesimally by a jet 𝔳 which satisfies the linearized field equations
⟨𝔲, Δ𝔳⟩|_M = 0,
to be satisfied for all test jets 𝔲, where the Laplacian Δ is defined via the second variation of the causal action.
The Euler–Lagrange equations describe the dynamics of the causal fermion system, whereas small perturbations of the system are described by the linearized field equations.
Conserved surface layer integrals
In the setting of causal fermion systems, spatial integrals are expressed by so-called surface layer integrals. In general terms, a surface layer integral is a double integral of the form
∫_Ω dρ(x) ∫_{M∖Ω} dρ(y) (⋯),
where one variable is integrated over a subset Ω ⊂ M, and the other variable is integrated over the complement of Ω. It is possible to express the usual conservation laws for charge, energy, etc. in terms of surface layer integrals. The corresponding conservation laws are a consequence of the Euler–Lagrange equations of the causal action principle and the linearized field equations. For the applications, the most important surface layer integrals are the current integral, the symplectic form, the surface layer inner product and the nonlinear surface layer integral.
Bosonic Fock space dynamics
Based on the conservation laws for the above surface layer integrals, the dynamics of a causal fermion system as described by the Euler–Lagrange equations corresponding to the causal action principle can be rewritten as a linear, norm-preserving dynamics on the bosonic Fock space built up of solutions of the linearized field equations. In the so-called holomorphic approximation, the time evolution respects the complex structure, giving rise to a unitary time evolution on the bosonic Fock space.
A fermionic Fock state
If H has finite dimension f, choosing an orthonormal basis u_1, …, u_f of H and taking the wedge product of the corresponding wave functions,
Ψ := ψ^{u_1} ∧ ⋯ ∧ ψ^{u_f},
gives a state of an f-particle fermionic Fock space. Due to the total anti-symmetrization, this state depends on the choice of the basis of H only by a phase factor. This correspondence explains why the vectors in the Hilbert space are to be interpreted as fermions. It also motivates the name causal fermion system.
Underlying physical principles
Causal fermion systems incorporate several physical principles in a specific way:
A local gauge principle: In order to represent the wave functions in components, one chooses bases of the spin spaces. Denoting the signature of the spin scalar product at x by (p(x), q(x)), a pseudo-orthonormal basis (e_i(x))_{i=1,…,p+q} of S_x is one in which the spin scalar product is diagonal with entries +1 and −1.
Then a wave function ψ can be represented with component functions,
ψ(x) = Σ_i ψ^i(x) e_i(x).
The freedom of choosing the bases independently at every spacetime point corresponds to local unitary transformations of the wave functions,
ψ^i(x) → Σ_j U^i_j(x) ψ^j(x).
These transformations have the interpretation as local gauge transformations. The gauge group is determined to be the isometry group U(p, q) of the spin scalar product. The causal action is gauge invariant in the sense that it does not depend on the choice of spinor bases.
The equivalence principle: For an explicit description of spacetime one must work with local coordinates. The freedom in choosing such coordinates generalizes the freedom in choosing general reference frames in a spacetime manifold. Therefore, the equivalence principle of general relativity is respected. The causal action is generally covariant in the sense that it does not depend on the choice of coordinates.
The Pauli exclusion principle: The fermionic Fock state associated to the causal fermion system makes it possible to describe the many-particle state by a totally antisymmetric wave function. This gives agreement with the Pauli exclusion principle.
The principle of causality is incorporated by the form of the causal action in the sense that spacetime points with spacelike separation do not interact.
Limiting cases
Causal fermion systems have mathematically sound limiting cases that give a connection to conventional physical structures.
Lorentzian spin geometry of globally hyperbolic spacetimes
Starting on any globally hyperbolic Lorentzian spin manifold (M, g) with spinor bundle SM, one gets into the framework of causal fermion systems by choosing (H, ⟨·|·⟩_H) as a subspace of the solution space of the Dirac equation. Defining the so-called local correlation operator F(x) for x ∈ M by
⟨u | F(x) v⟩_H = −≺u(x) | v(x)≻_x
(where ≺·|·≻_x is the inner product on the fibre S_xM) and introducing the universal measure as the push-forward of the volume measure on M,
ρ := F_* μ,
one obtains a causal fermion system. For the local correlation operators to be well-defined, H must consist of continuous sections, typically making it necessary to introduce a regularization on the microscopic scale ε. In the limit ε → 0, all the intrinsic structures on the causal fermion system (like the causal structure, connection and curvature) go over to the corresponding structures on the Lorentzian spin manifold. Thus the geometry of spacetime is encoded completely in the corresponding causal fermion systems.
Quantum mechanics and classical field equations
The Euler–Lagrange equations corresponding to the causal action principle have a well-defined limit if the spacetimes of the causal fermion systems go over to Minkowski space. More specifically, one considers a sequence of causal fermion systems (for example with H finite-dimensional in order to ensure the existence of the fermionic Fock state as well as of minimizers of the causal action), such that the corresponding wave functions go over to a configuration of interacting Dirac seas involving additional particle states or "holes" in the seas. This procedure, referred to as the continuum limit, gives effective equations having the structure of the Dirac equation coupled to classical field equations. For example, for a simplified model involving three elementary fermionic particles
in spin dimension two, one obtains an interaction via a classical axial gauge field described by the coupled Dirac and Yang–Mills equations.
Taking the non-relativistic limit of the Dirac equation, one obtains the Pauli equation or the Schrödinger equation, giving the correspondence to quantum mechanics. Here the regularization-dependent parameters determine the coupling constant as well as the rest mass.
Likewise, for a system involving neutrinos in spin dimension 4, one gets effectively a massive gauge field coupled to the left-handed component of the Dirac spinors. The fermion configuration of the standard model can be described in spin dimension 16.
The Einstein field equations
For the just-mentioned system involving neutrinos, the continuum limit also yields the Einstein field equations coupled to the Dirac spinors,
up to corrections of higher order in the curvature tensor. Here the cosmological constant is undetermined, and the source term is the energy-momentum tensor of the spinors and the gauge field. The gravitational constant depends on the regularization length.
Quantum field theory in Minkowski space
Starting from the coupled system of equations obtained in the continuum limit and expanding in powers of the coupling constant, one obtains integrals which correspond to Feynman diagrams on the tree level. Fermionic loop diagrams arise due to the interaction with the sea states, whereas bosonic loop diagrams appear when taking averages over the microscopic (in generally non-smooth) spacetime structure of a causal fermion system (so-called microscopic mixing). The detailed analysis and comparison with standard quantum field theory is work in progress.
References
Further reading
Web platform on causal fermion systems
Quantum gravity
Mathematical physics
Quantum field theory | Causal fermion systems | [
"Physics",
"Mathematics"
] | 3,486 | [
"Quantum field theory",
"Applied mathematics",
"Theoretical physics",
"Unsolved problems in physics",
"Quantum mechanics",
"Quantum gravity",
"Mathematical physics",
"Physics beyond the Standard Model"
] |
47,481,239 | https://en.wikipedia.org/wiki/Herta%20Regina%20Leng | Herta Regina Leng (24 February 1903 – 17 July 1997) was an Austrian-American physicist and educator.
Leng was born on 24 February 1903 in Vienna, Austria. She was the daughter of Arthur Leng and Paula Leng, and sister of Leopold Ignaz Leng. Leng fled Austria in 1939 and eventually emigrated to the United States in 1940. She died on 17 July 1997 in Troy, New York.
Purdue and RPI
Dr. Karl Lark-Horovitz, professor of physics at Purdue, had a keen interest in the development of the cyclotron and the application of physical techniques to solve biological problems, and sought to develop methods that utilized radioactive tracers produced from the cyclotron. With the assistance of Leng and Donald Tendam, an intense program of research employing radioactive tracers was pursued to develop these methods. Key studies concerned sodium and potassium in the human body and their uptake, distribution and excretion; sodium and potassium distribution in human blood cells; and the analysis of enteric coatings for medications. Leng was awarded an American Association of University Women fellowship for work at Purdue. The fellowship permitted her the freedom to pursue pioneering research on radioactive tracer materials.
In 1943, Leng moved to New York to accept a faculty appointment in physics at Rensselaer Polytechnic Institute (RPI), and in 1966 she was promoted to become RPI's first female full professor.
Professional associations
Sigma Xi, Rensselaer Polytechnic Institute Chapter
Awards and honors
Herta Leng Memorial Lecture Series, Rensselaer Polytechnic Institute
Every year, RPI honors Leng with the Herta Leng Memorial Lecture Series.
Select publications
Adsorptionsversuche an Gläsern und Filtersubstanzen nach der Methode der radioaktiven Indikatoren. (Adsorption experiments on glasses and filter substances according to the method of radioactive indicators.)
Radioactive indicators, enteric coatings and intestinal absorption.
A new method of testing enteric coatings.
On the Existence of Single Magnetic Poles.
Pioneer woman in nuclear science.
References
1903 births
1997 deaths
20th-century American physicists
20th-century Austrian physicists
20th-century American women scientists
American women physicists
20th-century Austrian women scientists
Scientists from Vienna
Austrian emigrants to the United States
Purdue University faculty
Rensselaer Polytechnic Institute faculty
Fellows of the American Association of University Women
Radioactivity
Particle accelerators | Herta Regina Leng | [
"Physics",
"Chemistry"
] | 490 | [
"Radioactivity",
"Nuclear physics"
] |
47,482,405 | https://en.wikipedia.org/wiki/Stephen%20L.%20Buchwald | Stephen L. Buchwald (born 1955) is an American chemist and the Camille Dreyfus Professor of Chemistry at MIT. He is known for his involvement in the development of the Buchwald-Hartwig amination and the discovery of the dialkylbiaryl phosphine ligand family for promoting this reaction and related transformations. He was elected as a fellow of the American Academy of Arts and Sciences and as a member of the National Academy of Sciences in 2000 and 2008, respectively.
Early life and education
Stephen Buchwald was born in Bloomington, Indiana. He credits his "young and dynamic" high school chemistry teacher, William Lumbley, with infecting him with enthusiasm for the subject.
In 1977 he received his Sc.B. from Brown University, where he worked with Kathlyn A. Parker and David E. Cane, as well as with Gilbert Stork from Columbia University. In 1982 he received his Ph.D. from Harvard University, working under Jeremy R. Knowles.
Career
Buchwald was a postdoctoral fellow at Caltech with Robert H. Grubbs. In 1984, he joined the MIT faculty as an assistant professor of chemistry. He was promoted to associate professor in 1989 and to full professor in 1993. He was named the Camille Dreyfus Professor in 1997. He has coauthored over 435 accepted academic publications and 47 accepted patents.
He is known for his involvement in the development of the Buchwald-Hartwig amination and the discovery of the dialkylbiaryl phosphine ligand family for promoting this reaction and related transformations. He was elected as a fellow of the American Academy of Arts and Sciences and as a member of the National Academy of Sciences in 2000 and 2008, respectively.
He has also served as an associate editor of the academic journal Advanced Synthesis & Catalysis.
Notable awards
Awards received by Buchwald include:
2005 - CAS Science Spotlight Award
2005 - Bristol-Myers Squibb Distinguished Achievement Award
2006 – American Chemical Society Award for Creative Work in Synthetic Organic Chemistry
2006 – Siegfried Medal Award in Chemical Methods which Impact Process Chemistry
2010 – Gustavus J. Esselen Award for Chemistry in the Public Interest
2013 – Arthur C. Cope Award
2014 – Ulysses Medal, University College Dublin
2014 – Linus Pauling Award
2014 – BBVA Foundation Frontiers of Knowledge Award in Basic Sciences
2015 – Honorary Doctorate, University of South Florida
2016 - William H. Nichols Medal
2019 – Wolf Prize in Chemistry
2019 – Roger Adams Award, American Chemical Society
2020 – Clarivate Citation Laureate
References
External links
21st-century American chemists
Massachusetts Institute of Technology School of Science faculty
Living people
Harvard University alumni
Brown University alumni
1955 births
American organic chemists
California Institute of Technology fellows
Fellows of the American Academy of Arts and Sciences
Members of the United States National Academy of Sciences | Stephen L. Buchwald | [
"Chemistry"
] | 559 | [
"Organic chemists",
"American organic chemists"
] |
47,482,842 | https://en.wikipedia.org/wiki/Surface%20plasmon%20resonance%20microscopy | Surface plasmon resonance microscopy (SPRM), also called surface plasmon resonance imaging (SPRI), is a label free analytical tool that combines the surface plasmon resonance of metallic surfaces with imaging of the metallic surface.
The heterogeneity of the refractive index of the metallic surface imparts high-contrast images, caused by the shift in the resonance angle. SPRM can achieve sub-nanometer thickness sensitivity, and its lateral resolution reaches values on the micrometer scale. SPRM is used to characterize surfaces such as self-assembled monolayers, multilayer films, metal nanoparticles, oligonucleotide arrays, and binding and reduction reactions. Surface plasmon polaritons are surface electromagnetic waves coupled to oscillating free electrons of a metallic surface that propagate along a metal/dielectric interface. Since polaritons are highly sensitive to small changes in the refractive index of the metallic material, they can be used as a biosensing tool that does not require labeling. SPRM measurements can be made in real time, such as measuring binding kinetics of membrane proteins in single cells, or DNA hybridization.
History
The concept of classical SPR has existed since 1968, but the SPR imaging technique was introduced in 1988 by Rothenhäusler and Knoll. Capturing a high-resolution image of low-contrast samples was a near-impossible task for optical measuring techniques until the introduction of the SPRM technique in 1988. In the SPRM technique, plasmon surface polariton (PSP) waves are used for illumination. In simple terms, SPRI technology is an advanced version of classical SPR analysis in which the sample is monitored without labels through the use of a CCD camera. SPRI technology, with the aid of the CCD camera, gives the advantage of recording sensorgrams and SPR images, and simultaneously analyzes hundreds of interactions.
Principles
Surface plasmons or surface plasmon polaritons are generated by the coupling of an electric field with free electrons in a metal. SPR waves propagate along the interface between a dielectric and a conducting layer rich in free electrons.
As shown in Figure 2, when light passes from a medium of high refractive index to a second medium with a lower refractive index, the light is totally reflected under certain conditions.
In order to get total internal reflection (TIR), the angles θ1 and θ2 must lie within a certain range that can be explained through Snell's law. When light passes from a high refractive index medium into a lower refractive index medium, it is refracted at an angle θ2, which is defined by Snell's law in Equation 1:
η1 sin θ1 = η2 sin θ2.
In the TIR process, a small portion of the electric field intensity leaks into medium 2 (η1 > η2). The light leaked into medium 2 penetrates as an evanescent wave. The intensity and penetration depth of the evanescent wave can be calculated according to Equations 2 and 3, respectively.
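As a numerical illustration of TIR and the evanescent wave, the sketch below computes the critical angle and the 1/e field penetration depth for an assumed glass/water interface at 633 nm. Note that some texts quote the intensity penetration depth, which is half the field value used here.

```python
import math

def critical_angle(n1, n2):
    """TIR critical angle in degrees (requires n1 > n2)."""
    return math.degrees(math.asin(n2 / n1))

def penetration_depth(lam, n1, n2, theta_deg):
    """1/e decay length of the evanescent field in medium 2 (theta above critical)."""
    s = (n1 * math.sin(math.radians(theta_deg))) ** 2 - n2 ** 2
    return lam / (2.0 * math.pi * math.sqrt(s))

n1, n2, lam = 1.515, 1.33, 633e-9     # assumed glass/water values at 633 nm
print(f"critical angle: {critical_angle(n1, n2):.1f} deg")
print(f"penetration depth at 75 deg: {penetration_depth(lam, n1, n2, 75) * 1e9:.0f} nm")
```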
Figure 3 shows a schematic representation of surface plasmons coupled to electron density oscillations. The light wave is trapped on the surface of the metal layer by collective coupling to the electrons of the metal surface. When the oscillation frequencies of the electron plasma and the electric field of the light wave match, they enter into resonance.
Recently, the light leaking from the metal surface has been imaged.
Radiation of different wavelengths (green, red and blue) was converted into surface plasmon polaritons through the interaction of the photons at the metal/dielectric interface. Two different metal surfaces were used: gold and silver. The propagation lengths of the SPPs along the x-y plane (the metal plane) were compared for each metal and photon wavelength. The propagation length is defined as the distance traveled by the SPP along the metal before its intensity decreases by a factor of 1/e, as defined in Equation 4.
Figure 4 shows the leakage light of the green, red and blue photons in gold (a) and silver (b) films, captured by a color CCD camera. Part (c) of Figure 4 shows the decay of the surface plasmon polariton intensity with distance. It was determined that the leakage light intensity is proportional to the intensity in the waveguide.
In Equation 4, δSPP is the propagation length; ε'm and ε''m are the real and imaginary parts of the relative permittivity of the metal, and λ0 is the free-space wavelength.
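The 1/e-intensity definition of the propagation length can be evaluated directly from the complex SPP wave vector, δ_SPP = 1/(2 Im k_SPP) with k_SPP = k0 √(ε_m ε_d/(ε_m + ε_d)). The gold permittivity below is an assumed literature-style value near 633 nm.

```python
import numpy as np

def spp_propagation_length(lam0, eps_m, eps_d=1.0):
    """1/e-intensity propagation length delta = 1/(2*Im(k_SPP))."""
    k0 = 2 * np.pi / lam0
    k_spp = k0 * np.sqrt(eps_m * eps_d / (eps_m + eps_d + 0j))
    return 1.0 / (2.0 * k_spp.imag)

eps_gold = -11.6 + 1.2j   # assumed permittivity of Au near 633 nm
print(f"delta_SPP ~ {spp_propagation_length(633e-9, eps_gold) * 1e6:.0f} um")
```

With these assumed values the propagation length comes out on the order of 10 μm, consistent with the micrometer-scale lateral resolution quoted above.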
The metallic film is capable of absorbing light due to the coherent oscillation of the conduction band electrons induced by the interaction with an electromagnetic field.
Electrons in the conduction band become polarized through interaction with the electric field of the radiation. A net charge difference is created at the surface of the metal film, creating a collective dipolar oscillation of electrons with the same phase.
When the electron motion matches the frequency of the electromagnetic field, the absorption of incident radiation occurs. The oscillation frequency of gold surface plasmons is found in the visible region of the electromagnetic spectrum, giving a red color, while silver gives a yellow color.
Nanorods exhibit two absorption peaks in the UV-vis region due to longitudinal and transverse oscillations: for gold nanorods the transverse oscillation generates a peak at 520 nm, while the longitudinal oscillation generates absorption at longer wavelengths, within a range of 600 to 800 nm. Silver nanoparticles shift their light absorption wavelengths to higher energy levels, with the blue shift going from 408 nm to 380 nm and 372 nm as they change from sphere to rod and wire, respectively.
The absorption intensity and wavelength of gold and silver depend on the size and shape of the particles.
In Figure 5, the size and shape of silver nanoparticles influence the intensity of the scattered light and the wavelength of maximum scattering. The triangular particles appear red, with maximum scattered light at 670–680 nm; the pentagonal particles appear green (620–630 nm); and the spherical particles, which have higher absorption energies (440–450 nm), appear blue.
Plasmon excitation methods
Surface plasmon polaritons are quasiparticles composed of electromagnetic waves coupled to free electrons of the conduction band of metals.
One of the most widely used methods to couple p-polarized light to the metal-dielectric interface is prism-based coupling.
Prism couplers are the most widely used means to excite surface plasmon polaritons. This method is also called the Kretschmann–Raether configuration, where TIR creates an evanescent wave that couples to the free electrons of the metal surface.
High numerical aperture objective lenses have been explored as a variant of prism-coupling to excite surface plasmon polaritons. Waveguide coupling is also used to create surface plasmons.
Prism coupling
The Kretschmann–Raether configuration is used to achieve resonance between light and the free electrons of the metal surface. In this configuration a prism with a high refractive index is interfaced with a metal film. Light from a source propagates through the prism and is made incident on the metal film. As a consequence of TIR, some light leaks through the metal film, forming an evanescent wave in the dielectric medium, as in Figure 6.
The evanescent wave penetrates a characteristic distance into the less optically dense medium where it is attenuated.
Figure 6 shows the Kretschmann–Raether configuration, in which a prism with refractive index η1 is coupled to a dielectric surface with refractive index η2; the incidence angle of the light is θ.
The interaction between the light and the surface polaritons under TIR can be explained using Fresnel multilayer reflection; the amplitude reflection coefficient (rpmd) is expressed as follows in Equation 5.
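A standard three-layer (prism/metal/dielectric) Fresnel form of Equation 5, reconstructed here assuming $r_{pm}$ and $r_{md}$ denote the two-interface Fresnel coefficients, $d$ the metal thickness, and $k_{zm}$ the normal component of the wave vector in the metal, is:

$$r_{pmd} = \frac{r_{pm} + r_{md}\, e^{2 i k_{zm} d}}{1 + r_{pm}\, r_{md}\, e^{2 i k_{zm} d}}$$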
The power reflection coefficient R is defined as follows:
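In terms of the amplitude coefficient above, the standard definition is the squared modulus:

$$R = \left| r_{pmd} \right|^2$$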
Figure 7 shows a schematic representation of the Otto prism-coupling configuration. In the figure, the air gap between the prism and the metal layer is drawn exaggeratedly thick for clarity; in reality it is very thin.
Waveguide coupling
The electromagnetic waves are conducted through an optical waveguide. When light enters the region with a thin metal layer, it evanescently penetrates through the metal layer, exciting a surface plasmon wave (SPW). In the waveguide-coupling configuration, the waveguide is formed when the refractive index of the grating is greater than that of the substrate. The incident radiation propagates along the high-refractive-index waveguide layer.
In Figure 8, electromagnetic waves are guided through a wave-guiding layer; once the optical waves reach the interface between the wave-guiding layer and the metal, an evanescent wave is created. The evanescent wave excites the surface plasmon at the metal–dielectric interface.
Grating coupling
Due to the periodic grating, phase matching between the incident light and the guided mode is easy to obtain; the coupling condition is given by Equation 7.
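A standard grating-coupling (momentum-matching) form of Equation 7, reconstructed here with the incidence angle $\theta$ and the refractive index $n_d$ of the dielectric as assumed notation, is:

$$k_z = \frac{2\pi}{\lambda_0}\, n_d \sin\theta + q\, \frac{2\pi}{\Lambda}$$

where Λ is the grating period and q the diffraction order.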
According to Equation 7, the propagation vector (Kz) in the z direction can be tuned by changing the periodicity Λ. By modifying the grating vector, the angle of resonant excitation can be controlled.
In Figure 9, q is the diffraction order; it can take any integer value (positive, negative or zero).
Resonance measurement methods
The propagation constant of a monochromatic beam of light parallel to the surface is defined by Equation 8.
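A standard form of Equation 8, consistent with the definitions below, is:

$$k_x = \frac{2\pi}{\lambda}\, n_p \sin\theta$$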
where θ is the angle of incidence, ksp is the propagation constant of the surface plasmon, and np is the refractive index of the prism. When the wave vector of the SPW, ksp, matches the wave vector of the incident light, the SPW is expressed as:
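The standard dispersion relation of a surface plasmon at a metal–dielectric interface, consistent with the definitions that follow, is:

$$k_{sp} = \frac{2\pi}{\lambda} \sqrt{\frac{\varepsilon_m\, \varepsilon_d}{\varepsilon_m + \varepsilon_d}}$$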
Here εd and εm represent the dielectric constants of the dielectric and the metal, while λ is the wavelength of the incident light. kx and ksp can be represented as:
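Restating the two relations side by side (a reconstruction consistent with the surrounding text):

$$k_x = \frac{2\pi}{\lambda}\, n_p \sin\theta, \qquad k_{sp} = \frac{2\pi}{\lambda} \sqrt{\frac{\varepsilon_m\, \varepsilon_d}{\varepsilon_m + \varepsilon_d}}$$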
The surface plasmons are evanescent waves that have their maximum intensity at the interface and decay exponentially away from the phase boundary to a penetration depth.
The propagation of the surface plasmons is strongly affected by a thin film coating on the conducting layer. The resonance angle θ shifts when the metal surface is coated with a dielectric material, due to the change of the propagation vector k of the surface plasmon.
This sensitivity is due to the shallow penetration depth of the evanescent wave. Materials with a high density of free electrons are used: metal films roughly 50 nm thick made of copper, titanium, chromium or gold. However, Au is the most common metal used in SPR as well as in SPRM.
Scanning angle SPR is the most widely used method for detecting biomolecular interactions.
It measures the reflectance percentage (%R) from a prism/metal film assembly as a function of the incident angle at a fixed excitation wavelength. When the in-plane wave vector of the incident light matches the propagation constant of the interface mode, this mode is excited at the expense of the reflected light. As a consequence, the reflectivity shows a sharp dip at the resonance angle.
The propagation constant of the polaritons can be modified by varying the dielectric material. This modification shifts the resonance angle, as in the example shown in Figure 10, from θ1 to θ2, due to the change in the surface plasmon propagation constant.
The resonance angle can be found by using Equation 11.
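A reconstruction of Equation 11 consistent with the variable list below (approximating the metal permittivity by ng²; not necessarily the exact published form) is:

$$\theta_{res} = \arcsin\!\left( \frac{1}{n_1} \sqrt{\frac{n_g^2\, n_2^2}{n_g^2 + n_2^2}} \right)$$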
where n1, n2 and ng are the refractive indices of medium 1, medium 2 and the metal layer, respectively.
Using TIR, two-dimensional imaging of spatial differences in %R at a fixed angle θ is possible. A beam of monochromatic light irradiates the sample at a fixed incident angle, and the SPR image is created from the reflected light detected by a CCD camera.
Operating at the minimum of %R near the resonance angle provides the contrast mechanism of SPRM.
Huang and collaborators developed a microscope with a high-numerical-aperture (NA) objective, which improves the lateral resolution at the expense of the longitudinal resolution.
Lateral resolution
The resolution of conventional light microscopy is limited by the diffraction limit of light. In SPRM, the excited surface plasmons propagate in the plane of the incident beam. The polaritons travel along the metal–dielectric interface, for a finite distance, until they decay back into photons. Therefore, the resolution achieved by SPRM is determined by the propagation length of the surface plasmons parallel to the incident plane.
The separation between two areas must be at least of the order of the propagation length in order for them to be resolved. Berger, Kooyman and Greve showed that the lateral resolution can be tuned by changing the excitation wavelength, with better resolution achieved as the excitation energy increases. Equations 4 and 12 define the magnitude of the wave vector of the surface plasmons.
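A reconstruction of Equation 12 consistent with the variable list below (again approximating the metal permittivity by ng²) is:

$$\left| k_{sp} \right| = \frac{2\pi}{\lambda} \sqrt{\frac{n_g^2\, n_2^2}{n_g^2 + n_2^2}}$$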
where n2 is the refractive index of medium 2, ng is the refractive index of the metal film, and λ is the excitation wavelength.
Instrumentation
Surface plasmon resonance microscopy is based on surface plasmon resonance, recording images of the structures present on a substrate using an instrument equipped with a CCD camera. In the past decade, SPR sensing has proven to be an exceedingly powerful technique and has been used extensively in the research and development of materials, biochemistry and the pharmaceutical sciences.
The SPRM instrument combines the following main components: a light source (typically a He-Ne laser) whose beam travels through a prism attached to a glass slide coated with a thin metal film (typically gold or silver), where the light beam reflects at the gold/solution interface at an angle greater than the critical angle. The reflected light from the interface surface area is recorded by a CCD detector, and an image is formed. Although these components are the essential ones for SPRM, additional accessories such as polarizers, filters, beam expanders, focusing lenses and a rotating stage, similar to several other imaging methods, are installed and used for effective microscopy as demanded by the application. Figure 12 shows a typical SPRM. Depending on the application, and to optimize the imaging technique, researchers modify this basic instrumentation with design changes that can even include altering the source beam. One such design change resulted in a different, objective-type SPRM, shown in Figure 11 with some modification of the optical configuration.
SPRi systems are currently manufactured by well-known biomedical instrumentation manufacturers such as GE Life Sciences, HORIBA and Biosensing USA. Commercial SPRi instruments range from about USD 100,000 to 250,000, although simple demonstration prototypes can be built for about USD 2,000.
Sample preparation
To perform measurements with SPRM, sample preparation is a critical step. The immobilization step affects both the reliability and reproducibility of the acquired data and the stability of the recognition element, such as antibodies, proteins or enzymes, under the experimental conditions. Moreover, the stability of the immobilized specimens affects the sensitivity and/or the limit of detection (LOD).
One of the most popular immobilization methods is the self-assembled monolayer (SAM) on a gold surface. Jenkins and collaborators (2001) used mercaptoethanol patches surrounded by a SAM composed of octadecanethiol (ODT) to study the adsorption of egg phosphatidylcholine on the ODT SAM.
A pattern of ODT-mercaptoethanol was made on a 50 nm gold film. The gold film was obtained through thermal evaporation onto LaSFN9 glass. The lipid vesicles were deposited on the ODT SAM through adsorption, giving a final multilayer thickness greater than 80 Å.
11-Mercaptoundecanoic acid self-assembled monolayers (MUA-SAMs) were formed on gold-coated BK7 slides. A PDMS plate was used to mask the MUA-SAM chip. Clenbuterol (CLEN) was attached to BSA molecules through an amide bond between the carboxylic group of BSA and the amine group of the CLEN molecules. In order to immobilize BSA on the gold surface, the spots created through PDMS masking were functionalized with sulfo-NHS and EDC; subsequently, a 1% BSA solution was poured onto the spots and incubated for 1 hour. Non-immobilized BSA was rinsed out with PBS, CLEN solution was poured onto the spots, and unbound CLEN was removed through a PBS rinse.
An alkanethiol SAM was prepared in order to simultaneously measure the concentrations of horseradish peroxidase (Px), human immunoglobulin E (IgE), human choriogonadotropin (hCG) and human immunoglobulin G (IgG) through SPR. Alkanethiols with carbon chains of 11 and 16 carbons were self-assembled on the sensor chip. The antibodies were attached to the C16 alkanethiol, which had a terminal carboxylic group.
A micropatterned electrode was fabricated by gold deposition on microscope slides. PDMS stamping was used to produce an array of hydrophilic/hydrophobic surfaces; ODT treatment followed by immersion in 2-mercaptoethanol solution rendered a functionalized surface for lipid membrane deposition. The patterned electrode was characterized through SPRM. In Figure 14B, the SPRM image reveals the size of the pockets, which were 100 μm × 100 μm and 200 μm apart. As seen in the image, the remarkable contrast is due to the high sensitivity of the technique.
Applications
SPRM is a useful technique for measuring the concentration of biomolecules in solution, detecting binding molecules and monitoring molecular interactions in real time. It can be used as a biosensor for surface interactions of biological molecules: antigen-antibody binding, mapping and sorption kinetics. For example, one possible factor in Type 1 diabetes in children is the presence of high levels of cow's milk antibodies IgG, IgA and IgM (mainly IgA) in their serum.
Cow's milk antibodies can be detected in milk and serum samples using SPRM.
SPRM is also advantageous for detecting the site-specific attachment of B or T lymphocytes on an antibody array. The technique is convenient for studying label-free, real-time interactions of cells on a surface, so SPRM can serve as a diagnostic tool for cell-surface adhesion kinetics.
Despite its merits, SPRM has limitations. It is not suitable for detecting low-molecular-weight molecules, and although it is label-free, it requires very clean experimental conditions. The sensitivity of SPRM can be improved by coupling it with MALDI-MS.
A number of applications of SPRM are described here.
Membrane proteins
Membrane proteins are responsible for regulating cellular responses to extracellular signals. It has been challenging to investigate the involvement of membrane proteins as disease biomarkers and therapeutic targets, and to measure their binding kinetics with their ligands. Traditional approaches cannot resolve clear structures and functions of membrane proteins.
In order to understand the structural details of membrane proteins, an alternative analytical tool is needed, one that provides three-dimensional and temporal resolution for monitoring membrane proteins. Atomic force microscopy (AFM) is an excellent method for obtaining high-spatial-resolution images of membrane proteins, but it is of little help in investigating their binding kinetics. Fluorescence-based microscopy (FLM) can be used to study the interactions of membrane proteins in individual cells, but it requires the development of suitable labels and different strategies for different target proteins.
Furthermore, the host protein may be affected by the labeling.
The binding kinetics of membrane proteins in single living cells can be studied via a label-free imaging method based on SPR microscopy, without extracting the proteins from the cell membranes; this lets scientists work with the native conformations of the membrane proteins. Furthermore, the distribution and local binding activities of membrane proteins in each cell can be mapped and quantified. SPR microscopy (SPRM) makes simultaneous optical and fluorescence imaging of the same sample possible, combining the advantages of label-based and label-free detection methods in a single setup.
Detection of DNA hybridization
SPR imaging is used to study multiple adsorption interactions in an array format under the same experimental conditions. Nelson and his coworkers introduced a multistep procedure to create DNA arrays on gold surfaces for use with SPR imaging.
Affinity interactions can be studied for a variety of target molecules, e.g. proteins and nucleic acids. Base mismatches in a DNA sequence lead to a number of serious diseases, such as Lynch syndrome, which carries a high risk of colon cancer.
SPR imaging is useful for monitoring the adsorption of molecules on the gold surface, which is possible because of the change in reflectivity from the surface. First, the G–G mismatch pair is stabilized by attaching it to a ligand, a naphthyridine dimer, through hydrogen bonding, which forms hairpin structures in double-stranded DNA on the gold surface. Binding of the dimer to DNA enhances the free energy of hybridization, which causes a change in the refractive index.
A DNA array was fabricated to test the G–G mismatch-stabilizing properties of the naphthyridine dimer. Each of the four immobilized sequences in the array differed by one base; the position of this base is indicated by an X in sequence 1, as shown in Figure 16. The SPR difference image is only detected for the sequence having a cytosine (C) base at the X position in sequence 1, the complement of sequence 2. However, the SPR difference image corresponding to the addition of sequence 2 in the presence of the naphthyridine dimer shows that, in addition to its complement, sequence 2 also hybridizes to the sequence that forms a G–G mismatch. These results demonstrate that SPR imaging is a promising tool for monitoring single-base mismatches and for screening hybridized molecules.
Antibody binding to protein arrays
SPR imaging can be used to study the binding of antibodies to a protein array. A protein array immobilized via amine functionalities on the gold surface is used to study antibody binding. Immobilization of the proteins was done by flowing protein solutions through PDMS microchannels; the PDMS was then removed from the surface and antibody solutions were flowed over the array. A three-component protein array containing the proteins human fibrinogen, ovalbumin and bovine IgG is shown in Figure 17, in SPR images obtained by Kariuki and co-workers. The contrast in the array is due to the difference in refractive index that results from local binding of antibodies. These images show a high degree of antibody binding specificity and a small degree of non-specific adsorption of the antibody to the array background, which can be reduced by modifying the array background. Based on these results, the SPR imaging technique is a candidate diagnostic tool for studying antibody interactions with protein arrays.
Coupled with mass spectrometry
The discovery and validation of protein biomarkers are crucial for disease diagnosis. Coupling SPRM with a MALDI mass spectrometer (SUPRA-MS) enables multiplexed quantification of binding and molecular characterization on the basis of different masses. SUPRA-MS has been used to detect, identify and characterize a potential breast cancer biomarker, the LAG3 protein, introduced into human plasma. Gold chips were prepared from glass slides coated with thin layers of chromium and gold by sputtering. The gold surface was functionalized using a solution of 11-mercapto-1-undecanol (11-MUOH) and 16-mercapto-1-hexadecanoic acid (16-MHA). This self-assembled monolayer was activated with sulfo-NHS and EDC. A pattern of sixteen droplets was deposited on the macroarray. Immunoglobulin G antibodies against lymphocyte activation gene 3 (α-LAG3) and rat serum albumin (α-RSA) were spotted. After the biochip was placed in the SPRi and buffer solution was run through the flow cell, α-LAG3 was injected. A dedicated imaging station was used on the attached proteins; this station can also be mounted on the MALDI instrument. Before MALDI analysis, the captured proteins were reduced, digested and loaded with matrix in order to avoid contamination.
The antigen density is directly proportional to the change in reflectivity ΔR because the evanescent-wave penetration depth Lzc is larger than the thickness of the immobilized antigen layer.
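A commonly used SPRi quantification relation between the surface density Γ of captured antigen and the reflectivity change, consistent with the variables defined below (a reconstruction under the stated thin-layer assumption, not necessarily the authors' exact expression), is:

$$\Gamma \approx \frac{L_{zc}}{2}\, \frac{\Delta R}{S\, (dn/dc)}$$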
where dn/dc is the refractive index increment of the molecule and S is the sensitivity of the prism reflectivity to changes in refractive index.
A clean mass spectrum was obtained for the LAG3 protein thanks to good tryptic digestion and the homogeneity of the matrix (α-cyano-4-hydroxycinnamic acid). A relatively high-intensity m/z peak of the LAG3 protein was found at 1,422.70 amu, with an average Mascot score of 87.9 ± 2.4. The MS results were further confirmed by MS-MS analysis. These results are comparable to those of the classical in-gel digestion method.
A signal-to-noise ratio greater than 10, 100% reliability and on-chip detection at the femtomole level demonstrate the credibility of this coupling technique. Protein–protein interactions and on-chip peptide distributions can be studied with high spatial resolution using this technique.
DNA aptamers
Aptamers are DNA ligands that target particular biomolecules such as proteins. An SPR imaging platform is a good choice for characterizing aptamer–protein interactions. To study the aptamer–protein interaction, oligonucleotides are first grafted, through formation of a thiol self-assembled monolayer (SAM) on a gold substrate, using a piezoelectric dispensing system. Thiol groups are introduced on the DNA oligonucleotides via N-hydroxysuccinimide (NHS) chemistry. Target oligonucleotides bearing a primary amine group at their 5′ end are conjugated to HS-C11-NHS in phosphate buffer solution at pH 8.0 for one hour at room temperature. The aptamer-grafted biosensor is placed in the SPRM after rinsing. Thrombin is then co-injected with an excess of cytochrome C to check signal specificity. The concentration of free thrombin is determined from a calibration curve obtained by plotting the initial slope of the signal at the beginning of the injection against concentration. The interaction of thrombin and the aptamer can be monitored on the microarray in real time during injections of thrombin at different concentrations. The solution-phase dissociation constant KDsol (3.16 ± 1.16 nM) is calculated from the measured concentrations of free thrombin.
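In standard mass-action form, consistent with the definitions that follow, the solution-phase dissociation constant is:

$$K_D^{sol} = \frac{[\mathrm{THR}]\,[\mathrm{APT}]}{[\mathrm{THR{-}APT}]}$$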
where [THR–APT] = cTHR − [THR] is the equilibrium concentration of thrombin bound to aptamers in solution, and [APT] = cAPT − [THR–APT] is the concentration of free aptamers in solution.
The surface-phase dissociation constant KDsurf (3.84 ± 0.68) is obtained by fitting a Langmuir adsorption isotherm to the equilibrium signals. The two dissociation constants differ significantly because KDsurf depends on the surface grafting density, as shown in Figure 19; at low grafting density this dependence extrapolates linearly to the solution-phase affinity.
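As an illustration of the Langmuir fit described above, the following is a minimal Python sketch; the function name, the synthetic data and the use of scipy.optimize.curve_fit are illustrative assumptions, not the authors' actual analysis code:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, r_max, kd):
    """Langmuir adsorption isotherm: equilibrium SPR signal R_eq
    as a function of analyte concentration c (same units as kd)."""
    return r_max * c / (kd + c)

# Hypothetical equilibrium SPRi signals (reflectivity change, %R)
# measured at several thrombin concentrations (nM).
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])      # nM
r_eq = np.array([0.11, 0.20, 0.33, 0.49, 0.64, 0.77, 0.86])  # %R

# Fit R_eq = R_max * c / (K_D + c); p0 is a rough initial guess.
popt, pcov = curve_fit(langmuir, conc, r_eq, p0=[1.0, 4.0])
r_max_fit, kd_fit = popt
kd_err = np.sqrt(np.diag(pcov))[1]

print(f"K_D(surf) = {kd_fit:.2f} +/- {kd_err:.2f} nM")
```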
The SPRi difference image gives information about the presence and specificity of binding, but it is not suitable for quantifying free protein when multiple affinity sites are present. Real-time monitoring of the interaction with SPRM makes it possible to study the kinetics and the affinity of the interactions.
Detection of polymer interaction
Although surface plasmon resonance imaging (SPRi) is used in biology to characterize interactions between two biological molecules, it is also useful for monitoring interactions between two polymers. In this approach, one polymer, called the host polymer (HP), is immobilized on the surface of a biochip, and the other, designated the guest polymer (GP), is injected onto the SPRi biochip to study the interactions: for example, a host polymer of amine-functionalized poly(β-cyclodextrin) and a guest polymer of PEG(ada)4.
An SPRi biochip was used for the immobilization of HP at different concentrations. An array of HP active sites was produced on the chip; the HP was attached through its amino groups to N-hydroxysuccinimide functionalities on the gold surface. First, the SPRi system was filled with running buffer solution, and the SPRi biochip was placed into the analysis chamber. Two GP solutions of different concentrations, 1 g/L and 0.1 g/L, were injected into the flow cell. The association and dissociation of the two polymers can be monitored in real time from the change in reflectivity, and the SPRM images can be differentiated on the basis of white spots (association phase) and black spots (dissociation phase). PEG without adamantyl groups did not adsorb onto the β-cyclodextrin cavities; conversely, there was no adsorption of GP on the chip without HP. The change in SPRi response at the reaction sites is captured as kinetic curves and real-time images from the CCD camera. Local changes in light reflectivity are directly related to the quantity of target molecules at each point. Variations at the surface of the chip provide comprehensive knowledge of the molecular binding and kinetic processes.
Bio-mineralization
One important class of biomaterials is polymer–hydroxyapatite, which is remarkably useful in the field of bone regeneration because of its resemblance to natural bone material. The advantage of hydroxyapatite, Ca10(PO4)6(OH)2, is that it begins to form inside bone tissue through mineralization, which also promotes osteointegration. Biomineralization, also called calcification, is a process in which calcium cations come from cells and physiological fluids, while phosphate anions are produced from the hydrolysis of phosphoesters and phosphoproteins as well as from the body fluids. This phenomenon has also been tested in in vitro studies.
For in vitro studies, polyamidoamine (PAMAM) dendrimers with amino and carboxylic-acid external reactive shells are used as the sensing phase. These dendrimers must be immobilized on the gold surface, but they do not natively bind to gold; hence, thiol groups have to be introduced at the terminals of the dendrimers so that they can attach to the gold surface. The carboxylic groups are functionalized with N,N-(3-dimethylaminopropyl)-N′-ethyl-carbodiimide hydrochloride (EDC) and N-hydroxysuccinimide (NHS) solutions in phosphate buffer. The functional groups (amide, amino and carboxyl) act as ionic pumps capturing calcium ions from the test fluids; the calcium cations then bind with phosphate anions to generate calcium-phosphate mineral nuclei on the dendrimer surface.
SPRM is expected to be sensitive enough to provide important quantitative information on the occurrence and kinetics of mineralization. Detection of the mineralization is based on the specific mass change induced by the formation and growth of the mineral nuclei. Nucleation and the progress of mineralization can be monitored by SPRM, as shown in Figure 20. PAMAM-containing sensors are fixed on the SPRi analysis platform and then exposed to the experimental fluids in the flow cell, as shown in Figure 21. SPRM cannot sense the origin and nature of the mass change per se, but it detects the modification of the refractive index due to mineral precipitation.
References
Microscopy
Plasmonics | Surface plasmon resonance microscopy | [
"Physics",
"Chemistry",
"Materials_science"
] | 6,701 | [
"Plasmonics",
"Surface science",
"Condensed matter physics",
"Microscopy",
"Nanotechnology",
"Solid state engineering"
] |
49,248,459 | https://en.wikipedia.org/wiki/Transactivation%20domain | The transactivation domain or trans-activating domain (TAD) is a transcription factor scaffold domain which contains binding sites for other proteins such as transcription coregulators. These binding sites are frequently referred to as activation functions (AFs). TADs are named after their amino acid composition. These amino acids are either essential for the activity or simply the most abundant in the TAD. Transactivation by the Gal4 transcription factor is mediated by acidic amino acids, whereas hydrophobic residues in Gcn4 play a similar role. Hence, the TADs in Gal4 and Gcn4 are referred to as acidic or hydrophobic, respectively.
In general we can distinguish four classes of TADs:
acidic domains (also called "acid blobs" or "negative noodles"; rich in D and E amino acids, present in Gal4, Gcn4 and VP16).
glutamine-rich domains (contains multiple repetitions like "QQQXXXQQQ", present in SP1)
proline-rich domains (contains repetitions like "PPPXXXPPP" present in c-jun, AP2 and Oct-2)
isoleucine-rich domains (repetitions "IIXXII", present in NTF-1)
Alternatively, since similar amino acid compositions do not necessarily mean similar activation pathways, TADs can be grouped by the process they stimulate, either initiation or elongation.
Acidic/9aaTAD
Nine-amino-acid transactivation domain (9aaTAD) defines a domain common to a large superfamily of eukaryotic transcription factors, represented by Gal4, Oaf1, Leu3, Rtg3, Pho4, Gln3 and Gcn4 in yeast, and by p53, NFAT, NF-κB and VP16 in mammals. The definition largely overlaps with that of the "acidic" family. A 9aaTAD prediction tool is available. 9aaTADs tend to have an associated three-amino-acid hydrophobic (usually Leu-rich) region immediately N-terminal to them.
9aaTAD transcription factors p53, VP16, MLL, E2A, HSF1, NF-IL6, NFAT1 and NF-κB interact directly with the general coactivators TAF9 and CBP/p300. p53 9aaTADs interact with TAF9, GCN5 and with multiple domains of CBP/p300 (KIX, TAZ1,TAZ2 and IBiD).
The KIX domain of general coactivators Med15(Gal11) interacts with 9aaTAD transcription factors Gal4, Pdr1, Oaf1, Gcn4, VP16, Pho4, Msn2, Ino2 and P201. Positions 1, 3-4, and 7 of the 9aaTAD are the main residues that interact with KIX. Interactions of Gal4, Pdr1 and Gcn4 with Taf9 have been observed. 9aaTAD is a common transactivation domain which recruits multiple general coactivators TAF9, MED15, CBP/p300 and GCN5.
Glutamine-rich
Glutamine (Q)-rich TADs are found in POU2F1 (Oct1), POU2F2 (Oct2), and Sp1 (see also Sp/KLF family). Although such is not the case for every Q-rich TAD, Sp1 is shown to interact with TAF4 (TAFII 130), a part of the TFIID assembly.
See also
DNA-binding protein
Transcription factor
References
External links
9aaTAD prediction tool
Transcription factors
Protein domains | Transactivation domain | [
"Chemistry",
"Biology"
] | 790 | [
"Transcription factors",
"Gene expression",
"Protein classification",
"Signal transduction",
"Protein domains",
"Induced stem cells"
] |
49,249,936 | https://en.wikipedia.org/wiki/Drug%20vectorization | In pharmacology and medicine, vectorization of drugs refers to (intracellular) targeting with plastic, noble metal or silicon nanoparticles or liposomes to which pharmacologically active substances are reversibly bound or attached by adsorption.
CNRS researchers have devised a way to overcome the problem of multidrug resistance using polyalkyl cyanoacrylate (PACA) nanoparticles as "vectors".
As a developing concept, drug nanocarriers are expected to play a major role in delivering multiple drugs to tumor tissues by overcoming semi-permeable membranes and biological barriers such as the blood–brain barrier.
References
See also
Vector (molecular biology)
Cancer treatment
Nanomedicine
Nanobiotechnology
Paul Ehrlich#Magic bullet
Gold nanobeacons
Pharmacology
Nanomedicine | Drug vectorization | [
"Chemistry",
"Materials_science"
] | 177 | [
"Nanomedicine",
"Pharmacology",
"Nanotechnology",
"Medicinal chemistry"
] |
49,250,446 | https://en.wikipedia.org/wiki/Reactive%20carbonyl%20species | Reactive carbonyl species (RCS) are molecules with highly reactive carbonyl groups, and often known for their damaging effects on proteins, nucleic acids, and lipids. They are often generated as metabolic products. Important RCSs include 3-deoxyglucosone, glyoxal, and methylglyoxal. RCSs react with amines and thiol groups leading to advanced glycation endproducts (AGEs). AGE's are indicators of diabetes.
Reactive aldehyde species (RASP), such as malondialdehyde and 4-hydroxynonenal, are a subset of RCS that are implicated in a variety of human diseases.
See also
Reactive oxygen species
Reactive sulfur species
Reactive nitrogen species
References
Molecules
Carbon compounds | Reactive carbonyl species | [
"Physics",
"Chemistry",
"Biology"
] | 159 | [
"Molecular physics",
"Molecules",
"Biotechnology stubs",
"Biochemistry stubs",
"Physical objects",
"nan",
"Biochemistry",
"Atoms",
"Matter"
] |
49,250,487 | https://en.wikipedia.org/wiki/Reactive%20sulfur%20species | Reactive sulfur species (RSS) are a family of sulfur-based chemical compounds that can oxidize and inhibit thiol-proteins and enzymes. They are often formed by the oxidation of thiols and disulfides into higher oxidation states. Examples of RSS include persulfides, polysulfides and thiosulfate.
See also
Reactive oxygen species
Reactive nitrogen species
Reactive carbonyl species
References
Molecules
Sulfur compounds | Reactive sulfur species | [
"Physics",
"Chemistry"
] | 89 | [
"Molecular physics",
"Molecules",
"Physical objects",
"nan",
"Atoms",
"Matter"
] |
49,252,268 | https://en.wikipedia.org/wiki/Fluorothymidine%20F-18 | Fluorothymidine F-18 (FLT) is a tumor-specific PET tracer and radiopharmaceutical. It is an isotopologue of alovudine. FLT is suitable for monitoring how tumors respond to cytostatic therapy. FLT accumulates in proliferating cells where it indicates the activity of the enzyme thymidine kinase. Cell division can be characterized by the activity of that enzyme. FLT is phosphorylated as though it were thymidine, and is subsequently incorporated into DNA. Thymidine is essential for DNA replication. Considering that FLT lacks a 3′-hydroxy group, transcription of DNA is impeded following incorporating of FLT. FLT indicates changes in tumor cell proliferation by tracking the restoration of nucleosides from degenerated DNA.
References
PET radiotracers
Pyrimidinediones
Organofluorides | Fluorothymidine F-18 | [
"Chemistry"
] | 190 | [
"Chemicals in medicine",
"Medicinal radiochemistry",
"PET radiotracers"
] |
49,253,092 | https://en.wikipedia.org/wiki/Zinc%20finger%20protein%20208 | Zinc finger protein 208 is a protein that in humans is encoded by the ZNF208 gene.
Function
Zinc finger proteins (ZNFs), such as ZNF208, bind DNA and, through this binding, regulate gene transcription. Most ZNFs contain conserved C2H2 motifs and are classified as Kruppel-type zinc fingers. A conserved protein motif, termed the Kruppel-associated box (KRAB) domain, mediates protein-protein interactions (Eichler et al., 1998 [PubMed 9724325]). See ZNF91 (MIM 603971) for further information on ZNFs.
References
Further reading
Proteins | Zinc finger protein 208 | [
"Chemistry"
] | 146 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
49,253,125 | https://en.wikipedia.org/wiki/Zinc%20finger%20protein%20226 | Zinc finger protein 226 is a protein that in humans is encoded by the ZNF226 gene.
Gene
The zinc finger protein 226 is also known as the Kruppel-associated box protein. Within humans, the ZNF226 gene is found on the plus strand of chromosome 19q13, spanning 13,311 nucleotides from 44,165,070 to 44,178,381.
Transcript
Currently, there are 20 different transcript variants encoding ZNF226, all of which have six or seven identified exon regions within ZNF226. The longest identified transcript, ZNF226 transcript variant X4, spans 2,797 base pairs (bp).
Protein
ZNF226 is currently known to have three isoforms within humans: ZNF226 isoform X1, ZNF226 isoform X2, and ZNF226 isoform X3. The ZNF226 isoform X1 protein is the longest known variant, with 803 amino acids. This protein contains the Kruppel-associated box A (KRAB-A) domain, which functions as a transcriptional repressor; however, the exact function of the ZNF226 protein is currently unknown. Within isoform X1, there are 18 C2H2 zinc finger structural motif (zf-C2H2) domains, which are known to bind either zinc ions (Zn2+) or nucleic acids (Figures 7–8). Within those regions, cysteine and histidine are the primary amino acids that bind Zn2+ or nucleic acids, although other amino acids have been identified as binding (Figures 7–8). In addition to the KRAB-A domain and zf-C2H2 domains, there are zinc finger double domains, which also contain binding sites for ions or nucleic acids.
ZNF226 human and ortholog protein sequences have molecular weights between 89 and 92 kDa and isoelectric points (pI) ranging from 8.60 to 9.00. In humans, the zf-C2H2 and zinc finger double-domain region of ZNF226 isoform X1 is 59.3 kDa with a theoretical pI of 9.11. Cysteine is regularly spaced, with a cysteine every three amino acids, and at least one of the amino acids between the cysteines is either aspartic acid or glutamic acid. Despite this pattern, the region is still considered to have a low aspartic acid content, at 1.9%. There is also repetition in the chemical patterns within humans that is characteristic of ZNF226; these repetitions are most common within the zf-C2H2 and zinc finger double domains of the protein, notably at the cysteine and histidine binding sites. Predicted secondary structures of ZNF226 show a variable number of alpha helices, beta-strand bridges, and random coils throughout the protein. Various programs, such as GOR4 and the Chou–Fasman program, give overall similar predictions of the coiled, stranded, and helical regions throughout the protein.
Regulation
Gene
Promoter
Using the Genomatix software, GXP_7536741 (1142 bp) was identified as the best promoter of ZNF226 (Figure 1). Within the last 500 bp of the promoter, binding sites for the signal transducer and activator of transcription (V$STAT.01), the selenocysteine tRNA activating factor (V$THAP11.01), and the cell cycle regulators: cell cycle homology region (V$CHR.01) were conserved among Homo sapiens, Macaca mulatta, Pan troglodytes, and Canis lupus familiaris. In addition, the binding site for the SPI-1 proto-oncogene; hematopoietic TF PU.1 (V$SPI1.02), which is also known to bind a promoter region within the c-fes proto-oncogene encoding a tyrosine kinase, is found in two regions of the promoter sequence. The signal transducer and activator of transcription binding site was also conserved in two regions and is known to have high binding specificity. The selenocysteine tRNA activating factor plays a role in embryonic stem cell regeneration. The cell cycle homology region binding site plays an important role in cell survival; mutations in the transcription factor can lead to apoptosis.
Tissue distribution
In terms of gene expression, ZNF226 is expressed in most tissues. Microarray data illustrate higher expression of ZNF226 within the ovaries; this is further supported by data depicting a decrease in ZNF226 expression in granulosa cells of individuals with polycystic ovary syndrome. Higher expression of ZNF226 is also observed within the thyroid compared to other tissues, and decreased ZNF226 expression is observed in individuals with papillary thyroid cancer.
Within fetuses, there is some level of ZNF226 expression present within all tissues throughout the gestational period of 10 to 20 weeks. However, there is a higher level of ZNF226 expression in the heart at 10 weeks of gestation, and a decreased level of expression within kidneys at 20 weeks gestation.
ZNF226 expression has been observed in endothelial progenitor cells (EPCs) in peripheral blood (PB) and umbilical cord blood (CB). Gene expression is lower in PB-EPCs than in CB-EPCs. PB-EPCs have more tumor suppressor (TP53) expression than CB-EPCs, while CB-EPCs have more angiogenic expression, i.e., expression associated with the growth and splitting of vasculature.
Transcript
Using RNAfold, minimum free energy structures were created based on the extended 5’ and 3’ untranslated region (UTR) in human sequences. Unconserved amino acids, miRNA, stem-loop formations, and RNA binding proteins (RBPs) are shown on the diagram (Figure 2–3).
miRNA targeting
Within the 5′ UTR region, both miR-4700-5p and miR-4667-5p were referenced in an experiment that identified certain miRNAs consistently expressed in ERBB2-positive breast cancer. In addition, miR-8089 was referenced in a study identifying novel miRNAs found in sepsis patients. miR-4271 was shown to affect coronary heart disease by binding to the 3′ UTR region of the APOC3 gene. The literature on miR-7113-5p shows that this miRNA is a mirtron.
Within the 3′ UTR region, miR-3143 is referenced in a study in which miRNAs were consistently expressed in ERBB2-positive breast cancer. miR-152-5p plays a role in inhibiting DNA methylation of genes involved in metabolic and inflammatory pathways. miR-31-3p is overexpressed in esophageal squamous cell carcinoma (ESCC). One result, miR-150-5p, was conserved across multiple homologs within the 3′ UTR region more than 3,000 bp downstream. The miR-150-5p miRNA plays a role in colorectal cancer (CRC), where lower expression of the miRNA was associated with suppression of CRC metastasis.
RNA binding proteins
Among the RBPs found, PABPC1 had five binding sites, two of which are highlighted on the 5′ UTR. This protein is known to attach poly(A) tails to mRNAs that have entered the cytoplasm, preventing them from re-entering the nucleus. The FUS protein was another RBP found, with a binding site on a predicted stem loop; the FUS gene encodes a protein that facilitates transport of mRNA into the cytoplasm. Within the 3′ UTR, the RBMY1A1, RBMX, and ACO1 proteins were some of the top-scoring RBPs. RBMY1A1 is a protein known to participate in splicing and is required for sperm development. The RBMX protein, a homolog of the RBMY protein involved in sperm production, is also known to promote transcription of the tumor suppressor gene TXNIP. ACO1 is another RBP known to bind mRNA to regulate iron levels; by binding to iron-responsive elements, it can repress translation of ferritin and inhibit degradation of transferrin receptor mRNA when iron levels become low.
Protein
Analyses to predict post-translational modifications of the protein were conducted. Based on the results of Expasy's Myristoylator for Homo sapiens, Mirounga leonina, and Fukomys damarensis, it can be concluded that ZNF226 is not myristoylated at the N-terminus. Numerous predicted binding sites for post-translational modifications were also identified among the three species. The phosphorylation region at the C-terminus of the protein was identified as a match for the protein kinase C phosphorylation binding site. S-nitrosylation was another identified modification, at C354 (Figure 2). This modification is found in SRG1, a zinc finger protein that plays a role in preventing nitric oxide (NO) synthesis; when NO is sustained, S-nitrosylation occurs within the protein, disrupting its transcriptional repression abilities. Acetylation was another identified modification. In the case of promyelocytic leukemia, a condition resulting in an abundance of blood-forming cells in the bone marrow, promyelocytic leukemia zinc finger proteins are known to be activated by histone acetyltransferases, or by acetylation of a C-terminal lysine. Acetylation in other zinc finger proteins, such as GATA1, is known to enhance their ability to interact with other proteins. Arginine dimethylation is another identified modification within ZNF226; arginine methylation of the cellular nucleic acid binding protein (CNBP), a zinc finger protein, has been shown to impede its ability to bind nucleic acids.
It is predicted that ZNF226 localizes within the nucleus, which aligns with its known functions as a transcription factor. It has also been predicted to localize within the mitochondria.
Homology/evolution
Although there is little information available on the ZNF226 gene, homologs of the gene have been found across eukaryotes and bacteria species. Strict orthologs were only found within mammals (Figure 4). The ZNF226 gene is also closely related to the paralog ZNF234 in humans, and the Zfp111 gene within mice. Across the various species in which ZNF226 orthologs and homologs that were identified, conservation of the C2H2 binding sites is apparent (Figure 4–5). In human ZNF226 paralogs, there is also conservation of the C2H2 binding sites, as well as nucleic acid binding sites.
The ZNF226 protein evolves slowly, at a rate more similar to that of cytochrome c than to that of the fibrinogen alpha chain (Figure 6).
Interacting Proteins
Two interactions, detected via the two-hybrid method, occurred with SSBP3 and ATF4, both of which are transcription factors.
ATF4/CREB-2 is a transcription factor that binds to the long terminal repeat of the human T-cell leukemia virus type 1 (HTLV-1) and can act as an activator of HTLV-1.
SSBP3/CSDP is found in mouse embryonic stem cells, where it is involved in their development into trophoblasts (cells that provide nutrients to the embryo). ZNF226 is expressed at greater levels within human stem cells.
Function
As a transcription factor that plays a role in transcriptional repression, ZNF226 is predicted to bind, via its 18 zf-C2H2 binding domains, to the DNA sequence shown in the sequence logo (Figures 7–9).
Clinical significance
Associated diseases and conditions
A mutation within the ZNF226 gene has been positively correlated with the presence of hepatocellular carcinoma (HCC). A particular SNP (rs2927438) also correlated with increased expression of ZNF226 in brain frontal cortical tissue and in peripheral mononuclear cells, such as T cells and B cells. The promoter region of ZNF226 was found to be hypomethylated in people who were exposed to the Chinese famine; the hypomethylated region in ZNF226 showed a correlation of methylation between blood and prefrontal cortex, although the exact function of the protein in the famine response is not understood. The ZNF226 gene was listed among many other genes with a copy number variation (CNV) associated with common variable immunodeficiency (CVID).
SNPs
Numerous SNPs have been identified throughout the ZNF226 gene. Within the GXP_7536741 promoter, two SNPs of interest were found; the associated transcription factors for both SNPs are listed below.
References
Proteins | Zinc finger protein 226 | [
"Chemistry"
] | 2,866 | [
"Proteins",
"Biomolecules by chemical classification",
"Molecular biology"
] |
49,253,469 | https://en.wikipedia.org/wiki/Holmium%E2%80%93magnesium%E2%80%93zinc%20quasicrystal | A holmium–magnesium–zinc (Ho–Mg–Zn) quasicrystal is a quasicrystal made of an alloy of the three metals holmium, magnesium and zinc that has the shape of a regular dodecahedron, a Platonic solid with 12 five-sided faces. Unlike the similar pyritohedron shape of some cubic-system crystals such as pyrite, this quasicrystal has faces that are true regular pentagons.
The crystal is part of the R–Mg–Zn family of crystals, where R=Y, Gd, Tb, Dy, Ho or Er. They were first discovered in 1994. These form quasicrystals in the stoichiometry around . Magnetically, they form a spin glass at cryogenic temperatures.
While the experimental discovery of quasicrystals dates back to the 1980s, the relatively large, single grain nature of some Ho–Mg–Zn quasicrystals has made them a popular way to illustrate the concept.
See also
Complex metallic alloys
References
Quasicrystals
Tessellation
Magnesium alloys
Zinc alloys
Rare earth alloys
Holmium | Holmium–magnesium–zinc quasicrystal | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 232 | [
"Materials science stubs",
"Rare earth alloys",
"Alloy stubs",
"Magnesium alloys",
"Tessellation",
"Euclidean plane geometry",
"Crystallography stubs",
"Crystallography",
"Alloys",
"Zinc alloys",
"Planes (geometry)",
"Quasicrystals",
"Symmetry"
] |
55,996,819 | https://en.wikipedia.org/wiki/Pi%20electron%20donor-acceptor | The pEDA parameter (pi electron donor-acceptor) is a pi-electron substituent effect scale, described also as mesomeric or resonance effect. There is also a complementary scale - sEDA. The more positive is the value of pEDA the more pi-electron donating is a substituent. The more negative pEDA, the more pi-electron withdrawing is the substituent (see the table below).
The pEDA parameter for a given substituent is calculated by means of quantum chemistry methods. The model molecule is monosubstituted benzene. First the geometry is optimized at a suitable level of theory; then natural population analysis within the framework of Natural Bond Orbital (NBO) theory is performed. The molecule has to be oriented so that the aromatic benzene ring is perpendicular to the z-axis. The 2pz orbital occupations of the ring carbon atoms are then summed to give the total pi-occupation. From this value, the summed pi-occupation of unsubstituted benzene (a value close to 6, in accordance with the Hückel rule) is subtracted, yielding the pEDA parameter. For pi-electron-donating substituents like -NH2, -OH or -F the pEDA parameter is positive, and for pi-electron-withdrawing substituents like -NO2, -BH2 or -CN the pEDA is negative.
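As an illustration of this bookkeeping, the following is a minimal Python sketch; the occupation numbers and the 5.99 benzene reference value are illustrative assumptions, and in practice the 2pz occupations come from an NBO natural population analysis of the optimized, z-axis-aligned molecule:

```python
def peda(ring_2pz_occupations, benzene_reference=5.99):
    """Compute the pEDA parameter from the NBO 2pz occupations
    of the six ring carbons of a monosubstituted benzene.

    Positive result -> pi-electron-donating substituent;
    negative result -> pi-electron-withdrawing substituent.
    """
    if len(ring_2pz_occupations) != 6:
        raise ValueError("expected occupations for six ring carbons")
    total_pi = sum(ring_2pz_occupations)
    return total_pi - benzene_reference

# Hypothetical occupations for aniline (pi-donating -NH2 substituent):
aniline = [1.08, 0.99, 1.02, 0.98, 1.02, 0.99]
print(f"pEDA = {peda(aniline):+.3f}")  # positive, as expected for a donor
```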
The pEDA scale was invented by Wojciech P. Oziminski and Jan Cz. Dobrowolski and the details are available in the original paper.
The pEDA scale linearly correlates with experimental substituent constants like the Taft–Topsom σR parameter.
For easy calculation of pEDA, the AromaTcl program, written in Tcl with a graphical user interface and free of charge for academic purposes, is available.
Sums of pi-electron occupations and pEDA parameter for substituents of various character are gathered in the following table:
References
Chemical bond properties
Organic chemistry
Quantum chemistry | Pi electron donor-acceptor | [
"Physics",
"Chemistry"
] | 422 | [
"Chemical bond properties",
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"nan",
"Atomic",
" and optical physics"
] |
55,997,248 | https://en.wikipedia.org/wiki/Oasis%20effect | The oasis effect refers to the creation of a local microclimate that is cooler than the surrounding dry area due to evaporation or evapotranspiration of a water source or plant life and higher albedo of plant life than bare ground. The oasis effect is so-named because it occurs in desert oases. Urban planners can design a city's layout to optimize the oasis effect to combat the urban heat island effect. Since it depends on evaporation, the oasis effect differs by season.
Causes
An oasis contains moisture from a water source and/or plants. When that water evaporates or transpires, heat from the surroundings is used to convert liquid to gas in an endothermic process, which results in cooler local temperatures. Moreover, vegetation has a higher albedo than bare ground and reflects more sunlight, leading to lower land temperatures, lower air temperatures, and a cooler local microclimate.
Seasonal effects
The oasis effect occurs most prominently during the summer because warmer temperatures lead to more evaporation. In the winter, the oasis effect operates differently: instead of making the oasis cooler, it makes it warmer at night. This occurs because trees block heat from leaving the land; outgoing radiation cannot be emitted back to the sky because the trees intercept and absorb it.
Urban planning
The oasis effect plays a role in urban development because plants and bodies of water result in cooler cities. Accordingly, cities with parks will have lower temperatures because plants have higher albedo than bare ground or roads. Areas with higher albedo reflect more light than they absorb, leading to cooler temperatures. Normally, cities are hotter than their suburbs due to dense population, dark buildings and roads, and pollution; this is known as the urban heat island effect. However, by careful placement of trees, parks, and plant life, cities can create their own oasis effect. By maintaining plant life throughout a city, urban planners can produce an oasis effect to counter the urban heat island effect; even a small scattering of trees can significantly reduce local temperatures. However, concerns can arise in arid regions with limited water sources where city planners may not want to leave water sources out in the open to evaporate, and may not want to sacrifice water for upkeep of plants.
See also
Urban climate
Water-sensitive urban design
Green roof
Oasis
Evaporative Cooling
References
Climate patterns
Hydrogeology
Hydrology
Meteorological concepts | Oasis effect | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 492 | [
"Hydrology",
"Hydrogeology",
"Environmental engineering"
] |
59,409,110 | https://en.wikipedia.org/wiki/Magnetic%20topological%20insulator | In physics, magnetic topological insulators are three dimensional magnetic materials with a non-trivial topological index protected by a symmetry other than time-reversal. This type of material conducts electricity on its outer surface, but its volume behaves like an insulator.
In contrast with a non-magnetic topological insulator, a magnetic topological insulator can have naturally gapped surface states as long as the quantizing symmetry is broken at the surface. These gapped surfaces exhibit a topologically protected, half-quantized surface anomalous Hall conductivity ($e^2/2h$) perpendicular to the surface. The sign of the half-quantized surface anomalous Hall conductivity depends on the specific surface termination.
Theory
Axion coupling
The classification of a 3D crystalline topological insulator can be understood in terms of the axion coupling $\theta$, a scalar quantity that is determined from the ground state wavefunction:

$$\theta = -\frac{1}{4\pi} \int_{\mathrm{BZ}} d^3k\, \epsilon^{\alpha\beta\gamma}\, \mathrm{Tr}\!\left[ A_\alpha \partial_\beta A_\gamma - i\tfrac{2}{3} A_\alpha A_\beta A_\gamma \right],$$

where $A_\alpha$ is a shorthand notation for the Berry connection matrix

$$A^{nm}_\alpha(\mathbf{k}) = i \left\langle u^n_{\mathbf{k}} \middle| \partial_{k_\alpha} u^m_{\mathbf{k}} \right\rangle,$$

where $|u^n_{\mathbf{k}}\rangle$ is the cell-periodic part of the ground state Bloch wavefunction.
The topological nature of the axion coupling is evident if one considers gauge transformations. In this condensed matter setting, a gauge transformation is a unitary transformation between states at the same point $\mathbf{k}$:

$$|u^n_{\mathbf{k}}\rangle \to \sum_m U_{mn}(\mathbf{k})\, |u^m_{\mathbf{k}}\rangle.$$

Under such a gauge transformation, $\theta \to \theta + 2\pi n$ for some integer $n$. Since a gauge choice is arbitrary, this property tells us that $\theta$ is only well defined in an interval of length $2\pi$, e.g. $\theta \in [-\pi, \pi)$.
The final ingredient we need to acquire a classification based on the axion coupling comes from observing how crystalline symmetries act on $\theta$:

Fractional lattice translations $\tau$, n-fold rotations $C_n$: $\theta \to \theta$.
Time-reversal $\Theta$, inversion $I$: $\theta \to -\theta$.

The consequence is that if time-reversal or inversion is a symmetry of the crystal, we need to have $\theta = -\theta \pmod{2\pi}$, and that can only be true if $\theta = 0$ (trivial) or $\theta = \pi$ (non-trivial) (note that $\theta$ and $\theta + 2\pi$ are identified), giving us a $\mathbb{Z}_2$ classification. Furthermore, we can combine inversion or time-reversal with other symmetries that do not affect $\theta$ to acquire new symmetries that quantize $\theta$. For example, mirror symmetry can always be expressed as $M = I \cdot C_2$, giving rise to crystalline topological insulators, while the first intrinsic magnetic topological insulator, MnBi$_2$Te$_4$, has the quantizing symmetry $S = \Theta \cdot \tau_{1/2}$ (time reversal combined with a half lattice translation).
Surface anomalous Hall conductivity
So far we have discussed the mathematical properties of the axion coupling. Physically, a non-trivial axion coupling ($\theta = \pi$) will result in a half-quantized surface anomalous Hall conductivity ($\sigma^{\mathrm{surf}}_{\mathrm{AHC}} = e^2/2h$) if the surface states are gapped. To see this, note that in general $\sigma^{\mathrm{surf}}_{\mathrm{AHC}}$ has two contributions. One comes from the axion coupling $\theta$, a quantity that is determined from bulk considerations as we have seen, while the other is the Berry phase $\phi$ of the surface states at the Fermi level and therefore depends on the surface. In summary, for a given surface termination the perpendicular component of the surface anomalous Hall conductivity will be

$$\sigma^{\mathrm{surf}}_{\mathrm{AHC}} = -\frac{e^2}{2\pi h}\,(\theta + \phi) \;\;\mathrm{mod}\;\; \frac{e^2}{h}.$$

The expression for $\sigma^{\mathrm{surf}}_{\mathrm{AHC}}$ is defined modulo $e^2/h$ because a surface property can be determined from a bulk property ($\theta$) only up to a quantum. To see this, consider a block of a material with some initial $\sigma^{\mathrm{surf}}_{\mathrm{AHC}}$, which we wrap with a 2D quantum anomalous Hall insulator with Chern index $C = 1$. As long as we do this without closing the surface gap, we are able to increase $\sigma^{\mathrm{surf}}_{\mathrm{AHC}}$ by $e^2/h$ without altering the bulk, and therefore without altering the axion coupling $\theta$.

One of the most dramatic effects occurs when $\theta = \pi$ and time-reversal symmetry is present, i.e. in a non-magnetic topological insulator. Since $\sigma^{\mathrm{surf}}_{\mathrm{AHC}}$ changes sign under time reversal, and the surface must respect the surface symmetries, of which $\Theta$ is one, the only consistent value on a gapped surface would be $\sigma^{\mathrm{surf}}_{\mathrm{AHC}} = 0$, which is incompatible with the half quantization imposed by $\theta = \pi$. This forces the surface gap to close, resulting in a Dirac cone (or more generally an odd number of Dirac cones) on every surface and therefore making the boundary of the material conducting.

On the other hand, if time-reversal symmetry is absent, other symmetries can quantize $\theta$ but not force $\sigma^{\mathrm{surf}}_{\mathrm{AHC}}$ to vanish. The most extreme case is that of inversion symmetry ($I$). Inversion is never a surface symmetry, and therefore a non-zero $\sigma^{\mathrm{surf}}_{\mathrm{AHC}}$ is allowed. In the case that a surface is gapped, $\theta = \pi$ results in a half-quantized surface AHC, $\sigma^{\mathrm{surf}}_{\mathrm{AHC}} = \pm e^2/2h$.
A half-quantized surface Hall conductivity and a related treatment are also valid for understanding topological insulators in a magnetic field, giving an effective axion description of the electrodynamics of these materials. This term leads to several interesting predictions, including a quantized magnetoelectric effect. Evidence for this effect has recently been given in THz spectroscopy experiments performed at Johns Hopkins University.
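For context, the axion term usually meant here (a standard result in the literature, restated as background rather than taken from this article) is the addition to the electromagnetic Lagrangian

$$\Delta\mathcal{L} = \frac{\theta e^2}{4\pi^2 \hbar c}\, \mathbf{E} \cdot \mathbf{B},$$

which, for $\theta = \pi$, yields a magnetoelectric polarizability quantized in units of $e^2/2h$.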
Experimental realizations
Magnetic topological insulators have proven difficult to create experimentally. In 2023 it was estimated that a magnetic topological insulator might be developed in 15 years' time.
A compound made from manganese, bismuth, and tellurium (MnBi2Te4) has been predicted to be a magnetic topological insulator. In 2024, scientists at the University of Chicago used MnBi2Te4 to develop a form of optical memory which is switched using lasers. This memory storage device could store data more quickly and efficiently, including in quantum computing.
References
Condensed matter physics
Magnetism | Magnetic topological insulator | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,038 | [
"Phases of matter",
"Condensed matter physics",
"Matter",
"Materials science"
] |
59,410,798 | https://en.wikipedia.org/wiki/Hetherington%20Prize | The Hetherington Prize has been awarded once a year since 1991 at Oxford University for the best doctoral thesis presentation in the Department of Materials. The first ever prize (1991) was awarded to Prof. Kwang-Leong Choy (D.Phil., DSc, FIMMM, FRSC, CSci), who went on to become the Director of the Institute for Materials Discovery at University College London and Fellow of the Royal Society of Canada.
The prize is almost always awarded to a single doctoral candidate per year, but in two years it was shared (in 2011 by Nike Dattani and Lewys Jones, and in 2015 by Nina Klein, Aaron Lau, and Joe O'Gorman).
List of notable winners of the Hetherington Prize
References
1991 establishments in England
Awards established in 1991
Awards and prizes of the University of Oxford
Materials science awards | Hetherington Prize | [
"Materials_science",
"Technology",
"Engineering"
] | 175 | [
"Science and technology awards",
"Materials science awards",
"Materials science"
] |
59,412,730 | https://en.wikipedia.org/wiki/Water%20supply%20and%20sanitation%20in%20Lesotho | Lesotho is a mountainous and fairly 'water-rich country', but suffers from a lack of clean drinking water due to inadequate sanitation. In recent decades, with the construction of dams for the Lesotho Highlands Water Project (LHWP), Lesotho has become the main provider of water to parts of northern South Africa. Despite the economic and infrastructure development occasioned by the LHWP, waterborne diseases are common in the country and the infant mortality rate from them is high. In 2017, a project to improve the rural water supply in the Lesotho Lowlands was funded by the Global Environment Facility and the African Development Bank, and is ongoing.
Clean water and sanitation
Lesotho faces issues with clean water and sanitation, most notably limited access to uncontaminated drinking water and inadequate public sanitation. With a government that does not support many large water-infrastructure and hygiene projects because it considers them cost-ineffective, little progress has been made in providing clean water and sanitation to the citizens of Lesotho. Foreign and Lesotho NGOs play an important role in the region, especially in the more rural parts of the country.
History
The idea for the Lesotho Highlands Water Project (LHWP) originated in the late 1970s; however, it was not until a military coup had overthrown the government of Lesotho in 1986 that the project was established. The LHWP was a cooperation between South Africa and Lesotho's newly-installed government, although the arrangement played clearly in South Africa's favor. The project was such that South Africa would construct a series of five dams starting in the Lesotho Highlands, digging tunnels within the gorges of the Maluti Mountains to take the southern flow of the Malibamatso River and direct it north towards South Africa.
The LHWP was estimated at a budget of US$5.6 billion and included a hydroelectric plant that Lesotho alone was responsible for constructing alongside these dams. The Lesotho government had sold their citizens on the idea of economic gain from such a large project. With the assistance of the Development Bank of South Africa (DBSA) and World Bank (WB) funding the dams, and the European Community funding the hydroelectric component, the project came into being with construction planned over the next 30 years, overshooting Lesotho's 5-year development plan cycles, and was to have been completed by 2016. The first dam was scheduled to be completed by 1996.
Microbial examination
The World Health Organization (WHO) conducted a study between July 1992 and January 1993 on the region within approximately 1,000 sq. miles of the 20-year-old hydroelectric facility that resulted from the LHWP. The study included 72 remote villages' water sources. Three categories helped identify the levels of contamination and whether or not these were safe sources of consumable water for the villagers: unimproved was considered a natural source of water from open springs and reservoirs; semi-improved indicated that the source of water had been manipulated or treated to inhibit human and animal contamination; improved water sources were usually completely covered and protected from outside elements, with channels that secluded the source of water into silt-boxes for later consumption, further eliminating contamination.
The WHO also surveyed 588 households about their access to clean water: 38% of the 588 households claimed they had access to improved water sources. However, fewer than 5% used pit latrines, and 18% of children under the age of five had experienced diarrheal illnesses within the two weeks prior to the study. The study found that most unimproved and semi-improved sources of drinking water had levels of total coliform (TC) and Escherichia coli (E. coli) exceeding the standards set by the U.S. Environmental Protection Agency (EPA), at approximately >16 organisms per 100 ml, compared to the EPA standard of <1 organism per 100 ml, preferably non-existent. In addition, 83% of all improved water sources were found to be contaminated with E. coli.
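For illustration (our arithmetic, not part of the WHO report), the quoted percentages translate into approximate household counts as follows:

```python
# Illustrative arithmetic on the survey figures quoted above.
households = 588
with_improved_water = round(0.38 * households)    # ~223 households
with_pit_latrines_max = round(0.05 * households)  # "fewer than 5%": under ~29

print(with_improved_water, with_pit_latrines_max)
```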
The WHO concluded that the service roads built because of the LHWP were a sign of slow infrastructure improvement, and that the surrounding communities should stay patient, as infrastructure for water supplies should follow in the years to come. The exception was the more rural mountainous areas with no roads accessible all year round, where contamination of drinkable water has persisted. The WHO suggested that villagers begin to implement hygienic habits that discontinue self-pollution and defecation within their own natural resources.
A similar study followed about two decades later. Conducted in 2011, the Department of Environmental Health (DEH) at the University of Lesotho examined the microbacterial contaminants in the drinking water of the Maseru district of the Manonyane community. Their scientific study included 22 springs, 6 open wells, 6 private boreholes and 1 open reservoir. They also conducted household surveys to assess the citizens' hygienic practices in and around the water sources sampled.
The surveys revealed a lack of awareness of the contamination of local drinking water. Whether it was laundry run-off, leaking pit latrines or livestock feces, the inattention was apparent because the citizens had not been properly educated in hygienic practices. Poor sanitation of this kind is associated with an estimated 1.6 million deaths of children under the age of five, 84% of whom reside in rural communities.
The water sources sampled were analyzed within a 6-hour timeframe and kept refrigerated in transit to the laboratory to ensure accurate results. The findings were compared with the WHO definition of acceptable fecal contamination and the associated risk categories.
Of the 35 water sources sampled, 34 of the drinkable water sources exceeded the WHO no-risk guidelines of 0 cfu/100 ml of water, while 50% of the 34 were at high risk of contamination. Even the sources of water considered improved had experienced recent rainfall that leaked contaminated fecal matter into the water.
The DEH used this study to convey the issues of hygienic practices and the lack of routine inspection of protected water sources. Their suggestion is for the Manonyane community to implement health programs that educate villagers in safer hygienic techniques, such as discontinuing laundry near water sources, using latrines that are fortified and unable to leak, and keeping livestock away from human drinking water sources to decrease human and animal contamination of local water sources.
Water insecurity
The government of Lesotho has had failed projects to bring drinkable water and sanitation services to the rural communities of Lesotho. These failed attempts have led the government to focus less on the costs of overhauling sanitary water conditions in favor of more lucrative ventures. In turn, the people of Lesotho have experienced increasing rates of water-borne diseases. Studies have shown a correlation between poor access to clean water and household illness, compounded by the large HIV/AIDS epidemic. These insecurities foster distrust of the water that the citizens of Lesotho consume. With the introduction and increased spread of HIV/AIDS over the past decades, Lesotho has seen the disease affect skilled water and sanitation workers, ultimately leading to their deaths. This decreases the capacity to supply clean water, with access further reduced by droughts and climate change. These syndemic issues have grown more apparent and have yet to be addressed.
Recent developments
On February 13, 2017, the Global Environment Facility (GEF), in partnership with the African Development Bank (AfDB), decided to finance climate change adaptation for Sustainable Rural Water Supply in the Lesotho Lowlands. The GEF is contributing $4.4 million and the AfDB $17 million. The project is to improve clean water and sanitation for the rural communities of the Lesotho Lowlands in response to recent climate change and to manage resources more efficiently after the recent drought. The project will help sustain rural communities with potable water. The plan is to implement watershed protection to shield the water supply from recurring droughts and possible floods.
The funds will be distributed through the Least Developed Countries Fund (LDCF), established under the United Nations Framework Convention on Climate Change. The project aligns directly with the National Adaptation Plans of Lesotho to improve the water and sanitation supply of the country, especially in rural communities. The funds will address sustainability of resources and encourage innovation for more jobs and economic growth within the guidelines of the Bank's Strategy for 2013–2022.
The second phase of the Lesotho Lowlands Water Supply Scheme, the Lowlands Water Development Project, seeks to increase water security and climate resilience in four areas of Lesotho's Lowlands. The project will address the nation's susceptibility to the detrimental effects of climate change on water security by including infrastructure to generate clean water in large quantities, enhance distribution networks and sanitation, and boost the efficiency of water usage. The project will also receive €116 million in funding from the European Investment Bank.
References
Infrastructure in Lesotho
Energy infrastructure in Lesotho
Water supply | Water supply and sanitation in Lesotho | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,861 | [
"Hydrology",
"Water supply",
"Environmental engineering"
] |
59,413,690 | https://en.wikipedia.org/wiki/Kerr%E2%80%93Dold%20vortex | In fluid dynamics, Kerr–Dold vortex is an exact solution of Navier–Stokes equations, which represents steady periodic vortices superposed on the stagnation point flow (or extensional flow). The solution was discovered by Oliver S. Kerr and John W. Dold in 1994. These steady solutions exist as a result of a balance between vortex stretching by the extensional flow and viscous diffusion, which are similar to Burgers vortex. These vortices were observed experimentally in a four-roll mill apparatus by Lagnado and L. Gary Leal.
Mathematical description
The stagnation point flow, which is itself an exact solution of the Navier–Stokes equations, is given by $\mathbf{U} = (Ax, -Ay, 0)$, where $A>0$ is the strain rate. To this flow, an additional periodic disturbance can be added such that the new velocity field can be written as

$\mathbf{u} = (Ax,\ -Ay + v,\ w),$

where the disturbance components $v(y,z)$ and $w(y,z)$ are assumed to be periodic in the $z$ direction with a fundamental wavenumber $k$. Kerr and Dold showed that such disturbances exist with finite amplitude, thus making the solution exact for the Navier–Stokes equations. Introducing a stream function $\psi(y,z)$ for the disturbance velocity components through $v = \partial\psi/\partial z$ and $w = -\partial\psi/\partial y$, the equation for the disturbance in vorticity–streamfunction formulation can be shown to reduce to

$\nu\left(\frac{\partial^2\omega}{\partial y^2} + \frac{\partial^2\omega}{\partial z^2}\right) + A\,\frac{\partial (y\omega)}{\partial y} + \frac{\partial\psi}{\partial y}\frac{\partial\omega}{\partial z} - \frac{\partial\psi}{\partial z}\frac{\partial\omega}{\partial y} = 0,$

where $\omega = -\nabla^2\psi$ is the disturbance vorticity. A single parameter

$\lambda = \frac{A}{\nu k^2}$

can be obtained upon non-dimensionalization, which measures the strength of the converging flow relative to viscous dissipation. The solution will be assumed to be of the Fourier form

$\psi = \sum_{n=-\infty}^{\infty} \hat\psi_n(y)\, e^{inkz}.$

Since $\psi$ is real, it is easy to verify that $\hat\psi_{-n} = \hat\psi_n^{*}$. Since the expected vortex structure has the symmetry $\psi(y,z) = \psi(-y,-z)$, we have $\hat\psi_n(-y) = \hat\psi_n^{*}(y)$. Upon substitution, an infinite sequence of non-linearly coupled ordinary differential equations for the functions $\hat\psi_n(y)$ is obtained; Cauchy's product rule is used in the derivation. The boundary conditions

$\hat\psi_n \to 0 \quad \text{as} \quad y \to \pm\infty$

and the corresponding symmetry condition are enough to solve the problem. It can be shown that non-trivial solutions exist only when $\lambda > 1$. On solving the equations numerically, it is verified that keeping the first 7 to 8 terms suffices to produce accurate results. The solution in which only a single mode is retained was already discovered by Craik and Criminale in 1986.
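The fully coupled system must be solved numerically, but the linear limit is easy to probe. The sketch below is our own illustration, not code from Kerr and Dold: it keeps only the $n=1$ mode, drops the nonlinear terms, and rescales lengths with the wavenumber $k$, which reduces the equation above to $\Omega'' - \Omega + \lambda(\Omega + Y\Omega') = 0$ for the modal vorticity $\Omega(Y)$; discretizing this yields a generalized eigenvalue problem whose real positive eigenvalues are candidate neutral values of $\lambda$:

```python
# Sketch (ours, under the linearization described above): find values of
# lambda = A/(nu k^2) for which a decaying single-mode disturbance satisfies
# Omega'' - Omega + lambda*(Omega + Y*Omega') = 0 on a truncated domain.
import numpy as np
from scipy.linalg import eig

L, N = 10.0, 401                      # domain half-width and grid points
Y = np.linspace(-L, L, N)
h = Y[1] - Y[0]

# Central-difference first- and second-derivative matrices.
D1 = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * h)
D2 = (np.diag(np.ones(N - 1), 1) - 2 * np.eye(N)
      + np.diag(np.ones(N - 1), -1)) / h**2

# Write the ODE as A x = lambda * B x with Dirichlet (decay) end conditions.
A = D2 - np.eye(N)
B = -(np.eye(N) + np.diag(Y) @ D1)
vals, _ = eig(A[1:-1, 1:-1], B[1:-1, 1:-1])

vals = vals[np.isfinite(vals)]
real_pos = vals[(np.abs(vals.imag) < 1e-8) & (vals.real > 0)].real
print(np.sort(real_pos)[:5])          # smallest candidate neutral lambdas
```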
See also
Sullivan vortex
References
Flow regimes
Vortices | Kerr–Dold vortex | [
"Chemistry",
"Mathematics"
] | 431 | [
"Dynamical systems",
"Flow regimes",
"Vortices",
"Fluid dynamics"
] |
51,566,791 | https://en.wikipedia.org/wiki/C18H20O3 |
The molecular formula C18H20O3 (molar mass: 284.350 g/mol; see the check after the list below) may refer to:
Bisdehydrodoisynolic acid (BDDA)
16-Ketoestrone (16-Keto-E1)
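The quoted molar mass can be reproduced with a few lines of arithmetic (an illustration of ours, using one common set of IUPAC standard atomic weights):

```python
# Check of the molar mass of C18H20O3 quoted above.
atomic_weight = {"C": 12.0107, "H": 1.00794, "O": 15.9994}   # g/mol
formula = {"C": 18, "H": 20, "O": 3}

molar_mass = sum(n * atomic_weight[el] for el, n in formula.items())
print(f"{molar_mass:.3f} g/mol")   # 284.350 g/mol
```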
Molecular formulas | C18H20O3 | [
"Physics",
"Chemistry"
] | 71 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
68,686,019 | https://en.wikipedia.org/wiki/1V-LSD | 1V-LSD (1-valeryl-D-lysergic acid diethylamide), sometimes nicknamed Valerie, is a psychotropic substance and a research chemical with psychedelic effects. 1V-LSD is an artificial derivative of natural lysergic acid, which occurs in ergot alkaloids, as well as being an analogue of LSD.
1V-LSD was sold online until an amendment to the German NpSG came into force in 2022; the act controls 1P-LSD and now also 1cP-LSD, 1V-LSD and several other lysergamides.
Pharmacology
As demonstrated with other N-acylated derivatives of LSD, 1V-LSD is believed to serve as a prodrug for LSD but may also act as a weak partial agonist at the 5-HT2A receptor.
Animal studies
A head-twitch response assay in mice found that 1V-LSD has a similar potency to 1P-LSD and 1cP-LSD, with behavioral effects also closely resembling those of these structural analogs.
Chemistry
1V-LSD is the condensation product of valeric acid (pentanoic acid) and LSD, where the valeroyl group is substituted on the NH position of the indole moiety.
Ehrlich's reagent is used to identify the presence of an indole moiety, the chemical backbone of the lysergamide and ergoline molecules. However, as with other N-acylated lysergamides, 1V-LSD reacts very slowly with Ehrlich's reagent and may not give reliable results if the reagent is not fresh.
Legal position
1V-LSD is not a controlled substance in North America (United States and Canada), unless sold for human consumption in the US.
Since March 2, 2022, 1V-LSD has been under investigation in Sweden and may therefore soon become controlled there.
1V-LSD was placed under legal control in South Korea in July 2022 on a temporary but renewable basis.
An amendment to the NpSG was intended to ban the sale of 1V-LSD in Germany in September 2022, but due to a punctuation error in the updated NpSG, the ban never took effect. The law was amended in March 2023 and now bans 1V-LSD.
See also
1B-LSD
1cP-LSD
1D-LSD
1P-LSD
AL-LAD
LSD
References
Designer drugs
Psychedelic_drugs
Lysergamides
Prodrugs
Serotonin receptor agonists
Fatty acid amides | 1V-LSD | [
"Chemistry"
] | 553 | [
"Chemicals in medicine",
"Prodrugs"
] |
68,688,298 | https://en.wikipedia.org/wiki/Protists%20in%20the%20fossil%20record | A protist is any eukaryotic organism (that is, an organism whose cells contain a cell nucleus) that is not an animal, plant, or fungus. While it is likely that protists share a common ancestor, the last eukaryotic common ancestor, the exclusion of other eukaryotes means that protists do not form a natural group, or clade. Therefore, some protists may be more closely related to animals, plants, or fungi than they are to other protists. However, like algae, invertebrates and protozoans, the grouping is used for convenience.
Many protists have neither hard parts nor resistant spores, and their fossils are extremely rare or unknown. Examples of such groups include the apicomplexans, most ciliates, some green algae (the Klebsormidiales), choanoflagellates, oomycetes, brown algae, yellow-green algae, Excavata (e.g., euglenids). Some of these have been found preserved in amber (fossilized tree resin) or under unusual conditions (e.g., Paleoleishmania, a kinetoplastid).
Others are relatively common in the fossil record, as the diatoms, golden algae, haptophytes (coccoliths), silicoflagellates, tintinnids (ciliates), dinoflagellates, green algae, red algae, heliozoans, radiolarians, foraminiferans, ebriids and testate amoebae (euglyphids, arcellaceans). Some are used as paleoecological indicators to reconstruct ancient environments.
More probable eukaryote fossils begin to appear about 1.8 billion years ago: the acritarchs, spherical fossils of likely algal protists. Another possible representative of early fossil eukaryotes are the Gabonionta.
Modern classifications
Systematists today do not treat Protista as a formal taxon, but the term "protist" is still commonly used for convenience in two ways. The most popular contemporary definition is a phylogenetic one, that identifies a paraphyletic group: a protist is any eukaryote that is not an animal, (land) plant, or (true) fungus; this definition excludes many unicellular groups, like the Microsporidia (fungi), many Chytridiomycetes (fungi), and yeasts (fungi), and also a non-unicellular group included in Protista in the past, the Myxozoa (animal).
The other definition describes protists primarily by functional or biological criteria: protists are essentially those eukaryotes that are never multicellular, that either exist as independent cells, or if they occur in colonies, do not show differentiation into tissues (but vegetative cell differentiation may occur restricted to sexual reproduction, alternate vegetative morphology, and quiescent or resistant stages, such as cysts); this definition excludes many brown, multicellular red and green algae, which may have tissues.
The taxonomy of protists is still changing. Newer classifications attempt to present monophyletic groups based on morphological (especially ultrastructural), biochemical (chemotaxonomy) and DNA sequence (molecular research) information. However, there are sometimes discordances between molecular and morphological investigations; these can be categorized as two types: (i) one morphology, multiple lineages (e.g. morphological convergence, cryptic species) and (ii) one lineage, multiple morphologies (e.g. phenotypic plasticity, multiple life-cycle stages).
Because the protists as a whole are paraphyletic, new systems often split up or abandon the kingdom, instead treating the protist groups as separate lines of eukaryotes. The recent scheme by Adl et al. (2005) does not recognize formal ranks (phylum, class, etc.) and instead treats groups as clades of phylogenetically related organisms. This is intended to make the classification more stable in the long term and easier to update. Some of the main groups of protists, which may be treated as phyla, are listed in the taxobox, upper right. Many are thought to be monophyletic, though there is still uncertainty. For instance, the Excavata are probably not monophyletic and the chromalveolates are probably only monophyletic if the haptophytes and cryptomonads are excluded.
Archaeplastida
(in part)
Red algae
One of the oldest fossils identified as a red alga is also the oldest fossil eukaryote that belongs to a specific modern taxon. Bangiomorpha pubescens, a multicellular fossil from arctic Canada, strongly resembles the modern red alga Bangia and occurs in rocks dating to 1.05 billion years ago.
Two kinds of fossils resembling red algae were found sometime between 2006 and 2011 in well-preserved sedimentary rocks in Chitrakoot, central India. The presumed red algae lie embedded in fossil mats of cyanobacteria, called stromatolites, in 1.6 billion-year-old Indian phosphorite – making them the oldest plant-like fossils ever found by about 400 million years.
Red algae are important builders of limestone reefs. The earliest such coralline algae, the solenopores, are known from the Cambrian period. Other algae of different origins filled a similar role in the late Paleozoic, and in more recent reefs.
Calcite crusts that have been interpreted as the remains of coralline red algae, date to the Ediacaran Period. Thallophytes resembling coralline red algae are known from the late Proterozoic Doushantuo formation.
Glaucophyta
SAR supergroup
Stramenopiles
Brown algae
The occurrence of Phaeophyceae as fossils is rare due to their generally soft-bodied nature, and scientists continue to debate the identification of some finds. Part of the problem with identification lies in the convergent evolution of morphologies between many brown and red algae. Most fossils of soft-tissue algae preserve only a flattened outline, without the microscopic features that permit the major groups of multicellular algae to be reliably distinguished. Among the brown algae, only species of the genus Padina deposit significant quantities of minerals in or around their cell walls. Other algal groups, such as the red algae and green algae, have a number of calcareous members. Because of this, they are more likely to leave evidence in the fossil record than the soft bodies of most brown algae and more often can be precisely classified.
Fossils comparable in morphology to brown algae are known from strata as old as the Upper Ordovician, but the taxonomic affinity of these impression fossils is far from certain. Claims that earlier Ediacaran fossils are brown algae have since been dismissed. While many carbonaceous fossils have been described from the Precambrian, they are typically preserved as flattened outlines or fragments measuring only millimeters long. Because these fossils lack features diagnostic for identification at even the highest level, they are assigned to fossil form taxa according to their shape and other gross morphological features. A number of Devonian fossils termed fucoids, from their resemblance in outline to species in the genus Fucus, have proven to be inorganic rather than true fossils. The Devonian megafossil Prototaxites, which consists of masses of filaments grouped into trunk-like axes, has been considered a possible brown alga. However, modern research favors reinterpretation of this fossil as a terrestrial fungus or fungal-like organism. Likewise, the fossil Protosalvinia was once considered a possible brown alga, but is now thought to be an early land plant.
A number of Paleozoic fossils have been tentatively classified with the brown algae, although most have also been compared to known red algae species. Phascolophyllaphycus possesses numerous elongate, inflated blades attached to a stipe. It is the most abundant of algal fossils found in a collection made from Carboniferous strata in Illinois. Each hollow blade bears up to eight pneumatocysts at its base, and the stipes appear to have been hollow and inflated as well. This combination of characteristics is similar to certain modern genera in the order Laminariales (kelps). Several fossils of Drydenia and a single specimen of Hungerfordia from the Upper Devonian of New York have also been compared to both brown and red algae. Fossils of Drydenia consist of an elliptical blade attached to a branching filamentous holdfast, not unlike some species of Laminaria, Porphyra, or Gigartina. The single known specimen of Hungerfordia branches dichotomously into lobes and resembles genera like Chondrus and Fucus or Dictyota.
The earliest known fossils that can be assigned reliably to the Phaeophyceae come from Miocene diatomite deposits of the Monterey Formation in California. Several soft-bodied brown macroalgae, such as Julescraneia, have been found.
Diatoms
Heterokont chloroplasts appear to derive from those of red algae, rather than directly from prokaryotes as occurred in plants. This suggests they had a more recent origin than many other algae. However, fossil evidence is scant, and only with the evolution of the diatoms themselves do the heterokonts make a serious impression on the fossil record.
The earliest known fossil diatoms date from the early Jurassic (~185 Ma ago), although the molecular clock and sedimentary evidence suggests an earlier origin. It has been suggested that their origin may be related to the end-Permian mass extinction (~250 Ma), after which many marine niches were opened. The gap between this event and the time that fossil diatoms first appear may indicate a period when diatoms were unsilicified and their evolution was cryptic. Since the advent of silicification, diatoms have made a significant impression on the fossil record, with major fossil deposits found as far back as the early Cretaceous, and with some rocks such as diatomaceous earth, being composed almost entirely of them.
Although diatoms may have existed since the Triassic, the timing of their ascendancy and "take-over" of the silicon cycle occurred more recently. Prior to the Phanerozoic (before 544 Ma), it is believed that microbial or inorganic processes weakly regulated the ocean's silicon cycle. Subsequently, the cycle appears dominated (and more strongly regulated) by the radiolarians and siliceous sponges, the former as zooplankton, the latter as sedentary filter-feeders primarily on the continental shelves. Within the last 100 My, it is thought that the silicon cycle has come under even tighter control, and that this derives from the ecological ascendancy of the diatoms.
However, the precise timing of the "take-over" remains unclear, and different authors have conflicting interpretations of the fossil record. Some evidence, such as the displacement of siliceous sponges from the shelves, suggests that this takeover began in the Cretaceous (146 Ma to 66 Ma), while evidence from radiolarians suggests "take-over" did not begin until the Cenozoic (66 Ma to present).
The expansion of grassland biomes and the evolutionary radiation of grasses during the Miocene is believed to have increased the flux of soluble silicon to the oceans, and it has been argued that this promoted the diatoms during the Cenozoic era. Recent work suggests that diatom success is decoupled from the evolution of grasses, although both diatom and grassland diversity increased strongly from the middle Miocene.
Diatom diversity over the Cenozoic has been very sensitive to global temperature, particularly to the equator-pole temperature gradient. Warmer oceans, particularly warmer polar regions, have in the past been shown to have had substantially lower diatom diversity. Future warm oceans with enhanced polar warming, as projected in global-warming scenarios, could thus in theory result in a significant loss of diatom diversity, although from current knowledge it is impossible to say if this would occur rapidly or only over many tens of thousands of years.
The fossil record of diatoms has largely been established through the recovery of their siliceous frustules in marine and non-marine sediments. Although diatoms have both a marine and non-marine stratigraphic record, diatom biostratigraphy, which is based on time-constrained evolutionary originations and extinctions of unique taxa, is only well developed and widely applicable in marine systems. The duration of diatom species ranges have been documented through the study of ocean cores and rock sequences exposed on land. Where diatom biozones are well established and calibrated to the geomagnetic polarity time scale (e.g., Southern Ocean, North Pacific, eastern equatorial Pacific), diatom-based age estimates may be resolved to within <100,000 years, although typical age resolution for Cenozoic diatom assemblages is several hundred thousand years.
Diatoms preserved in lake sediments are widely used for paleoenvironmental reconstructions of Quaternary climate, especially for closed-basin lakes which experience fluctuations in water depth and salinity.
The Cretaceous record of diatoms is limited, but recent studies reveal a progressive diversification of diatom types. The Cretaceous–Paleogene extinction event, which in the oceans dramatically affected organisms with calcareous skeletons, appears to have had relatively little impact on diatom evolution.
Although no mass extinctions of marine diatoms have been observed during the Cenozoic, times of relatively rapid evolutionary turnover in marine diatom species assemblages occurred near the Paleocene–Eocene boundary, and at the Eocene–Oligocene boundary. Further turnover of assemblages took place at various times between the middle Miocene and late Pliocene, in response to progressive cooling of polar regions and the development of more endemic diatom assemblages.
A global trend toward more delicate diatom frustules has been noted from the Oligocene to the Quaternary. This coincides with an increasingly more vigorous circulation of the ocean's surface and deep waters brought about by increasing latitudinal thermal gradients at the onset of major ice sheet expansion on Antarctica and progressive cooling through the Neogene and Quaternary towards a bipolar glaciated world. This caused diatoms to take in less silica for the formation of their frustules. Increased mixing of the oceans renews silica and other nutrients necessary for diatom growth in surface waters, especially in regions of coastal and oceanic upwelling.
Oomycetes
Alveolata
Apicomplexa
Ciliophora
Dinoflagellata
Dinoflagellates are mainly represented as fossils by fossil dinocysts, which have a long geological record, with lowest occurrences during the mid-Triassic, whilst geochemical markers suggest a presence extending back to the Early Cambrian.
Some evidence indicates dinosteroids in many Paleozoic and Precambrian rocks might be the product of ancestral dinoflagellates (protodinoflagellates).
Molecular phylogenetics show that dinoflagellates are grouped with ciliates and apicomplexans (=Sporozoa) in a well-supported clade, the alveolates. The closest relatives to dinokaryotic dinoflagellates appear to be apicomplexans, Perkinsus, Parvilucifera, syndinians, and Oxyrrhis. Molecular phylogenies are similar to phylogenies based on morphology.
The earliest stages of dinoflagellate evolution appear to be dominated by parasitic lineages, such as perkinsids and syndinians (e.g. Amoebophrya and Hematodinium).
All dinoflagellates contain red algal plastids or remnant (nonphotosynthetic) organelles of red algal origin. The parasitic dinoflagellate Hematodinium, however, lacks a plastid entirely. Some groups that have lost the photosynthetic properties of their original red algal plastids have obtained new photosynthetic plastids (chloroplasts) through so-called serial endosymbiosis, both secondary and tertiary. Like their original plastids, the new chloroplasts in these groups can be traced back to red algae, except for those in members of the genus Lepidodinium, which possess plastids derived from green algae, possibly Trebouxiophyceae or Ulvophyceae. Lineages with tertiary endosymbiosis are Dinophysis, with plastids from a cryptomonad; Karenia, Karlodinium, and Takayama, which possess plastids of haptophyte origin; and the Peridiniaceae Durinskia and Kryptoperidinium, which have plastids derived from diatoms. Some species also perform kleptoplasty.
Dinoflagellate evolution has been summarized into five principal organizational types: prorocentroid, dinophysoid, gonyaulacoid, peridinioid, and gymnodinoid.
The transitions of marine species into fresh water have been infrequent events during the diversification of dinoflagellates and in most cases have not occurred recently, possibly as late as the Cretaceous.
Recently, the "living fossil" Dapsilidinium pastielsii was found inhabiting the Indo-Pacific Warm Pool, which served as a refugium for thermophilic dinoflagellates.
Rhizaria
Cercozoa
Foraminifera
Molecular clocks indicate that the crown-group of foraminifera likely evolved during the Neoproterozoic, between 900 and 650 million years ago; this timing is consistent with Neoproterozoic fossils of the closely related filose amoebae. As fossils of foraminifera have not been found prior to the very end of the Ediacaran, it is likely that most of these Proterozoic forms did not have hard-shelled tests.
Due to their non-mineralised tests, "allogromiids" have no fossil record.
The mysterious vendozoans of the Ediacaran period have been suggested to represent fossil xenophyophores. However, the discovery of diagenetically altered C27 sterols associated with the remains of Dickinsonia cast doubt on this identification and suggest it may instead be an animal. Other researchers have suggested that the elusive trace fossil Paleodictyon and its relatives may represent a fossil xenophyophore and noted the similarity of the extant xenophyophore Occultammina to the fossil; however, modern examples of Paleodictyon have not been able to clear up the issue and the trace may alternatively represent a burrow or a glass sponge. Supporting this notion is the similar habitat of living xenophyophores to the inferred habitat of fossil graphoglyptids; however, the large size and regularity of many graphoglyptids as well as the apparent absence of xenophyae in their fossils casts doubt on the possibility. As of 2017 no definite xenophyophore fossils have been found.
Test-bearing foraminifera have an excellent fossil record throughout the Phanerozoic eon. The earliest known definite foraminifera appear in the fossil record towards the very end of the Ediacaran; these forms all have agglutinated tests and are unilocular. These include forms like Platysolenites and Spirosolenites.
Single-chambered foraminifera continued to diversify throughout the Cambrian. Some commonly encountered forms include Ammodiscus, Glomospira, Psammosphera, and Turritellella; these species are all agglutinated. They make up part of the Ammodiscina, a lineage of spirillinids that still contains modern forms. Later spirillinids would evolve multilocularity and calcitic tests, with the first such forms appearing during the Triassic; the group's diversity was little affected by the K-Pg extinction.
The earliest multi-chambered foraminifera are agglutinated species, and appear in the fossil record during the middle Cambrian period. Due to their poor preservation they cannot be positively assigned to any major foram group.
The earliest known calcareous-walled foraminifera are the Fusulinids, which appear in the fossil record during the Llandoverian epoch of the early Silurian. The earliest of these were microscopic, planispirally coiled, and evolute; later forms evolved a diversity of shapes including lenticular, globular, and elongated rice-shaped forms. Later species of fusulinids grew to much larger size, with some forms reaching 5 cm in length; reportedly, some specimens reach up to 14 cm in length, making them among the largest foraminifera extant or extinct. Fusulinids are the earliest lineage of foraminifera thought to have evolved symbiosis with photosynthetic organisms. Fossils of fusulinids have been found on all continents except Antarctica; they reached their greatest diversity during the Visean epoch of the Carboniferous. The group then gradually declined in diversity until finally going extinct during the Permo-Triassic extinction event.
During the Tournaisian epoch of the Carboniferous, Miliolid foraminifera first appeared in the fossil record, having diverged from the spirillinids within the Tubothalamea. Miliolids suffered about 50% casualties during both the Permo-Triassic and K-Pg extinctions but survived to the present day. Some fossil miliolids reached up to 2 cm in diameter.
The earliest known Lagenid fossils appear during the Moscovian epoch of the Carboniferous. Seeing little effect due to the Permo-Triassic or K-Pg extinctions, the group diversified through time. Secondarily unilocular taxa evolved during the Jurassic and Cretaceous.
The earliest Involutinid fossils appear during the Permian; the lineage diversified throughout the Mesozoic of Eurasia before apparently vanishing from the fossil record following the Cenomanian-Turonian Ocean Anoxic Event. The extant group Planispirillinidae has been referred to the Involutinida, but this remains the subject of debate.
The Robertinida first appear in the fossil record during the Anisian epoch of the Triassic. The group remained at low diversity throughout its fossil history; all living representatives belong to the Robertinidae, which first appeared during the Paleocene.
The first definite Rotaliid fossils do not appear in the fossil record until the Pliensbachian epoch of the Jurassic, following the Triassic-Jurassic event. Diversity of the group remained low until the aftermath of the Cenomanian-Turonian event, after which the group saw a rapid diversification. Of this group, the planktonic Globigerinina—the first known group of planktonic forams—first appears in the aftermath of the Toarcian Turnover; the group saw heavy losses during both the K-Pg extinction and the Eocene-Oligocene extinction, but remains extant and diverse to this day. An additional evolution of planktonic lifestyle occurred in the Miocene or Pliocene, when the rotaliid Neogallitellia independently evolved a planktonic lifestyle.
Radiolaria
The earliest known radiolaria date to the very start of the Cambrian period, appearing in the same beds as the first small shelly fauna—they may even be terminal Precambrian in age. They have significant differences from later radiolaria, with a different silica lattice structure and few, if any, spikes on the test. Ninety percent of radiolarian species are extinct. The skeletons, or tests, of ancient radiolarians are used in geological dating, including for oil exploration and determination of ancient climates.
Some common radiolarian fossils include Actinomma, Heliosphaera and Hexadoridium.
Excavata
Euglenozoa
Percolozoa
Metamonada
Amoebozoa
Hacrobia
Coccolithophores: The accompanying diagram (not reproduced here) shows:
(A) coccolithophore species richness over time combining heterococcoliths and nannoliths. Q, Quaternary; N, Neogene; Pal, Paleogene; E/O, Eocene/Oligocene glacial onset event; PETM, Paleocene/Eocene thermal maximum warming event; K/Pg, Cretaceous/Paleogene; OAE, oceanic anoxic event; T-OAE, Toarcian oceanic anoxic event; T/J, Triassic/Jurassic; P/T, Permian/Triassic; mass ext., mass extinction.
(B) the fossil record of major coccolithophore biomineralization innovations and morphogroups, including the first appearances of muroliths (simple coccoliths with narrow, wall-like rims), placoliths (coccoliths with broad shields that interlock to form strong coccospheres), holococcoliths (coccoliths formed from microcrystals in the haploid life cycle phase), Braarudosphaera (pentagonal, laminated nannoliths forming dodecahedral coccospheres); Calciosolenia (distinct, rhombic murolith coccoliths), Coccolithus (long-ranging and abundant Cenozoic genus), Isochrysidales (dominant order that includes Emiliania, Gephyrocapsa, and Reticulofenestra). Significant mass extinctions and paleoceanographic/paleoclimatic events are marked as horizontal lines.
Hemimastigophora
Apusozoa
Opisthokonta
(in part)
Choanozoa
(reclassified)
Golden algae
Because many of these organisms had a silica capsule, they have a relatively complete fossil record, allowing modern biologists to confirm that they are, in fact, not derived from cyanobacteria, but rather an ancestor that did not possess the capability to photosynthesize. Many of the chrysophyta precursor fossils entirely lacked any type of photosynthesis-capable pigment. Most biologists believe that the chrysophytes obtained their ability to photosynthesize from an endosymbiotic relationship with fucoxanthin-containing cyanobacteria.
Green algae
The ancestral green alga was a unicellular flagellate.
See also
Marine sediment
Microfossils
Protist shells
List of prehistoric foraminifera genera
Footnotes
References
Evolution
Protista
Micropaleontology | Protists in the fossil record | [
"Biology"
] | 5,645 | [
"Eukaryotes",
"Protists"
] |
68,691,754 | https://en.wikipedia.org/wiki/Accessible%20Books%20Consortium | The Accessible Books Consortium (ABC) is a public-private partnership which was launched in 2014 by the World Intellectual Property Organization. The ABC was created with the intent of being "one possible initiative, amongst others, to implement the aims of the Marrakesh VIP Treaty at a practical level." ABC's goal is "to increase the number of books worldwide in accessible formats - such as braille, audio, e-text, and large print and to make them available to people who are blind, have low vision or are otherwise print disabled."
Context
The World Health Organization estimated in 2018 that 253 million people worldwide are visually impaired, with more than 90% of them living in developing and least developed countries. The World Blind Union (WBU) estimates that only 10% of people who are blind are able to go to school or have employment, and that less than 10% of all published materials can be read by people who are blind or visually impaired, with the lack of accessible books being a significant barrier to getting an education and leading an independent life.
Work
The Accessible Books Consortium (ABC) operates through three channels:
The ABC Global Book Service: hosting an online platform that allows for the cross-border exchange of books in accessible formats. The Service contains titles in over 80 languages, with English, French and Spanish being the predominant languages.
Training and Technical Assistance: establishing projects in developing and least-developed countries to "provide training and funding for the production of educational materials in accessible formats in national languages for students who are print disabled."
Accessible Publishing: promoting the production of "born accessible" publications by all publishers. Born accessible books are usable by both people who are print disabled and sighted readers. ABC encourages accessible publishing through the ABC Charter for Accessible Publishing and the ABC International Excellence Award, which 'recognizes outstanding leadership and achievements in advancing the accessibility of digital publications'.
Accessible Books Consortium (ABC) Global Book Service
The ABC Global Book Service is a free service that puts into practice the provisions of the Marrakesh Treaty. It allows participating libraries for the blind, referred to in the Marrakesh Treaty as Authorized Entities (AEs), to search, order and exchange books in accessible digital formats across national borders. Through the Service, Authorized Entities that are located in countries that have implemented the provisions of the Marrakesh Treaty are able to perform these exchanges without requiring further authorization from rights holders.
Through their participation in the Service, AEs are able to make the accessible books that are shared by all other AEs available to their own patrons. By pooling their collective resources in this way, libraries can vastly increase their selection of books in large-print, audio books, digital braille and braille music.
In April 2021, ABC launched an additional application that allows individuals who are blind, visually impaired or otherwise print disabled to have direct access to search and download books in accessible formats from the ABC Global Book Service. This new application is offered to Authorized Entities located in countries that have ratified and implemented the provisions of the Marrakesh Treaty.
List of authorized entities that have joined the ABC Global Book Service
Training and Technical Assistance
The Accessible Books Consortium provides training and technical assistance to organizations in developing and least developed countries on the production of accessible format books. According to the ABC: "the ABC model for capacity building aims to equip organizations in developing and least developed countries with the ability to produce educational materials in national languages to be used by primary, secondary and university students who are print disabled." This allows participating organizations to convert textbooks into accessible formats, such as DAISY, ePUB3, and digital braille. Such assistance has been provided to organizations in countries including Argentina, Bangladesh, India, Nepal, Nigeria, Sri Lanka and Tunisia.
In February 2021, ABC launched an online course providing similar training on the production of books in accessible formats, in part to 'ensure the continuation of its assistance programs during the COVID-19 pandemic'.
Accessible publishing
The Accessible Books Consortium encourages the production of accessible eBooks through the use of the accessibility features of the EPUB3 standard.
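As a minimal sketch of what "born accessible" metadata looks like in practice (our example, not an ABC artifact), the following generates the schema.org accessibility properties that an EPUB 3 package document can declare:

```python
# Sketch: build EPUB 3 / schema.org accessibility metadata with the
# Python standard library. Property names follow the EPUB Accessibility
# specification; the chosen values are illustrative.
import xml.etree.ElementTree as ET

metadata = ET.Element("metadata")
for prop, value in [
    ("schema:accessMode", "textual"),
    ("schema:accessibilityFeature", "alternativeText"),
    ("schema:accessibilityHazard", "none"),
    ("schema:accessibilitySummary",
     "Text-based publication with alternative text for all images."),
]:
    ET.SubElement(metadata, "meta", {"property": prop}).text = value

print(ET.tostring(metadata, encoding="unicode"))
```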
ABC International Excellence Award for Accessible Publishing
List of winners:
ABC Charter for Accessible Publishing Signatories
The Accessible Books Consortium Charter for Accessible Publishing contains eight principles designed to encourage publishers to follow accessibility best practices.
Accessible Books Consortium advisory board members
The ABC has an advisory board which provides technical expertise, transparency and communication with stakeholders. Its members are:
African Union for the Blind
Blind Citizens of New Zealand
DAISY Consortium
Dedicon
eBound Canada
Government of Australia
International Authors Forum (IAF)
International Council for Education of People with Visual Impairment
International Federation of Library Associations and Institutions (IFLA)
International Federation of Reproduction Rights Organisations (IFRRO)
International Publishers Association (IPA)
Manual Moderno
Sao Mai Vocational and Assistive Technology Center for the Blind
World Blind Union (WBU)
World Intellectual Property Organization
References
External links
Accessible Books Consortium official website
World Intellectual Property Organization
Accessibility
Book promotion
Consortia
2014 establishments in Switzerland
Accessible information | Accessible Books Consortium | [
"Engineering"
] | 1,018 | [
"Accessibility",
"Design"
] |
68,692,311 | https://en.wikipedia.org/wiki/Bioserenity | BioSerenity is a medtech company created in 2014 that develops ambulatory medical devices to help diagnose and monitor patients with chronic diseases such as epilepsy. The medical devices are composed of medical sensors, smart clothing, a smartphone app for Patient Reported Outcome, and a web platform to perform data analysis through Medical Artificial Intelligence for detection of digital biomarkers. The company initially focused on Neurology, a domain in which it reported contributing to the diagnosis of 30 000 patients per year. It now also operates in Sleep Disorders and Cardiology. BioSerenity reported it provides pharmaceutical companies with solutions for companion diagnostics.
Company history
BioSerenity was founded in 2014 by Pierre-Yves Frouin. The company was initially hosted at the ICM Institute (Institut du Cerveau et de la Moelle épinière) in Paris, France.
Fundraising
June 8, 2015: The company raised a $4 million seed round with Kurma Partners and IdInvest Partners
September 20, 2017: The company raised a $17 million series A round with LBO France, IdInvest Partners and BPI France
June 18, 2019: The company raised a $70 million series B round with Dassault Systèmes, IdInvest Partners, LBO France and BPI France
November 13, 2023: The company raised a €24 million series C round with Jolt Capital
Acquisitions
In 2019, BioSerenity announced the acquisition of the American company SleepMed, which works with over 200 hospitals.
In 2020, BioSerenity was one of five French manufacturers (with Savoy, BB Distrib, Celluloses de Brocéliande and Chargeurs) working on the production of sanitary equipment, including FFP2 masks, at the request of the French government.
As of 2021, the Neuronaute was used by approximately 30,000 patients per year.
Awards
BioSerenity is one of the Disrupt 100
BioSerenity joined the Next40
BioSerenity was selected by Microsoft and AstraZeneca in their initiative AI Factory for Health
BioSerenity was accelerated at Stanford University's StartX program
References
External links
FDA Clearance Neuronaute
FDA Clearance Cardioskin
FDA Clearance Accusom
Healthcare companies of France
Machine learning
Health care companies established in 2014
French companies established in 2014 | Bioserenity | [
"Engineering"
] | 467 | [
"Artificial intelligence engineering",
"Machine learning"
] |
67,275,493 | https://en.wikipedia.org/wiki/Tigloidine | Tigloidine is a tropane alkaloid that naturally occurs as a minor constituent of a number of solanaceous plants, including Duboisia myoporoides, Physalis peruviana, and Mandragora turcomanica.
It was formerly marketed as an antiparkinsonian drug under the trade name Tropigline.
References
Tropane alkaloids
Antiparkinsonian agents | Tigloidine | [
"Chemistry"
] | 89 | [
"Alkaloids by chemical classification",
"Tropane alkaloids"
] |
67,275,893 | https://en.wikipedia.org/wiki/Hallmarks%20of%20aging | Aging is characterized by a progressive loss of physiological integrity, leading to impaired function and increased vulnerability to death. The hallmarks of aging are the types of biochemical changes that occur in all organisms that experience biological aging and lead to a progressive loss of physiological integrity, impaired function and, eventually, death. They were first listed in a landmark paper in 2013 to conceptualize the essence of biological aging and its underlying mechanisms.
The following three premises for the interconnected hallmarks have been proposed:
"their age-associated manifestation"
"the acceleration of aging by experimentally accentuating them"
"the opportunity to decelerate, stop, or reverse aging by therapeutic interventions on them"
Overview
Over time, almost all living organisms experience a gradual and irreversible increase in senescence and an associated loss of proper function of the bodily systems. As aging is the primary risk factor for major human diseases, including cancer, diabetes, cardiovascular disorders, and neurodegenerative diseases, it is important to describe and classify the types of changes that it entails.
After a decade, the authors of the heavily cited original paper updated the set of proposed hallmarks in January 2023. In the new review, three new hallmarks were added: disabled macroautophagy, chronic inflammation and dysbiosis, bringing the total to 12 proposed hallmarks.
The nine hallmarks of aging of the original paper are grouped into three categories as below:
Primary hallmarks (causes of damage)
Genome instability
Telomere shortening (or telomere attrition)
Epigenetic alterations
Loss of proteostasis
Disabled macroautophagy
Antagonistic hallmarks (responses to damage)
Deregulated nutrient sensing
Mitochondrial dysfunction
Cellular senescence
Integrative hallmarks (culprits of the phenotype)
Stem cell exhaustion
Altered intercellular communication
Chronic inflammation
Dysbiosis
Primary hallmarks are the primary causes of cellular damage. Antagonistic hallmarks are antagonistic or compensatory responses to the manifestation of the primary hallmarks. Integrative hallmarks are the functional result of the previous two groups of hallmarks that lead to further operational deterioration associated with aging.
There are also proposed further hallmarks or underlying mechanisms that drive multiple of these hallmarks.
The hallmarks
Each hallmark was chosen to try to fulfill the following criteria:
manifests during normal aging;
experimentally increasing it accelerates aging;
experimentally amending it slows the normal aging process and increases healthy lifespan.
These conditions are met to different extents by each of these hallmarks. The last criterion is not present in many of the hallmarks, as science has not yet found feasible ways to amend these problems in living organisms.
Genome instability
Proper functioning of the genome is one of the most important prerequisites for the smooth functioning of a cell and the organism as a whole. Alterations in the genetic code have long been considered one of the main causal factors in aging. In multicellular organisms genome instability is central to carcinogenesis, and in humans it is also a factor in some neurodegenerative diseases such as amyotrophic lateral sclerosis or the neuromuscular disease myotonic dystrophy.
Abnormal chemical structures in the DNA are formed mainly through oxidative stress and environmental factors. A number of molecular processes work continuously to repair this damage. Unfortunately, the results are not perfect, and thus damage accumulates over time. Several review articles have shown that deficient DNA repair, allowing greater accumulation of DNA damages, causes premature aging; and that increased DNA repair facilitates greater longevity.
Telomere shortening
Telomeres are regions of repetitive nucleotide sequences associated with specialized proteins at the ends of linear chromosomes. They protect the terminal regions of chromosomal DNA from progressive degradation and ensure the integrity of linear chromosomes by preventing DNA repair systems from mistaking the ends of the DNA strand for a double strand break.
Telomere shortening is associated with aging, mortality and aging-related diseases. Normal aging is associated with telomere shortening in both humans and mice, and studies on genetically modified animal models suggest causal links between telomere erosion and aging. Leonard Hayflick demonstrated that a normal human fetal cell population will divide between 40 and 60 times in cell culture before entering a senescence phase. Each time a cell undergoes mitosis, the telomeres on the ends of each chromosome shorten slightly. Cell division will cease once telomeres shorten to a critical length. This is useful when uncontrolled cell proliferation (like in cancer) needs to be stopped, but detrimental when normally functioning cells are unable to divide when necessary.
An enzyme called telomerase elongates telomeres in gametes and stem cells. Telomerase deficiency in humans has been linked to several aging-related diseases related to loss of regenerative capacity of tissues. It has also been shown that premature aging in telomerase-deficient mice is reverted when telomerase is reactivated. The shelterin protein complex regulates telomerase activity in addition to protecting telomeres from DNA repair in eukaryotes.
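A toy simulation (ours, with made-up numbers) illustrates the replicative counting described above: telomeres lose a small stretch at each division until a critical length halts the cycle, yielding a Hayflick-like limit of a few dozen divisions:

```python
# Toy model of telomere attrition; all lengths and losses are assumptions.
import random

telomere_bp = 11_000      # assumed starting telomere length (base pairs)
critical_bp = 4_000       # assumed length below which the cell senesces
divisions = 0

while telomere_bp > critical_bp:
    telomere_bp -= random.randint(50, 200)   # assumed loss per mitosis
    divisions += 1

print(f"cell senesces after {divisions} divisions")   # typically a few dozen
```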
Epigenomic alterations
Out of all the genes that make up a genome, only a subset are expressed at any given time. The functioning of a genome depends both on the specific order of its nucleotides (genomic factors), and also on which sections of the DNA chain are spooled on histones and thus rendered inaccessible, and which ones are unspooled and available for transcription (epigenomic factors). Depending on the needs of the specific tissue type and environment that a given cell is in, histones can be modified to turn specific genes on or off as needed. The profile of where, when and to what extent these modifications occur (the epigenetic profile) changes with aging, turning useful genes off and unnecessary ones on, disrupting the normal functioning of the cell.
As an example, sirtuins are a type of protein deacetylases that promote the binding of DNA onto histones and thus turn unnecessary genes off. These enzymes use NAD as a cofactor. With aging, the level of NAD in cells decreases and so does the ability of sirtuins to turn off unneeded genes at the right time. Decreasing the activity of sirtuins has been associated with accelerated aging and increasing their activity has been shown to stave off several age-related diseases.
Loss of proteostasis
Proteostasis is the homeostatic process of maintaining all the proteins necessary for the functioning of the cell in their proper shape, structure and abundance. Protein misfolding, oxidation, abnormal cleavage or undesired post-translational modification can create dysfunctional or even toxic proteins or protein aggregates that hinder the normal functioning of the cell. Though these proteins are continually removed and recycled, formation of damaged or aggregated proteins increases with age, leading to a gradual loss of proteostasis. This can be slowed or suppressed by caloric restriction or by administration of rapamycin, both through inhibiting the mTOR pathway.
Deregulated nutrient sensing
Nutrient sensing is a cell's ability to recognize, and respond to, changes in the concentration of macronutrients such as glucose, fatty acids and amino acids. In times of abundance, anabolism is induced through various pathways, the most well-studied among them the mTOR pathway. When energy and nutrients are scarce, the AMPK receptor senses this and switches off mTOR to conserve resources.
In a growing organism, growth and cell proliferation are important and thus mTOR is upregulated. In a fully grown organism, mTOR-activating signals naturally decline during aging. It has been found that forcibly overactivating these pathways in grown mice leads to accelerated aging and increased incidence of cancer. mTOR inhibition methods like dietary restriction or administering rapamycin have been shown to be one of the most robust methods of increasing lifespan in worms, flies and mice.
Mitochondrial dysfunction
The mitochondrion is the powerhouse of the cell. Different human cells contain from several up to 2500 mitochondria, each one converting carbon (in the form of acetyl-CoA) and oxygen into energy (in the form of ATP) and carbon dioxide.
During aging, the efficiency of mitochondria tends to decrease. The reasons for this are still quite unclear, but several mechanisms are suspected: reduced biogenesis, accumulation of damage and mutations in mitochondrial DNA, oxidation of mitochondrial proteins, and defective quality control by mitophagy.
Dysfunctional mitochondria contribute to aging through interfering with intracellular signaling and triggering inflammatory reactions.
Cellular senescence
Under certain conditions, a cell will exit the cell cycle without dying, instead becoming dormant and ceasing its normal function. This is called cellular senescence. Senescence can be induced by several factors, including telomere shortening, DNA damage and stress. Since the immune system is programmed to seek out and eliminate senescent cells, it might be that senescence is one way for the body to rid itself of cells damaged beyond repair.
The links between cell senescence and aging are several:
The proportion of senescent cells increases with age.
Senescent cells secrete inflammatory markers which may contribute to aging.
Clearance of senescent cells has been found to delay the onset of age-related disorders.
Stem cell exhaustion
Stem cells are undifferentiated or partially differentiated cells with the unique ability to self-renew and differentiate into specialized cell types. For the first few days after fertilization, the embryo consists almost entirely of stem cells. As the fetus grows, the cells multiply, differentiate and assume their appropriate function within the organism. In adults, stem cells are mostly located in areas that undergo gradual wear (intestine, lung, mucosa, skin) or need continuous replenishment (red blood cells, immune cells, sperm cells, hair follicles).
Loss of regenerative ability is one of the most obvious consequences of aging. This is largely because the proportion of stem cells and the speed of their division gradually decline over time. It has been found that stem cell rejuvenation can reverse some of the effects of aging at the organismal level.
Altered intercellular communication
Different tissues and the cells they consist of need to orchestrate their work in a tightly controlled manner so that the organism as a whole can function. One of the main ways this is achieved is through the secretion of signalling molecules into the blood, where they make their way to other tissues, affecting their behavior. The profile of these molecules changes as we age.
One of the most prominent changes in cell signaling biomarkers is "inflammaging", the development of a chronic low-grade inflammation throughout the body with advanced age. The normal role of inflammation is to recruit the body's immune system and repair mechanisms to a specific damaged area for as long as the damage and threat are present. The constant presence of inflammation markers throughout the body wears out the immune system and damages healthy tissue.
It has also been found that senescent cells secrete a specific set of molecules, the SASP (senescence-associated secretory phenotype), which induces senescence in neighboring cells. Conversely, lifespan-extending manipulations targeting one tissue can slow the aging process in other tissues as well.
Further hallmarks
These may constitute further hallmarks, or underlying mechanisms that drive several of the hallmarks above.
Resurrection of endogenous retroviruses could be "a hallmark and driving force of cellular senescence and tissue aging": retroviruses in the human genome can awaken from dormant states and contribute to aging, a process that can be blocked by neutralizing antibodies.
Alternative conceptual models
In 2014, other scientists defined a slightly different conceptual model for aging, called 'The Seven Pillars of Aging', in which just three of the 'hallmarks of aging' are included (stem cells and regeneration, proteostasis, epigenetics). The seven pillars model highlights the interconnectedness of all seven pillars, which is not emphasized in the nine-hallmarks model.
Links to other diseases or hallmarks
Authors of the original paper merged or linked various hallmarks of cancer with those of aging.
The authors also concluded that the hallmarks are not only interconnected among each other but also "to the recently proposed hallmarks of health, which include organizational features of spatial compartmentalization, maintenance of homeostasis, and adequate responses to stress".
See also
References
Ageing
Senescence | Hallmarks of aging | [
"Chemistry",
"Biology"
] | 2,549 | [
"Senescence",
"Metabolism",
"Cellular processes"
] |
67,280,250 | https://en.wikipedia.org/wiki/Ali%20Akbar%20Moosavi-Movahedi | Ali Akbar Moosavi-Movahedi (born February 1953) is an Iranian biophysicist and biophysical chemist at the Institute of Biochemistry and Biophysics, University of Tehran. He is the founder of the Iran Society of Biophysical Chemistry. He is a fellow of The World Academy of Sciences (TWAS), a fellow of the Islamic World Academy of Sciences (IAS), and a member of the Islamic Republic of Iran Academy of Sciences.
Education and early life
Ali Akbar Moosavi-Movahedi was born in Shiraz, Iran, in 1953. He attended Alborz High School in 1968, graduated from the National University of Iran (now known as Shahid Beheshti University) with a BSc in chemistry in 1975, earned his MSc in Bioanalytical Chemistry at the Eastern Michigan University in 1979, and obtained his Ph.D. in Biophysical Chemistry at the University of Manchester in 1986.
Professional experience
Ali Akbar Moosavi-Movahedi's research career has focused mainly on the thermodynamics of protein denaturation (especially by surfactants), protein folding/unfolding, protein glycation, the biophysics of molecular diabetes, amyloid and protein aggregation and fibrillation, bioactive peptides, nutraceuticals, functional foods and artificial enzymes. He chairs the UNESCO Chair on Interdisciplinary Research in Diabetes at the University of Tehran, which is mostly oriented toward oxidative stress and diabetes complications.
He has established a highly equipped BCL laboratory for collaboration with various international research groups.
Ali A. Moosavi-Movahedi has been one of the pioneering scientists in establishing the first PhD programs in biochemistry and biophysics in Iran, and he has initiated several science and technology institutions in Iran. His publications include 20 books and numerous research papers, published mostly in international journals, mainly on the structural elucidation of proteins, enzymes, and DNA strands. He is the chair of the Center of Excellence in Biothermodynamics and a national committee member of the International Science Council (ISC) (previous name: ICSU) at the University of Tehran.
He is a member of several national and international scientific societies and is currently the president of the Iran Society of Biophysical Chemistry (ISOBC). The society confers several awards on researchers at the annual ISOBC Congress, such as the ISOBC Global Science Contribution award for senior, highly cited, distinguished scientists; an ISOBC award for talented young (under 35) researchers; and the Moosavi-Movahedi Award for eminent young (under 40) PhD holders. ISOBC is a member of the International Union of Biochemistry and Molecular Biology (IUBMB) and the European Biophysical Societies' Association (EBSA).
He is the founding member of the Federation of Iran Bioscience Societies (FIrBS), Universal Scientific Education and Research Network (USERN), Biochemical Society of Iran, Iranian Biology Society.
He is the Editor-in-Chief of the journal "Science Cultivation," which publishes work aimed at the popularization of science, science and technology policy research, the promotion of science, support for managers and policymakers in scientific centers, and guidance for research by scientific elites and innovators in science and technology. The journal attempts to create the context necessary for cultivating new areas of science through monitoring, culture building, and scientific capacity building.
He supports the organization of national, regional, and international conferences in biophysical chemistry, biothermodynamics, and biomolecular sciences, as well as events promoting a culture of science and technology advancement.
He published the book "Rationality and Scientific Lifestyle for Health" (Springer, 2021).
Awards and honors
Khwarizmi International Award,
Distinguished National Professor, 1997,
The first-class Research Medal, University of Tehran, 2003,
National Eminent Character 2003,
First-rank basic science research medal in Annual Razi Medical Sciences Research Festival 2005,
Iranian Science and Culture Hall of Fame, 2005
Top Researcher Elsevier-Scopus International Award in the field of Biochemistry, Genetics & Molecular Biology, 2007,
First Rank Avicenna Festival Award as Top Researcher 2008,
Member of Academy of Sciences of Iran, 2009,
National eminent researcher first-rank award conferred in the National Research Festival by the Ministry of Science, Research and Technology of Iran, 2009,
Chosen as Eminent Professor of University of Tehran 2010,
Distinguished Professor appointed by Iran's National Elites Foundation 2012,
Essential Science Indicators (ESI) 1% citation scientist in the field of Biology and Biochemistry since 2013,
TWAS (The World Academy of Sciences) Fellow 2015,
IAS (The Islamic Academy of Sciences) Fellow 2016
COMSTECH Award for Lifetime Achievement Award in Chemistry 2021
References
Iranian Science and Culture Hall of Fame recipients
People from Shiraz
Academic staff of the University of Tehran
Iranian chemists
Iranian biophysicists
Protein structure
Thermodynamics
1953 births
Living people
Fellows of the Islamic World Academy of Sciences | Ali Akbar Moosavi-Movahedi | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,025 | [
"Protein structure",
"Thermodynamics",
"Structural biology",
"Dynamical systems"
] |
67,287,307 | https://en.wikipedia.org/wiki/Cell%20engineering | Cell engineering is the purposeful process of adding, deleting, or modifying genetic sequences in living cells to achieve biological engineering goals such as altering cell production, changing cell growth and proliferation requirements, and adding or removing cell functions, among many others. Cell engineering often makes use of recombinant DNA technology to achieve these modifications, as well as closely related tissue engineering methods. Cell engineering can be characterized as an intermediary level among the increasingly specific disciplines of biological engineering, which include organ engineering, tissue engineering, protein engineering, and genetic engineering.
The field of cellular engineering is gaining more traction as biomedical research advances in tissue engineering and becomes more specific. Publications in the field have gone from several thousand in the early 2000s to nearly 40,000 in 2020.
Overview
Improving production of natural cellular products
One general form of cell engineering involves altering natural cell production to achieve a more desirable yield or shorter production time. A possible method for changing natural cell production is boosting or repressing genes that are involved in the metabolism of the product. For example, researchers were able to overexpress transporter genes in hamster ovary cells to increase monoclonal antibody yield. Another approach could involve incorporating biologically foreign genes into an existing cell line. For example, E. coli, which synthesizes some ethanol natively, can be modified using genes from Zymomonas mobilis to make ethanol the primary fermentation product.
Altering cell requirements
Another beneficial cell modification is the adjustment of the substrate and growth requirements of a cell. By changing cell needs, the raw material cost, equipment expenses, and skill required to grow and maintain cell cultures can be significantly reduced. For example, scientists have used foreign enzymes to engineer a common industrial yeast strain so that the cells can grow on substrates cheaper than the traditional glucose. Because of the biological engineering focus on improving scale-up costs, research in this area is largely focused on the ability of various enzymes to metabolize low-cost substrates.
Augmenting cells to produce new products
Closely tied to the field of biotechnology, this area of cell engineering employs recombinant DNA methods to induce cells to construct a desired product such as a protein, antibody, or enzyme. One of the most notable examples of this subset of cellular engineering is the transformation of E. coli to transcribe and translate a precursor of insulin, which drastically reduced the cost of production. Similar research was conducted shortly after, in 1979, in which E. coli was transformed to express human growth hormone for use in the treatment of pituitary dwarfism. Finally, much progress has been made in engineering cells to produce antigens for the purpose of creating vaccines.
Adjustment of cell properties
Within the focus of bioengineering, various cell modification methods are utilized to alter inherent properties of cells such as growth density, growth rate, growth yield, temperature resistance, freezing tolerance, chemical sensitivity, and vulnerability to pathogens. For example, in 1988 a group of researchers from the Illinois Institute of Technology successfully expressed a Vitreoscilla hemoglobin gene in E. coli to create a strain that was more tolerant of low-oxygen conditions such as those found in high-density industrial bioreactors.
Stem cell engineering
One distinct section of cell engineering involves the alteration and tuning of stem cells. Much of the recent research on stem cell therapies and treatments falls under the aforementioned cell engineering methods. Stem cells are unique in that they may differentiate into various other types of cells which may then be altered to produce novel therapeutics or provide a foundation for further cell engineering efforts. One example of directed stem cell engineering includes partially differentiating stem cells into myocytes to enable production of pro-myogenic factors for the treatment of sarcopenia or muscle disuse atrophy.
History
The phrase "cell engineering" was first used in a published paper in 1968 to describe the process of improving fuel cells. The term was then adopted by other papers until the more specific "fuel-cell engineering" was used.
The first use of the term in a biological context was in 1971, in a paper describing methods to graft reproductive caps between algae cells. Despite the rising popularity of the phrase, the boundaries between cell engineering and other forms of biological engineering remain unclear.
Examples
Therapeutic T cell engineering: altering T cells to target cancer-related antigens for treatment
Monoclonal antibody production: improving monoclonal antibody production using engineered cells
In vivo cell factories: engineering cells to produce therapeutics within the patient's body
Directed stem cell differentiation: using external factors to direct stem cell differentiation
Antibody Drug Conjugates: engineering antibody and cytotoxic drug linkages for disease treatment
References
External links
Institute for Cell Engineering at Johns Hopkins University School of Medicine
Cell & Tissue Engineering at University of California, Berkeley Bioengineering Department
Cells
Cell lines
Molecular biology | Cell engineering | [
"Chemistry",
"Biology"
] | 968 | [
"Biochemistry",
"Molecular biology"
] |
60,826,815 | https://en.wikipedia.org/wiki/Silanes | In organosilicon chemistry, silanes are a diverse class of charge-neutral organic compounds with the general formula SiR4. The R substituents can be any combination of organic or inorganic groups. Most silanes contain Si-C bonds, and are discussed under organosilicon compounds. Some contain Si-H bonds and are discussed under hydrosilanes.
Examples
Silane (SiH4), the parent.
Binary silicon-hydrogen compounds (which are sometimes also called silanes) include silane itself but also compounds with Si-Si bonds, including disilane (Si2H6) and longer chains.
Silanes with one, two, three, or four Si-H bonds are called hydrosilanes. Silane is again the parent member. Examples: triethylsilane ((C2H5)3SiH) and triethoxysilane ((C2H5O)3SiH).
Polysilanes are organosilicon compounds with the formula (R2Si)n. They feature Si-Si bonds. Attracting more interest are the organic derivatives such as polydimethylsilane, ((CH3)2Si)n. Dodecamethylcyclohexasilane is an oligomer of such materials. Formally speaking, polysilanes also include compounds of the type , but these are less studied.
Carbosilanes are polymeric silanes with alternating Si-C bonds.
Chlorosilanes have Si-Cl bonds. The dominant examples come from the Direct process, i.e., (CH3)4-xSiClx. Another important member is trichlorosilane (HSiCl3).
Organosilanes are a class of charge-neutral organosilicon compounds. Example: tetramethylsilane (Si(CH3)4).
By tradition, compounds with Si-O-Si bonds are usually not referred to as silanes. Instead, they are called siloxanes. One example is hexamethyldisiloxane, (CH3)3Si-O-Si(CH3)3.
Applications
See compound-specific applications. Commonly:
Polysilicone production
PEX crosslinking agent
See also
Silane quats
References
Silanes
Trimethylsilyl compounds
Carbosilanes | Silanes | [
"Chemistry"
] | 421 | [
"Functional groups",
"Trimethylsilyl compounds"
] |
60,828,981 | https://en.wikipedia.org/wiki/Detonation%20spraying | Detonation spraying is one of the many forms of thermal spraying techniques that are used to apply a protective coating at supersonic velocities to a material in order to change its surface characteristics, primarily to improve the durability of a component. It was invented in 1955 by H.B. Sargent, R.M. Poorman and H. Lamprey and is applied to a component using a specifically designed detonation gun (D-gun). The component being sprayed must be prepared correctly by removing all surface oils, greases and debris and roughening the surface in order to achieve a strongly bonded detonation spray coating. This process involves the highest velocities (a ≈3500 m/s shockwave propels the coating materials) and temperatures (≈4000 °C) of coating materials of all forms of thermal spraying. Because of these characteristics, detonation spraying is able to apply protective coatings with low porosity (below 1%) and low oxygen content (between 0.1 and 0.5%) that protect against corrosion, abrasion and adhesion under low load.
This process allows the application of very hard and dense surface coatings which are useful as wear resistant coatings. For this reason, detonation spraying is commonly used for protective coatings in aircraft engines, plug and ring gauges, cutting edges (skiving knives), tubular drills, rotor and stator blades, guide rails or any other metallic material that is subject to high wear and tear. Commonly the materials that are sprayed onto components during detonation spraying are powders of metals, metal alloys and cermets; as well as their oxides (aluminum, copper, iron, etc.).
Detonation spraying is an industrial process that can be dangerous if not performed correctly and in a safe environment. As such there are many safety precautions that must be adhered to when using this thermal spraying technique.
History
The process of detonation spraying was first developed in 1955 by H.B. Sargent, R.M. Poorman and H. Lamprey and was subsequently patented. It was first made commercially available as the 'D-Gun Process' by Union Carbide in the same year. It was further developed in the 1960s by the Paton Institute in Kiev (Ukraine) into a technology that is still commercially available in the US from Demeton Technologies (West Babylon).
D-Gun
Detonation spray coatings are applied using a detonation gun (D-gun), which is composed of a long water-cooled metal barrel containing inlet valves for introducing gases and powders into the chamber. A preselected amount of the desired protective coating material, known as feedstock (in powder form, of particle size 5–60 μm), is introduced into the chamber (at common powder flow rates of 16–40 g/min). Here oxygen and fuel (generally acetylene) are ignited by a spark plug to create a supersonic shock wave that propels the mixture of melted and/or partially melted and/or solid feedstock (depending on the type of material used) out of the barrel and onto the subject being sprayed. The barrel is then cleared using a short burst of nitrogen before the D-gun is ready to be fired again. This is an important step because the heat from the residual gases can cause the new fuel mixture to combust, which would in turn cause an uncontrollable reaction. A small amount of inert nitrogen gas inserted between the two mixtures of fuel and feedstock prior to firing also helps to prevent backfiring. D-guns typically operate at firing rates of 1–10 Hz. Many different mixtures of coating powders and D-gun settings can be used during detonation gun spraying of a material, all of which influence the final surface characteristics of the sprayed coating. Common powder materials include but are not limited to: alumina-titania, alumina, tungsten carbide-tungsten-chromium carbide mixture with nickel-chromium alloy binder, chromium carbide, and tungsten carbide with cobalt binder.
Metallurgists consider the measurements of surface oxygen content, macro and micro-hardness, porosity, bond strength and surface roughness when determining the quality of a thermally sprayed coating.
Components
Spark plug
Water cooled barrel
Nitrogen inlet valve
Fuel inlet valve
Oxygen inlet valve
Powder feedstock inlet valve
Cycle of operation overview
Mixture of fuel and oxygen is injected into the combustion chamber.
Powder feedstock is introduced into the chamber.
Nitrogen gas is added between the fuel-oxygen mixture and powder feedstock in order to prevent backfiring.
Mixture is ignited, and heated powder is ejected from the barrel onto the target material.
Barrel is then purged by nitrogen gas ready for firing again.
This process is repeated at a rate of 1–10 Hz until the desired coating thickness is achieved; a rough estimate of the required firing time is sketched below.
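As a rough illustration of how firing rate and per-shot deposit determine processing time (a minimal sketch; the per-shot thickness and firing rate below are illustrative assumptions, not measured process data):

```python
def coating_time_estimate(target_thickness_um,
                          thickness_per_shot_um=5.0,
                          firing_rate_hz=4.0):
    """Seconds of firing needed to build the target thickness at one spot.

    Illustrative assumptions only: each detonation is taken to deposit
    about 5 um of coating, and the D-gun is taken to fire at 4 Hz,
    within the 1-10 Hz range quoted above. Real per-shot deposit depends
    on the powder, gas mixture and stand-off distance.
    """
    shots_needed = target_thickness_um / thickness_per_shot_um
    return shots_needed / firing_rate_hz

# Example: a 250 um coating needs 50 detonations, i.e. about 12.5 s of firing.
print(coating_time_estimate(250.0))
```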
Surface Preparation
Detonation sprayed coatings are primarily mechanically bonded. This means that the surface of the component being sprayed must be properly prepared so as to maximise the bond strength between the sprayed coating and the substrate. To successfully prepare the surface, it must be cleaned of all greases, oils, dirt and other contaminants and sufficiently roughened to provide enough surface irregularity for the coating to cling to. Chemical processes are generally the most suitable methods for cleaning the substrate surface, after which care must be taken not to touch and/or dirty the surface prior to spraying. The three methods used to roughen the substrate surface are abrasive blasting, machining and bond coating. Cleaning occurs only after the roughening of the surface, except when a bond coating is used; in that case the surface must be cleaned before, and possibly after, this process too. Application of the detonation spray coating should be performed as soon as possible after the substrate's surface has been prepared.
Abrasive Blasting
Abrasive blasting, also known as sandblasting, involves using compressed air to fire a stream of clean, sharp, crushed steel grit or aluminum oxide onto the surface of the component. Aluminum oxide is a good option as it is relatively cheap. The fired grit breaks off small chunks of the substrate surface, creating an evenly rough surface for good mechanical bonds to form. The substrate needs to be cleaned of any debris and residual grit from blasting prior to spraying.
Machining
For cases where a very strong mechanical bond is required (such as for components that are going to be used for machining), the component's surface is often machined to create grooves for the coating to bond to. Dovetail grooves offer strong positive bonding but can be labour-intensive and costly. A cheaper method is to cut simple partially open grooves, yet this method produces an inferior final bond strength. The edges and corners of a component present possible weak points in the coating structure, as the coating can break off from the component there. To increase the bond strength at these points, the corners and edges of the component should be rounded off. If the coating does not need to reach the edges of a component, then an undercut can be used to secure the coating to the substrate, although undercuts can also be used in other scenarios.
Coatings often have a tendency to shrink after being applied, due to the cooling process. This means steps need to be taken to minimise the negative effects of shrinking; if not, the coating can suffer from tensile stress, which will weaken the coating and in some cases may cause it to peel off. The fact that coatings shrink can be used to increase the bond strength if applied wisely. Coating the entire external surface of a component means that the coating will shrink around the component when cooled, providing a gripping force that increases the mechanical bond strength. This is also the case if a flat component is sprayed over the edges: the coating will grip the surface like a clamp, again increasing bond strength. Internal coatings suffer from the effect of shrinking in that they will be pulled away from the surface of the component. To counter this, the component can be heated to reduce the relative shrinking effects on cooling.
Components should be dry machined (without oils) to avoid oils being deposited on the component before spraying. If this is unavoidable then the substrate will need to be cleaned again prior to detonation spraying.
Bond Coating
After a surface has been abrasion blasted and/or machined, a thin layer of molybdenum, nickel-chromium alloys or nickel aluminide can be sprayed before the final detonation spray coating to improve the bond strength. This is known as a bond coating. Bond coatings are often used when spray coating materials of ceramic composites are being applied. The component may need to be machined and/or abrasion blasted slightly deeper to allow space for the bond coating and spray coating to fit flush with the component surface.
Areas that are not to be sprayed must be covered in stop-off chemicals (chemicals that stop the spray from bonding) or tape. The chemicals and tape are then removed after the coating has cooled.
Detonation Spray Coatings
Detonation spraying produces coatings of very high chemical bond strength and hardness. Coatings have low porosity, low oxygen content and low to medium surface roughness. This is achieved due to the extremely high temperatures and velocities produced by the detonation gun during surface coating application. These properties make detonation spraying the standard of comparison for all other thermal spray coatings (wire arc, plasma, flame, HVAF, HVOF, warm, cold).
There are many factors that determine the final detonation gun coating properties. Primarily, surface properties are determined by the type and properties of the powdered feedstock used (composition and particle size) but they are also affected by the settings used on the D-gun. These are powder flow rate, firing rate, distance from gun to target, how the D-gun is moved around to apply the coating, size of barrel, amount and composition of fuel and oxygen mixture.
Detonation spraying is able to apply protective coatings to relatively sensitive and delicate materials, because the application of detonation gun coatings is very quick and the heat source is removed from the target material. This allows a large range of suitable applications for detonation spraying.
Types of Materials
Many materials are able to be sprayed as coatings using the D-gun. The materials used for the feedstock are powders of metals, alloys and cermets, as well as their oxides. However, mainly high-tech coatings are used; these include ceramics and complex composites. Characteristics such as strength, hardness, shrinkage, corrosion resistance and wearing quality of candidate spraying materials are factored into the selection of a coating material.
Some examples include:
Al2O3
Cu–Al
Cu–SiC
Al–Al2O3
Cu–Al2O3
Al–SiC
Al–Ti
TiMo(CN)–36NiCo
Fe–A
Applications
The main functions of detonation spray coatings are to protect against corrosion (due to low oxygen content), abrasion and adhesion under low load. This means that detonation spraying produces hard durable coatings that are suitable for:
Various components of general machinery: shafts, seals, bushings, bearings, seals
Aviation:
rotor and stator blades
engine components
guide rails
Oil and gas industry:
bushings and sealing rings of ESP units
gate valves
shut-off valves
working surface of drill tools
Space rocket industry
Electronic and radio industry
Engineering of instruments
Tools industry
Tubular drills
Skiving knives for rubber and plastic
Shipbuilding industry
D-gun plated plug and ring gauges
Limitations
There are a few limitations of detonation spraying, these are:
Detonation spraying creates a coating that is mostly mechanically bonded as opposed to being metallurgically bonded, which is a much stronger type of bond.
Detonation spraying is a 'line of sight' process meaning that components generally need to be coated before being put to use or assembled. This is because the detonation gun needs to be able to access the surface to be able to apply an effective coating.
The coatings, despite being considerably strong in compression, are weak under tension, meaning they can't be applied to malleable or expanding components.
The coatings tend to fatigue under pinpoint loading.
Detonation guns are quite large and loud.
Detonation spraying has to be performed at a location specifically designed for it, as the gun is reasonably large and the process produces substantial noise. For this reason, it is usually installed in a sound-proof room (with concrete walls 45 cm thick).
The process involves a considerable amount of mechanisation and automation because the operator can't be in the room whilst the D-gun is in operation.
Safety
Detonation gun spraying, like any other industrial process, carries with it a number of safety hazards that need to be managed correctly in order to ensure operator safety during use. These safety precautions primarily fall into the following categories, and the suggested hazard-minimisation techniques in some cases have a positive effect on the resultant detonation spray coating. For example, having to automate the spraying process means that a very even and consistent spray coating can be achieved.
Noise
The operation of a detonation gun is a very loud process due to the multiple explosions occurring in the chamber per second. This could cause damage to operators' hearing if they are in close proximity to the D-gun. As a result, detonation spraying should be performed within a sound-proof room and no one should be present in the room during operation. Operators should also wear ear protection (such as ear muffs and/or ear plugs) while working with a D-gun.
Heat
Extremely high temperatures are reached by the D-gun (≈4000 °C) whilst in operation. Flammable and explosive fuels (generally acetylene) are used in detonation spraying to produce the supersonic shockwave that propels the powder coating materials onto their target components. This poses a serious burn and explosion hazard. Again, no one should be present in the room whilst the D-gun is in operation, and the room should be designed to withstand any malfunction of the D-gun. Protective gloves should also be used to handle the D-gun and sprayed components, to avoid burns from hot components after spraying.
Dust and Fumes
The D-gun atomises the powder feedstock into extremely small particles (80–95% of particles by total number are of size <100 nm). This means proper extraction facilities are required for inhalation safety purposes. Isolation of the D-gun is also recommended, to avoid operators breathing in the dangerous dust and fumes. If operators are to enter the room, they should wear appropriate dust masks or respirators. Many of the compounds used as the feedstock in detonation spraying are detrimental to human health if ingested or inhaled. Airborne metals from the detonation gun are particularly harmful to the lungs. Exposure to cadmium, for example, can cause harm to the kidneys and lungs, vomiting, loss of consciousness and even reduced fertility. Recent studies have also shown some heavy metals, such as lead, nickel, chromium, and cadmium, to be carcinogenic. Some serious lung conditions caused by metal dust inhalation include:
Silicosis - a lung disease caused by inhaling silica present in the feedstock compounds.
Siderosis - (silver polisher's lung or welder's lung), a lung disease caused by inhaling iron present in the feedstock compounds.
Alzheimer's - a memory-loss disease most common among the elderly, which some studies have linked to high levels of exposure to aluminum (among many other causes). However, these studies were not conclusive, and others have found otherwise.
Metal fume fever - this can occur in some individuals following exposure to certain metal compounds (such as copper, zinc, magnesium and aluminum alloys or oxides) that have a particularly unpleasant odour. The fumes are produced as a byproduct when the metals are heated and can trigger a fever-like reaction that may need medical attention.
References
Chemical processes
Metallurgical processes
Coatings | Detonation spraying | [
"Chemistry",
"Materials_science"
] | 3,366 | [
"Metallurgical processes",
"Metallurgy",
"Coatings",
"Chemical processes",
"nan",
"Chemical process engineering"
] |
60,831,908 | https://en.wikipedia.org/wiki/Buoyancy%20engine | A buoyancy engine is a device that alters the buoyancy of a vehicle or object in order to either move it vertically, as in the case of underwater profiling floats and stealth buoys, or provide forward motion (therefore providing variable-buoyancy propulsion) such as with underwater gliders and some autonomous aircraft.
For underwater applications, buoyancy engines typically involve a hydraulic pump that either inflates and deflates an external bladder filled with hydraulic fluid, or extends and retracts a rigid plunger. The change in the vehicle's total volume alters its buoyancy, making it float upwards or sink as required. Alternative systems employing gas obtained from water electrolysis, rather than hydraulic fluid, have also been proposed, as have systems which pump ambient water into and out of a pressure vessel.
Operation
The buoyancy engine is a technology used in research for underwater surveillance and mapping. A buoyancy engine works by changing the buoyancy of the device to make the vehicle positively or negatively buoyant so that it will float or sink through the water column. One way of doing this is by inflating and deflating an oil bladder outside the rigid pressure hull, using oil stored inside the hull; doing so changes the density of the craft the engine is installed on. As a result, an autonomous underwater vehicle such as an underwater glider can repeatedly adjust its buoyancy without external input. This allows the glider to remain in operation, independent of a surface vessel, for a longer duration, which makes the underwater glider a more viable tool for mapping the ocean floor.
An underwater glider works similarly to how an aircraft glider works, in that it utilizes the flow of water over a set of airfoil section wings to generate lift. The direction of the lift generated induces forward motion, whether the vehicle is gliding down when negatively buoyant or up when positively buoyant. The way weight is distributed within the underwater glider helps with this by putting the center of gravity at or just in front of the leading edge of the wings. This promotes an efficient and smooth glide slope. The buoyancy engine allows an underwater glider to continue this gliding process for extended periods by reversing the vertical force inducing the glide when the vehicle reaches the top and bottom depth limits of its operating envelope. Without a buoyancy engine, an underwater glider could be used once and then deploy a package that would float to the surface where it can be retrieved, or drop ballast, with the same effect. With the addition of a buoyancy engine, the underwater glider becomes a more viable tool as it can stay in operation longer and can be reused.
An underwater glider, like an aircraft glider, loses altitude as it moves forward; in the case of an underwater glider, its depth increases. Eventually, any glider will touch the ground. With a gliding aircraft this is not much of an issue, since aircraft are expected to land and are reusable when they do so. This is not true for an underwater glider: if it were to land on the ocean floor, it would essentially be lost. Since a buoyancy engine allows a glider to change its density, the glider can glide in two directions. It can glide down like an aircraft, or it can glide up if it makes itself less dense than the water around it. In this way, as long as the buoyancy engine remains active and power is available, an underwater glider can continue to operate.
The actual operation of a buoyancy engine occurs through a complex system of tubing, valves, and sensors. When a glider equipped with a buoyancy engine is deployed, the glider will increase its density to sink to an appropriate depth at which to start its mission. Once at that depth, the glider will begin the mission and the buoyancy engine will adjust the density to a value that is efficient for gliding. When a predetermined depth has been reached, the buoyancy engine will decrease density and this will cause the glider to glide back towards the surface. In this way, the underwater glider remains in operation between two preset depths. The mechanism used to modify buoyancy for this purpose is often a variable buoyancy pressure vessel.
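As a minimal illustration of this two-limit behaviour, the sketch below implements a single control step; the seawater density, the ±5 kg/m³ density offsets and the depth limits are illustrative assumptions rather than the parameters of any real glider.

```python
WATER_DENSITY = 1025.0  # kg/m^3, a typical seawater value (assumed constant)

def buoyancy_engine_step(depth_m, vehicle_density,
                         top_limit_m=50.0, bottom_limit_m=950.0):
    """Return the vehicle density commanded for one control step.

    Mimics the cycle described above: at the bottom depth limit the
    bladder is inflated, making the glider less dense than the water so
    it glides upward; at the top limit the bladder is deflated, making
    it denser so it glides back down. Between the limits the density
    (and hence the glide slope) is left unchanged.
    """
    if depth_m >= bottom_limit_m:
        return WATER_DENSITY - 5.0   # positively buoyant: glide up
    if depth_m <= top_limit_m:
        return WATER_DENSITY + 5.0   # negatively buoyant: glide down
    return vehicle_density           # keep the current glide
```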
Application
The buoyancy engine, when combined with the underwater glider, gives scientists and other individuals or organizations access to hardware for surveying the ocean depths. Because it extends the capabilities of underwater gliders, the buoyancy engine enables more effective mapping of the ocean floor. It could also be used to improve the detection of underwater oil deposits. In addition, since the operational range of underwater gliders is increased through the use of buoyancy engines, ocean floors can be mapped in larger sections, which is more efficient than pre-existing technologies. Buoyancy engines also do not give off environmentally harmful substances, making them an environmentally safe technology.
Other applications that extend from this include investigating disasters that happen at sea. Due to the increased mapping capabilities provided by the buoyancy engine, searching for the wreckage of an airliner or passenger ship can be conducted more economically by a larger number of units, so the wreckage may be found sooner and evidence can be collected more efficiently. Ocean mapping and underwater surveillance are important as they can reveal resources that would not be available otherwise.
References
Buoyancy
Fluid mechanics | Buoyancy engine | [
"Engineering"
] | 1,109 | [
"Civil engineering",
"Fluid mechanics"
] |
60,835,248 | https://en.wikipedia.org/wiki/Tsai-Hill%20failure%20criterion | The Tsai–Hill failure criterion is one of the phenomenological material failure theories and is widely used for anisotropic composite materials that have different strengths in tension and compression. The Tsai–Hill criterion predicts failure when the failure index in a laminate reaches 1.
Tsai–Hill failure criterion in plane stress
The Tsai–Hill criterion is based on an energy theory with interactions between stresses. Ply rupture appears when:

$$\frac{\sigma_1^2}{X^2} - \frac{\sigma_1 \sigma_2}{X^2} + \frac{\sigma_2^2}{Y^2} + \frac{\tau_{12}^2}{S^2} \geq 1$$

Where:
$\sigma_1$, $\sigma_2$ and $\tau_{12}$ are the in-plane ply stresses in the longitudinal, transversal and shear directions
$X$ is the allowable strength of the ply in the longitudinal direction (0° direction)
$Y$ is the allowable strength of the ply in the transversal direction (90° direction)
$S$ is the allowable in-plane shear strength of the ply between the longitudinal and the transversal directions
The Tsai–Hill criterion is interactive, i.e., the stresses in different directions are not decoupled and affect failure simultaneously. Furthermore, it is a failure-mode-independent criterion, as it does not predict the way in which the material will fail, as opposed to mode-dependent criteria such as the Hashin criterion or the Puck failure criterion. This can be important, as some types of failure can be more critical than others.
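As an illustration, a minimal sketch evaluating the plane-stress failure index defined above is given below; the function name and the numeric strengths in the example call are made-up values, with X, Y and S as defined above.

```python
def tsai_hill_index(sigma1, sigma2, tau12, X, Y, S):
    """Plane-stress Tsai-Hill failure index.

    sigma1, sigma2 : ply stresses in the longitudinal (0 deg) and
                     transversal (90 deg) directions
    tau12          : in-plane shear stress
    X, Y, S        : allowable longitudinal, transversal and shear
                     strengths of the ply
    Failure is predicted when the returned index reaches 1.
    """
    return (sigma1**2 / X**2
            - sigma1 * sigma2 / X**2
            + sigma2**2 / Y**2
            + tau12**2 / S**2)

# Illustrative (made-up) strengths in MPa for a unidirectional ply:
index = tsai_hill_index(sigma1=600.0, sigma2=20.0, tau12=30.0,
                        X=1500.0, Y=40.0, S=70.0)
print(index)  # ~0.59 < 1, so no ply rupture is predicted
```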
References
Composite materials
Mechanical failure | Tsai-Hill failure criterion | [
"Physics",
"Materials_science",
"Engineering"
] | 251 | [
"Composite materials",
"Materials science",
"Materials",
"Mechanical engineering",
"Mechanical failure",
"Matter"
] |
60,837,114 | https://en.wikipedia.org/wiki/FlexAID | FlexAID is a molecular docking software package that can use small molecules and peptides as ligands, and proteins and nucleic acids as docking targets. As the name suggests, FlexAID supports full ligand flexibility as well as side-chain flexibility of the target. It does so using a soft scoring function based on the complementarity of the two surfaces (ligand and target).
FlexAID has been shown to outperform existing widely used software such as AutoDock Vina and FlexX in the prediction of binding poses. This is particularly true in cases where target flexibility is crucial, as is likely to be the case when using homology models. The source code is available on GitHub under the Apache License.
Graphical user interface
A PyMOL plugin for FlexAID, NRGsuite, has also been developed by the original authors.
See also
Docking (molecular)
Virtual screening
List of protein-ligand docking software
References
External links
— Najmanovich Research Group resources
Molecular modelling software
Molecular modelling
Free and open-source software
Software using the Apache license
Free software programmed in C
Free software programmed in C++ | FlexAID | [
"Chemistry"
] | 223 | [
"Molecular modelling software",
"Molecular physics",
"Computational chemistry software",
"Molecular modelling",
"Theoretical chemistry"
] |
62,029,996 | https://en.wikipedia.org/wiki/Prymnesin-B1 | Prymnesin-B1 is a member of the prymnesins, a class of ladder-frame polyether phycotoxins made by the alga Prymnesium parvum. It is known to be toxic to fish. It is a so-called "B-type" prymnesin; B-type prymnesins differ in the number of backbone cycles when compared to A-type prymnesins like prymnesin-2.
Structures
Prymnesin-B1 is formed of a large polycyclic polyether core with several conjugated double and triple bonds, chlorine and nitrogen heteroatoms, and a single sugar moiety consisting of α-D-galactopyranose.
Biosynthesis
The backbone of B-type prymnesins like prymnesin-B1 is reportedly made by giant polyketide synthase enzymes dubbed the "PKZILLAs".
See also
Prymnesin-1
Prymnesin-2
References
Phycotoxins
Polyether toxins
Primary alcohols
Secondary alcohols
Conjugated enynes
Organochlorides
Halohydrins
Halogen-containing natural products
Amines
Conjugated diynes
Glycosides
Heterocyclic compounds with 5 rings
Oxygen heterocycles | Prymnesin-B1 | [
"Chemistry"
] | 281 | [
"Carbohydrates",
"Glycosides",
"Toxins by chemical classification",
"Polyether toxins",
"Functional groups",
"Biomolecules",
"Glycobiology",
"Amines",
"Bases (chemistry)"
] |
62,034,187 | https://en.wikipedia.org/wiki/Devimistat | Devimistat (INN; development code CPI-613) is an experimental anti-mitochondrial drug being developed by Cornerstone Pharmaceuticals. It is being studied for the treatment of patients with metastatic pancreatic cancer and relapsed or refractory acute myeloid leukemia (AML).
Devimistat's mechanism of action differs from that of other drugs: it operates on the tricarboxylic acid cycle, inhibiting enzymes involved in cancer cell energy metabolism. A lipoic acid derivative distinct from standard cytotoxic chemotherapy, devimistat is currently being studied in combination with modified FOLFIRINOX to treat various solid tumors and heme malignancies.
Regulation
The U.S. Food and Drug Administration (FDA) has designated devimistat as an orphan drug for the treatment of pancreatic cancer, AML, myelodysplastic syndromes (MDS), peripheral T-cell lymphoma, and Burkitt's lymphoma, and given approval to initiate clinical trials in pancreatic cancer and AML.
Clinical trials
Clinical trials of the drug are underway including a Phase III open-label clinical trial to evaluate efficacy and safety of devimistat plus modified FOLFIRINOX (mFFX) versus FOLFIRINOX (FFX) in patients with metastatic adenocarcinoma of the pancreas.
References
Experimental cancer drugs
Carboxylic acids
Thioethers | Devimistat | [
"Chemistry"
] | 300 | [
"Carboxylic acids",
"Functional groups"
] |
57,742,157 | https://en.wikipedia.org/wiki/Chlorophenylsilatrane | 1-(4-Chlorophenyl)silatrane is an extremely toxic organosilicon compound which was developed by M&T Chemicals as a single-dose rodenticide. It was never registered as a rodenticide, except for experimental use. 1-(4-Chlorophenyl)silatrane was one of the chemicals studied in Project Coast.
Toxicity
1-(4-Chlorophenyl)silatrane is a GABA receptor antagonist that destroys nervous functions in the central nervous system of vertebrates, primarily in the brain and possibly in the brain stem. It is a rapid-acting convulsant, causing convulsions within 1 minute in mice and rats; death occurred within 5 minutes. It is therefore likely to induce poison shyness. In field trials, it was less effective than zinc phosphide against wild rats.
See also
Phenylsilatrane
References
Convulsants
Organosilicon compounds
Nitrogen heterocycles
Oxygen heterocycles
Neurotoxins
Rodenticides
Chemical weapons
GABAA receptor negative allosteric modulators
Poisons
4-Chlorophenyl compounds
Atranes
Silicon heterocycles | Chlorophenylsilatrane | [
"Chemistry",
"Biology",
"Environmental_science"
] | 254 | [
"Chemical accident",
"Toxicology",
"Chemical weapons",
"Rodenticides",
"Neurotoxins",
"Poisons",
"Biochemistry",
"Neurochemistry",
"Biocides"
] |
64,344,766 | https://en.wikipedia.org/wiki/Kalafungin | Kalafungin is a substance discovered in the 1960s and found to act as a broad-spectrum antibiotic in vitro. It was isolated from a strain of the bacterium Streptomyces tanashiensis.
It is not known to be marketed anywhere in the world.
References
Antibiotics | Kalafungin | [
"Biology"
] | 59 | [
"Antibiotics",
"Biocides",
"Biotechnology products"
] |
64,345,797 | https://en.wikipedia.org/wiki/Black%20box%20model%20of%20power%20converter | The black box model of a power converter, also called a behavior model, is a method of system identification used to represent the characteristics of a power converter, which is regarded as a black box. There are two types of black box model of a power converter: a model that includes the load is called a terminated model, otherwise it is an un-terminated model. The type of black box model is chosen based on the goal of the modeling. The black box model of a power converter can be a tool for filter design in a system that integrates power converters.
To successfully implement a black box model of a power converter, the equivalent circuit of the converter is assumed a priori, with the assumption that this equivalent circuit remains constant under different operating conditions. The equivalent circuit of the black box model is built by measuring the stimulus/response of the power converter.
Different modeling methods of power converters can be applied in different circumstances. The white box model of a power converter is suitable when all the inner components are known, which can be quite difficult to achieve due to the complex nature of the power converter. The grey box model combines features of both the black box and white box models; it is used when some of the components are known, or when the relationship between the physical elements and the equivalent circuit is investigated.
Assumption
Since the power converter consists of power semiconductor switches, it is a nonlinear and time-variant system. One assumption of the black box model of a power converter is that the system can be regarded as a linear system when the filter is designed properly to avoid saturation and nonlinear effects. Another strong assumption related to the modeling procedure is that the equivalent circuit model is invariant under different operating conditions, since in the modeling procedure the circuit components are determined under different operating conditions.
Equivalent circuit
The expression of a black box model of a power converter is the assumed equivalent circuit model (in the frequency domain), which can easily be integrated into the circuit of a system in order to facilitate filter design, control system design and pulse-width modulation design. In general, the equivalent circuit contains two main parts: active components such as voltage/current sources, and passive components such as impedances. The process of black box modeling is essentially an approach to determine this equivalent circuit for the converter.
Active components
The active components in the equivalent circuit are voltage/current sources. There are usually at least two sources, in various configurations depending on the analysis approach, such as two voltage sources, two current sources, or one voltage source and one current source.
Passive components
The passive components, comprising resistors, capacitors and inductors, can be expressed as a combination of several impedances or admittances. Another expression method is to regard the passive part of the power converter as a two-port network and use a Y-matrix or Z-matrix to describe its characteristics.
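For reference, the standard admittance description of a two-port network relates the terminal currents and voltages as

$$\begin{pmatrix} I_1 \\ I_2 \end{pmatrix} = \begin{pmatrix} Y_{11} & Y_{12} \\ Y_{21} & Y_{22} \end{pmatrix} \begin{pmatrix} V_1 \\ V_2 \end{pmatrix},$$

so measuring the four Y-parameters at each frequency fully characterizes the passive part of the converter.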
Modeling method
Different modeling methods can be utilized to define the equivalent circuit; the choice depends on the assumed equivalent circuit and the available measurement techniques. However, many modeling methods require one or more of the assumptions mentioned above in order to regard the system as a linear time-invariant system or a periodically switched linear system.
One example of modeling method
This method is based on the two assumptions mentioned in the section Assumption, so the system is regarded as a linear time-invariant system. Based on these assumptions, the equivalent circuit can be derived from several equations describing different operating conditions. The equivalent circuit model is defined as containing three impedances and two current sources, so five unknown parameters need to be determined. Three sets of operating conditions are built up by changing the external impedance, and the corresponding currents and voltages at the terminals of the power converter are measured or simulated as known parameters. In each condition, two equations containing the five unknown variables can be derived according to Kirchhoff's circuit laws and nodal analysis. In total, six equations can be used to solve for these five unknowns, and the equivalent circuit is determined in this way.
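A minimal numerical sketch of this identification step is shown below: at a single frequency, the six measured equations are stacked into a linear system Ax = b and the five unknown equivalent-circuit parameters are recovered by least squares. The function name is hypothetical, and the random matrix entries are placeholders standing in for the Kirchhoff-law coefficients built from the measured terminal voltages and currents.

```python
import numpy as np

# Unknowns x = [Y1, Y2, Y3, Is1, Is2]: three (complex) admittances and two
# source currents of the assumed equivalent circuit, at one frequency.
# Each of the three operating conditions contributes two nodal equations,
# so A is 6 x 5 and the system is slightly overdetermined.

def identify_equivalent_circuit(A, b):
    """Least-squares solution of the stacked measurement equations A x = b.

    A : (6, 5) complex array of coefficients built from the measured
        terminal voltages/currents in the three operating conditions
    b : (6,) complex right-hand side from the same measurements
    """
    x, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
    return x  # [Y1, Y2, Y3, Is1, Is2]

# Placeholder system: random numbers stand in for real measurement data.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 5)) + 1j * rng.standard_normal((6, 5))
x_true = rng.standard_normal(5) + 1j * rng.standard_normal(5)
print(identify_equivalent_circuit(A, A @ x_true))  # recovers x_true
```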
Other methods to determine passive elements
There are many methods used to determine the passive elements. The conventional method is to switch off the power converter and measure the impedance with an impedance analyzer, or to measure the scattering parameters with a vector network analyzer and compute the impedance afterwards. These conventional methods assume that the impedance of the power converter is the same in the operating and switched-off conditions.
Many state-of-the-art methods have been investigated to measure the impedance while the power converter is operating. One method is to put two clamp-on current probes in the system, one called the receiving probe and the other the injecting probe. The outputs of the two probes are connected to a vector network analyzer, and the impedance of the power converter is measured after some calibration procedures in CM and DM measurement setups. This method is limited by its delicate calibration procedure.
Another state-of-the-art method is to utilize a transformer and an impedance analyzer in two different setups in order to measure the CM and DM impedances separately. The measurement range of this method is limited by the characteristics of the transformer.
See also
black box
grey box model
system identification
nonlinear system identification
linear time-invariant system
time-variant system
nonlinear system
References
Electromagnetic compatibility
Systems theory
Dynamical systems | Black box model of power converter | [
"Physics",
"Mathematics",
"Engineering"
] | 1,069 | [
"Radio electronics",
"Electromagnetic compatibility",
"Mechanics",
"Electrical engineering",
"Dynamical systems"
] |
64,345,806 | https://en.wikipedia.org/wiki/Rashba%E2%80%93Edelstein%20effect | The Rashba–Edelstein effect (REE) is a spintronics-related effect consisting of the conversion of a bidimensional charge current into a spin accumulation. This effect is an intrinsic charge-to-spin conversion mechanism that was predicted in 1990 by the scientist V.M. Edelstein. It was demonstrated in 2013 and confirmed by several experiments in the following years.
Its origin can be ascribed to the presence of spin-polarized surface or interface states. Indeed, a structural inversion symmetry breaking (i.e., a structural inversion asymmetry (SIA)) causes the Rashba effect to occur: this effect breaks the spin degeneracy of the energy bands and causes the spin polarization to be locked to the momentum in each branch of the dispersion relation. If a charge current flows in these spin-polarized surface states, it generates a spin accumulation. In the case of a bidimensional Rashba gas, where this band splitting occurs, this effect is called the Rashba–Edelstein effect.
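For reference, the textbook Rashba Hamiltonian of a bidimensional electron gas and the resulting spin-split bands read

$$H_R = \frac{\hbar^2 k^2}{2m^*} + \alpha_R \, (\boldsymbol{\sigma} \times \mathbf{k}) \cdot \hat{\mathbf{z}}, \qquad E_\pm(k) = \frac{\hbar^2 k^2}{2m^*} \pm \alpha_R |k|,$$

where $\alpha_R$ is the Rashba parameter, $\boldsymbol{\sigma}$ is the vector of Pauli matrices and $\hat{\mathbf{z}}$ is the direction along which the structural inversion symmetry is broken; the $\pm \alpha_R |k|$ term is the linear-in-k spin splitting described above.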
In a peculiar class of materials called topological insulators (TIs), spin-split surface states exist due to the surface topology, independently of the Rashba effect. Topological insulators, indeed, display a spin-split linear dispersion relation on their surfaces (i.e., spin-polarized Dirac cones), while having a band gap in the bulk (this is why these materials are called insulators). In this case too, spin and momentum are locked and, when a charge current flows in these spin-polarized surface states, a spin accumulation is produced; this effect is called the Edelstein effect. In both cases, a 2D charge-to-spin conversion mechanism occurs.
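For comparison, the textbook effective Hamiltonian of the topological insulator surface states has the spin-momentum-locked Dirac form

$$H_{\mathrm{TI}} = \hbar v_F \, (\boldsymbol{\sigma} \times \mathbf{k}) \cdot \hat{\mathbf{z}}, \qquad E_\pm(k) = \pm \hbar v_F |k|,$$

with $v_F$ the Fermi velocity: a single pair of linearly dispersing, spin-polarized branches (the Dirac cone) instead of the two parabolic Rashba bands.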
The reverse process is called the inverse Rashba–Edelstein effect and it converts a spin accumulation into a bidimensional charge current, resulting in a 2D spin-to-charge conversion.
The Rashba–Edelstein effect and its inverse are classified as spin-charge interconversion (SCI) mechanisms, like the direct and inverse spin Hall effects, and materials displaying these effects are promising candidates for spin injectors, detectors and other future technological applications.
The Rashba–Edelstein effect is a surface effect, at variance with the spin Hall effect, which is a bulk effect. Another difference between the two is that the Rashba–Edelstein effect is a purely intrinsic mechanism, while the origin of the spin Hall effect can be either intrinsic or extrinsic.
Physical origin
The origin of the Rashba–Edelstein effect relies on the presence of spin-split surface or interface states, which can arise from a structural inversion asymmetry or because the material exhibits a topologically protected surface, being a topological insulator. In both cases, the material surface displays spin polarization locked to the momentum, meaning that these two quantities are univocally linked and orthogonal to each other (this is clearly visible from the Fermi contours). It is worth noticing that a bulk inversion asymmetry could also be present, which would result in the Dresselhaus effect. In fact, if, in addition to the spatial inversion asymmetry or the topological insulator band structure, a bulk inversion asymmetry is also present, the spin and momentum are still locked, but their relative orientation is not straightforwardly determinable (since the orientation of the charge current with respect to the crystallographic axes also plays a relevant role). In the following discussion, the Dresselhaus effect will be neglected for simplicity.
The topological insulator case is easier to visualize due to the presence of a single Fermi contour, so it is discussed first. Topological insulators display spin-split surface states where spin-momentum locking is present. Indeed, when a charge current flows in the surface states of the topological insulator, it can also be seen as a well-defined momentum shift in reciprocal space, resulting in a different occupation of the spin-polarized branches of the Dirac cone. This imbalance, according to the structure of the topological insulator band dispersion relation, produces a spin accumulation in the investigated material, i.e., a charge-to-spin conversion occurs. The spin accumulation is orthogonal to the injected charge current, in accordance with spin-momentum locking. Because these materials display conductive behaviour on their surfaces while being insulating in the bulk, the charge current is only allowed to flow on the topological insulator surfaces: this is the origin of the bidimensionality of this charge-to-spin conversion mechanism.
In the Rashba–Edelstein effect, the spin-split dispersion relation consists of two bands displaced along the k-axis by a structural inversion asymmetry (SIA), according to the Rashba effect (i.e., these bands show a linear splitting in k due to the spin–orbit coupling). This results in two Fermi contours, which are concentric at equilibrium, both displaying spin–momentum locking but with opposite helicity. If the system is driven out of equilibrium by injecting a charge current, the two contours shift relative to each other and a net spin accumulation arises. This effect occurs, for instance, in a two-dimensional Rashba gas. The Rashba splitting complicates the understanding and visualization of the charge-to-spin conversion mechanism, but the basic working principle of the Rashba–Edelstein effect is very similar to that of the Edelstein effect.
Experimentally, the Rashba–Edelstein effect occurs when a charge current is electrically injected into the material, for instance by means of two electrodes across which a potential difference is applied. The resulting spin accumulation can be probed in several ways, one of which is the magneto-optic Kerr effect (MOKE).
Inverse Rashba–Edelstein effect
The reverse process, i.e., the inverse Rashba–Edelstein effect (I(R)EE), occurs when a spin accumulation is generated inside the investigated material and a consequent charge current arises on the material surface (in this case, a 2D spin-to-charge conversion takes place). To obtain the inverse Rashba–Edelstein effect, a spin accumulation must be generated inside the analyzed material; this spin injection is usually achieved by coupling the material under investigation with a ferromagnet, in order to perform spin pumping, or with a semiconductor, where optical orientation can be performed. As for the direct effect, the inverse Rashba–Edelstein effect occurs in materials lacking structural inversion symmetry, while in topological insulators the inverse Edelstein effect arises.
In the case of the inverse Edelstein effect, looking at a section of the Dirac cone, the spin-to-charge conversion can be visualized as follows: the spin injection produces a piling up of spins of one character in one of the branches of the energy dispersion relation. This results in a spin imbalance due to the different branch occupations (i.e., a spin accumulation), which leads to a momentum imbalance and, therefore, to a charge current that can be electrically probed. As in the direct effect, in the inverse Edelstein effect the charge current can only flow on the topological insulator surfaces, owing to the energy band conformation. This is how the 2D spin-to-charge conversion occurs in these materials, and it could allow topological insulators to be exploited as spin detectors.
As for the direct effect, this analysis has been carried out for the inverse Edelstein effect because in this case only two energy branches are present. For the inverse Rashba–Edelstein effect the process is very similar, despite the presence in the dispersion relation of four energy branches with spin–momentum locking and two consequent Fermi contours with opposite helicity. In this case, when a spin accumulation is generated inside the material, the two Fermi contours are displaced relative to each other, generating a charge current, in contrast to the equilibrium case, in which the two Fermi contours are concentric and no net momentum imbalance or spin accumulation is present.
Process efficiency
While both the Rashba–Edelstein effect and the inverse Rashba–Edelstein effect rely on a spin accumulation, the figure of merit of these processes is commonly computed from the spin current density associated with the spin accumulation, rather than from the spin accumulation itself, in analogy with the spin Hall angle for the spin Hall effect. The efficiency of the Rashba–Edelstein effect and of the inverse Rashba–Edelstein effect can indeed be estimated by means of the Rashba–Edelstein length, i.e., the ratio between the charge current density flowing on the surface of the investigated material (a surface charge current density) and the three-dimensional spin current density (since the spin accumulation can diffuse in three-dimensional space).
In the Rashba–Edelstein effect, the spin current is a consequence of the spin accumulation that occurs in the material as the charge current flows on its surface (under the influence of a potential difference and, therefore, of an electric field), while in the inverse Rashba–Edelstein effect the spin current is the quantity injected into the material, leading to a spin accumulation and resulting in a charge flow localized at the material surface. In both cases, the asymmetry between the dimensionalities of the charge and spin currents results in a ratio which has the units of a length: this is the origin of the name of this efficiency parameter.
Analytically, the value of the two-dimensional charge current density can be computed by employing the Boltzmann equation and considering the action of an electric field $\mathbf{E}$, resulting (for a single spin-polarized Dirac cone) in:

$$\mathbf{j}_c = \frac{e^2 \tau v_F k_F}{4\pi\hbar}\,\mathbf{E},$$

where $e$ is the elementary charge, $\tau$ is the momentum scattering time, $k_F$ and $v_F$ are, respectively, the Fermi wavevector and the Fermi velocity, and $\hbar$ is the reduced Planck constant.
The spin current density can also be analytically computed by integrating across the Fermi surface the product of the spin polarization and the corresponding distribution function.
In the Edelstein effect case, this quantity results in:

$$\mathbf{j}_s = \frac{e^2 k_F}{4\pi\hbar}\,(\hat{z}\times\mathbf{E}),$$

where $\hat{z}$ is the unit vector perpendicular to the surface on which the charge current flows.
From these formulas, the orthogonality of the spin and charge current densities can be observed.
For the Edelstein effect and its inverse, the conversion efficiency is:

$$\lambda_{EE} = \frac{|\mathbf{j}_c|}{|\mathbf{j}_s|} = v_F\,\tau.$$

This parameter is conventionally positive for a Fermi contour with a counterclockwise helicity. The Rashba–Edelstein length derivation is the same as the Edelstein one, except that $v_F$ is substituted by the Rashba parameter $\alpha_R$, i.e., $\hbar v_F = \alpha_R$, resulting in:

$$\lambda_{REE} = \frac{\alpha_R\,\tau}{\hbar}.$$
The Rashba–Edelstein length of the investigated material can be compared with other spin–charge interconversion efficiencies, such as the spin Hall angle, to establish whether the material is an efficient spin–charge interconverter and, therefore, whether it could be suitable for spintronic applications. The Rashba–Edelstein length (a 2D spin–charge interconversion efficiency) can be effectively compared with the spin Hall angle (a 3D spin–charge interconversion efficiency) by dividing the parameter by the thickness of the spin-split surface states in which this 2D conversion occurs. This "equivalent" spin Hall angle for the Rashba–Edelstein effect often turns out to be close to or even greater than unity: on average, the Rashba–Edelstein effect is a more efficient spin–charge interconversion mechanism than the spin Hall effect, which could lead to the future employment of materials displaying this effect in the technology industry.
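As an order-of-magnitude illustration of these figures of merit, the following sketch (in Python) computes the Rashba–Edelstein length and the corresponding "equivalent" spin Hall angle; all parameter values are assumptions typical of Rashba interfaces, not measurements of a specific material:

```python
# Illustrative estimate of the Rashba-Edelstein length
# lambda_REE = alpha_R * tau / hbar and of the "equivalent" spin Hall
# angle obtained by dividing lambda_REE by the surface-state thickness.
# All parameter values are assumptions for demonstration only.
HBAR = 1.054571817e-34        # reduced Planck constant (J s)
E_CHARGE = 1.602176634e-19    # elementary charge (C)

alpha_R = 3.0e-11 * E_CHARGE  # Rashba parameter, ~0.3 eV*angstrom, in J*m
tau = 1.0e-14                 # momentum scattering time (s)
t_ss = 5.0e-10                # assumed surface-state thickness (m)

lambda_ree = alpha_R * tau / HBAR   # Rashba-Edelstein length (m)
theta_equiv = lambda_ree / t_ss     # dimensionless "equivalent" angle

print(f"lambda_REE = {lambda_ree * 1e9:.2f} nm")         # ~0.46 nm
print(f"equivalent spin Hall angle = {theta_equiv:.1f}")  # ~0.9
```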
See also
Spin Hall effect
Topological insulator
Rashba effect
Dresselhaus effect
Spintronics
References
Condensed matter physics
Spintronics
Semiconductors
Quantum magnetism | Rashba–Edelstein effect | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,506 | [
"Electrical resistance and conductance",
"Physical quantities",
"Semiconductors",
"Spintronics",
"Phases of matter",
"Quantum mechanics",
"Materials science",
"Quantum magnetism",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
64,345,861 | https://en.wikipedia.org/wiki/Advanced%20reprocessing%20of%20spent%20nuclear%20fuel | The advanced reprocessing of spent nuclear fuel is a potential key to achieve a sustainable nuclear fuel cycle and to tackle the heavy burden of nuclear waste management. In particular, the development of such advanced reprocessing systems may save natural resources, reduce waste inventory and enhance the public acceptance of nuclear energy. This strategy relies on the recycling of major actinides (Uranium and Plutonium, and also Thorium in the breeder fuel cycle) and the transmutation of minor actinides (Neptunium, Americium and Curium) in appropriate reactors. In order to fulfill this objective, selective extracting agents need to be designed and developed by investigating their complexation mechanism.
Managing spent nuclear fuel
The estimated inventory of spent nuclear fuel discharged from nuclear power reactors worldwide up to the end of 2013 is about 370,000 tonnes of heavy metal (tHM). To date, about 250,000 tHM of this inventory is in storage. At the back-end step of the nuclear fuel cycle, two spent fuel management options could potentially be adopted:
Open fuel cycle: the cycle starts with the mining of uranium, goes through the fuel fabrication and ends with the direct disposal of the spent fuel.
Closed fuel cycle: the spent fuel stored for a long time can be safely handled and it undergoes reprocessing in order to recover and recycle a large amount of it.
According to the first option, the spent nuclear fuel is considered complete waste. After an interim storage period (wet and dry), all the used fuel is directly disposed of in a deep geological repository. Several geological media and repository designs are being studied, according to the different natures of the fuel, its burnup, radioactive inventory and decay heat generation. The assessment of a geological disposal is based on the multi-barrier approach, which combines the effects of multiple engineered and natural barriers in order to delay potential migration of long-lived radionuclides to the biosphere over time.
Reprocessing is the alternative option for the spent fuel after a long interim storage. The used fuel is no longer considered waste but a future energy resource. A large amount of fertile uranium (^{238}U) is discharged along with a small quantity of fissile material (^{235}U) and a non-negligible portion of high-level waste products and transuranic elements, which strongly contribute to the long-term radiotoxicity of the spent nuclear fuel. The recovery and recycling of uranium and plutonium were the first steps in developing a closed fuel cycle. Furthermore, a strong reduction of the volume, radiotoxicity and heat load of the spent nuclear fuel can be efficiently achieved.
Despite the benefits of this first reprocessing approach, an amount of waste must be treated, stored and disposed of in a deep geological repository over a long period of time. Waste from reprocessing and spent nuclear fuel are classified as High Level Waste (HLW) according to the IAEA guidance due to the high emission of radioactivity and decay heat.
The first reprocessing approach is based on the PUREX (Plutonium Uranium Reduction EXtraction) process, which is the standard and mature technology applied worldwide to recover uranium and plutonium from spent nuclear fuel at industrial scale. Following the dissolution of the spent fuel in nitric acid and the removal of uranium and plutonium, the generated secondary waste still contains fission and activation products along with transuranic elements that must be isolated from the biosphere. Uranium and plutonium are recovered by the well-known tributyl phosphate (TBP) ligand in a liquid-liquid extraction process.
Reprocessing allows the recycling of the uranium and plutonium into fresh fuel (RepU and MOX) and a strong reduction of volume, decay heat and radiotoxicity of the HLW. A measure of the HLW hazard is provided by radiotoxicity coming from the different nature of radionuclides. The SNF radiotoxicity is usually evaluated as a function of time and compared to the natural uranium ore. The spent nuclear fuel without reprocessing has a long-term toxicity that is mainly dominated by transuranic elements. Mainly due to plutonium, SNF without reprocessing reaches the reference radiotoxicity level after about 300,000 years. After uranium and plutonium removal, HLW is less radioactive and it decays to the reference level within 10,000 years. Since minor actinides (MAs) also contribute to the long-term decay heat and radiotoxicity of the spent fuel, an advanced reprocessing could further reduce the radiotoxic inventory with a decay to the reference level of about 300 years.
Partitioning & Transmutation strategy
Many efforts are being devoted to developing an advanced reprocessing approach with the aim of further reducing the radiotoxic inventory of the spent nuclear fuel by removing all minor actinides (neptunium, americium and curium) and the long-lived fission products (LLFP) from the highly active raffinate downstream of the PUREX process. Before the conditioning process, the long-lived radionuclides undergo transmutation into short-lived or stable nuclides by nuclear reactions. This coupled approach is known as the Partitioning and Transmutation (P&T) strategy; its inclusion in an advanced closed fuel cycle could strongly reduce the long-term radiotoxicity, volume and decay heat of the final waste, thus simplifying the performance assessment of a future nuclear waste repository and enhancing proliferation resistance.
Two potential process options for the partitioning of spent nuclear fuel are being developed: hydrometallurgical and pyrometallurgical processes. The hydrometallurgical partitioning, also known as the solvent extraction process, was born and developed in Europe, thereby becoming the reference technology for future SNF reprocessing at the industrial level, whereas the pyrometallurgical option started in the United States and Russia as an alternative to the aqueous processes. Unlike plutonium, americium and curium show a very low affinity towards the TBP ligand, thus requiring further advanced separation processes and different extracting agents, such as nitrogen-bearing ligands, also known as soft donors, which have shown a very good affinity towards actinides. An efficient and selective separation of actinides from lanthanides (Ln) is crucial to meet the closed fuel cycle goal. Lanthanide ions, present in a large mass ratio with respect to actinides in the PUREX raffinate, have a high neutron-capture cross section that would not allow an efficient minor actinide transmutation. The presence of uranium isotopes and other impurities in the transmutation target could generate further radiotoxic transuranic isotopes by neutron capture, instead of more stable nuclides. Most of the radiotoxic nuclides could be transmuted by thermal neutrons in conventional reactors, but the process would take a long time due to the low transmutation efficiency. Recent research is focusing on innovative nuclear transmuters such as Gen IV fast reactors and hybrid reactors (accelerator-driven systems). The final product left by the P&T process will be a dense vitrified waste to be disposed of for a shorter period of time. All the benefits coming from the management of nuclear waste by this advanced approach would be a step towards a sustainable energy source and increased public acceptance of nuclear energy.
Overview of European experience in nuclear partitioning
A lot of research funded by the European Commission is being devoted to hydrometallurgical processes for the partitioning and transmutation of trivalent actinides (An). These research programmes first led to multicycle processes and subsequently to the development of simplified and innovative processes. The hydrometallurgical partitioning consists of two relevant steps: extraction and stripping. In the first step the organic phase, containing the extracting ligand dissolved in a suitable solvent, is contacted with the aqueous phase coming from the dissolution of the irradiated fuel. The solutes present in the aqueous phase are extracted by a complexation reaction with the extracting agent and transferred into the organic phase, in which the formed complexes are soluble. The second step, known as stripping, is obtained by reversing the complexation reaction, with the solutes back-extracted into another aqueous solution, usually differing in acidity from the previous one. The main goal is to develop reliable and affordable industrial separation processes based on lipophilic and hydrophilic ligands that selectively extract minor actinides from the (3–4) M acidic target waste downstream of the PUREX process, with the more challenging goal of minimizing the amount of solid secondary waste. The CHON principle was introduced to meet this further process requirement, according to which all extractants and molecular reagents used in the developed processes have to contain only atoms of carbon (C), hydrogen (H), oxygen (O) and nitrogen (N), so that the spent reagents are fully incinerable and do not add to the solid waste to be disposed of.
The industrial separation processes will be implemented stepwise by annular centrifugal contactors, developed for the first time at Argonne National Laboratory in the 1970s. The countercurrent process consists of the aqueous and organic phases moving continuously in opposite directions stage by stage. The two immiscible liquids enter each contactor unit, where they are first contacted in the annular region between the housing and the spinning rotor and then centrifuged in the inner part of the unit. Two main routes are currently followed within the partitioning strategy: heterogeneous and homogeneous recycling. All the first European research projects on hydrometallurgical partitioning started within the heterogeneous recycling route, since none of the developed extracting agents was able to selectively extract actinides directly downstream of the PUREX process. This led research first to develop multi-stage and multi-cycle processes. A two-cycle process (DIAMide EXtraction + Selective ActiNide EXtraction) was developed for a selective actinide extraction downstream of a first co-extraction (DIAMEX) of actinides and lanthanides. The recent joint research projects aim to develop innovative processes with a reduced number of cycles to directly extract the minor actinides (americium and curium) from the PUREX raffinate in one cycle, either by a lipophilic extractant (1cycle-SANEX) or by a hydrophilic ligand (innovative-SANEX). Recent research efforts are being devoted to homogeneous recycling by Grouped Actinides Extraction (GANEX), which consists of an initial uranium recovery (GANEX 1) and a subsequent group separation of the plutonium, neptunium, americium and curium actinide ions (GANEX 2).
Industrial requirements for an extracting agent
The European experience in nuclear partitioning led to advanced hydrometallurgical separation processes. However, the feasibility of these advanced partitioning processes at industrial level relies on the use of reliable and affordable extracting agents, which have to meet these relevant industrial requirements:
Affinity towards An(III) over Ln(III)
Good An(III) back-extraction to enable solvent recycling
Good solubility in a proper diluent
CHON compliance to reduce secondary waste
Fast complexation kinetics
Hydrodynamic stability to prevent third phase and precipitates formation during extraction process
Chemical and radiolytic stability to prevent solvent degradation
Research on advanced reprocessing of spent nuclear fuel moves on by developing process optimization studies and designing new potential lipophilic and hydrophilic extracting agents that fulfill these industrial requirements.
Actinide partitioning: complexation mechanism
The selective separation of actinides from the PUREX raffinate by advanced processes needs new extracting agents, which must possess a more pronounced affinity towards actinides over lanthanides and other products mostly present in the acidic fuel dissolution. The design and the synthesis of efficient extracting agents rely on a deep knowledge of the complexation mechanism involved in the extraction process. Moreover, the structure and the stability of the ligand complexes with An(III) and Ln(III) upon extraction process, and the ligand selectivity need to be investigated. Research is being devoted to design more N-donor extracting agents, which show promising selectivity towards actinides.
Solvent extraction and complexation chemistry
The liquid-liquid extraction for selective actinide partitioning (SANEX-like processes) consists of an organic phase, containing an extracting agent dissolved in a suitable solvent mixture, and an aqueous phase, containing the irradiated fuel dissolved in hot nitric acid. The two phases are vigorously mixed to promote the extraction kinetics. Centrifugation is performed to favour the phase separation and the transfer of the formed complexes from the depleted aqueous phase (raffinate) into the organic phase (extract), where they are more soluble. This solvent separation can be performed by neutral extracting agents dissolved in the solvent. As shown in the equation below, the solvating ligand (L) extracts the metal cation of interest (M) together with its anion (A):

$$\mathrm{M^{z+}_{(aq)} + z\,A^{-}_{(aq)} + n\,L_{(org)} \rightleftharpoons [MA_{z}L_{n}]_{(org)}}$$

The products of this reaction represent all the potential complexes that can form during an extraction process, with a varying number $n$ of coordinated ligands.
The efficiency and the selectivity of the extraction and separation processes can be evaluated by the distribution coefficients ($D$) and the separation factors ($SF$), as shown by the equations:

$$D_{M} = \frac{[M]_{org}}{[M]_{aq}}, \qquad SF_{M_1/M_2} = \frac{D_{M_1}}{D_{M_2}}.$$

The distribution coefficient is calculated as the ratio between the concentrations of the metal cation in the organic and aqueous phases, whereas the separation factor is calculated as the ratio between two distribution coefficients.
To perform a selective separation, the distribution ratio of the solute to be extracted must be greater than one, whereas that of the solutes which remain in the aqueous feed must be lower than one; this always yields a separation factor SF > 1. Generally, the effects of acidity and temperature on the distribution ratios and the separation factor are investigated, because the main actinide and lanthanide species could be prone to decomplexation upon increasing acidity, due to protonation of the ligand, or upon increasing temperature. The thermodynamic effects are usually investigated by performing extraction tests at increasing temperature. Furthermore, thermodynamic studies can assess the effect of the different alkyl chains of a ligand on its complexation properties towards minor actinides and lanthanides. The extraction processes are based on the complexation of metal ions with lipophilic or hydrophilic ligands. The extracting agent forms a coordination complex with the metal ion as the product of a Lewis acid-base reaction. Ligands are termed bases (donors) and contain at least one electron lone pair to donate to metal ions, termed acids (acceptors). Metal cations in the aqueous feed raffinate are generally solvated by coordinating water molecules through the donor oxygen atoms to form aquo ions [M(H2O)_{n}]^{3+}. The complexation of a metal ion therefore implies the replacement of the coordinated water molecules by the respective ligands. The speed of this substitution plays a crucial role in the complexation kinetics and the subsequent extraction processes. The replacement can be slow for an inert complex or rapid for a labile complex. The ligand can replace all the coordinated water molecules to form an inner-sphere complex, or just some of them to form an outer-sphere complex. The complexation reaction is theoretically based on Pearson's theory of hard and soft acids and bases, according to which hard acids form strong complexes with hard bases and, likewise, soft acids form strong complexes with soft bases. In aqueous solutions, hard-hard interactions are electrostatic, while soft-soft interactions usually show a covalent character.
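As a small numerical illustration of the distribution coefficient and separation factor defined above, the following Python sketch uses invented concentration values for a hypothetical americium/europium extraction test (they are not experimental data):

```python
# Distribution coefficients D = [M]_org / [M]_aq and separation factor
# SF = D_Am / D_Eu for a hypothetical americium/europium extraction test.
# Concentrations are illustrative values, not experimental data.

def distribution_coefficient(c_org: float, c_aq: float) -> float:
    """Ratio of metal concentration in the organic and aqueous phases."""
    return c_org / c_aq

d_am = distribution_coefficient(c_org=4.2e-4, c_aq=6.0e-5)   # D_Am = 7.0
d_eu = distribution_coefficient(c_org=9.0e-5, c_aq=1.5e-4)   # D_Eu = 0.6

sf_am_eu = d_am / d_eu   # separation factor SF(Am/Eu) ~ 11.7

# A selective separation requires D > 1 for the extracted solute and
# D < 1 for the solute left in the aqueous feed, hence SF > 1.
print(f"D(Am) = {d_am:.2f}, D(Eu) = {d_eu:.2f}, SF(Am/Eu) = {sf_am_eu:.1f}")
```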
The formation of strong complexes always implies either a large gain of entropy or a large decrease of enthalpy, thereby yielding a large negative value of the complexation free energy. According to Pearson's theory, lanthanide and actinide ions are considered hard acids, so they bind especially to ligands bearing hard donors, such as oxygen atoms, through electrostatic interactions. The charge of actinide and lanthanide ions in solution is substantially +3, and the difference in size between these cations is very small; thus, an efficient separation of minor actinides from lanthanides is very challenging. Actinides appear to be slightly less hard than lanthanides, probably due to the longer spatial extent of the 5f atomic orbitals with respect to the 4f ones, so a selective separation is possible using ligands bearing soft donors, such as nitrogen and sulfur atoms, through a bonding nature different from that of hard donors. Despite the potential separation of actinides, a direct extraction from the PUREX raffinate is very hard, due to the presence of many other interfering species in the feed, such as activation and fission products. In this perspective, advanced separation processes involving very selective extracting agents are required to achieve an efficient partitioning for the subsequent transmutation process. To sum up, knowledge of the hydration, acid-base interactions, kinetics, thermodynamic stabilities and speciation of actinide and lanthanide ions with ligands is of great value for understanding a solvent extraction process and for designing new and promising extracting agents.
Insight into complexation by spectroscopic techniques
The main objective is to elucidate the complexation and extraction mechanisms of a ligand in order to develop a reliable and affordable extraction process at industrial scale. In this perspective, the formation and the stability of different metal-ligand complexes are first investigated by different spectroscopic techniques on a laboratory scale. Preliminary studies can be performed by Electrospray ionization mass spectrometry (ESI-MS) to qualitatively explore the ligand complexes with lanthanides or actinides. Moreover, quantitative information about speciation and complexation of the ligand with some metal ions representatives of actinides and lanthanides can be obtained by Time-resolved fluorescence spectroscopy (TRLFS) experiments.
Electrospray ionization mass spectrometry
Electrospray ionization mass spectrometry is a very versatile technique, consisting in the transfer of the formed complexes from the injected solution to the gas phase by a soft ionization process, without strongly perturbing the complex stability. In addition, only a small amount of the prepared solution needs to be injected to obtain the speciation spectra. The speciation of several metal ions can be investigated in monophasic solutions at increasing ligand concentration, in order to explore all potential complexes. Collision-induced dissociation (CID) analysis can also be performed to assess the kinetic stability of the formed complexes by uncovering the main fragmentation pathway of the ligand. Besides, the effect of protonation on the complexation mechanism can be observed by performing analyses on monophasic solutions at increasing nitric acid concentration. Corroboration of the major complexes involved in the extraction process is generally obtained by performing experiments on biphasic solutions from extraction tests. Despite the versatility of this spectroscopic technique, which directly provides information as the ligand-to-metal ratio is changed, its qualitative nature, the instrumental set-up and potential changes in the solution chemistry could partially affect the species distribution and its ion abundances. For these reasons, corroboration of the speciation results needs to be sought with other spectroscopic techniques.
Time Resolved Laser Fluorescence spectroscopy
Time-resolved laser fluorescence spectroscopy is a sensitive spectroscopic method able to investigate the formation of different complex species at sub-micromolar concentrations. Thanks to the excellent spectroscopic properties of some metal cations representative of actinides and lanthanides, fluorescence analyses by laser excitation of the ion energy levels can be carried out on monophasic and biphasic solutions. The fluorescence evolution resulting from the ion energy transitions is generally followed as a function of ligand concentration in monophasic titration experiments. The bathochromic shift of the fluorescence spectra is due to the ligand complexation. According to the postulated complexation model and a slope analysis of the experimental data, the stoichiometry of the major complexes can be determined. Moreover, the cumulative stability constants and the separation factor can be calculated, thus pointing out the potential ligand affinity towards actinides.
The fluorescence lifetime measurement is an additional way to follow the evolution of the metal-ion complexation with the ligand, starting from the initial solvent species up to the more stable complexes. Each species can be identified by a typical lifetime related to the decay of the emission intensity. Fluorescence lifetime measurements on the monophasic and biphasic solutions can confirm the formation of the major complexes, thanks to the correlation between the fluorescence lifetime and the number of water molecules potentially present in the inner coordination sphere of the metal ions. Fluorescence spectra can also be obtained at increasing temperature, to study the complexation thermodynamics and to investigate the complex stability at experimental conditions closer to industrial applications.
References
External links
Partitioning and Transmutation for waste management
The sustainability of used nuclear fuel management
Nuclear fuels
Uranium
Plutonium
Actinides
Nuclear reprocessing
Separation processes by phases
Liquid-liquid separation | Advanced reprocessing of spent nuclear fuel | [
"Chemistry"
] | 4,364 | [
"Separation processes",
"Separation processes by phases",
"Liquid-liquid separation"
] |
62,960,782 | https://en.wikipedia.org/wiki/Heinz%20Raether | Heinz Artur Raether (14 October 1909 — 31 December 1986) was a German physicist. He is best known for his theoretical and experimental contributions to the study of surface plasmons, as well as for the Kretschmann–Raether configuration, a commonly used experimental setup for the excitation of surface plasmon resonances.
From 1944 to 1946 he was a professor of physics at the University of Jena at the Physikalisches Institut. Here he dealt with electron physics, electron microscopy, electron interference and gas discharges.
In 1951, he took over the management of the Institute for Applied Physics at the University of Hamburg. After the development of the transistor, he focused on solid state physics. His work during this period concerned the structure and growth of crystals. Later he became interested in the collective behavior of the electrons of a crystal, the solid-state electron plasma.
In gas discharge physics, he devoted himself to the ignition process, especially the formation of the spark channel, the initial phase of electrical breakdown. In 1963 he was elected a full member of the Göttingen Academy of Sciences. In 1979 he was elected a member of the Academy of Sciences Leopoldina.
Selected publications
Articles
Books
See also
Raether limit
Surface plasmon polariton
Townsend discharge
References
1909 births
1986 deaths
Condensed matter physicists
Electrical breakdown
Optical physicists
People associated with electricity
Scientists from Nuremberg
German plasma physicists
Plasmonics
Academic staff of the University of Hamburg
Academic staff of the University of Jena
20th-century German physicists
Nanophysicists | Heinz Raether | [
"Physics",
"Chemistry",
"Materials_science"
] | 313 | [
"Plasmonics",
"Physical phenomena",
"Condensed matter physicists",
"Surface science",
"Electrical phenomena",
"Condensed matter physics",
"Electrical breakdown",
"Nanotechnology",
"Solid state engineering"
] |
54,426,201 | https://en.wikipedia.org/wiki/Unique%20homomorphic%20extension%20theorem | The unique homomorphic extension theorem is a result in mathematical logic which formalizes the intuition that the truth or falsity of a statement can be deduced from the truth values of its parts.
The lemma
Let A be a non-empty set, X a subset of A, F a set of functions on A, and $X_+$ the inductive closure of X under F.
Let B be any non-empty set and let G be a set of functions on B, such that with each function $f$ of arity $n$ in $F$ there is associated a function $g_f$ of arity $n$ in $G$ (this association need not be a bijection).
From this lemma we can now build the concept of unique homomorphic extension.
The theorem
If $X_+$ is a free set generated by X and F, then for each function $h \colon X \to B$ there is a unique function $\hat h \colon X_+ \to B$ such that:
(1) $\hat h(x) = h(x)$ for each $x \in X$;
(2) $\hat h(f(x_1, \ldots, x_n)) = g_f(\hat h(x_1), \ldots, \hat h(x_n))$ for each function $f$ of arity $n > 0$ and for each $x_1, \ldots, x_n \in X_+$.
Consequence
The identities seen in (1) and (2) show that $\hat h$ is a homomorphism, specifically named the unique homomorphic extension of $h$. To prove the theorem, two requirements must be met: to prove that the extension $\hat h$ exists and that it is unique.
Proof of the theorem
We must define a sequence of functions $h_n$ inductively, satisfying conditions (1) and (2) restricted to $X_n$, the $n$-th stage of the inductive closure of X under F. For this, we define $h_0 = h$, and given $h_n$, the function $h_{n+1}$ shall have the following graph:

$$\{\,(f(x_1, \ldots, x_k),\ g_f(h_n(x_1), \ldots, h_n(x_k))) \mid f \in F,\ x_1, \ldots, x_k \in X_n\,\} \cup \mathrm{graph}(h_n).$$
First we must be certain that the graph actually defines a function. Since $X_+$ is a free set, the lemma gives us that the image sets $f(X_+^k)$ are pairwise disjoint and disjoint from $X$, so we only have to determine the functionality for the left side of the union. Knowing that the elements of G are functions (again, as defined by the lemma), the only instance where two pairs could share a first component is when $f(x_1, \ldots, x_k) = f(y_1, \ldots, y_k)$ for the same $f$ and some generators $x_1, \ldots, x_k$ and $y_1, \ldots, y_k$ in $X_n$.
Since $X_+$ is freely generated, this implies $x_1 = y_1, \ldots, x_k = y_k$. All these being in $X_n$, we must have $h_n(x_1) = h_n(y_1), \ldots, h_n(x_k) = h_n(y_k)$.
Then the two pairs share the same second component $g_f(h_n(x_1), \ldots, h_n(x_k))$, displaying functionality.
Before moving further we must make use of a new lemma that determines a rule for partial functions; it may be written as:
(3) Let $(h_n)_{n \geq 0}$ be a sequence of partial functions such that $h_n \subseteq h_{n+1}$ for every $n$. Then $\bigcup_{n \geq 0} h_n$ is a partial function.
Using (3), $\hat h = \bigcup_{n \geq 0} h_n$ is a partial function. Since $X_+ = \bigcup_{n \geq 0} X_n$, the function $\hat h$ is total on $X_+$.
Furthermore, it is clear from the definition of $\hat h$ that it satisfies (1) and (2). To prove the uniqueness of $\hat h$ against any other function satisfying (1) and (2), it is enough to use a simple induction showing that the two functions agree on each $X_n$; and so the unique homomorphic extension theorem is proved.
Example of a particular case
We can use the theorem of unique homomorphic extension to recursively calculate the truth value of propositional expressions. First, we must define the following:
Let $BOOL = \{\mathbf{T}, \mathbf{F}\}$ be the set of truth values, and let $PS = \{P_1, P_2, \ldots\}$ be a countable set of propositional symbols.
Let $PROP$ be the inductive closure of $PS$ under the logical connectives $\{\neg, \wedge, \vee, \Rightarrow\}$.
Let $v \colon PS \to BOOL$ be an assignment of truth values to the atomic propositions, and let $H_\neg, H_\wedge, H_\vee, H_\Rightarrow$ be the truth functions on $BOOL$ associated with the connectives.
Then $\hat v \colon PROP \to BOOL$ will be the function that recursively calculates the truth value of a proposition and, in a way, is the unique homomorphic extension of the function $v$ that associates a truth value with each atomic proposition, such that (a code sketch of this recursion is given after the list):
(1) $\hat v(P) = v(P)$ for each $P \in PS$
(2) $\hat v(\neg \varphi) = H_\neg(\hat v(\varphi))$ (Negation)
$\hat v(\varphi \wedge \psi) = H_\wedge(\hat v(\varphi), \hat v(\psi))$ (AND Operator)
$\hat v(\varphi \vee \psi) = H_\vee(\hat v(\varphi), \hat v(\psi))$ (OR Operator)
$\hat v(\varphi \Rightarrow \psi) = H_\Rightarrow(\hat v(\varphi), \hat v(\psi))$ (IF-THEN Operator)
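The recursion above can be sketched in Python; the tuple-based formula encoding and the function names below are illustrative choices, not part of the theorem:

```python
# Recursive truth-value evaluation: the unique homomorphic extension of
# a truth assignment v on atomic propositions to all formulas.
# Formulas are encoded as nested tuples (an illustrative choice):
#   "P"                                   an atomic proposition
#   ("not", phi)                          negation
#   ("and" / "or" / "implies", phi, psi)  binary connectives

BINARY_OPS = {
    "and": lambda a, b: a and b,
    "or": lambda a, b: a or b,
    "implies": lambda a, b: (not a) or b,
}

def evaluate(formula, v):
    """Extend the assignment v (dict: atom -> bool) homomorphically."""
    if isinstance(formula, str):            # clause (1): atomic case
        return v[formula]
    op, *args = formula                     # clause (2): inductive cases
    if op == "not":
        return not evaluate(args[0], v)
    return BINARY_OPS[op](evaluate(args[0], v), evaluate(args[1], v))

# Example: (P and not Q) implies R, under v(P)=T, v(Q)=F, v(R)=F.
phi = ("implies", ("and", "P", ("not", "Q")), "R")
print(evaluate(phi, {"P": True, "Q": False, "R": False}))   # False
```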
References
Theorems in analysis | Unique homomorphic extension theorem | [
"Mathematics"
] | 649 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical problems",
"Mathematical theorems"
] |
54,426,651 | https://en.wikipedia.org/wiki/Ackermann%27s%20formula | In control theory, Ackermann's formula is a control system design method for solving the pole allocation problem for time-invariant systems, proposed by Jürgen Ackermann. One of the primary problems in control system design is the creation of controllers that will change the dynamics of a system by changing the eigenvalues of the matrix representing the dynamics of the closed-loop system. This is equivalent to changing the poles of the associated transfer function in the case that there is no cancellation of poles and zeros.
State feedback control
Consider a linear continuous-time invariant system with a state-space representation

$$\dot{\mathbf{x}}(t) = A\mathbf{x}(t) + B\mathbf{u}(t),$$
$$\mathbf{y}(t) = C\mathbf{x}(t),$$

where $\mathbf{x}$ is the state vector, $\mathbf{u}$ is the input vector, and $A$, $B$, $C$ are matrices of compatible dimensions that represent the dynamics of the system. An input-output description of this system is given by the transfer function

$$G(s) = C(sI - A)^{-1}B = C\,\frac{\operatorname{adj}(sI - A)}{\det(sI - A)}\,B,$$

where $\det$ is the determinant and $\operatorname{adj}$ is the adjugate.
Since the denominator of the right-hand expression is given by the characteristic polynomial of $A$, the poles of $G$ are eigenvalues of $A$ (note that the converse is not necessarily true, since there may be cancellations between terms of the numerator and the denominator). If the system is unstable, or has a slow response, or any other characteristic that does not meet the design criteria, it could be advantageous to make changes to it. The matrices $A$, $B$, $C$, however, may represent physical parameters of a system that cannot be altered. Thus, one approach to this problem might be to create a feedback loop with a gain $K$ that will feed the state variable $\mathbf{x}$ back into the input $\mathbf{u}$.
If the system is controllable, there is always an input $\mathbf{u}(t)$ such that any state $\mathbf{x}_0$ can be transferred to any other state $\mathbf{x}(t)$. With that in mind, a feedback loop can be added to the system with the control input $\mathbf{u}(t) = \mathbf{r}(t) - K\mathbf{x}(t)$, such that the new dynamics of the system will be

$$\dot{\mathbf{x}}(t) = A\mathbf{x}(t) + B(\mathbf{r}(t) - K\mathbf{x}(t)) = (A - BK)\mathbf{x}(t) + B\mathbf{r}(t).$$

In this new realization, the poles depend on the characteristic polynomial of $A - BK$, that is

$$\Delta_{\text{new}}(s) = \det(sI - (A - BK)).$$
Ackermann's formula
Computing the characteristic polynomial and choosing a suitable feedback matrix can be a challenging task, especially in larger systems. One way to make computations easier is through Ackermann's formula. For simplicity's sake, consider a single-input system with no reference parameter $\mathbf{r}$, such as

$$\mathbf{u}(t) = -\mathbf{k}^\top \mathbf{x}(t),$$

where $\mathbf{k}^\top$ is a feedback vector of compatible dimensions. Ackermann's formula states that the design process can be simplified by only computing the following equation:

$$\mathbf{k}^\top = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \end{bmatrix}\,\mathcal{C}^{-1}\,\Delta_{\text{new}}(A),$$

in which $\Delta_{\text{new}}(A)$ is the desired characteristic polynomial evaluated at the matrix $A$, and $\mathcal{C} = \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}$ is the controllability matrix of the system.
Proof
This proof is based on the Encyclopedia of Life Support Systems entry on pole placement control. Assume that the system is controllable, and let

$$\Delta_{\text{new}}(s) = s^n + \alpha_{n-1}s^{n-1} + \cdots + \alpha_1 s + \alpha_0$$

be the desired characteristic polynomial of $A_{cl} = A - B\mathbf{k}^\top$. Calculating the powers of $A_{cl}$ results in

$$A_{cl}^j = (A - B\mathbf{k}^\top)^j = A^j - \sum_{i=1}^{j} A^{j-i}B\mathbf{k}^\top A_{cl}^{i-1}.$$

Replacing the previous equations into $\Delta_{\text{new}}(A_{cl})$ yields

$$\Delta_{\text{new}}(A_{cl}) = \Delta_{\text{new}}(A) - \sum_{j=1}^{n} \alpha_j \sum_{i=1}^{j} A^{j-i}B\mathbf{k}^\top A_{cl}^{i-1}, \qquad \alpha_n := 1.$$

Rewriting the above equation as a matrix product and grouping the terms in which $\mathbf{k}^\top$ does not appear isolated yields

$$\Delta_{\text{new}}(A_{cl}) = \Delta_{\text{new}}(A) - \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix} \begin{bmatrix} \ast \\ \vdots \\ \ast \\ \mathbf{k}^\top \end{bmatrix}.$$

From the Cayley–Hamilton theorem, $\Delta_{\text{new}}(A_{cl}) = 0$, thus

$$\Delta_{\text{new}}(A) = \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix} \begin{bmatrix} \ast \\ \vdots \\ \ast \\ \mathbf{k}^\top \end{bmatrix}.$$

Note that $\mathcal{C} = \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}$ is the controllability matrix of the system. Since the system is controllable, $\mathcal{C}$ is invertible. Thus,

$$\mathcal{C}^{-1}\Delta_{\text{new}}(A) = \begin{bmatrix} \ast \\ \vdots \\ \ast \\ \mathbf{k}^\top \end{bmatrix}.$$

To find $\mathbf{k}^\top$, both sides can be multiplied from the left by the vector $\begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \end{bmatrix}$, giving

$$\begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \end{bmatrix}\,\mathcal{C}^{-1}\Delta_{\text{new}}(A) = \mathbf{k}^\top.$$

Thus,

$$\mathbf{k}^\top = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \end{bmatrix}\,\mathcal{C}^{-1}\Delta_{\text{new}}(A).$$
Example
Consider a system with state matrix $A$ and input matrix $B$.
We know from the characteristic polynomial of $A$ that the system is unstable, since the matrix $A$ only has positive eigenvalues. Thus, to stabilize the system we shall apply a feedback gain $\mathbf{k}^\top$.
From Ackermann's formula, we can find a vector $\mathbf{k}^\top$ that will change the system so that its characteristic equation equals a desired polynomial. Suppose we choose a desired characteristic polynomial $\Delta_{\text{new}}(s)$ whose roots all have negative real part.
Evaluating the desired polynomial at $A$ and computing the controllability matrix $\mathcal{C}$ and its inverse, Ackermann's formula $\mathbf{k}^\top = \begin{bmatrix} 0 & \cdots & 0 & 1 \end{bmatrix}\mathcal{C}^{-1}\Delta_{\text{new}}(A)$ finally yields the stabilizing feedback gain; a numerical sketch of this computation is given below.
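The following Python/NumPy snippet is an illustrative numerical sketch of the procedure; the system matrices and the desired poles (-1 and -2) are assumptions chosen for demonstration, not values taken from a specific reference:

```python
import numpy as np

# Illustrative 2x2 system; matrices and desired poles are assumptions
# chosen for demonstration.
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
b = np.array([[1.0],
              [0.0]])

# Desired characteristic polynomial with poles at -1 and -2:
# Delta_new(s) = s^2 + 3s + 2, evaluated at the matrix A.
delta_A = A @ A + 3 * A + 2 * np.eye(2)

# Controllability matrix [b, Ab] and Ackermann's formula.
C = np.hstack([b, A @ b])
k = np.array([[0.0, 1.0]]) @ np.linalg.inv(C) @ delta_A

# Verify: the eigenvalues of A - b k are the desired poles.
print(np.linalg.eigvals(A - b @ k))   # approximately [-1, -2]
```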
State observer design
Ackermann's formula can also be used for the design of state observers. Consider the linear discrete-time observed system

$$\mathbf{x}[k+1] = A\mathbf{x}[k] + B\mathbf{u}[k],$$
$$\mathbf{y}[k] = C\mathbf{x}[k],$$

with observer gain $L$. Then Ackermann's formula for the design of state observers is noted as

$$L = \Delta_{\text{new}}(A)\,\mathcal{O}^{-1}\begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix},$$

with the observability matrix $\mathcal{O} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}$. Here it is important to note that this is the dual of the state-feedback problem: the same gain is obtained by applying the state-feedback formula to the transposed matrices $A^\top$ and $C^\top$ and transposing the result.
Ackermann's formula can also be applied to continuous-time observed systems.
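A corresponding sketch for the observer gain, again with assumed illustrative matrices and observer poles, applies the dual formula directly:

```python
import numpy as np

# Observer-gain sketch, dual to the state-feedback case.
# A, c and the observer poles are illustrative assumptions.
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
c = np.array([[1.0, 0.0]])                 # output (measurement) matrix

O = np.vstack([c, c @ A])                  # observability matrix [c; cA]
delta_A = A @ A + 8 * A + 16 * np.eye(2)   # desired observer poles at -4, -4

# L = Delta_new(A) * O^{-1} * [0 ... 0 1]^T  (equivalently, the
# state-feedback formula applied to the pair (A^T, c^T), transposed).
L = delta_A @ np.linalg.inv(O) @ np.array([[0.0], [1.0]])

print(np.linalg.eigvals(A - L @ c))        # approximately [-4, -4]
```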
See also
Full state feedback
References
External links
Chapter about Ackermann's Formula on Wikibook of Control Systems and Control Engineering
Engineering concepts
Control engineering
Control theory | Ackermann's formula | [
"Mathematics",
"Engineering"
] | 832 | [
"Applied mathematics",
"Control theory",
"Control engineering",
"nan",
"Dynamical systems"
] |
54,436,343 | https://en.wikipedia.org/wiki/Propionyl%20chloride | Propionyl chloride (also propanoyl chloride) is the organic compound with the formula CH3CH2C(O)Cl. It is the acyl chloride derivative of propionic acid. It undergoes the characteristic reactions of acyl chlorides. It is a colorless, corrosive, volatile liquid.
It is used as a reagent for organic synthesis. In derived chiral amides and esters, the methylene protons are diastereotopic.
There have been efforts to schedule propionyl chloride as a DEA List 1 chemical, as it can be used to synthesize fentanyl.
Synthesis
Propionyl chloride is industrially produced by chlorination of propionic acid with phosgene:
CH3CH2CO2H + COCl2 → CH3CH2COCl + HCl + CO2
References
Acyl chlorides
Reagents for organic chemistry | Propionyl chloride | [
"Chemistry"
] | 189 | [
"Reagents for organic chemistry"
] |
54,437,591 | https://en.wikipedia.org/wiki/Isobutyryl%20chloride | Isobutyryl chloride (2-methylpropanoyl chloride) is the organic compound with the formula (CH3)2CHC(O)Cl. A colorless liquid, it is the simplest branched-chain acyl chloride. It is prepared by chlorination of isobutyric acid.
Reactions
As an ordinary acid chloride, isobutyryl chloride is the subject of many reported transformations. Dehydrohalogenation of isobutyryl chloride with triethylamine gives 2,2,4,4-tetramethylcyclobutanedione. Treatment of isobutyryl chloride with hydrogen fluoride gives the acid fluoride.
References
Acyl chlorides
Reagents for organic chemistry | Isobutyryl chloride | [
"Chemistry"
] | 142 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs",
"Reagents for organic chemistry"
] |
71,619,423 | https://en.wikipedia.org/wiki/PGC%202933 | PGC 2933 (also known as LEDA 2933) is a faint dwarf irregular galaxy in the Sculptor Group. It can be seen in the southern constellation Phoenix. According to distance measurements, the galaxy is located 11.15 million light-years away.
Because it is situated in the Sculptor Group, it is one of the closest galaxies to the Milky Way. It is obscured by a few brighter stars and galaxies (the brightest of them on the right side of the photo is 1425 light-years away from the Solar System).
The galaxy has a diameter of 2,000 light years.
References
Dwarf irregular galaxies
Sculptor Group
Phoenix (constellation)
002933
540-032 | PGC 2933 | [
"Astronomy"
] | 132 | [
"Phoenix (constellation)",
"Constellations"
] |
44,391,105 | https://en.wikipedia.org/wiki/Pisier%E2%80%93Ringrose%20inequality | In mathematics, the Pisier–Ringrose inequality is an inequality in the theory of C*-algebras which was proved by Gilles Pisier in 1978, affirming a conjecture of John Ringrose. It is an extension of the Grothendieck inequality.
Statement
Theorem. If $\alpha$ is a bounded, linear mapping of one C*-algebra $\mathfrak{A}$ into another C*-algebra $\mathfrak{B}$, then

$$\left\| \sum_{j=1}^{n} \alpha(A_j)^{*}\alpha(A_j) + \alpha(A_j)\alpha(A_j)^{*} \right\| \le 4\,\|\alpha\|^{2}\,\left\| \sum_{j=1}^{n} A_j^{*}A_j + A_jA_j^{*} \right\|$$

for each finite set $\{A_1, \ldots, A_n\}$ of elements of $\mathfrak{A}$.
See also
Haagerup-Pisier inequality
Christensen-Haagerup Principle
Notes
References
Inequalities
Operator algebras | Pisier–Ringrose inequality | [
"Mathematics"
] | 113 | [
"Binary relations",
"Mathematical relations",
"Inequalities (mathematics)",
"Mathematical problems",
"Mathematical theorems"
] |
44,392,172 | https://en.wikipedia.org/wiki/Rich-club%20coefficient | The rich-club coefficient is a metric on graphs and networks, designed to measure the extent to which well-connected nodes also connect to each other. Networks which have a relatively high rich-club coefficient are said to demonstrate the rich-club effect and will have many connections between nodes of high degree. The rich-club coefficient was first introduced in 2004 in a paper studying Internet topology.
The "Rich-club" effect has been measured and noted on scientific collaboration networks and air transportation networks. It has been shown to be significantly lacking on protein interaction networks.
Definition
Non-normalized form
The rich-club coefficient was first introduced as an unscaled metric parametrized by node degree ranks. More recently, this has been updated to be parameterized in terms of node degrees k, indicating a degree cut-off. The rich-club coefficient for a given network N is then defined as:

$$\phi(k) = \frac{2E_{>k}}{N_{>k}(N_{>k}-1)},$$

where $E_{>k}$ is the number of edges between the nodes of degree greater than or equal to k, and $N_{>k}$ is the number of nodes with degree greater than or equal to k. This measures how many edges are present between nodes of degree at least k, normalized by how many edges there could be between these nodes in a complete graph. When this value is close to 1 for values of k close to the maximum degree $k_{\max}$, it is interpreted that the high-degree nodes of the network are well connected. The associated subgraph of nodes with degree at least k is also called the "rich club" graph.
Normalized for topology randomization
A criticism of the above metric is that it does not necessarily imply the existence of the rich-club effect, as it is monotonically increasing even for random networks. In certain degree distributions, it is not possible to avoid connecting high-degree hubs. To account for this, it is necessary to compare the above metric to the same metric on a degree-distribution-preserving randomized version of the network. This updated metric is defined as:

$$\rho(k) = \frac{\phi(k)}{\phi_{\mathrm{rand}}(k)},$$

where $\phi_{\mathrm{rand}}(k)$ is the rich-club metric on a maximally randomized network with the same degree distribution as the network under study. This new ratio discounts unavoidable structural correlations that are a result of the degree distribution, giving a better indicator of the significance of the rich-club effect.
For this metric, if for certain values of k we have $\rho(k) > 1$, this denotes the presence of the rich-club effect.
Generalizations
General richness properties
The natural definition of a node's "richness" is its number of neighbours. If instead we replace this with a generic richness metric r on nodes, then we can rewrite the unscaled rich-club coefficient as:

$$\phi(r) = \frac{2E_{>r}}{N_{>r}(N_{>r}-1)},$$

where we now consider the subgraph on only the nodes with a richness measure of at least r. For example, on scientific collaboration networks, replacing the degree richness (number of coauthors) with a strength richness (number of published papers) dramatically changes the topology of the rich-club graph.
Related metrics
Assortativity
The assortativity of a network is a measurement of how connected similar nodes are, where similarity is typically viewed in terms of node degree. The rich-club coefficient can be viewed as a more specific notion of assortativity, where we are only concerned with the connectivity of nodes beyond a certain richness metric. For example, if a network consisted of a collection of hubs and spokes, where the hubs were well connected to each other, such a network would be considered disassortative. However, due to the strong connectedness of the hubs in the network, the network would demonstrate the rich-club effect.
Applications
The rich-club coefficient of a network is useful as a heuristic measurement of the robustness of a network. A high rich-club coefficient implies that the hubs are well connected, and global connectivity is resilient to any one hub being removed. It is also useful for verifying theories that generalize to other networks. For example, the consistent observation of high rich-club coefficients for scientific collaboration networks adds evidence to the theory that within social groups, the elite tend to associate with one another.
Implementations
The rich-club coefficient has been implemented in NetworkX, a Python library for network analysis. This implementation includes both the non-normalized and normalized forms as described above.
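For instance, a minimal usage sketch in Python (the graph model and the parameter values below are illustrative assumptions, not prescriptions from the library documentation):

```python
import networkx as nx

# Illustrative scale-free test graph (Barabasi-Albert model).
G = nx.barabasi_albert_graph(1000, 3, seed=42)

# Non-normalized rich-club coefficient: a dict mapping each degree
# cut-off k to phi(k).
rc = nx.rich_club_coefficient(G, normalized=False)

# Normalized variant: divides phi(k) by the coefficient of a
# degree-preserving randomization of G (Q controls the rewiring amount).
rc_norm = nx.rich_club_coefficient(G, normalized=True, Q=100, seed=42)

print(rc[10], rc_norm[10])  # values at the degree cut-off k = 10
```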
See also
Assortativity
Preferential attachment
Structural cut-off
References
External links
NetworkX Documentation of Rich Club coefficient function
Networks
Network theory | Rich-club coefficient | [
"Mathematics"
] | 888 | [
"Network theory",
"Mathematical relations",
"Graph theory"
] |
44,393,382 | https://en.wikipedia.org/wiki/China%20Zebrafish%20Resource%20Center | The China Zebrafish Resource Center (CZRC) is a non-profit organization located at 7 Donghu South Road, Wuhan, focusing mainly on zebrafish resources. It was established in the Institute of Hydrobiology, Chinese Academy of Sciences, in October 2012, and is currently headed by board chairman Meng Anming.
Introduction
CZRC is a non-profit organization jointly supported by the Ministry of Science and Technology of China, and the Chinese Academy of Sciences. CZRC mainly focuses on collecting existing zebrafish resources and developing new lines and technologies, with the purpose of providing resources, technical and informational support for colleagues in China and overseas.
Board of directors
Honorary board chairman: Zhu Zuoyan
Board chairman: Meng Anming
Board secretary-general and director: Sun Yonghua
References
Danios
Animal models
Stem cell research
Regenerative biomedicine
Chinese Academy of Sciences
Organizations based in Wuhan | China Zebrafish Resource Center | [
"Chemistry",
"Biology"
] | 183 | [
"Stem cell research",
"Model organisms",
"Translational medicine",
"Animal models",
"Tissue engineering"
] |
44,399,572 | https://en.wikipedia.org/wiki/Modern%20Meat | Modern Meat: Antibiotics, Hormones, and the Pharmaceutical Farm is a 1984 book by Orville Schell on intensive animal farming and antibiotic use in livestock.
Reviews
One reviewer said that the book is a "startling introduction to today's mass-producing factory farms" but that it had the flaw of the author's "unrestrained personal bias and overdramatization of issues".
Another reviewer said that the book was controversial and "warns of subtle—but potentially dangerous—long-range effects of 'pharmaceutical farming.'"
A reviewer summarized the book's coverage as descriptions of "the indiscriminate use of 'subtherapeutic' antibiotics in animal feeds (probably contributing to the spread of antibiotic-resistant bacteria in both human and animal hosts); the use of diethylstilbestrol and other hormones; and (more briefly) the USDA meat-inspection programs--plus the industry's search for what could be described as nonfood feeds to simplify the stoking of four-footed machines."
The National Cattlemen's Beef Association called the book "one-sided" and "seriously flawed". Consumer advocate Ralph Nader called the book "precise and gripping".
References
External links
1984 non-fiction books
Animal welfare
Antibiotics
Books about food and drink
Intensive farming
Works about the meat industry | Modern Meat | [
"Chemistry",
"Biology"
] | 278 | [
"Biotechnology products",
"Eutrophication",
"Intensive farming",
"Antibiotics",
"Biocides"
] |
73,015,595 | https://en.wikipedia.org/wiki/Virtual%20photon | Virtual photons are a fundamental concept in particle physics and quantum field theory that play a crucial role in describing the interactions between electrically charged particles. Virtual photons are referred to as "virtual" because they do not exist as free particles in the traditional sense but instead serve as intermediate particles in the exchange of force between other particles. They are responsible for the electromagnetic force that holds matter together, making them a key component in our understanding of the physical world.
Virtual photons are thought of as fluctuations in the electromagnetic field, characterized by their energy, momentum, and polarization. These fluctuations allow electrically charged particles to interact with each other by exchanging virtual photons. The electromagnetic force between two charged particles can be understood as the exchange of virtual photons between them. These photons are constantly being created and destroyed, and the exchange of these virtual photons creates the electromagnetic force that is responsible for interaction between charged particles.
Virtual photons can be classified into positive and negative virtual photons. These classifications are based on the direction of their energy and momentum and their contribution to the electromagnetic force.
If virtual photons exchanged between particles have a positive energy, they contribute to the electromagnetic force as a repulsive force. This means that the two charged particles are repelled from each other and the electromagnetic force pushes them apart. On the other hand, if the virtual photons have a negative energy, they contribute to the electromagnetic force as an attractive force. This means that the two charged particles are attracted to each other and the electromagnetic force pulls them towards each other.
It is important to note that positive and negative virtual photons are not separate particles, but rather a way of classifying the virtual photons that exist in the electromagnetic field. These classifications are based on the direction of the energy and momentum of the virtual photons and their contribution to the electromagnetic force.
Virtual photons can have a range of polarizations, which can be described as the orientation of the electric and magnetic fields that make up the photon. The polarization of a virtual photon is determined by the direction of its momentum and its interaction with the charges that emit or absorb it. The range of polarizations for virtual photons can be compared to the range of colors for visible light, with each polarization corresponding to a specific orientation of the electric and magnetic fields.
Virtual photons are said to be "off-shell", which means that they do not obey the usual relationship between energy and momentum that applies to real particles. Real photons must always have energy equal to the speed of light times their momentum, but virtual photons can have any energy that is consistent with the uncertainty principle. This allows virtual photons to carry a wide range of energies, even if they are not physically real.
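This distinction can be summarized with the relativistic energy–momentum relation (standard textbook relations, reproduced here for illustration):

$$E^2 = (pc)^2 + (mc^2)^2 \quad\Rightarrow\quad E = pc \ \ \text{for a real (on-shell) photon, } m = 0,$$

$$E \neq pc \ \ \text{for a virtual (off-shell) photon,}$$

with the allowed departure from the mass shell constrained by the energy–time uncertainty relation $\Delta E\,\Delta t \gtrsim \hbar$.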
Virtual photons are responsible for Lamb shift, which is a small shift in the energy levels of hydrogen atoms caused by the interaction of the atom with virtual photons in the vacuum.
They are also responsible for the Casimir effect, which is the phenomenon of two uncharged metallic plates being attracted to each other due to the presence of virtual photons in the vacuum between them. The attractive force between the plates is caused by a difference in the density of virtual photons on either side of the plates, which creates a net force that pulls them together.
References
Quantum field theory
Photons | Virtual photon | [
"Physics"
] | 668 | [
"Quantum field theory",
"Quantum mechanics"
] |
49,260,321 | https://en.wikipedia.org/wiki/Smart%20manufacturing | Smart manufacturing is a broad category of manufacturing that employs computer-integrated manufacturing, high levels of adaptability and rapid design changes, digital information technology, and more flexible technical workforce training. Other goals sometimes include fast changes in production levels based on demand, optimization of the supply chain, efficient production and recyclability. In this concept, a smart factory has interoperable systems, multi-scale dynamic modelling and simulation, intelligent automation, strong cyber security, and networked sensors.
The broad definition of smart manufacturing covers many different technologies. Some of the key technologies in the smart manufacturing movement include big data processing capabilities, industrial connectivity devices and services, and advanced robotics.
Big data processing
Smart manufacturing leverages big data analytics to optimize complex production processes and enhance supply chain management. Big data analytics refers to a method for gathering and understanding large data sets in terms of what are known as the three V's, velocity, variety and volume. Velocity informs the frequency of data acquisition, which can be concurrent with the application of previous data. Variety describes the different types of data that may be handled. Volume represents the amount of data. Big data analytics allows an enterprise to use smart manufacturing to predict demand and the need for design changes rather than reacting to orders placed.
Some products have embedded sensors, which produce large amounts of data that can be used to understand consumer behavior and improve future versions of the product.
Advanced robotics
Advanced industrial robots, also known as smart machines, operate autonomously and can communicate directly with manufacturing systems. In some advanced manufacturing contexts, they can work with humans for co-assembly tasks. By evaluating sensory input and distinguishing between different product configurations, these machines are able to solve problems and make decisions independent of people. These robots are able to complete work beyond what they were initially programmed to do and have artificial intelligence that allows them to learn from experience. These machines have the flexibility to be reconfigured and re-purposed. This gives them the ability to respond rapidly to design changes and innovation, which is a competitive advantage over more traditional manufacturing processes. An area of concern surrounding advanced robotics is the safety and well-being of the human workers who interact with robotic systems. Traditionally, measures have been taken to segregate robots from the human workforce, but advances in robotic cognitive ability have opened up opportunities, such as cobots, for robots to work collaboratively with people.
Cloud computing allows large amounts of data storage or computational power to be rapidly applied to manufacturing, and allows large amounts of data on machine performance and output quality to be collected. This can improve machine configuration, predictive maintenance, and fault analysis. Better predictions can facilitate better strategies for ordering raw materials or scheduling production runs.
3D printing
As of 2019, 3D printing is mainly used in rapid prototyping, design iteration, and small-scale production. Improvements in speed, quality, and materials could make it useful in mass production and mass customization.
However, 3D printing has developed so much in recent years that it is no longer used only for prototyping. The 3D printing sector is moving beyond prototyping and is becoming increasingly widespread in supply chains. The industries where digital manufacturing with 3D printing is most visible are automotive, industrial and medical. In the auto industry, 3D printing is used not only for prototyping but also for the full production of final parts and products. 3D printing has also been used by suppliers and digital manufacturers coming together to help fight COVID-19.
3D printing allows companies to prototype more effectively, saving time and money because significant volumes of parts can be produced in a short period. Because of its potential to revolutionise supply chains, more companies are adopting it. The main challenge 3D printing faces is changing people's mindset; in addition, some workers will need to learn new skills to manage 3D printing technology.
Eliminating workplace inefficiencies and hazards
Smart manufacturing also encompasses surveying workplace inefficiencies and assisting in worker safety. Efficiency optimization is a major focus for adopters of "smart" systems, pursued through data research and intelligent learning automation. For instance, operators can be given personal access cards with built-in Wi-Fi and Bluetooth, which connect to the machines and a cloud platform to determine which operator is working on which machine in real time. An intelligent, interconnected "smart" system can be established to set a performance target, determine if the target is attainable, and identify inefficiencies through failed or delayed performance targets. In general, automation may alleviate inefficiencies due to human error, and as the underlying AI evolves it can eliminate inefficiencies its predecessors could not.
As robots take on more of the physical tasks of manufacturing, workers no longer need to be present and are exposed to fewer hazards.
Impact of Industry 4.0
Industry 4.0 is a project in the high-tech strategy of the German government that promotes the computerization of traditional industries such as manufacturing. The goal is the intelligent factory (Smart Factory) that is characterized by adaptability, resource efficiency, and ergonomics, as well as the integration of customers and business partners in business and value processes. Its technological foundation consists of cyber-physical systems and the Internet of Things.
This kind of "intelligent manufacturing" makes a great use of:
Wireless connections, both during product assembly and long-distance interactions with them;
Last generation sensors, distributed along the supply chain and the same products (Internet of things);
Elaboration of a great amount of data to control all phases of construction, distribution and usage of a good.
European Roadmap "Factories of the Future" and German one "Industrie 4.0″ illustrate several of the action lines to undertake and the related benefits. Some examples are:
Advanced manufacturing processes and rapid prototyping will make it possible for each customer to order a one-of-a-kind product without a significant cost increase.
Collaborative Virtual Factory (VF) platforms will drastically reduce the cost and time associated with new product design and engineering of the production process, by exploiting complete simulation and virtual testing throughout the product lifecycle.
Advanced human-machine interaction (HMI) and augmented reality (AR) devices will help increase safety in production plants and reduce the physical demands on workers (whose average age is rising).
Machine learning will be fundamental to optimizing production processes, both to reduce lead times and to reduce energy consumption.
Cyber-physical systems and machine-to-machine (M2M) communication will allow real-time data to be gathered and shared from the shop floor, reducing downtime and idle time through highly effective predictive maintenance.
Statistics
The Ministry of Economy, Trade and Industry in South Korea announced on 10 March 2016 that it had aided the construction of smart factories in 1,240 small and medium enterprises, which it said resulted in an average 27.6% decrease in defective products, 7.1% faster production of prototypes, and 29.2% lower cost.
See also
Open manufacturing
Fourth Industrial Revolution
References
External links
CESMII - US National Institute on Smart Manufacturing
Factories of the Future
Agnieszka Radziwon, Arne Bilberg, Marcel Bogers, Erik Skov Madsen. The Smart Factory: Exploring Adaptive and Flexible Manufacturing Solutions – Proceedings of the 24th DAAAM International Symposium on Intelligent Manufacturing and Automation, 23–26 October 2013, Zadar, Croatia. – Elsevier, Procedia Engineering, ISSN 1877-7058, 69 (2014),
Agnieszka Radziwon, Marcel Bogers, Arne Bilberg. The Smart Factory: Exploring an Open Innovation Solution for Manufacturing Ecosystems Date Written: May 28, 2014. Available at SSRN, 11 Pages. Posted: 1 Oct 2014
GE launches 'microfactory' to co-create the future of manufacturing
Manufacturing | Smart manufacturing | [
"Engineering"
] | 1,603 | [
"Manufacturing",
"Mechanical engineering"
] |
49,261,105 | https://en.wikipedia.org/wiki/Directed%20assembly%20of%20micro-%20and%20nano-structures | Directed assembly of micro- and nano-structures are methods of mass-producing micro to nano devices and materials. Directed assembly allows the accurate control of assembly of micro and nano particles to form even the most intricate and highly functional devices or materials.
Directed self-assembly
Directed self-assembly (DSA) is a type of directed assembly which utilizes block co-polymer morphology to create line, space and hole patterns, facilitating more accurate control of the feature shapes. It then uses surface interactions as well as polymer thermodynamics to finalize the formation of the final pattern shapes. To control the surface interactions and enable sub-10 nm resolution, a team from the Massachusetts Institute of Technology, the University of Chicago, and Argonne National Laboratory developed in 2017 a way to use a vapor-phase-deposited polymeric top layer on the block co-polymer film.
DSA is not a standalone process, but rather is integrated with traditional manufacturing processes in order to mass-produce micro and nano structures at a lower cost. Directed self-assembly is mostly used in the semiconductor and hard drive industries. The semiconductor industry uses this assembly method to increase resolution (fitting in more gates), while the hard drive industry uses DSA to manufacture "bit patterned media" at specified storage densities.
Micro-structures
There are many applications of directed assembly at the micro-scale, from tissue engineering to polymer thin-films. In tissue engineering, directed assembly has been able to replace the scaffolding approach to building tissues. This works by controlling the position and organization of different cells, the "building blocks" of the tissue, into desired micro-structures. This eliminates the inability to reproduce the same tissue, which is a major issue with the scaffolding approach.
Nanostructures
Nanotechnology provides methods for organizing materials such as molecules, polymers and other building blocks into precise nanostructures with many applications. Examples include peptide self-assembly into nanotubes and the single-wall carbon nanotube, which consists of a graphene sheet seamlessly wrapped into a cylinder and can be produced by laser vaporization of graphite enriched with a transition metal.
Nanoimprint lithography is a popular method of fabricating nanometer-scale patterns. The patterns are made by mechanical deformation of an imprint resist (a monomer or polymer formulation) and subsequent processing. The resist is then cured by heat or ultraviolet light, with the contact between resist and template controlled under conditions appropriate to the purpose. Nanoimprint lithography offers high resolution and throughput at low cost. Disadvantages include the time required for templating procedures, a lack of standard procedures (resulting in multiple fabrication methods), and limits on the patterns that can be formed.
With the goal of mitigating these disadvantages while applying nanotechnology to electronics, researchers at the National Science Foundation's Nanoscale Science and Engineering Center for High-Rate Nanomanufacturing (CHN) at Northeastern University, with partners UMass Lowell and the University of New Hampshire, have developed a directed assembly process for single-walled carbon nanotube (SWNT) networks to create a circuit template that can be transferred from one substrate to another.
Self-assembled monolayers on solid substrates
Self-assembled monolayers (SAMs) are made of a layer of organic molecules which forms naturally as an ordered lattice on the surface of a desired substrate. The molecules in the lattice attach chemically to the substrate at one end (the head group), while the other end (the end group) creates the exposed surface of the SAM.
Many types of SAMs can be formed. For example, thiols form SAMs on gold, silver, copper, or on some compound semiconductors such as InP and GaAs. By changing the tail group of the molecules, different surface properties can be obtained; SAMs can therefore be used to render surfaces hydrophobic or hydrophilic as well as to change the surface states of semiconductors. In self-assembly, the positioning of SAMs can define a chemical system precisely enough to target locations in a molecular-inorganic device. These characteristics make SAMs good candidates for molecular electronics: using SAMs to build electronic devices, and perhaps circuits, is an intriguing prospect because of their ability to provide the basis for very high-density data storage and high-speed devices.
Acoustic methods
Directed assembly using acoustic methods manipulates waves to allow non-invasive assembly of micro and nano structures. Because of this, acoustics are widely used in the biomedical industry to manipulate droplets, cells and other molecules.
Acoustic waves are generated by a piezoelectric transducer controlled by a pulse generator. These waves can then manipulate droplets of liquid and move them together to form a packed assembly. Moreover, the frequency and amplitude of the waves can be modified to achieve more accurate control of the behavior of a particular droplet or cell.
Optical methods
Directed assembly, or more specifically directed self-assembly, can produce a high pattern resolution (~10 nm) with high efficiency and compatibility. However, when using DSA in high-volume manufacturing, one must have a way to quantify the degree of order of the line/space patterns formed by DSA in order to reduce defects.
Conventional approaches to obtaining data for pattern quality inspection, such as critical dimension scanning electron microscopy (CD-SEM), are slow and labor-intensive. By contrast, optical scatterometer-based metrology is a non-invasive technique with very high throughput due to its larger spot size. This yields more statistical data than SEM, and data processing is automated with the optical technique, making it more practical than traditional CD-SEM.
Magnetic methods
Magnetic field directed self-assembly (MFDSA) allows the manipulation of dispersion and subsequent assembly of magnetic nanoparticles. This is widely used in the development of advanced materials whereby inorganic nanoparticles (NPs) are dispersed in polymers in order to enhance the properties of the materials.
The magnetic field technique allows particles to be assembled in 3D by performing the assembly in a dilute suspension where the solvent does not evaporate. It also does not require a template, and the approach improves the magnetic anisotropy along the chain direction.
Dielectrophoretic methods
Dielectrophoretic directed self-assembly utilizes an electric field that controls metal particles, such as gold nanorods, by inducing a dipole in the particles. By varying the polarity and strength of the electric field, the polarized particles are either attracted to positive regions or repelled from negative regions where the electric field has higher strength. This direct manipulation method transports the particles to position and orient them into a nano-structure on a receptor substrate.
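A minimal sketch of the underlying selection rule, assuming the standard time-averaged dielectrophoretic force on a small sphere, $F = 2\pi r^3 \varepsilon_m \operatorname{Re}[K(\omega)]\, \nabla |E_{rms}|^2$: the sign of the real part of the Clausius–Mossotti factor $K$ determines whether the particle is attracted to or repelled from high-field regions. The material values below are illustrative, not from the source:

```python
def clausius_mossotti(eps_p, eps_m, sigma_p, sigma_m, omega):
    """Complex Clausius-Mossotti factor K(w) for a sphere with complex
    permittivity eps* = eps - 1j*sigma/omega suspended in a medium."""
    ep = eps_p - 1j * sigma_p / omega
    em = eps_m - 1j * sigma_m / omega
    return (ep - em) / (ep + 2 * em)

# Illustrative values: a weakly polarizable but conductive particle
# in water-like medium, driven at about 1 MHz.
eps0 = 8.854e-12
K = clausius_mossotti(eps_p=10 * eps0, eps_m=80 * eps0,
                      sigma_p=1e-2, sigma_m=1e-4, omega=2 * 3.14159e6)
print("Re[K] =", K.real)  # positive -> attracted to high-field regions
```

Sweeping the drive frequency flips the sign of Re[K] for many particle/medium pairs, which is what lets the electric field both transport and sort particles during assembly.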
References
Manufacturing
Microtechnology
Nanotechnology | Directed assembly of micro- and nano-structures | [
"Materials_science",
"Engineering"
] | 1,441 | [
"Microtechnology",
"Materials science",
"Manufacturing",
"Mechanical engineering",
"Nanotechnology"
] |
49,262,559 | https://en.wikipedia.org/wiki/Dell%20PERC | A Dell PowerEdge RAID Controller, or Dell PERC, is a series of RAID, disk array controllers made by Dell for its PowerEdge server computers. The controllers support SAS and SATA hard disk drives (HDDs) and solid-state drives (SSDs).
PERC versions
Series 5 family
These are compatible with 9th and 10th Generation Dell PowerEdge servers.
PERC 5/E – external
PERC 5/I – internal – integrated or adapter
Series 6 family
These are compatible with 10th and 11th Generation Dell PowerEdge servers.
PERC S100 – software based
PERC 6/E – external – adapter
PERC 6/I – internal – modular or adapter
Series 7 family
These are compatible with 10th and 11th Generation Dell PowerEdge servers.
PERC S300 – software based
PERC H200 – internal – integrated/adapter or modular
PERC H700 – internal – integrated/adapter or modular
PERC H800 – external – adapter
Series 8 family
These are compatible with 12th Generation Dell PowerEdge servers.
PERC S110 – software based
PERC H310 – adapter or mini mono or mini blade
PERC H710 – internal – adapter or mini mono or mini blade
PERC H710p – internal – adapter or mini mono or mini blade
PERC H810 – external
Series 9 family
These are compatible with 13th Generation Dell PowerEdge servers.
PERC S130 – software based
PERC H330 – internal – Adapter – Tower Servers, Mini-Mono- Rack Servers; no battery backup unit (BBU)
PERC H730 – internal – Adapter – Tower Servers and Secondary Controllers, Mini-Mono- Rack Servers
PERC H730p – internal
PERC H830 – external
Note: All PERC 9 series cards support RAID 6 except for the PERC H330.
Series 10 family
These are compatible with 14th and 15th Generation Dell PowerEdge Servers.
PERC H840
PERC H345
PERC H740p
PERC H745
PERC H745p MX
Series 11 family
These are compatible with 16th Generation Dell PowerEdge Servers.
PERC H750
PERC H750 ADAPTER SAS
PERC H755 ADAPTER
PERC H755 FRONT SAS
PERC H755N FRONT NVME
PERC H755 MX ADAPTER
Series 12 family
PERC H965I ADAPTER
PERC H965I FRONT
PERC H965I MX
PERC H965E ADAPTER
See also
Dell DRAC
Intel Rapid Storage Technology
List of Dell PowerEdge Servers
References
External links
Dell products
Computer storage devices | Dell PERC | [
"Technology"
] | 545 | [
"Computer storage devices",
"Recording devices"
] |
49,263,763 | https://en.wikipedia.org/wiki/Infrastructure%20as%20code | Infrastructure as code (IaC) is the process of managing and provisioning computer data center resources through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools.
The IT infrastructure managed by this process comprises both physical equipment, such as bare-metal servers, as well as virtual machines, and associated configuration resources.
The definitions may be in a version control system, rather than maintaining the code through manual processes.
The code in the definition files may use either scripts or declarative definitions, but IaC more often employs declarative approaches.
Overview
IaC grew as a response to the difficulty posed by utility computing and second-generation web frameworks. In 2006, the launch of Amazon Web Services' Elastic Compute Cloud and, just months before, the 1.0 release of Ruby on Rails created scaling challenges in the enterprise that had previously been experienced only at large, multinational companies. With new tools emerging to handle this ever-growing field, the idea of IaC was born. The prospect of modeling infrastructure with code, and then being able to design, implement, and deploy application infrastructure according to known software best practices, appealed to both software developers and IT infrastructure administrators. The ability to treat infrastructure as code and use the same tools as any other software project would allow developers to rapidly deploy applications.
Advantages
The value of IaC can be broken down into three measurable categories: cost, speed, and risk. Cost reduction aims at helping not only the enterprise financially, but also in terms of people and effort, meaning that by removing the manual component, people are able to refocus their efforts on other enterprise tasks. Infrastructure automation enables speed through faster execution when configuring your infrastructure and aims at providing visibility to help other teams across the enterprise work quickly and more efficiently. Automation removes the risk associated with human error, like manual misconfiguration; removing this can decrease downtime and increase reliability. These outcomes and attributes help the enterprise move towards implementing a culture of DevOps, the combined working of development and operations.
Types of approaches
There are generally two approaches to IaC: declarative (functional) versus imperative (procedural). The difference between the declarative and the imperative approach is essentially 'what' versus 'how'. The declarative approach focuses on what the eventual target configuration should be; the imperative focuses on how the infrastructure is to be changed to meet this. The declarative approach defines the desired state and the system executes what needs to happen to achieve that state. The imperative approach defines specific commands that need to be executed in the appropriate order to end with the desired conclusion.
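As a toy sketch of this contrast — not the syntax of any particular IaC tool; FakeCloud, converge and all other names here are invented for illustration — the imperative version lists the commands to run, while the declarative version states a desired end state and lets a reconciliation loop work out the steps:

```python
class FakeCloud:
    """Hypothetical stand-in for a cloud provider API."""
    def __init__(self):
        self.servers = {}
    def server_exists(self, name):
        return name in self.servers
    def create_server(self, name):
        self.servers[name] = {"ports": set(), "packages": set()}
    def open_port(self, name, port):
        self.servers[name]["ports"].add(port)
    def install(self, name, pkg):
        self.servers[name]["packages"].add(pkg)

# Imperative: HOW -- an explicit, ordered sequence of commands.
def provision_imperatively(cloud):
    cloud.create_server("web-1")
    cloud.open_port("web-1", 443)
    cloud.install("web-1", "nginx")

# Declarative: WHAT -- a desired end state; the engine works out the steps.
desired_state = {"web-1": {"ports": [443], "packages": ["nginx"]}}

def converge(cloud, desired):
    """Reconcile actual infrastructure with the declared state (idempotent)."""
    for name, spec in desired.items():
        if not cloud.server_exists(name):
            cloud.create_server(name)
        for port in spec["ports"]:
            cloud.open_port(name, port)
        for pkg in spec["packages"]:
            cloud.install(name, pkg)

cloud = FakeCloud()
converge(cloud, desired_state)   # safe to run repeatedly
print(cloud.servers)
```

The key property of the declarative approach is idempotence: running the converge step twice leaves the infrastructure unchanged, whereas re-running an imperative script may fail or duplicate resources.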
Methods
Infrastructure as code allows servers and their configurations to be managed using code. There are two ways to deliver these configurations to servers: the 'push' and 'pull' methods. In the 'push' method, the system controlling the configuration sends instructions directly to the server. In the 'pull' method, the server retrieves its own instructions from the controlling system.
Tools
There are many tools that fulfill infrastructure automation capabilities and use IaC. Broadly speaking, any framework or tool that performs changes or configures infrastructure declaratively or imperatively based on a programmatic approach can be considered IaC. Traditionally, server (lifecycle) automation and configuration management tools were used to accomplish IaC. Now enterprises are also using continuous configuration automation tools or stand-alone IaC frameworks, such as Microsoft’s PowerShell DSC or AWS CloudFormation.
Continuous configuration automation
All continuous configuration automation (CCA) tools can be thought of as an extension of traditional IaC frameworks. They leverage IaC to change, configure, and automate infrastructure, and they also provide visibility, efficiency and flexibility in how infrastructure is managed. These additional attributes provide enterprise-level security and compliance.
Community content
Community content is a key determinant of the quality of an open source CCA tool. As Gartner states, the value of CCA tools is "as dependent on user-community-contributed content and support as it is on the commercial maturity and performance of the automation tooling". Established vendors such as Puppet and Chef have created their own communities. Chef has Chef Community Repository and Puppet has PuppetForge. Other vendors rely on adjacent communities and leverage other IaC frameworks such as PowerShell DSC. New vendors are emerging that are not content-driven, but model-driven with the intelligence in the product to deliver content. These visual, object-oriented systems work well for developers, but they are especially useful to production-oriented DevOps and operations constituents that value models versus scripting for content. As the field continues to develop and change, the community-based content will become ever more important to how IaC tools are used, unless they are model-driven and object-oriented.
Notable CCA tools include Ansible, Chef, Puppet and SaltStack; other tools include AWS CloudFormation, cdist, StackStorm, Juju, and Step CI.
Relationships
Relationship to DevOps
IaC can be a key attribute of enabling best practices in DevOps. Developers become more involved in defining configuration and Ops teams get involved earlier in the development process. Tools that utilize IaC bring visibility to the state and configuration of servers and ultimately provide that visibility to users within the enterprise, aiming to bring teams together to maximize their efforts. Automation in general aims to take the confusing and error-prone aspects of manual processes and make them more efficient and productive, allowing better software and applications to be created with flexibility, less downtime, and overall cost-effectiveness for the company. IaC is intended to reduce the complexity and inefficiency of manual configuration. Automation and collaboration are considered central points in DevOps; infrastructure automation tools are often included as components of a DevOps toolchain.
Relationship to security
The 2020 Cloud Threat Report released by Unit 42 (the threat intelligence unit of cybersecurity provider Palo Alto Networks) identified around 200,000 potential vulnerabilities in infrastructure as code templates.
See also
Docker
IT infrastructure
Infrastructure as a service
Orchestration
Continuous configuration automation
Landing zone (software)
References
Agile software development
Software development process
Configuration management
Systems engineering
Orchestration software
Cloud computing | Infrastructure as code | [
"Engineering"
] | 1,264 | [
"Systems engineering",
"Configuration management"
] |
49,270,083 | https://en.wikipedia.org/wiki/Graph%20edit%20distance | In mathematics and computer science, graph edit distance (GED) is a measure of similarity (or dissimilarity) between two graphs.
The concept of graph edit distance was first formalized mathematically by Alberto Sanfeliu and King-Sun Fu in 1983.
A major application of graph edit distance is in inexact graph matching, such as error-tolerant pattern recognition in machine learning.
The graph edit distance between two graphs is related to the string edit distance between strings. With the interpretation of strings as connected, directed acyclic graphs of maximum degree one, classical definitions of edit distance such as Levenshtein distance, Hamming distance and Jaro–Winkler distance may be interpreted as graph edit distances between suitably constrained graphs. Likewise, graph edit distance is also a generalization of tree edit distance between rooted trees.
Formal definitions and properties
The mathematical definition of graph edit distance depends upon the definitions of the graphs over which it is defined, i.e. whether and how the vertices and edges of the graph are labeled and whether the edges are directed.
Generally, given a set of graph edit operations (also known as elementary graph operations), the graph edit distance between two graphs $g_1$ and $g_2$, written as $GED(g_1, g_2)$, can be defined as
$$GED(g_1, g_2) = \min_{(e_1, \dots, e_k) \in \mathcal{P}(g_1, g_2)} \sum_{i=1}^{k} c(e_i),$$
where $\mathcal{P}(g_1, g_2)$ denotes the set of edit paths transforming $g_1$ into (a graph isomorphic to) $g_2$ and $c(e) \ge 0$ is the cost of each graph edit operation $e$.
The set of elementary graph edit operators typically includes:
vertex insertion to introduce a single new labeled vertex to a graph.
vertex deletion to remove a single (often disconnected) vertex from a graph.
vertex substitution to change the label (or color) of a given vertex.
edge insertion to introduce a new colored edge between a pair of vertices.
edge deletion to remove a single edge between a pair of vertices.
edge substitution to change the label (or color) of a given edge.
Additional, but less common operators, include operations such as edge splitting that introduces a new vertex into an edge (also creating a new edge), and edge contraction that eliminates vertices of degree two between edges (of the same color). Although such complex edit operators can be defined in terms of more elementary transformations, their use allows finer parameterization of the cost function when the operator is cheaper than the sum of its constituents.
A deep analysis of the elementary graph edit operators has been presented in the literature, and methods have been proposed to automatically deduce these elementary graph edit operators and their costs; some algorithms even learn these costs online.
Applications
Graph edit distance finds applications in handwriting recognition, fingerprint recognition and cheminformatics.
Algorithms and complexity
Exact algorithms for computing the graph edit distance between a pair of graphs typically transform the problem into one of finding the minimum cost edit path between the two graphs.
The computation of the optimal edit path is cast as a pathfinding search or shortest path problem, often implemented as an A* search algorithm.
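For example, the NetworkX Python library implements such an edit-path search; a minimal sketch, using the library's default of uniform unit costs:

```python
import networkx as nx

# Two small graphs: a triangle and a path on three vertices.
G1 = nx.Graph([(1, 2), (2, 3), (3, 1)])
G2 = nx.Graph([(1, 2), (2, 3)])

# Exact GED via a search over edit paths (unit costs by default).
print(nx.graph_edit_distance(G1, G2))  # 1.0 -- delete one edge

# For larger graphs, successively better upper bounds can be consumed
# from a generator instead of waiting for the exact optimum.
for approx in nx.optimize_graph_edit_distance(G1, G2):
    print(approx)
```

Custom cost functions for node and edge operations can be supplied as keyword arguments when labels should influence the distance.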
In addition to exact algorithms, a number of efficient approximation algorithms are also known; most of them have cubic computational time. Moreover, there is an algorithm that deduces an approximation of the GED in linear time.
Despite the above algorithms sometimes working well in practice, in general the problem of computing graph edit distance is NP-hard (for a proof that's available online, see Section 2 of Zeng et al.), and is even hard to approximate (formally, it is APX-hard).
References
Graph theory
Graph algorithms
Computational problems in graph theory
Distance | Graph edit distance | [
"Physics",
"Mathematics"
] | 697 | [
"Computational problems in graph theory",
"Discrete mathematics",
"Distance",
"Physical quantities",
"Quantity",
"Computational mathematics",
"Graph theory",
"Computational problems",
"Size",
"Combinatorics",
"Space",
"Mathematical relations",
"Spacetime",
"Wikipedia categories named after... |
49,270,505 | https://en.wikipedia.org/wiki/Hafnian | In mathematics, the hafnian is a scalar function of a symmetric matrix that generalizes the permanent.
The hafnian was named by Eduardo R. Caianiello "to mark the fruitful period of stay in Copenhagen (Hafnia in Latin)."
Definition
The hafnian of a $2n \times 2n$ symmetric matrix $A = (a_{i,j})$ is defined as
$$\operatorname{haf}(A) = \sum_{\rho \in P^2_{2n}} \prod_{\{i, j\} \in \rho} a_{i,j},$$
where $P^2_{2n}$ is the set of all partitions of the set $\{1, 2, \dots, 2n\}$ into subsets of size $2$.
This definition is similar to that of the Pfaffian, but differs in that the signatures of the permutations are not taken into account. Thus the relationship of the hafnian to the Pfaffian is the same as relationship of the permanent to the determinant.
Basic properties
The hafnian may also be defined as
$$\operatorname{haf}(A) = \frac{1}{n!\, 2^n} \sum_{\sigma \in S_{2n}} \prod_{i=1}^{n} a_{\sigma(2i-1), \sigma(2i)},$$
where $S_{2n}$ is the symmetric group on $\{1, 2, \dots, 2n\}$. The two definitions are equivalent because if $\rho = \{\{\sigma(1), \sigma(2)\}, \dots, \{\sigma(2n-1), \sigma(2n)\}\}$, then $\rho$ is a partition of $\{1, \dots, 2n\}$ into subsets of size 2, and as $\sigma$ ranges over $S_{2n}$, each such partition is counted exactly $n!\, 2^n$ times. Note that this argument relies on the symmetry of $A$, without which the original definition is not well-defined.
The hafnian of an adjacency matrix of a graph is the number of perfect matchings (also known as 1-factors) in the graph. This is because a partition of into subsets of size 2 can also be thought of as a perfect matching in the complete graph .
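A minimal sketch of this correspondence (an illustrative implementation, not an efficient algorithm — exact computation is exponential): the recursion $\operatorname{haf}(A) = \sum_{j} a_{1,j} \operatorname{haf}(A_{\setminus\{1,j\}})$ pairs the first index with each possible partner, so applying it to an adjacency matrix counts perfect matchings. The 4-cycle below is an invented example input:

```python
def hafnian(A):
    """Hafnian of a symmetric matrix A (list of lists) via the pairing
    recursion: match index 0 with every j > 0 and recurse on the
    submatrix with rows/columns 0 and j removed."""
    n = len(A)
    if n == 0:
        return 1
    if n % 2 == 1:
        return 0  # an odd number of indices cannot be paired up
    total = 0
    for j in range(1, n):
        rest = [k for k in range(1, n) if k != j]
        sub = [[A[r][c] for c in rest] for r in rest]
        total += A[0][j] * hafnian(sub)
    return total

# Adjacency matrix of the 4-cycle 1-2-3-4-1, which has exactly
# two perfect matchings: {12, 34} and {14, 23}.
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(hafnian(C4))  # -> 2
```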
The hafnian can also be thought of as a generalization of the permanent, since the permanent can be expressed as
$$\operatorname{per}(C) = \operatorname{haf}\begin{pmatrix} 0 & C \\ C^{T} & 0 \end{pmatrix}.$$
Just as the hafnian counts the number of perfect matchings in a graph given its adjacency matrix, the permanent counts the number of matchings in a bipartite graph given its biadjacency matrix.
The hafnian is also related to moments of multivariate Gaussian distributions.
By Wick's probability theorem, the hafnian of a real symmetric $2n \times 2n$ matrix $A$ may be expressed as
$$\operatorname{haf}(A) = \mathbb{E}\left[ \prod_{i=1}^{2n} X_i \right], \qquad X \sim \mathcal{N}(0, A + \lambda I),$$
where $\lambda$ is any number large enough to make $A + \lambda I$ positive semi-definite. Note that the hafnian does not depend on the diagonal entries of the matrix, and the expectation on the right-hand side does not depend on $\lambda$.
Generating function
Let be an arbitrary complex symmetric matrix composed of four blocks , , and . Let be a set of independent variables, and let be an antidiagonal block matrix composed of entries (each one is presented twice, one time per nonzero block). Let denote the identity matrix. Then the following identity holds:
where the right-hand side involves hafnians of matrices , whose blocks , , and are built from the blocks , , and respectively in the way introduced in MacMahon's Master theorem. In particular, is a matrix built by replacing each entry in the matrix with a block filled with ; the same scheme is applied to , and . The sum runs over all -tuples of non-negative integers, and it is assumed that .
The identity can be proved by means of multivariate Gaussian integrals and Wick's probability theorem.
The expression in the left-hand side, , is in fact a multivariate generating function for a series of hafnians, and the right-hand side constitutes its multivariable Taylor expansion in the vicinity of the point As a consequence of the given relation, the hafnian of a symmetric matrix can be represented as the following mixed derivative of the order :
The hafnian generating function identity written above can be considered as a hafnian generalization of MacMahon's Master theorem, which introduces the generating function for matrix permanents and has the following form in terms of the introduced notation:
Note that MacMahon's Master theorem comes as a simple corollary from the hafnian generating function identity due to the relation .
Non-negativity
If is a Hermitian positive semi-definite matrix and is a complex symmetric matrix, then
where denotes the complex conjugate of .
A simple way to see this when is positive semi-definite is to observe that, by Wick's probability theorem, when is a complex normal random vector with mean , covariance matrix and relation matrix .
This result is a generalization of the fact that the permanent of a Hermitian positive semi-definite matrix is non-negative. This corresponds to the special case using the relation .
Loop hafnian
The loop hafnian of an $n \times n$ symmetric matrix $A = (a_{i,j})$ is defined as
$$\operatorname{lhaf}(A) = \sum_{M \in \mathrm{SPM}(n)} \prod_{\{i, j\} \in M} a_{i,j},$$
where $\mathrm{SPM}(n)$ is the set of all perfect matchings of the complete graph on $n$ vertices with loops, i.e., the set of all ways to partition the set $\{1, \dots, n\}$ into pairs or singletons (treating a singleton $\{i\}$ as the pair $\{i, i\}$). Thus the loop hafnian depends on the diagonal entries of the matrix, unlike the hafnian. Furthermore, the loop hafnian can be non-zero when $n$ is odd.
The loop hafnian can be used to count the total number of matchings in a graph (perfect or non-perfect), also known as its Hosoya index. Specifically, if one takes the adjacency matrix of a graph and sets the diagonal elements to 1, then the loop hafnian of the resulting matrix is equal to the total number of matchings in the graph.
The loop hafnian can also be thought of as incorporating a mean into the interpretation of the hafnian as a multivariate Gaussian moment. Specifically, by Wick's probability theorem again, the loop hafnian of a real symmetric matrix can be expressed as
where is any number large enough to make positive semi-definite.
Computation
Computing the hafnian of a (0,1)-matrix is #P-complete, because computing the permanent of a (0,1)-matrix is #P-complete.
The hafnian of a matrix can be computed in time.
If the entries of a matrix are non-negative, then its hafnian can be approximated to within an exponential factor in polynomial time.
See also
Permanent
Pfaffian
Boson sampling
References
Algebraic graph theory
Matching (graph theory)
Combinatorics | Hafnian | [
"Mathematics"
] | 1,226 | [
"Discrete mathematics",
"Graph theory",
"Combinatorics",
"Mathematical relations",
"Matching (graph theory)",
"Algebra",
"Algebraic graph theory"
] |
53,133,381 | https://en.wikipedia.org/wiki/Crumpling | In geometry and topology, crumpling is the process whereby a sheet of paper or other two-dimensional manifold undergoes disordered deformation to yield a three-dimensional structure comprising a random network of ridges and facets with variable density. The geometry of crumpled structures is the subject of some interest to the mathematical community within the discipline of topology. Crumpled paper balls have been studied and found to exhibit surprisingly complex structures with compressive strength resulting from frictional interactions at locally flat facets between folds. The unusually high compressive strength of crumpled structures relative to their density is of interest in the disciplines of materials science and mechanical engineering.
Significance
The packing of a sheet by crumpling is a complex phenomenon that depends on material parameters and the packing protocol. Thus the crumpling behaviour of foil, paper and poly-membranes differs significantly and can be interpreted on the basis of material foldability. The high compressive strength exhibited by dense, crumple-formed cellulose paper is of interest for impact-dissipation applications and has been proposed as an approach to utilising waste paper.
From a practical standpoint, crumpled balls of paper are commonly used as toys for domestic cats.
References
Topology
Manifolds
Deformation (mechanics)
Structural analysis
Materials science
Mechanical engineering | Crumpling | [
"Physics",
"Materials_science",
"Mathematics",
"Engineering"
] | 250 | [
"Structural engineering",
"Geometry stubs",
"Applied and interdisciplinary physics",
"Deformation (mechanics)",
"Aerospace engineering",
"Structural analysis",
"Materials science",
"Space (mathematics)",
"Topological spaces",
"Topology",
"Space",
"Mechanical engineering",
"nan",
"Manifolds... |
47,489,128 | https://en.wikipedia.org/wiki/Dade%20Moeller | Dade Moeller (February 27, 1927 – September 26, 2011) was an internationally known expert in radiation safety and environmental protection.
Life
Dade William Moeller, Ph.D., CHP, P.E. was born in 1927 in Grant, Florida, a fishing community located on the intracoastal waterway near the Atlantic Ocean. His father was Robert A. Moeller, his mother was Victoria Moeller, and he had four brothers: Charles E. Moeller, Robert L. Moeller, John A. Moeller, and Ken L. Moeller. In 1949 he married Betty Jean "Jeanie" Radford of Decatur, Georgia. Moeller died at home from complications of malignant lymphoma on September 26, 2011.
Military service and education
He passed the V-12 Navy College Training Program and joined the U.S. Navy in 1944. Moeller attended Georgia Tech and graduated magna cum laude with a Bachelor of Science degree in civil engineering in 1947 and a Master of Science degree in environmental engineering in 1948. After graduating, Moeller became a commissioned officer in the U.S. Public Health Service, with assignments that included Oak Ridge National Laboratory, Los Alamos National Laboratory, and the headquarters office in Washington, D.C.
In 1957 with sponsorship from the U.S. Public Health Service, Moeller earned the Doctor of Philosophy degree in nuclear engineering from North Carolina State University. He taught radiation protection courses at the U.S. Public Health Service's Radiological Health Training Center in Cincinnati, Ohio.
In 1959, Moeller joined the Health Physics Society and became a certified health physicist and a certified environmental engineer.
In 1961, he became the officer in charge at the Northeastern Radiological Health Laboratory in Winchester, Massachusetts, and studied radioactive fallout from atomic weapons testing and monitored children's thyroids for the uptake of radioactive iodine.
In 1966 Moeller retired from the U.S. Public Health Service.
Harvard School of Public Health
Moeller held tenure for 26 years and served as:
Professor of Engineering in Environmental Health
Associate Director of the Kresge Center for Environmental Health
Associate Director of the Harvard-National Institute of Environmental Health Sciences Center for Environmental Health
Chairman of the Department of Environmental Health Sciences
Associate Dean for Continuing Education
Taught in the Center for Continuing Professional Education
Memberships
American Association for the Advancement of Science
American Industrial Hygiene Association
American Nuclear Society
American Public Health Association
Health Physics Society
Awards and honors
Health Physics Society, Fellow, 1968
National Academy of Engineering, Fellow, 1978
American Public Health Association, Fellow, 1988
American Nuclear Society, Fellow, 1988
U.S. Nuclear Regulatory Commission, Meritorious Achievement Award, 1988
National Council on Radiological Protection and Measurements, Distinguished Emeritus Member, 1997
Georgia Institute of Technology, Engineering Hall of Fame, 1999
NC State University, Distinguished Engineering Alumni Award, 2001
Health Physics Society, Robley D. Evans Commemorative Medal, 2003
William McAdams Outstanding Service Award, American Academy of Health Physics, 2005
Professor Emeritus Award of Merit, Harvard University School of Public Health, 2006
Patents
Method and apparatus for reduction of radon decay product exposure.
Radon decay product removal unit as adapted for use with a lamp.
Select publications
Thesis
Radionuclides in Reactor Cooling Water-Identification, Source and Control. (1957).
References
1927 births
2011 deaths
United States Navy officers
United States Public Health Service Commissioned Corps officers
United States Navy personnel of World War II
Georgia Tech alumni
People from Grant, Florida
North Carolina State University alumni
Health physicists
American civil engineers
Environmental engineers
Oak Ridge National Laboratory people
Harvard T.H. Chan School of Public Health faculty
Los Alamos National Laboratory personnel
Health Physics Society
American nuclear engineers
Scientists from Florida
21st-century American engineers
20th-century American engineers
Deaths from lymphoma in the United States
Deaths from cancer in North Carolina | Dade Moeller | [
"Chemistry",
"Engineering"
] | 766 | [
"Environmental engineers",
"Environmental engineering"
] |
47,489,333 | https://en.wikipedia.org/wiki/Levenspiel%20plot | A Levenspiel plot is a plot used in chemical reaction engineering to determine the required volume of a chemical reactor given experimental data on the chemical reaction taking place in it. It is named after the late chemical engineering professor Octave Levenspiel.
Derivation
For a continuous stirred-tank reactor (CSTR), the following relationship applies:
$$V = \frac{F_{A0}\, X_A}{-r_A}$$
where:
$V$ is the reactor volume
$F_{A0}$ is the molar flow rate per unit time of the entering reactant A
$X_A$ is the conversion of reactant A
$-r_A$ is the rate of disappearance of reactant A per unit volume per unit time
For a plug flow reactor (PFR), the following relationship applies:
$$V = F_{A0} \int_0^{X_A} \frac{dX_A}{-r_A}$$
If $\frac{F_{A0}}{-r_A}$ is plotted as a function of $X_A$, the required volume to achieve a specific conversion can be determined given an entering molar flow rate.
The volume of a CSTR necessary to achieve a certain conversion at a given flow rate is equal to the area of the rectangle with height equal to $\frac{F_{A0}}{-r_A}$ (evaluated at the exit conversion) and width equal to $X_A$.
The volume of a PFR necessary to achieve a certain conversion at a given flow rate is equal to the area under the curve of $\frac{F_{A0}}{-r_A}$ plotted against $X_A$.
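A minimal numerical sketch of these two graphical constructions, using made-up rate data (the kinetics below are invented for illustration): the CSTR volume is the rectangle evaluated at the exit conversion, and the PFR volume is the trapezoidal area under the Levenspiel curve.

```python
import numpy as np

F_A0 = 1.0                          # entering molar flow of A, mol/s (illustrative)
X = np.linspace(0.0, 0.8, 81)       # conversion grid up to an exit conversion of 0.8
rate = 0.5 * (1.0 - X)              # -r_A, mol/(L*s): made-up first-order-like kinetics

levenspiel = F_A0 / rate            # the quantity plotted on the y-axis

# CSTR: rectangle of height F_A0/(-r_A) at the exit conversion, width X.
V_cstr = levenspiel[-1] * X[-1]

# PFR: trapezoidal area under the Levenspiel curve from 0 to the exit conversion.
V_pfr = float(np.sum(0.5 * (levenspiel[1:] + levenspiel[:-1]) * np.diff(X)))

print(f"CSTR volume: {V_cstr:.2f} L")   # 8.00 L
print(f"PFR  volume: {V_pfr:.2f} L")    # ~3.22 L
```

For kinetics where $\frac{F_{A0}}{-r_A}$ increases monotonically with conversion, as here, the PFR is always the smaller reactor, which the plot makes visually obvious.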
References
Chemical reaction engineering
Plots (graphics) | Levenspiel plot | [
"Chemistry",
"Engineering"
] | 227 | [
"Chemical engineering",
"Chemical reaction engineering"
] |
56,007,469 | https://en.wikipedia.org/wiki/Virus%20nanotechnology | Virus nanotechnology is the use of viruses as a source of nanoparticles for biomedical purposes.
Viruses are made up of a genome and a capsid; and some viruses are enveloped. Most virus capsids measure between 20-500 nm in diameter. Because of their nanometer size dimensions, viruses have been considered as naturally occurring nanoparticles. Virus nanoparticles have been subject to the nanoscience and nanoengineering disciplines. Viruses can be regarded as prefabricated nanoparticles. Many different viruses have been studied for various applications in nanotechnology: for example, mammalian viruses are being developed as vectors for gene delivery, and bacteriophages and plant viruses have been used in drug delivery and imaging applications as well as in vaccines and immunotherapy intervention.
Overview
Virus nanotechnology is one of the most promising emerging disciplines in nanotechnology. A highly interdisciplinary field, it occupies the interface between virology, biotechnology, chemistry, and materials science. The field employs viral nanoparticles (VNPs) and their counterparts, virus-like nanoparticles (VLPs), for potential applications in fields as diverse as electronics, sensing and, most significantly, medicine. VNPs and VLPs are attractive building blocks for several reasons. Both particles are on the nanometer-size scale; they are monodisperse with a high degree of symmetry and polyvalency; they can be produced with ease on a large scale; they are exceptionally stable and robust; and they are biocompatible and, in some cases, orally bioavailable. They are "programmable" units that can be modified by either genetic modification or chemical bioconjugation methods.
What is nanotechnology?
Nanotechnology is the manipulation or self-assembly of individual atoms, molecules, or molecular clusters into structures to create materials and devices with new or vastly different properties. Nanotechnology can work from the top down (reducing the size of the smallest structures to the nanoscale) or from the bottom up (manipulating individual atoms and molecules into nanostructures). The definition of nanotechnology is based on the prefix "nano", from the Greek word meaning "dwarf". In more technical terms, "nano" means 10−9, or one billionth, of something. For a meaningful comparison, a virus is roughly 100 nanometers (nm) in size, so a virus can also be called a nanoparticle. The word nanotechnology is generally used when referring to materials with sizes of 0.1 to 100 nanometres; however, it is also inherent that these materials should display properties different from bulk (micrometric and larger) materials as a result of their size. These differences include physical strength, chemical reactivity, electrical conductance, magnetism and optical effects.
Nanotechnology has an almost limitless string of applications in biology, biotechnology, and biomedicine. Nanotechnology has engendered a growing sense of excitement due to the ability to produce and utilize materials, devices, and systems through the control of matter on the nanometer scale (1 to 50 nm). This bottom-up approach requires less material and causes less pollution. Nanotechnology has had several commercial applications in advanced laser technology, hard coatings, photography, pharmaceuticals, printing, chemical-mechanical polishing, and cosmetics. Soon, there will be lighter cars using nanoparticle reinforced polymers, orally applicable insulin, artificial joints made from nanoparticulate materials, and low-calorie foods with nanoparticulate taste enhancers.
Viruses as building blocks in nanotechnology
Viruses have long been studied as deadly pathogens that cause disease in all living forms. By the 1950s, researchers had begun thinking of viruses as tools in addition to pathogens. Bacteriophage genomes and components of the protein expression machinery have been widely utilized as tools for understanding fundamental cellular processes. On the basis of these studies, several viruses have been exploited as expression systems in biotechnology. Later, in the 1970s, viruses began to be used as vectors for the benefit of humans. Since then, viruses have often been used as vectors for gene therapy, cancer control and control of harmful or damaging organisms, in both agriculture and medicine.
Recently, the approach to exploiting viruses and their capsids for biotechnology has shifted toward nanotechnology applications. Researchers Douglas and Young (Montana State University, Bozeman, MT, USA) were the first to consider the utility of a virus capsid as a nanomaterial, using the plant virus Cowpea Chlorotic Mottle Virus (CCMV) in their studies. CCMV is a highly dynamic platform with pH- and metal-ion-dependent structural transitions. Douglas and Young made use of these capsid dynamics and exchanged the natural cargo (nucleic acid) with synthetic materials. Since then, many materials have been encapsulated into CCMV and other VNPs. At about the same time, the research team led by Mann (University of Bristol, UK) pioneered a new area using the rod-shaped particles of TMV (Tobacco Mosaic Virus). The particles were used as templates for the fabrication of a range of metallized nanotube structures using mineralization techniques. TMV particles have also been utilized to generate various structures (nanotubes and nanowires) for use in batteries and data storage devices.
Viral capsids have attracted great interest in the field of nanobiology because of their nanoscale size, symmetrical structural organization, load capacity, controllable self-assembly, and ease of modification. Viruses are essentially naturally occurring nanomaterials capable of self-assembly with a high degree of precision. Viral capsid–nanoparticle hybrid structures, which combine the bio-activities of virus capsids with the functions of nanoparticles, are a new class of bionanomaterials that have many potential applications as therapeutic and diagnostic vectors, imaging agents, and advanced nanomaterial synthesis reactors.
Plant viruses in nanotechnology
Plant virus-based systems, in particular, are among the most advanced and exploited for their potential use as bioinspired structured nanomaterials and nano-vectors. Plant virus nanoparticles are non-infectious to mammalian cells, as also shown by Raja Muthuramalingam et al. (2018). Plant viruses have a size particularly suitable for nanoscale applications and can offer several advantages: they are structurally uniform, robust, biodegradable and easy to produce. Moreover, there are many examples of functionalization of plant-virus-based nanoparticles by modification of their external surface and by loading cargo molecules into their internal cavity. This plasticity in nanoparticle engineering is the ground on which multivalency, payload containment and targeted delivery can be fully exploited.
George P. Lomonossoff writing in "Recent Advances in Plant Virology",
The capsids of most plant viruses are simple and robust structures consisting of multiple copies of one or a few types of protein subunit arranged with either icosahedral or helical symmetry. The capsids can be produced in large quantities either by the infection of plants or by the expression of the subunit(s) in a variety of heterologous systems. In view of their relative simplicity and ease of production, plant virus particles or virus-like particles (VLPs) have attracted much interest over the past 20 years for applications in both bio- and nanotechnology [Lomonossoff, 2011]. As result, plant virus particles have been subjected to both genetic and chemical modification, have been used to encapsulate foreign material and have themselves, been incorporated into supramolecular structures. Significantly, plant viruses studied are not human pathogens, which have no natural tendency to interact with human cell surface receptors. Recently, a plant pathogenic virus was reportedly used to synthesize a noble hybrid metal nanomaterials used as bio-semiconductor.
Plant viruses
Viruses cause several destructive plant diseases and are accountable for massive losses in crop production and quality in all parts of the world. Infected plants may show a range of symptoms depending on the disease but often there is severe leaf curling, stunting (abnormalities in the whole plant) and leaf yellowing (either of the whole leaf or in a pattern of stripes or blotches). Most plant viruses are therefore transmitted by a vector organism (insects, nematodes, plasmodiophorids and mites) that feeds on the plant or (in some diseases) are introduced through wounds made, for example during agriculture practices (e.g. pruning). Many plant viruses, for example, tobacco mosaic virus, have been used as model systems to provide a basic understanding of how viruses express genes and replicate. Others permitted the elucidation of the processes underlying RNA silencing, now recognised as a core epigenetic mechanism underpinning numerous areas of biology.
Some properties of viral nanoparticles
Plant viruses come in many shapes and sizes: for example, the plant virus Tobacco mosaic virus (TMV) measures 300x18 nm in size; it forms a hollow rod. The plant virus Potato virus X (PVX) forms flexible filaments of 515x13 nm. The following viruses have an icosahedral symmetry and measure between 25-30 nm: plant virus Cowpea mosaic virus (CPMV), bacteriophage Qbeta and mammalian adeno-associated virus (AAV).
These are just some examples, many different viruses are being engineered and studied for their potential applications in medicine, some examples of plant viruses include Cowpea chlorotic mottle virus, Red clover necrotic mottle virus, Physalis mosaic virus, Papaya mosaic virus.
Plant viruses and bacteriophages are not infectious toward mammals; in contrast to mammalian viruses, they pose no risk of viral infection.
Virus-like particles (VLPs) can be produced that lack the viral genome; these VLPs are non-infectious also toward plants and thus considered safe also from an agricultural point of view.
Viruses and their non-infectious counterparts can be produced through molecular farming in plants or fermentation in cell culture.
The virus-based nanoparticles can be tailored for specific applications using a number of chemical biology approaches:
Genetic modification can be used to modify the amino acid sequence of the capsid protein (also known as coat protein).
Bioconjugate chemistry can be used to introduce non-biological or biological cargos.
Lastly, while often shown as rigid materials, the viruses are dynamic materials that undergo swelling and other conformational changes allowing for cargo to be infused or encapsulated into their viral capsids.
Many plant virus platform technologies are being developed and studied for applications including:
Vaccines: VLPs or epitope display platforms
Immunotherapies: in situ vaccines
Molecular imaging contrast agents
Drug delivery: targeting both human health and plant health
Battery electrodes
Sensor applications
References
Nanotechnology | Virus nanotechnology | [
"Materials_science",
"Engineering"
] | 2,287 | [
"Nanomedicine",
"Nanotechnology",
"Materials science"
] |
56,013,204 | https://en.wikipedia.org/wiki/Polycarboxylates | Polycarboxylates are organic compounds with several carboxylic acid groups. Butane-1,2,3,4-tetracarboxylate is one example. Often, polycarboxylate refers to linear polymers with a high molecular mass (Mr ≤ 100 000) and with many carboxylate groups. They are polymers of acrylic acid or copolymers of acrylic acid and maleic acid. The polymer is used as the sodium salt (see: sodium polyacrylate).
Use
Polycarboxylates are used as builders in detergents. Their high chelating power, even at low concentrations, reduces deposits on the laundry and inhibits the crystal growth of calcite.
Polycarboxylate ethers (PCE) are used as superplasticizers in concrete production.
Safety
Polycarboxylates are poorly biodegradable but have a low ecotoxicity. In the sewage treatment plant, the polymer remains largely in the sludge and is separated from the wastewater.
Polyamino acids like polyaspartic acid and polyglutamic acid have better biodegradability but lower chelating performance than polyacrylates. They are also less stable towards heat and alkali. Since they contain nitrogen, they contribute to eutrophication.
See also
tricarboxylic acids
References
Polymers
Salts and esters of carboxylic acids | Polycarboxylates | [
"Chemistry",
"Materials_science"
] | 295 | [
"Polymers",
"Polymer chemistry"
] |
56,013,705 | https://en.wikipedia.org/wiki/Fatigue%20of%20welded%20joints | Fatigue of welded joints can occur when poorly made or highly stressed welded joints are subjected to cyclic loading. Welding is a manufacturing method used to join various materials in order to form an assembly. During welding, joints are formed between two or more separate pieces of material which can introduce defects or residual stresses. Under cyclic loading these defects can grow a fatigue crack, causing the assembly to fail even if these cyclic stresses are low and smaller than the base material and weld filler material yield stress. Hence, the fatigue strength of a welded joint does not correlate to the fatigue strength of the base material. Incorporating design considerations in the development phase can reduce failures due to fatigue in welded joints.
Stress-Life method
Similar to high-cycle fatigue analysis, the stress-life method utilizing stress-cycle curves (also known as Wöhler curves) can be used to determine the strength of a welded joint under fatigue loading. Welded sample specimens undergo repeated loading at a specified stress amplitude, or fatigue strength, until the material fails. The test is then repeated at various stress amplitudes to determine the corresponding number of cycles, N, to failure. With the data collected, fatigue strength can be plotted against the corresponding number of cycles for a specific material, welded joint and loading. From these curves, the endurance limit, finite-life and infinite-life regions can then be determined.
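In the finite-life region, such S-N data are often summarized by a Basquin-type power law, $S = a\, N^b$, which is a straight line on log-log axes. A minimal fitting sketch with invented test data (the stresses, lives, and the 150 MPa query are illustrative, not measured values):

```python
import numpy as np

# Hypothetical fatigue test results for one welded detail:
# stress range (MPa) and observed cycles to failure.
S = np.array([200.0, 160.0, 125.0, 100.0])
N = np.array([5.0e4, 1.5e5, 6.0e5, 2.0e6])

# Basquin law S = a * N**b is linear in log-log coordinates:
# log S = log a + b * log N, so fit a first-degree polynomial.
b, log_a = np.polyfit(np.log10(N), np.log10(S), 1)
a = 10.0 ** log_a

# Invert S = a * N**b to predict the life at a 150 MPa stress range.
predicted_life = (150.0 / a) ** (1.0 / b)
print(f"S = {a:.0f} * N^({b:.3f}); life at 150 MPa: {predicted_life:.2e} cycles")
```

Design codes for welded details follow the same log-log construction but add scatter-based safety margins, so a fit like this characterizes test data rather than providing allowable stresses.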
Factors affecting fatigue
Welding residual stresses
During the welding process, residual stresses can arise in the area of the weld, either in the heat-affected zone or the fusion zone. The mean stress a welded joint sees in application can be altered by these welding-induced residual stresses, changing the fatigue life and making laboratory S-N test results unrepresentative of in-service behavior. Welded assemblies with geometrical imperfections can also introduce residual stresses. Although stress and strain relief methods can reduce residual stresses, their complete removal is not possible.
Member thickness
An increase in thickness of a base material decreases the fatigue strength when a crack propagates from the toe of a welded joint. This is due to an increase in residual stress concentrations in thick material cross sections.
Material type
In welded joints, an increase in the base material's ultimate tensile strength does not necessarily lead to an increase in fatigue strength of the welded joint.
Welding process
Many welding processes are available for various applications and environments. Stress-cycle curves are not available for all of these processes and still need to be developed so proper fatigue analysis can be performed. Most published stress-cycle curves are developed from specimens prepared by arc welding.
Surrounding environment
The surrounding environment of a welded assembly can affect the fatigue life of the welded joints, often lowering them. Variables such as temperature, moisture, and geographical location are considered part of the surrounding environment. Environments which contain sea water may see decreased fatigue life due to the increase in crack growth rates. Little information is available in this area, but it is known that if a base material is subjected to corrosion, the fatigue strength can decrease compared to similar welded joints.
Avoiding fatigue failures of a welded joint
Since the presence of cracks reduces fatigue life and accelerates failure, it is important to avoid all cracking mechanisms in order to prolong the fatigue life of a welded joint. Other weld defects, such as inclusions and lack of penetration, should also be avoided due to these defects being the source of where cracks can initiate. Detailed review of the welded joint during the design is another way to reduce failures. Ensuring that the design is able to handle the cyclic loading profile will prevent premature failures. Additional resources through design handbooks are also available to aid in designing the welded joint to optimize fatigue life. Finite element analysis can also be used to successfully predict fatigue failure.
References
Welding
Mechanical failure
Fracture mechanics | Fatigue of welded joints | [
"Materials_science",
"Engineering"
] | 758 | [
"Structural engineering",
"Welding",
"Fracture mechanics",
"Materials science",
"Mechanical engineering",
"Materials degradation",
"Mechanical failure"
] |
56,013,977 | https://en.wikipedia.org/wiki/G%C3%B6ran%20Lindblad%20%28physicist%29 | Göran Lindblad (9 July 1940–30 November 2022) was a Swedish theoretical physicist and a professor at the KTH Royal Institute of Technology, Stockholm. He made major foundational contributions in mathematical physics and quantum information theory, having to do with open quantum systems, entropy inequalities, and quantum measurements.
Personal life
Lindblad was born in Boden, Sweden on July 9, 1940, and grew up in Örebro, Sweden. Besides physics, he took an interest in history. He died on November 30, 2022, near his home in Johanneshov, Sweden.
Career
Lindblad spent his entire career, starting from his undergraduate days, at the KTH Royal Institute of Technology. He defended his PhD thesis, entitled "The concepts of information and entropy applied to the measurement process in quantum theory and statistical mechanics," on May 29, 1974. His PhD thesis summarized the contents of some important contributions to quantum information theory, including his proof of the data-processing inequality for quantum relative entropy, communicated in a series of three research publications.
Shortly after his PhD thesis work, he derived his most well known scientific contribution, what is known as the Lindblad equation. As the Schrödinger equation describes the evolution of a closed quantum system, the Lindblad equation is a generalization, describing the evolution of an open quantum system, in which a system of interest is interacting with an uncontrollable environment. The Lindblad equation is a significant theoretical contribution and is widely used in many fields of physics, including quantum optics and condensed matter. It is also now the most common method for describing noise that affects various quantum technologies, in the domains of quantum communication and computation.
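For reference, the now-standard (Gorini–Kossakowski–Sudarshan–Lindblad) form of this equation, with conventions for ħ and the rate constants varying across the literature, is
$$\dot\rho = -\frac{i}{\hbar}[H,\rho] + \sum_k \gamma_k\left(L_k\,\rho\,L_k^\dagger - \frac{1}{2}\left\{L_k^\dagger L_k,\,\rho\right\}\right),$$
where $\rho$ is the density matrix of the open system, $H$ its Hamiltonian, and the $L_k$ are jump operators encoding the influence of the environment.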
He published a monograph on two related conceptual problems in the foundations of statistical mechanics, concerning the derivation of the irreversibility of the observed macroscopic behavior from the reversible microscopic laws of motion and the definition of an entropy function on non-equilibrium quantum states.
He retired on July 1, 2005, and remained a professor emeritus thereafter.
References
Swedish physicists
Quantum physicists
1940 births
2022 deaths
People from Boden Municipality | Göran Lindblad (physicist) | [
"Physics"
] | 437 | [
"Quantum physicists",
"Quantum mechanics"
] |
56,015,648 | https://en.wikipedia.org/wiki/Rheotrauma | Rheotrauma is a medical term for the harm caused to a patient's lungs by high gas flows as delivered by mechanical ventilation. Although mechanical ventilation may prevent death of a patient from the hypoxia or hypercarbia which may be caused by respiratory failure, it can also damage the lungs, leading to ventilator-associated lung injury. Rheotrauma is one of the ways in which mechanical ventilation may do this, alongside volutrauma, barotrauma, atelectotrauma and biotrauma. Attempts have been made to combine all of the mechanical forces exerted by the ventilator on the patient's lungs into one all-encompassing term: mechanical power.
References
Respiratory therapy
Pulmonology
Emergency medicine
Medical equipment
Intensive care medicine
Lung disorders | Rheotrauma | [
"Biology"
] | 165 | [
"Medical equipment",
"Medical technology"
] |
56,018,407 | https://en.wikipedia.org/wiki/NOYB | NOYB – European Center for Digital Rights (styled as "noyb", from "none of your business") is a non-profit organization based in Vienna, Austria established in 2017 with a pan-European focus. Co-founded by Austrian lawyer and privacy activist Max Schrems, NOYB aims to launch strategic court cases and media initiatives in support of the General Data Protection Regulation (GDPR), the proposed ePrivacy Regulation, and information privacy in general. The organisation was established after a funding period during which it raised annual donations of €250,000 from supporting members. Currently, NOYB is financed by more than 4,400 supporting members.
While many privacy organisations focus attention on governments, NOYB puts its focus on privacy issues and privacy violations in the private sector. Under Article 80, the GDPR foresees that non-profit organizations can take action or represent users. NOYB is also recognized as a "qualified entity" to bring consumer class actions in Belgium.
Notable actions
EU–US data transfers/"Schrems I" (2016)
The Irish Data Protection Commission (DPC) filed a lawsuit against Schrems and Facebook in 2016, based on a complaint from 2013, which had led to the so-called "Safe Harbor Decision". Back then, the Court of Justice of the European Union (CJEU) had invalidated the Safe Harbor data transfer system with its decision. When the case was referred back to the DPC, the Irish regulator found that Facebook had in fact relied on Standard Contractual Clauses, not on the invalidated Safe Harbor. The DPC then found that there were "well-founded" concerns by Schrems under these instruments too, but instead of taking action against Facebook, initiated proceedings against Facebook and Schrems before the Irish High Court. The case was ultimately referred to the CJEU in C-311/18 (called "Schrems II"; see Max Schrems#Schrems II). NOYB supported this private case of Schrems.
"Forced consent" complaints (2018)
Within hours after General Data Protection Regulation rules went into effect on 25 May 2018, NOYB filed complaints against Facebook and subsidiaries WhatsApp and Instagram, as well as Google (targeting Android), for allegedly violating Article 7(4) by attempting to completely block use of their services if users decline to accept all data processing consents, in a bundled grant which also includes consents deemed unnecessary to use the service. Based on the complaint, the French data protection authority CNIL has issued a €50 million fine against Google. The other cases are still pending.
Spotify case (2019)
Since Spotify is based in Sweden, the Swedish data protection authority (IMY) was responsible. However, for over four years no decision was made on the complaint against the streaming service. In 2022, NOYB therefore filed a complaint for inaction in Sweden. The lawsuit was decided in favor of the privacy activists, and the IMY then imposed a GDPR fine of 58 million Swedish kronor (about EUR 5 million) on Spotify.
Apple tracking case (2020)
In mid November 2020, NOYB announced that complaints were filed to both the German and Spanish Data Protection Authorities, claiming "IDFA (Apple's Identifier for Advertisers) allows Apple and all apps on the phone to track a user and combine information about online and mobile behaviour". In a slight change from their previous legal strategy in other similar cases, NOYB notes that, because the complaint is based on Article 5(3) of the ePrivacy Directive and not the GDPR, the Spanish and German authorities can directly fine Apple, without appealing to EU Data Protection Authorities under the GDPR.
Open letter on GDPR cooperation mechanism (2020)
NOYB also focuses on putting pressure on regulators to enforce privacy laws on the books. In an open letter, the NGO has accused the Irish Data Protection Commission of acting too slowly and having 10 meetings with Facebook before the coming into application of the GDPR.
Schrems II – Court of Justice Judgment on Privacy Shield (2020)
On July 16, 2020, the Court of Justice of the European Union (CJEU) invalidated Privacy Shield and decided that Facebook and other companies that fall under US surveillance laws cannot rely on "Standard Contractual Clauses" (SCCs), since US surveillance laws were found to conflict with EU fundamental rights. This judgement was based on a long-running case of Max Schrems and NOYB. US companies' foreign customers' data are not protected from the U.S. intelligence services. The CJEU found that this violates the "essence" of certain EU fundamental rights.
The Court has also clarified that EU data protection authorities (DPAs) have a duty to take action. The Court highlighted that a DPA is "required to execute its responsibility for ensuring that the GDPR is fully enforced with all due diligence".
Despite the invalidations made by the judgment, absolutely "necessary" data flows can continue to flow under Article 49 of the GDPR. Any situation where users want their data to flow abroad is still legal, as this can be based on the informed consent of the user, which can be withdrawn at any time. Equally the law allows data flows for what is "necessary" to fulfil a contract.
Mass complaints on EU–US data transfers (2020)
After the Schrems II judgment, NOYB filed 101 complaints against EU/EEA controllers using Google Analytics or Facebook Connect and thereby transferring data to the US despite the Court finding (link to Privacy Shield) that US surveillance laws violate the essence of EU fundamental rights. The organization thereby wanted to point out the lack of enforcement of Schrems II. These model complaints led to the creation of a special taskforce by the European Data Protection Board (EDPB) which is tasked to coordinate the complaints and to prepare recommendations for controllers and processors. On January 12, 2022, the Austrian Data Protection Authority (DSB) reached a partial decision in favour of NOYB, stating that the continuous use of Google Analytics violates the GDPR. This decision affects most websites in the European Union since Google Analytics is the most common traffic analysis tool.
Google Advertising ID tracking (2021)
On April 7, 2021, NOYB filed a complaint in France charging that Android users were being tracked by Google without giving consent.
"Google's software creates the AAID without the user's knowledge or consent. The identification number functions like a license plate that uniquely identifies the phone of a user and can be shared among companies. After its creation, Google and third parties (e.g. applications providers and advertisers) can access the AAID to track users' behaviour, elaborate consumption preferences and provide personalised advertising. Such tracking is strictly regulated by the EU "Cookie Law" (Article 5(3) of the e-Privacy Directive) and requires the users' informed and unambiguous consent."
Facebook and DPC complaint (2021)
NOYB filed a complaint against the Irish Data Protection Commissioner (DPC) for corruption and possible bribery in 2021 under Austrian law for an affair concerning Facebook.
Administrative fine for Grindr over illegal sharing of user data (2021)
Together with the Norwegian Consumer Council, NOYB filed three strategic complaints against the dating app Grindr and several adtech companies over illegal sharing of users' data in January 2020. The data shared was GPS location, IP address, Advertising ID, age, gender and the fact that the user in question was on Grindr. Users could be identified through the data shared, and the recipients could potentially further share the data. These complaints are based on the report "Out of Control" by the Norwegian Consumer Council.
One year after the complaint was filed, the Norwegian Data Protection Authority upheld the complaint against Grindr, confirming that Grindr did not receive valid consent from users in an advance notification. The Authority imposed a fine of 100 million NOK (€9.63 million) on Grindr, which was then reduced to 65 million NOK (€6.5 million) in the final decision since Grindr's actual revenue was lower than previously assumed and the company undertook measures to remedy deficiencies in their previous consent management platform.
Action against the use of dark patterns in cookie banners (2021)
On August 10, 2021, NOYB filed 422 complaints against companies using deceptive cookie banners on their website. This wave of complaints was the outcome of a "Legal Tech" initiative by the organization in the course of which thousands of websites in Europe had been automatically checked for violations with a tool that was developed specifically for this purpose. In response to those complaints an EDPB task force was set up to exchange views on legal analysis and possible infringements and to streamline communication. In its effort to overcome the necessity of cookie banners, NOYB has also co-developed Advanced Data Protection Control together with the Sustainable Computing Lab of the Vienna University of Economics. The ADPC browser signal poses a feasible alternative to cookie banners through its automated mechanism for the communication of users' privacy decisions and data controllers' responses.
Austrian Court: Google Analytics illegal in Europe (2022)
In early 2022, an Austrian court ruled that the use of Google Analytics on European websites was illegal. The case in question was filed in August 2020 by a Google user who had accessed an Austrian website for health-related issues. The website used Google Analytics, and data about the user was transmitted to Google. The Google user complained to the Austrian data protection authority alongside NOYB. The issue directly concerns Article 44 of the GDPR, since the user cannot be afforded the established level of protection, making the transfer a clear violation of the GDPR. France's data watchdog CNIL concurred with the Austrian ruling in mid February 2022.
Furthermore, in mid 2022, the Austrian DPA also ruled that Google's anonymization was insufficient in protecting user privacy, and that Article 44 of GDPR does not allow for a risk-based approach that Google had argued for.
References
External links
2017 establishments in Austria
Information privacy
Information technology organisations based in Austria
Internet privacy organizations
Data protection
Cross-European advocacy groups
Privacy organizations | NOYB | [
"Engineering"
] | 2,122 | [
"Cybersecurity engineering",
"Information privacy"
] |
62,038,537 | https://en.wikipedia.org/wiki/BioMedical%20Engineering%20OnLine | BioMedical Engineering OnLine is a peer-reviewed online-only open access scientific journal covering biomedical engineering. It was established in 2002 and is published by BioMed Central. The editors-in-chief are Ervin Sejdic (University of Pittsburgh) and Fong-Chin Su (National Cheng Kung University). According to the Journal Citation Reports, the journal has a 2018 impact factor of 2.013.
References
External links
BioMed Central academic journals
Online-only journals
Academic journals established in 2002
English-language journals
Biomedical engineering journals | BioMedical Engineering OnLine | [
"Engineering",
"Biology"
] | 107 | [
"Biological engineering",
"Bioengineering stubs",
"Biotechnology stubs",
"Medical technology stubs",
"Medical technology"
] |
51,572,572 | https://en.wikipedia.org/wiki/Photo-reflectance | Photo-reflectance is an optical technique for investigating the material and electronic properties of thin films. Photo-reflectance measures the change in reflectivity of a sample in response to the application of an amplitude modulated light beam. In general, a photo-reflectometer consists of an intensity modulated "pump" light beam used to modulate the reflectivity of the sample, a second "probe" light beam used to measure the reflectance of the sample, an optical system for directing the pump and probe beams to the sample, and for directing the reflected probe light onto a photodetector, and a signal processor to record the differential reflectance. The pump light is typically modulated at a known frequency so that a lock-in amplifier may be used to suppress unwanted noise, resulting in the ability to detect reflectance changes at the ppm level.
The utility of photo-reflectance for characterization of semiconductor samples has been recognized since the late 1960s. In particular,
conventional photo-reflectance is closely related to electroreflectance in that the sample's internal electric field is modulated by the photo-injection of electron-hole pairs. The electro-reflectance response is sharply peaked near semiconductor interband transitions, which accounts for its usefulness in semiconductor characterization. Photo-reflectance spectroscopy has been used to determine semiconductor bandstructures, internal electric fields, and other material properties such as crystallinity, composition, physical strain, and doping concentration.
Etymology
The name "photo-reflectance" or "photoreflectance" is shortened from the term "photo-modulated reflectance," which describes the use of an intensity modulated light beam to perturb the reflectance of a sample. The technique has also been referred to as "modulated photo-reflectance," "modulated optical reflectance," and "photo-modulated optical reflectance." It has been known at least since 1967.
Basic principles
Photo-reflectance is a particularly convenient type of modulation spectroscopy, as it may be performed at room temperature and only requires the sample have a reflecting surface. It is an established tool for non-contact determination of material and electronic properties of semiconductor films. In photo-reflectance, a pump laser beam is used to modulate the free charge density in a semiconductor sample (via photo-injection), thereby modulating one or more physical quantities (e.g. the internal electric field). The measured signal ΔR is the change in amplitude of the reflected probe light as the intensity modulated pump radiation interacts with the sample. The normalized signal is ΔR/R, i.e. the pump-induced change in reflectance (AC) divided by the baseline reflectance (DC). The conventional photo-reflectance apparatus uses a spectroscopic source for the probe beam, such that the signal may be recorded as a function of the probe light's wavelength. Generally, the signal may be written:
ΔR/R = α Δε1 + β Δε2
where ΔR/R is the normalized change in reflectance, α (≡1/R×∂R/∂ε1) and β (≡1/R×∂R/∂ε2) are the "Seraphin coefficients" which contain filmstack information, and Δε1 and Δε2 are the pump-induced changes in the complex dielectric function. However, in conventional photo-reflectance analysis, it is not necessary to independently determine the refractive and absorptive components (the first and second terms in ΔR/R, respectively) of the signal. Rather, a fit to the overall signal is performed using the third derivative functional form given by Aspnes. This fit procedure yields the interband transition energies, amplitudes, and widths. However, because the signal depends on the uniformity of the perturbation, the extraction of such parameters must be treated with care.
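The Aspnes third derivative functional form is commonly quoted as (this is the standard expression from the modulation-spectroscopy literature, reproduced here for reference)
$$\frac{\Delta R}{R}(E) = \mathrm{Re}\left[\sum_j C_j\, e^{i\theta_j}\left(E - E_{g,j} + i\Gamma_j\right)^{-m_j}\right],$$
where E is the photon energy and, for each spectral feature j, the fit parameters $C_j$, $\theta_j$, $E_{g,j}$ and $\Gamma_j$ are the amplitude, phase, interband transition energy and broadening, with the exponent $m_j$ set by the dimensionality of the critical point.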
Experimental setup
The conventional photo-reflectance experimental setup uses a xenon or tungsten based lamp source passed through a monochromator to form the incident probe beam. The pump beam may be formed by the output of a continuous wave (CW) laser (e.g. a He-Ne or He-Cd laser) passed through a chopper wheel, or may be formed by the output of a directly modulated semiconductor diode laser. The pump beam is focused to a spot on the sample where it interacts with the sample. The probe beam is co-focused onto the sample where it is reflected. The reflected probe beam is collected and passed through an optical filter to eliminate any unwanted pump light and/or photoluminescence signal. Thereafter the probe beam is directed onto a photodetector (e.g. a Si or InGaAs photodiode), which converts the probe intensity to an electrical signal. The electrical signal is processed to eliminate unwanted noise, typically using a lock-in circuit referenced to the modulation frequency. The photo-reflectance signal is then recorded as a function of probe beam wavelength using a computer or the like.
Experimental considerations
In photo-reflectance, the sample's internal electric field is modulated by the photo-injection of electron-hole pairs (thus reducing the latent field). In order to achieve photo-injection, the energy of photons in the pump beam must exceed the band gap of material within the sample. Furthermore, semiconductors with little or no electric field will exhibit little or no electro-reflectance response. While this situation is not common, this point makes clear the importance of maintaining the probe intensity at a minimum, since any photo-injection of electron-hole pairs from the probe will necessarily offset the sample baseline condition by reducing the latent field. (Likewise, any CW component of the pump is undesirable.) Conversely, if the probe intensity is too low, detection may not be possible with conventional photodiodes. A further consideration is that phase-locked detection is a practical necessity due to the small size of the experimental signals (~ppm) and the unique ability of phase-locked detection methods to reject noise outside a narrow bandwidth centered on the modulation frequency.
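The following is a minimal sketch of the phase-locked (lock-in) detection principle described above, showing how a ppm-level modulated signal can be recovered from noise well above it; all numerical values are illustrative assumptions, not parameters of any particular instrument.
import numpy as np

fs = 1.0e5     # sampling rate (Hz)
f_mod = 1.0e3  # pump modulation frequency (Hz)
t = np.arange(0, 1.0, 1.0 / fs)

R0 = 1.0       # baseline (DC) reflectance signal at the photodetector
dR = 5e-6      # modulated component: 5 ppm of R0
noise = 1e-4 * np.random.randn(t.size)
signal = R0 + dR * np.sin(2 * np.pi * f_mod * t) + noise

# Phase-sensitive detection: mix with quadrature references at the modulation
# frequency, then average (the average acts as the lock-in's low-pass filter).
X = np.mean(signal * np.sin(2 * np.pi * f_mod * t))
Y = np.mean(signal * np.cos(2 * np.pi * f_mod * t))
amplitude = 2.0 * np.hypot(X, Y)  # recovered modulation amplitude

print(f"recovered dR/R ~ {amplitude / R0:.2e}")  # close to 5e-6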
Applications
Photo-reflectance is a highly sensitive measurement technique and provides unmatched capability for characterizing the material and electronic properties of thin films. Photo-reflectance has been particularly important in basic research on semiconductors due to its ability to precisely determine semiconductor bandstructures (even at room temperature). As an optical technique, photo-reflectance would appear suited to industrial applications because it is non-contact, and because it has good spatial resolution. However, the need for spectroscopic information limits measurement speed, and consequently the adoption of spectroscopic photo-reflectance in industrial applications such as process control of microelectronics manufacturing.
Nevertheless, where spectroscopic information is not required, photo-reflectance techniques have been implemented in semiconductor manufacturing process control. For example, in the late 1980s, Therma-Wave, Inc. introduced the "Therma-Probe" photo-modulated reflectance system to the market for semiconductor process control equipment. The original Therma-Probe focused an intensity modulated pump laser beam onto a spot on a silicon sample, modulating the sample reflectance. The reflectance changes were detected by a coincident laser probe beam of 633 nanometer wavelength. At this wavelength no electro-reflectance signal is present, since it is far removed from any interband transitions in silicon. Rather, the mechanisms responsible for the Therma-Probe signal are thermo-modulation and the Drude free carrier effect. The Therma-Probe was used primarily for monitoring of the ion implantation process in silicon semiconductor manufacturing. Measurement systems such as the Therma-Probe are particularly desirable in process control of microelectronics manufacturing because they provide the ability to quickly verify the correct execution of process steps, without contacting the wafer or removing the wafer from the clean room. Generally a number of measurements will be made on certain areas of the wafer and compared with expected values. As long as the measured values are within a certain range, the wafers are passed for continued processing. (This is known as statistical process control.) Other photo-modulated reflectance systems marketed for process control of implant processes are the "TWIN" metrology system marketed by PVA TePla AG, and the "PMR-3000" marketed by Semilab Co. Ltd (originally Boxer-Cross, Inc.).
However, by the mid 2000s, new manufacturing processes were requiring new process control capabilities, for example the need for control of new "diffusion-less" annealing processes and advanced strained silicon processes. To address these new process control requirements, in 2007, Xitronix Corporation introduced a photo-reflectance system to the semiconductor process control market. Like the Therma-Probe, the Xitronix metrology system utilized a fixed wavelength probe beam generated by a laser. However, the probe beam of the Xitronix system had a wavelength of approximately 375 nanometers, near the first major interband transition in silicon. At this wavelength the electro-modulation signal is dominant, which enabled the Xitronix system to precisely measure active doping concentration in diffusion-less annealing processes. This probe beam wavelength also provided excellent sensitivity to strain in strained silicon processes. More recently, the use of laser photo-reflectance technology for precision measurement of carrier diffusion lengths, recombination lifetimes, and mobilities has been demonstrated.
Spectroscopic vs. laser photo-reflectance
Spectroscopic photo-reflectance employs a broad band probe light source, which may cover wavelengths from the infrared to the ultraviolet. By fitting spectroscopic photo-reflectance data with the conventional third derivative functional form, a comprehensive set of interband transition energies, amplitudes, and widths may be obtained, providing an essentially complete characterization of the electronic properties of the sample of interest. However, owing to the need to keep the probe light intensity to a minimum and to the practical necessity of phase-locked detection, spectroscopic photo-reflectance measurements must be made sequentially, i.e. probe one wavelength at a time.
This constraint limits the speed of spectroscopic photo-reflectance measurements, and coupled with the need for a careful fit procedure, renders spectroscopic photo-reflectance more suitable for analytical applications. Conversely, laser photo-reflectance employs a monochromatic light source, and hence is well suited for industrial applications. Moreover, in commonly encountered situations, the coherent wavefront of laser probe beam may be used to isolate the refractive component of the photo-reflectance signal, greatly simplifying the data analysis.
Advantages
Photo-reflectance measures differential reflectivities as small as one part per million, whereas ellipsometry and/or standard reflectance measure differential reflectivities on the order of one part per thousand.
Photo-reflectance spectra exhibits sharp derivative-like structures localized at interband transition energies, whereas ellipsometry and/or standard reflectance exhibit broad slowly varying spectra.
The photo-reflectance response at a particular wavelength typically arises from specific interband transitions confined to specific materials within the sample.
By using phase-locked detection methods, ambient (nonsynchronous) light does not influence photo-reflectance measurements.
By using a laser probe beam, the refractive part of the photo-reflectance response can be isolated without the necessity to take spectroscopic data or perform a fit procedure.
Laser photo-reflectance has been proven in statistical process control for microelectronics manufacturing for over three decades.
See also
Spectroscopy
Ellipsometry
Photoluminescence
Raman spectroscopy
Surface photovoltage
Terahertz time-domain spectroscopy
References
Further reading
Semiconductors and Semimetals, Vol. 9 ("Modulation Techniques"), edited by R.K Willardson and A.C. Beer, (Academic Press, New York, 1972).
F.H. Pollack, "Modulation Spectroscopy of Semiconductors and Semiconductor Microstructures," in Handbook on Semiconductors, Vol. 2 ("Optical Properties of Semiconductors"), edited by M. Balkanski, pp. 527–635 (North-Holland, Amsterdam, 1994).
A.M. Mansanares, "Optical Detection of Photothermal Phenomena in Operating Electronic Devices: Temperature and Defect Imaging," in Progress in Photothermal and Photoacoustic Science and Technology, Vol. 4 ("Semiconductors and Electronic Materials"), edited by A. Mandelis and P. Hess, pp. 73–108 (SPIE Press, Bellingham, WA, 2000).
Optical metrology
Spectroscopy
Semiconductor analysis
Condensed matter physics | Photo-reflectance | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,593 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Spectroscopy",
"Matter"
] |
51,574,459 | https://en.wikipedia.org/wiki/Time%20Warp%20Edit%20Distance | In the data analysis of time series, Time Warp Edit Distance (TWED) is a measure of similarity (or dissimilarity) between pairs of discrete time series, controlling the relative distortion of the time units of the two series using the physical notion of elasticity. In comparison to other distance measures (e.g. DTW (dynamic time warping) or LCS (longest common subsequence problem)), TWED is a metric. Its computational time complexity is O(n²), but can be drastically reduced in some specific situations by using a corridor to reduce the search space. Its memory space complexity can be reduced to O(n). It was first proposed in 2009 by P.-F. Marteau.
Definition
The TWED distance between a time series $A = (a_1, \dots, a_n)$ with time stamps $(t^A_1, \dots, t^A_n)$ and a time series $B = (b_1, \dots, b_m)$ with time stamps $(t^B_1, \dots, t^B_m)$ is defined (consistently with the reference implementations below) through the dynamic programming recursion
$$\delta(i,j) = \min\begin{cases} \delta(i-1,j) + d(a_{i-1},a_i) + \nu\,(t^A_i - t^A_{i-1}) + \lambda & \text{(deletion in } A\text{)}\\ \delta(i,j-1) + d(b_{j-1},b_j) + \nu\,(t^B_j - t^B_{j-1}) + \lambda & \text{(deletion in } B\text{)}\\ \delta(i-1,j-1) + d(a_i,b_j) + d(a_{i-1},b_{j-1}) + \nu\,(|t^A_i - t^B_j| + |t^A_{i-1} - t^B_{j-1}|) & \text{(match)}\end{cases}$$
where $d(\cdot,\cdot)$ is an $L_p$ (Minkowski) distance on the sample values, $\lambda \ge 0$ is a constant penalty for a deletion operation, and $\nu \ge 0$ is the stiffness (elasticity) parameter. The recursion is initialized as
$$\delta(0,0) = 0, \qquad \delta(i,0) = \delta(0,j) = \infty \quad (i, j \ge 1),$$
with the padding convention $a_0 = b_0 = 0$ and $t^A_0 = t^B_0 = 0$; the distance is $\mathrm{TWED}_{\lambda,\nu}(A,B) = \delta(n,m)$.
Implementations
An implementation of the TWED algorithm in C with a Python wrapper is available at
TWED is also implemented into the Time Series Subsequence Search Python package (TSSEARCH for short) available at .
An R implementation of TWED has been integrated into TraMineR, an R package for mining, describing and visualizing sequences of states or events, and more generally discrete sequence data.
Additionally, cuTWED is a CUDA-accelerated implementation of TWED which uses an improved algorithm due to G. Wright (2020). This method is linear in memory and massively parallelized. cuTWED is written in CUDA C/C++, comes with Python bindings, and also includes Python bindings for Marteau's reference C implementation.
Python
import numpy as np


def dlp(A, B, p=2):
    # Lp (Minkowski) distance between two samples (p = 2 gives Euclidean)
    cost = np.sum(np.power(np.abs(A - B), p))
    return np.power(cost, 1 / p)


def twed(A, timeSA, B, timeSB, nu, _lambda):
    """Compute Time Warp Edit Distance (TWED) for given time series A and B."""
    # [distance, DP] = twed(A, timeSA, B, timeSB, nu, lambda)
    #
    # A       := Time series A (e.g. [10, 2, 30, 4])
    # timeSA  := Time stamps of time series A (e.g. 1:4)
    # B       := Time series B
    # timeSB  := Time stamps of time series B
    # nu      := Elasticity parameter - nu >= 0 needed for distance measure
    # _lambda := Penalty for deletion operation
    # Reference:
    #    Marteau, P.-F. (2009). "Time Warp Edit Distance with Stiffness Adjustment
    #    for Time Series Matching". IEEE Transactions on Pattern Analysis and
    #    Machine Intelligence. 31 (2): 306-318. arXiv:cs/0703033
    #    http://people.irisa.fr/Pierre-Francois.Marteau/

    # Check input arguments
    if len(A) != len(timeSA):
        print("The length of A is not equal to the length of timeSA")
        return None, None
    if len(B) != len(timeSB):
        print("The length of B is not equal to the length of timeSB")
        return None, None
    if nu < 0:
        print("nu is negative")
        return None, None

    # Add padding
    A = np.array([0] + list(A))
    timeSA = np.array([0] + list(timeSA))
    B = np.array([0] + list(B))
    timeSB = np.array([0] + list(timeSB))

    n = len(A)
    m = len(B)

    # Dynamic programming matrix; first row and column are set to infinity
    DP = np.zeros((n, m))
    DP[0, :] = np.inf
    DP[:, 0] = np.inf
    DP[0, 0] = 0

    # Compute minimal cost
    for i in range(1, n):
        for j in range(1, m):
            # Calculate and save cost of the three elementary operations
            C = np.ones((3, 1)) * np.inf
            # Deletion in A
            C[0] = (
                DP[i - 1, j]
                + dlp(A[i - 1], A[i])
                + nu * (timeSA[i] - timeSA[i - 1])
                + _lambda
            )
            # Deletion in B
            C[1] = (
                DP[i, j - 1]
                + dlp(B[j - 1], B[j])
                + nu * (timeSB[j] - timeSB[j - 1])
                + _lambda
            )
            # Keep data points in both time series
            C[2] = (
                DP[i - 1, j - 1]
                + dlp(A[i], B[j])
                + dlp(A[i - 1], B[j - 1])
                + nu * (abs(timeSA[i] - timeSB[j]) + abs(timeSA[i - 1] - timeSB[j - 1]))
            )
            # Choose the operation with the minimal cost and update the DP matrix
            DP[i, j] = np.min(C)

    distance = DP[n - 1, m - 1]
    return distance, DP
Backtracking, to find the most cost-efficient path:
def backtracking(DP):
    """Compute the most cost-efficient path through the DP matrix."""
    # [best_path] = backtracking(DP)
    # DP := DP matrix of the twed function

    x = np.shape(DP)
    i = x[0] - 1
    j = x[1] - 1

    # The indices of the path are collected in reverse order
    best_path = []

    steps = 0
    while i != 0 or j != 0:
        best_path.append((i - 1, j - 1))

        C = np.ones((3, 1)) * np.inf
        # Keep data points in both time series
        C[0] = DP[i - 1, j - 1]
        # Deletion in A
        C[1] = DP[i - 1, j]
        # Deletion in B
        C[2] = DP[i, j - 1]

        # Find the index of the lowest cost
        idx = np.argmin(C)

        if idx == 0:
            # Keep data points in both time series
            i = i - 1
            j = j - 1
        elif idx == 1:
            # Deletion in A
            i = i - 1
        else:
            # Deletion in B
            j = j - 1
        steps = steps + 1

    best_path.append((i - 1, j - 1))
    best_path.reverse()
    return best_path[1:]
MATLAB
function [distance, DP] = twed(A, timeSA, B, timeSB, lambda, nu)
    % [distance, DP] = TWED( A, timeSA, B, timeSB, lambda, nu )
    % Compute Time Warp Edit Distance (TWED) for given time series A and B
    %
    % A      := Time series A (e.g. [ 10 2 30 4])
    % timeSA := Time stamp of time series A (e.g. 1:4)
    % B      := Time series B
    % timeSB := Time stamp of time series B
    % lambda := Penalty for deletion operation
    % nu     := Elasticity parameter - nu >= 0 needed for distance measure
    %
    % Code by: P.-F. Marteau - http://people.irisa.fr/Pierre-Francois.Marteau/

    % Check input arguments
    if length(A) ~= length(timeSA)
        warning('The length of A is not equal to the length of timeSA')
        return
    end
    if length(B) ~= length(timeSB)
        warning('The length of B is not equal to the length of timeSB')
        return
    end
    if nu < 0
        warning('nu is negative')
        return
    end

    % Add padding
    A = [0 A];
    timeSA = [0 timeSA];
    B = [0 B];
    timeSB = [0 timeSB];

    % Dynamic programming
    DP = zeros(length(A), length(B));

    % Initialize DP matrix and set first row and column to infinity
    DP(1, :) = inf;
    DP(:, 1) = inf;
    DP(1, 1) = 0;

    n = length(timeSA);
    m = length(timeSB);

    % Compute minimal cost
    for i = 2:n
        for j = 2:m
            % Calculate and save cost of various operations
            C = ones(3, 1) * inf;
            % Deletion in A
            C(1) = DP(i - 1, j) + Dlp(A(i - 1), A(i)) + nu * (timeSA(i) - timeSA(i - 1)) + lambda;
            % Deletion in B
            C(2) = DP(i, j - 1) + Dlp(B(j - 1), B(j)) + nu * (timeSB(j) - timeSB(j - 1)) + lambda;
            % Keep data points in both time series
            C(3) = DP(i - 1, j - 1) + Dlp(A(i), B(j)) + Dlp(A(i - 1), B(j - 1)) + ...
                nu * (abs(timeSA(i) - timeSB(j)) + abs(timeSA(i - 1) - timeSB(j - 1)));
            % Choose the operation with the minimal cost and update DP matrix
            DP(i, j) = min(C);
        end
    end
    distance = DP(n, m);

    % Function to calculate the Lp (here Euclidean, p = 2) distance
    function [cost] = Dlp(A, B)
        cost = sqrt(sum((A - B) .^ 2, 2));
    end
end
Backtracking, to find the most cost-efficient path:
function [path] = backtracking(DP)
    % [ path ] = BACKTRACKING ( DP )
    % Compute the most cost-efficient path
    % DP := DP matrix of the TWED function

    x = size(DP);
    i = x(1);
    j = x(2);

    % The indices of the path are collected in reverse order
    path = ones(i + j, 2) * Inf;

    steps = 1;
    while (i ~= 1 || j ~= 1)
        path(steps, :) = [i; j];

        C = ones(3, 1) * inf;
        % Keep data points in both time series
        C(1) = DP(i - 1, j - 1);
        % Deletion in A
        C(2) = DP(i - 1, j);
        % Deletion in B
        C(3) = DP(i, j - 1);

        % Find the index of the lowest cost
        [~, idx] = min(C);

        switch idx
            case 1
                % Keep data points in both time series
                i = i - 1;
                j = j - 1;
            case 2
                % Deletion in A
                i = i - 1;
            case 3
                % Deletion in B
                j = j - 1;
        end
        steps = steps + 1;
    end
    path(steps, :) = [i j];

    % The path was computed in reverse order, so flip it
    path = path(1:steps, :);
    path = path(end:-1:1, :);
end
References
Time series
Algorithms | Time Warp Edit Distance | [
"Mathematics"
] | 2,574 | [
"Algorithms",
"Mathematical logic",
"Applied mathematics"
] |
65,820,188 | https://en.wikipedia.org/wiki/Qaisar%20Shafi | Qaisar Shafi is a Pakistani-American theoretical physicist and the Inaugural Bartol Research Institute Professor of Physics at the University of Delaware.
Biography
Shafi grew up in Karachi, Pakistan and lived there until his early teens when his family moved to London, United Kingdom. After graduating as valedictorian from Holland Park School, London, UK, he studied physics at Imperial College, London, where he received both his B.Sc. Honors and PhD. His PhD advisor was the late Nobel Laureate Professor Abdus Salam, whom he subsequently joined at the International Centre for Theoretical Physics (ICTP) in Trieste, Italy. Shafi was awarded an Alexander von Humboldt Prize and spent some years in Germany (Munich, Aachen, and Freiburg). In 1978, he received his Habilitation with Venia Legendi from the University of Freiburg. He then spent two years at CERN (Geneva, Switzerland) after which he moved to the United States. Since 1983, Shafi has been a faculty member at the Bartol Research Institute, University of Delaware, which in 2005 merged with the Department of Physics and Astronomy.
Shafi has done pioneering research in areas ranging from Grand Unification and Kaluza-Klein theories to inflationary cosmology and supersymmetric theories, and he is widely regarded as a leader in these fields. He has published more than 300 papers in refereed journals, many of them in the most prestigious journals in the field, and has lectured at close to 250 conferences, workshops, and universities.
Research Work
Contemporary high energy physics could be subdivided into the energy frontier, the cosmic frontier and the intensity frontier. Shafi, whose work is highly interdisciplinary, has made pioneering contributions in all three areas.
Shafi’s work has focused on Grand Unified Theories (GUTs), Yukawa coupling unification, dark matter and collider physics, inflationary cosmology, topological defects, thermal inflation, superstring phenomenology and related topics. His pioneering works include:
Discovery of stable cosmic strings with Sir Tom Kibble and George Lazarides in Grand Unified Theories
Discovery of discrete Z_2 symmetry in SO (10) with Sir Tom Kibble and George Lazarides. This gauge Z_2 symmetry plays a critical role in explaining why the dark matter in the universe is stable.
Discovery of topological defects called “walls bounded by strings” with Sir Tom Kibble and George Lazarides. These topological structures were discovered recently in superfluid 3He and are called Kibble-Lazarides-Shafi (KLS) walls.
Discovery of type II seesaw mechanism with George Lazarides and Christof Wetterich in Grand Unified Theories
Discovery that axionic strings are superconducting with George Lazarides
Pioneering paper with George Lazarides on non-thermal leptogenesis in inflationary cosmology
Discovery of Yukawa unification in supersymmetric GUTs with Balasubramanian Ananthanarayan and George Lazarides
Novel mechanism (Lazarides-Shafi mechanism) for solving the axion domain wall problem
D-brane inflation with Giorgi Dvali and Sviatoslav Solganik
Shafi-Vilenkin Inflationary Model
Fermion mass hierarchies in five dimensional models with Stephan Huber
Supersymmetric Hybrid Inflation with Giorgi Dvali and Robert Schaefer
Outreach Work
Shafi has done also extensive outreach work for the scientific community. From the early 1980s until 1997, he organized/co-organized several weeks long summer schools at the International Centre for Theoretical Physics in Trieste. For more than fifteen years, Shafi was one of the key organizers for each summer school.
In addition, he was also one of the principal organizers of the BCVSPIN (acronym denoting the countries Bangladesh-China-Vietnam-Sri Lanka-Pakistan-India-Nepal) schools, which he co-founded in 1989 with Professors Abdus Salam, Jogesh Pati and Yu-Lu. The concept underlying BCVSPIN was to allow young scientists living in underserved regions to engage in research. Professor Shafi organized, lectured at, and led numerous BCVSPIN schools as well as associated preparatory schools, and thus helped lay the groundwork for the successful careers of many graduate students and postdoctoral fellows while also keeping track of their progress. He directed or co-directed the BCVSPIN summer schools from 1989 to 1997 and after a hiatus of several years, caused by the shifting political climate in Nepal, single-handedly resurrected the school in 2007, organizing highly successful schools in China, Vietnam and also branching out to Mexico.
Personal life
Qaisar Shafi is married to Monika Shafi, the Elias Ahuja Professor Emerita of German Literature at the University of Delaware. They have a daughter and a son.
References
External links
Alumni of Imperial College London
University of Delaware people
University of Freiburg alumni
Living people
Theoretical physicists
Fellows of the American Physical Society
English physicists
American academics of Pakistani descent
Year of birth missing (living people)
People associated with CERN | Qaisar Shafi | [
"Physics"
] | 1,029 | [
"Theoretical physics",
"Theoretical physicists"
] |
65,820,239 | https://en.wikipedia.org/wiki/Tau%20function%20%28integrable%20systems%29 | Tau functions are an important ingredient in the modern mathematical theory of integrable systems, and have numerous applications in a variety of other domains. They were originally introduced by Ryogo Hirota in his direct method approach to soliton equations, based on expressing them in an equivalent bilinear form.
The term tau function, or τ-function, was first used systematically by Mikio Sato and his students in the specific context of the Kadomtsev–Petviashvili (or KP) equation and related integrable hierarchies. It is a central ingredient in the theory of solitons. In this setting, given any τ-function satisfying a Hirota-type system of bilinear equations (see below), the corresponding solutions of the equations of the integrable hierarchy are explicitly expressible in terms of it and its logarithmic derivatives up to a finite order. Tau functions also appear as matrix model partition functions in the spectral theory of random matrices, and may also serve as generating functions, in the sense of combinatorics and enumerative geometry, especially in relation to moduli spaces of Riemann surfaces, and enumeration of branched coverings, or so-called Hurwitz numbers.
There are two notions of τ-functions, both introduced by the Sato school. The first is that of isospectral τ-functions of the Sato–Segal–Wilson type for integrable hierarchies, such as the KP hierarchy, which are parametrized by linear operators satisfying isospectral deformation equations of Lax type. The second is that of isomonodromic τ-functions.
Depending on the specific application, a τ-function may be: 1) an analytic function of a finite or infinite number of independent, commuting flow variables, or deformation parameters; 2) a discrete function of a finite or infinite number of denumerable variables; 3) a formal power series expansion in a finite or infinite number of expansion variables, which need have no convergence domain, but serves as a generating function for certain enumerative invariants appearing as the coefficients of the series; 4) a finite or infinite (Fredholm) determinant whose entries are either specific polynomial or quasi-polynomial functions, or parametric integrals, and their derivatives; 5) the Pfaffian of a skew symmetric matrix (either finite or infinite dimensional) with entries similarly of polynomial or quasi-polynomial type. Examples of all these types are given below.
In the Hamilton–Jacobi approach to Liouville integrable Hamiltonian systems, Hamilton's principal function, evaluated on the level surfaces of a complete set of Poisson commuting invariants, plays a role similar to that of the τ-function, serving both as a generating function for the canonical transformation to linearizing canonical coordinates and, when evaluated on simultaneous level sets of a complete set of Poisson commuting invariants, as a complete solution of the Hamilton–Jacobi equation.
Tau functions: isospectral and isomonodromic
A τ-function of isospectral type is defined as a solution of the Hirota bilinear equations (see below), from which the linear operator undergoing isospectral evolution can be uniquely reconstructed. Geometrically, in the Sato and Segal-Wilson sense, it is the value of the determinant of a Fredholm integral operator, interpreted as the orthogonal projection of an element of a suitably defined (infinite dimensional) Grassmann manifold onto the origin, as that element evolves under the linear exponential action of a maximal abelian subgroup of the general linear group. It typically arises as a partition function, in the sense of statistical mechanics, many-body quantum mechanics or quantum field theory, as the underlying measure undergoes a linear exponential deformation.
Isomonodromic τ-functions for linear systems of Fuchsian type are defined below. For the more general case of linear ordinary differential equations with rational coefficients, including irregular singularities, they are developed in the references.
Hirota bilinear residue relation for KP tau functions
A KP (Kadomtsev–Petviashvili) τ-function τ(t)
is a function of an infinite collection of variables t = (t₁, t₂, …) (called KP flow variables) that satisfies the bilinear formal residue equation
identically in the variables, where the residue is understood as the
coefficient of z⁻¹ in the formal Laurent expansion resulting from expanding all factors as Laurent series in z.
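In one standard form (sign and normalization conventions differ across the literature), this bilinear residue relation may be written
$$\oint_{z=\infty} \tau\left(\mathbf{t} - [z^{-1}]\right)\,\tau\left(\mathbf{s} + [z^{-1}]\right) e^{\xi(\mathbf{t},z) - \xi(\mathbf{s},z)}\, dz = 0 \quad \text{for all } \mathbf{t}, \mathbf{s},$$
where $\xi(\mathbf{t},z) := \sum_{i\ge 1} t_i z^i$, $[z^{-1}] := \left(z^{-1}, \tfrac{1}{2}z^{-2}, \tfrac{1}{3}z^{-3}, \dots\right)$, and the formal contour integral extracts the coefficient of $z^{-1}$ in the Laurent expansion of the integrand.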
As explained in the sections below, every such τ-function determines a set of solutions to the equations of the KP hierarchy.
Kadomtsev–Petviashvili equation
If τ(t) is a KP τ-function satisfying
the Hirota residue equation () and we identify the first three flow variables as x = t₁, y = t₂, t = t₃,
it follows that the function
$$u(x, y, t) = 2\,\frac{\partial^2}{\partial x^2} \log \tau$$
satisfies the 2 (spatial) + 1 (time) dimensional nonlinear partial differential equation
$$3\,u_{yy} = \frac{\partial}{\partial x}\left(4 u_t - 6 u u_x - u_{xxx}\right)$$
(in one standard normalization), known as the Kadomtsev-Petviashvili (KP) equation. This equation plays a prominent role in plasma physics and in shallow water ocean waves.
Taking further logarithmic derivatives of τ gives an infinite sequence of functions that satisfy further systems of nonlinear autonomous PDEs, each involving partial derivatives of finite order with respect to a finite number of the KP flow parameters. These are collectively known as the KP hierarchy.
Formal Baker–Akhiezer function and the KP hierarchy
If we define the (formal) Baker-Akhiezer function
by Sato's formula
and expand it as a formal series in the powers of the variable
this satisfies an infinite sequence of compatible evolution equations
where is a linear ordinary differential operator of degree
in the variable , with coefficients that are functions of the flow variables
, defined as follows
where is the formal pseudo-differential operator
with ,
is the wave operator and
denotes the projection to the part of containing
purely non-negative powers of ; i.e. the differential operator part of .
The pseudodifferential operator satisfies the infinite system of isospectral deformation equations
and the compatibility conditions for both the system () and
() are
This is a compatible infinite system of nonlinear partial differential equations, known as the KP (Kadomtsev-Petviashvili) hierarchy, for the functions , with respect to the set of independent variables, each of which contains only a finite number of 's, and derivatives only with respect to the three independent variables . The first nontrivial case of these
is the Kadomtsev-Petviashvili equation ().
Thus, every KP τ-function provides a solution, at least in the formal sense, of this infinite system of nonlinear partial differential equations.
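For reference, Sato's formula used above is commonly written (in the conventions of the Sato school) as
$$\psi(z, \mathbf{t}) = \frac{\tau\left(\mathbf{t} - [z^{-1}]\right)}{\tau(\mathbf{t})}\, e^{\xi(\mathbf{t},z)}, \qquad \xi(\mathbf{t},z) = \sum_{i\ge 1} t_i z^i, \quad [z^{-1}] = \left(z^{-1}, \tfrac{1}{2}z^{-2}, \tfrac{1}{3}z^{-3}, \dots\right).$$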
Isomonodromic systems. Isomonodromic tau functions
Fuchsian isomonodromic systems. Schlesinger equations
Consider the overdetermined system of first order matrix partial differential equations
where are a set of traceless matrices,
a set of complex parameters, a complex variable, and is an invertible matrix valued function of and .
These are the necessary and sufficient conditions for the based monodromy representation of the fundamental group
of the Riemann sphere punctured at
the points corresponding to the rational covariant derivative operator
to be independent of the parameters ; i.e. that changes in these parameters induce an isomonodromic deformation. The compatibility conditions for this system are the Schlesinger equations
Isomonodromic -function
Defining functions
the Schlesinger equations () imply that the differential form
on the space of parameters is closed:
and hence, locally exact. Therefore, at least locally, there exists a function
of the parameters, defined within a multiplicative constant, such that
The function τ is called the isomonodromic τ-function
associated to the fundamental solution of the system (), ().
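In the conventions most commonly used (those of Jimbo, Miwa and Ueno; normalizations vary slightly in the literature), the functions referred to above are
$$H_i := \sum_{j \ne i} \frac{\mathrm{tr}(A_i A_j)}{\lambda_i - \lambda_j}, \qquad i = 1, \dots, n,$$
the closed differential form is $\omega = \sum_{i=1}^n H_i\, d\lambda_i$, and the isomonodromic τ-function is defined by $d \log \tau = \omega$.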
Hamiltonian structure of the Schlesinger equations
Defining the Lie Poisson brackets on the space of -tuples of matrices:
and viewing the functions defined in () as Hamiltonian functions on this Poisson space, the Schlesinger equations ()
may be expressed in Hamiltonian form as
for any differentiable function .
Reduction of , case to
The simplest nontrivial case of the Schlesinger equations is when the matrices are 2 × 2 and there are three finite poles. By applying a Möbius transformation to the variable,
two of the finite poles may be chosen to be at 0 and 1, and the third viewed as the independent variable.
Setting the sum of the matrices appearing in
(), which is an invariant of the Schlesinger equations, equal to a constant, and quotienting by its stabilizer under conjugation, we obtain a system equivalent to the most generic case of the six Painlevé transcendent equations, for which many detailed classes of explicit solutions are known.
Non-Fuchsian isomonodromic systems
For non-Fuchsian systems, with higher order poles, the generalized monodromy data include Stokes matrices and connection matrices, and there are further isomonodromic deformation parameters associated with the local asymptotics, but the isomonodromic -functions may be defined in a similar way, using differentials on the extended parameter space.
There is similarly a Poisson bracket structure on the space of rational matrix valued functions of the spectral parameter and corresponding spectral invariant Hamiltonians that generate the isomonodromic deformation dynamics.
Taking all possible confluences of the poles appearing in () for the and case, including the one at , and making the corresponding reductions, we obtain all other instances
of the Painlevé transcendents, for which
numerous special solutions are also known.
Fermionic VEV (vacuum expectation value) representations
The fermionic Fock space , is a semi-infinite exterior product space
defined on a (separable) Hilbert space with basis elements
and dual basis elements
for .
The free fermionic creation and annihilation operators
act as endomorphisms on
via exterior and interior multiplication by the basis elements
and satisfy the canonical anti-commutation relations
These generate the standard fermionic representation of the Clifford algebra
on the direct sum , corresponding to the scalar product
with the Fock space as irreducible module.
Denote the vacuum state, in the zero fermionic charge sector , as
,
which corresponds to the Dirac sea of states along the real integer lattice in which all negative integer locations are occupied and all non-negative ones are empty.
This is annihilated by the following operators
The dual fermionic Fock space vacuum state, denoted , is annihilated by the adjoint operators, acting to the left
Normal ordering of a product of
linear operators (i.e., finite or infinite linear combinations of creation and annihilation operators) is defined so that its vacuum expectation value (VEV) vanishes
In particular, for a product of a pair of linear operators, one has
The fermionic charge operator is defined as
The subspace is the eigenspace of
consisting of all eigenvectors with eigenvalue
.
The standard orthonormal basis for the zero fermionic charge sector is labelled by integer partitions
,
where
is a weakly decreasing sequence of positive integers, which can equivalently be represented by a Young diagram, as depicted here for the partition
.
An alternative notation for a partition consists of the
Frobenius indices
, where
denotes the arm length; i.e. the number of boxes in the Young diagram to the right of the 'th diagonal box, denotes the leg length, i.e. the number of boxes in the Young diagram below the 'th diagonal box, for , where is the Frobenius rank, which is the number of elements along the principal diagonal.
The basis element is then given by acting on the vacuum with a product
of pairs of creation and annihilation operators, labelled by the Frobenius indices
The integers indicate, relative to the Dirac sea, the occupied non-negative sites on the integer lattice while
indicate the unoccupied negative integer sites.
The corresponding diagram, consisting of infinitely many occupied and unoccupied sites on the integer lattice that are a finite perturbation of the Dirac sea are referred to as a Maya diagram.
The case of the null (emptyset) partition gives the vacuum state, and the dual basis is defined by
Any KP -function can be expressed as a sum
where are the KP flow variables,
is the Schur function
corresponding to the partition , viewed as a function of the normalized power sum variables
in terms of an auxiliary (finite or infinite) sequence of variables
and the constant coefficients
may be viewed as the Plücker coordinates of an
element
of the infinite dimensional Grassmannian consisting of the orbit, under the action of
the general linear group , of the subspace
of the Hilbert space .
This corresponds, under the Bose-Fermi correspondence, to a decomposable element
of the Fock space which, up to projectivization, is the image of the Grassmannian element under the
Plücker map
where is a basis for the subspace
and denotes projectivization of
an element of .
The Plücker coordinates satisfy an infinite set of bilinear
relations, the Plücker relations, defining the image of the Plücker embedding
into the projectivization of the fermionic Fock space,
which are equivalent to the Hirota bilinear residue relation ().
If for a group element
with fermionic representation , then the -function can be expressed as the fermionic vacuum state expectation value (VEV):
where
is the abelian subgroup of that generates the KP flows, and
are the ""current"" components.
Examples of solutions to the equations of the KP hierarchy
Schur functions
As seen in equation (), every KP -function can be represented (at least formally) as a linear combination of Schur functions, in which the coefficients satisfy the bilinear set of Plucker relations corresponding to an element of an infinite (or finite) Grassmann manifold. In fact, the simplest class of (polynomial) tau functions consists of the Schur functions themselves, which correspond to the special element of the Grassmann manifold whose image under the Plücker map is .
Multisoliton solutions
If we choose complex constants
with 's all distinct, , and define the functions
we arrive at the Wronskian determinant formula
which gives the general multi-soliton τ-function.
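As an illustration, the following sympy sketch builds a 2-soliton τ-function from a Wronskian of exponential functions and the corresponding KP solution u = 2 ∂²ₓ log τ; the specific choice f_k = exp(ξ(t, p_k)) + a_k exp(ξ(t, q_k)), with ξ truncated to the first three flow variables, is the standard one in the soliton literature and is assumed here rather than taken from this article. All parameter values are arbitrary.
import sympy as sp

x, y, t = sp.symbols('x y t')  # identified with the flow variables t1, t2, t3

def xi(z):
    # xi(t, z) = sum_i t_i z**i, truncated to the first three KP flows
    return x * z + y * z**2 + t * z**3

# Arbitrary soliton parameters p_k, q_k, a_k, with the p's distinct.
p, q, a = [2, 3], [sp.Rational(1, 2), 1], [1, 1]

# f_k = exp(xi(p_k)) + a_k * exp(xi(q_k))
f = [sp.exp(xi(p[k])) + a[k] * sp.exp(xi(q[k])) for k in range(2)]

# The Wronskian determinant with respect to x gives the 2-soliton tau function,
tau = sp.wronskian(f, x)

# and u = 2 d^2/dx^2 log(tau) is the corresponding KP solution.
u = sp.simplify(2 * sp.diff(sp.log(tau), x, 2))
print(u)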
Theta function solutions associated to algebraic curves
Let be a compact Riemann surface of genus and fix a canonical homology basis
of with intersection numbers
Let be a basis for the space of holomorphic differentials satisfying the standard normalization conditions
where is the Riemann matrix of periods.
The matrix belongs to the Siegel upper half space
The Riemann theta function corresponding to the period matrix is defined to be
Choose a point , a local parameter in a neighbourhood of with and
a positive divisor of degree
For any positive integer let be the unique meromorphic differential of the second kind characterized by the following conditions:
The only singularity of is a pole of order at with vanishing residue.
The expansion of around is
.
is normalized to have vanishing -cycles:
Denote by the vector of -cycles of :
Denote the image of under the Abel map
with arbitrary base point .
Then the resulting theta function expression is a KP τ-function.
Matrix model partition functions as KP -functions
Let be the Lebesgue measure on the dimensional space of complex Hermitian matrices.
Let be a conjugation invariant integrable density function
Define a deformation family of measures
for small and let
be the partition function for this random matrix model.
Then this partition function satisfies the bilinear Hirota residue equation (), and hence is a τ-function of the KP hierarchy.
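Written out explicitly (this is the standard exponential deformation used in such constructions; the notation here is supplied for readability), the deformed partition function takes the form
$$\tau_n(\mathbf{t}) = \int_{\mathcal{H}_{n\times n}} \rho(M)\, e^{\sum_{i\ge 1} t_i\, \mathrm{tr}\, M^i}\, d\mu_0(M),$$
where $\mathcal{H}_{n\times n}$ denotes the space of n × n complex Hermitian matrices, $d\mu_0$ the Lebesgue measure and $\rho$ the conjugation-invariant density introduced above.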
-functions of hypergeometric type. Generating function for Hurwitz numbers
Let $r := (r_j)_{j \in \mathbb{Z}}$ be a (doubly) infinite sequence of complex numbers.
For any integer partition $\lambda = (\lambda_1 \ge \lambda_2 \ge \cdots)$ define the content product coefficient
$$r_\lambda := \prod_{(i,j) \in \lambda} r_{j-i},$$
where the product is over all pairs $(i, j)$ of positive integers that correspond to boxes of the Young diagram of the partition $\lambda$, viewed as positions of matrix elements of the corresponding
$\ell(\lambda) \times \lambda_1$ matrix.
Then, for every pair of infinite sequences $\mathbf{t} = (t_1, t_2, \dots)$ and $\mathbf{s} = (s_1, s_2, \dots)$ of complex variables, viewed as (normalized) power sums
of the infinite sequence of auxiliary variables
$\mathbf{x} = (x_1, x_2, \dots)$ and $\mathbf{y} = (y_1, y_2, \dots)$,
defined by:
$$t_i := \frac{p_i(\mathbf{x})}{i}, \qquad s_i := \frac{p_i(\mathbf{y})}{i}, \qquad p_i(\mathbf{x}) := \sum_a x_a^i,$$
the function
$$\tau^r(\mathbf{t}, \mathbf{s}) := \sum_{\lambda} r_\lambda\, s_\lambda(\mathbf{t})\, s_\lambda(\mathbf{s})$$
is a double KP $\tau$-function, both in the $\mathbf{t}$ and the $\mathbf{s}$ variables, known as a $\tau$-function of hypergeometric type.
In particular, choosing
$$r_j := e^{j\beta}$$
for some small parameter $\beta$, denoting the corresponding content product coefficient as
$$r_\lambda^{(\beta)} := \prod_{(i,j) \in \lambda} e^{(j-i)\beta}$$
and setting
$$s_i := \delta_{i,1},$$
the resulting $\tau$-function can be equivalently expanded as
$$\tau^{(\beta)}(\mathbf{t}) = \sum_{k=0}^{\infty} \sum_{\mu} \frac{\beta^k}{k!}\, H_k(\mu)\, p_\mu(\mathbf{x}),$$
where $H_k(\mu)$ are the simple Hurwitz numbers, which are
$$\frac{1}{n!}, \qquad n := |\mu|,$$
times the number of ways in which an element $h \in S_n$
of the symmetric group in $n$ elements, with cycle lengths equal to the parts of the partition $\mu$, can be factorized as a product of $k$ transpositions ($2$-cycles)
$$h = (a_1 b_1)(a_2 b_2) \cdots (a_k b_k),$$
and
$$p_\mu(\mathbf{x}) := \prod_{i=1}^{\ell(\mu)} p_{\mu_i}(\mathbf{x})$$
is the power sum symmetric function. Equation () thus shows that the (formal) KP hypergeometric $\tau$-function () corresponding to the content product coefficients $r_\lambda^{(\beta)}$ is a generating function, in the combinatorial sense, for simple Hurwitz numbers.
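The combinatorial definition can be checked directly for small $n$ by brute force. The sketch below (Python; the helper names are ours) enumerates all $k$-tuples of transpositions in $S_n$ and tallies, per cycle type of the product, the count divided by $n!$:

```python
import itertools
from fractions import Fraction
from math import factorial

def compose(p, q):
    """Composition p after q; permutations are tuples acting on 0..n-1."""
    return tuple(p[q[i]] for i in range(len(p)))

def cycle_type(p):
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, c = i, 0
            while j not in seen:
                seen.add(j); j = p[j]; c += 1
            lengths.append(c)
    return tuple(sorted(lengths, reverse=True))

def simple_hurwitz(n, k):
    """H_k(mu) = (1/n!) * #{k-tuples of transpositions whose product has type mu}."""
    ident = tuple(range(n))
    transpositions = []
    for a, b in itertools.combinations(range(n), 2):
        t = list(ident); t[a], t[b] = t[b], t[a]
        transpositions.append(tuple(t))
    counts = {}
    for combo in itertools.product(transpositions, repeat=k):
        prod = ident
        for t in combo:
            prod = compose(prod, t)
        mu = cycle_type(prod)
        counts[mu] = counts.get(mu, 0) + 1
    return {mu: Fraction(c, factorial(n)) for mu, c in counts.items()}

print(simple_hurwitz(3, 2))
```

For $n = 3$, $k = 2$ this prints $H_2((1,1,1)) = 1/2$ and $H_2((3)) = 1$, counting the three ways a transposition squares to the identity and the six ordered pairs of distinct transpositions whose product is a 3-cycle.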
References
Bibliography
Dynamical systems
Mathematical physics
Integrable systems
Solitons
Special functions
Generating functions
Partition functions
Random matrices
Combinatorics | Tau function (integrable systems) | [
"Physics",
"Mathematics"
] | 3,492 | [
"Random matrices",
"Sequences and series",
"Discrete mathematics",
"Mathematical structures",
"Special functions",
"Integrable systems",
"Applied mathematics",
"Generating functions",
"Theoretical physics",
"Mathematical objects",
"Combinatorics",
"Matrices (mathematics)",
"Mechanics",
"Pa... |
67,296,089 | https://en.wikipedia.org/wiki/AdS%20black%20brane | An anti de Sitter black brane is a solution of the Einstein equations in the presence of a negative cosmological constant which possesses a planar event horizon. This is distinct from an anti de Sitter black hole solution which has a spherical event horizon. The negative cosmological constant implies that the spacetime will asymptote to an anti de Sitter spacetime at spatial infinity.
Math development
The Einstein equation is given by
$$R_{\mu\nu} - \frac{1}{2}R\, g_{\mu\nu} + \Lambda g_{\mu\nu} = 0,$$
where $R_{\mu\nu}$ is the Ricci curvature tensor, $R$ is the Ricci scalar, $\Lambda$ is the cosmological constant and $g_{\mu\nu}$ is the metric we are solving for.
We will work in d spacetime dimensions with coordinates $(t, r, x^i)$ where $i = 1, \dots, d-2$ and $0 < r < \infty$. The line element for a spacetime that is stationary, time reversal invariant, space inversion invariant, rotationally invariant
and translationally invariant in the $x^i$ directions is given by
$$ds^2 = -f(r)\, dt^2 + g(r)\, dr^2 + \frac{r^2}{L^2}\, \delta_{ij}\, dx^i\, dx^j.$$
Replacing the cosmological constant with a length scale L via
$$\Lambda = -\frac{(d-1)(d-2)}{2L^2},$$
we find that
$$f(r) = b^2\, \frac{r^2}{L^2}\left(1 - \frac{r_0^{d-1}}{r^{d-1}}\right), \qquad g(r) = \frac{L^2}{r^2}\left(1 - \frac{r_0^{d-1}}{r^{d-1}}\right)^{-1},$$
with $b$ and $r_0$ integration constants, is a solution to the Einstein equation.
The integration constant $b$ is associated with a residual symmetry associated with a rescaling of the time coordinate. If we require that the line element takes the form
$$ds^2 = \frac{r^2}{L^2}\left(-dt^2 + \delta_{ij}\, dx^i\, dx^j\right) + \frac{L^2}{r^2}\, dr^2$$
when r goes to infinity, then we must set $b = 1$.
The point $r = 0$ represents a curvature singularity and the point $r = r_0$ is a coordinate singularity when $r_0 > 0$. To see this, we switch to the coordinate system $(v, r, x^i)$ where $v = t + r_*(r)$ and $r_*$ is defined by the differential equation
$$\frac{dr_*}{dr} = \frac{L^2}{r^2}\left(1 - \frac{r_0^{d-1}}{r^{d-1}}\right)^{-1}.$$
The line element in this coordinate system is given by
$$ds^2 = -\frac{r^2}{L^2}\left(1 - \frac{r_0^{d-1}}{r^{d-1}}\right)dv^2 + 2\, dv\, dr + \frac{r^2}{L^2}\, \delta_{ij}\, dx^i\, dx^j,$$
which is regular at $r = r_0$. The surface $r = r_0$ is an event horizon.
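The claimed solution can be verified by direct computation. Taking the trace of the Einstein equation gives $R = \frac{2d}{d-2}\Lambda$, so the equation is equivalent to $R_{\mu\nu} = \frac{2\Lambda}{d-2} g_{\mu\nu}$, which for $d = 4$ and $\Lambda = -3/L^2$ reads $R_{\mu\nu} = -\frac{3}{L^2} g_{\mu\nu}$. The sketch below (Python with SymPy; the hand-rolled Christoffel and Ricci helpers are ours, written for this check only) confirms this for the $d = 4$ black brane:

```python
import sympy as sp

t, r, x, y = sp.symbols('t r x y')
L, r0 = sp.symbols('L r_0', positive=True)
coords = [t, r, x, y]
f = 1 - r0**3 / r**3                      # blackening factor for d = 4

# ds^2 = -(r^2/L^2) f dt^2 + L^2/(r^2 f) dr^2 + (r^2/L^2)(dx^2 + dy^2)
g = sp.diag(-(r**2 / L**2) * f, L**2 / (r**2 * f), r**2 / L**2, r**2 / L**2)
ginv = g.inv()
n = 4

# Christoffel symbols Gamma^a_{bc} of the Levi-Civita connection.
Gamma = [[[sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, b], coords[c])
                                         + sp.diff(g[d, c], coords[b])
                                         - sp.diff(g[b, c], coords[d]))
                           for d in range(n)) / 2)
           for c in range(n)] for b in range(n)] for a in range(n)]

def ricci(s, v):
    """R_sv = d_m Gamma^m_vs - d_v Gamma^m_ms + Gamma^m_ml Gamma^l_vs - Gamma^m_vl Gamma^l_ms."""
    return sp.simplify(sum(
        sp.diff(Gamma[m][v][s], coords[m]) - sp.diff(Gamma[m][m][s], coords[v])
        + sum(Gamma[m][m][l] * Gamma[l][v][s] - Gamma[m][v][l] * Gamma[l][m][s]
              for l in range(n))
        for m in range(n)))

# Check R_mn = -(3/L^2) g_mn, i.e. the Einstein equation with Lambda = -3/L^2.
print(all(sp.simplify(ricci(i, j) + 3 / L**2 * g[i, j]) == 0
          for i in range(n) for j in range(n)))  # True
```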
References
Equations of physics
Black holes | AdS black brane | [
"Physics",
"Astronomy",
"Mathematics"
] | 310 | [
"Black holes",
"Physical phenomena",
"Equations of physics",
"Physical quantities",
"Unsolved problems in physics",
"Mathematical objects",
"Astrophysics",
"Equations",
"Density",
"Stellar phenomena",
"Astronomical objects"
] |
59,429,614 | https://en.wikipedia.org/wiki/Spectral%20gap%20%28physics%29 | In quantum mechanics, the spectral gap of a system is the energy difference between its ground state and its first excited state. The mass gap is the spectral gap between the vacuum and the lightest particle. A Hamiltonian with a spectral gap is called a gapped Hamiltonian, and those without one are called gapless.
In solid-state physics, the most important spectral gap is for the many-body system of electrons in a solid material, in which case it is often known as an energy gap.
In quantum many-body systems, ground states of gapped Hamiltonians have exponential decay of correlations.
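As a concrete illustration, the spectral gap of a small quantum spin system can be computed by exact diagonalization. The sketch below (Python with NumPy; the chain length and couplings are arbitrary choices of ours) builds a transverse-field Ising chain and reports $E_1 - E_0$. Note that finite-size gaps only approximate the thermodynamic limit, where this model is gapped except at its critical point $g = 1$; for $g < 1$ the reported gap is exponentially small because the ground state becomes doubly degenerate:

```python
import numpy as np

# Transverse-field Ising chain, H = -sum_i Z_i Z_{i+1} - g sum_i X_i (open ends).
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def site_op(op, site, n):
    """Embed a single-site operator at position `site` of an n-spin chain."""
    out = np.array([[1.0]])
    for i in range(n):
        out = np.kron(out, op if i == site else I2)
    return out

def tfim(n, g):
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= site_op(Z, i, n) @ site_op(Z, i + 1, n)
    for i in range(n):
        H -= g * site_op(X, i, n)
    return H

for g in (0.5, 1.0, 2.0):
    e = np.linalg.eigvalsh(tfim(8, g))          # full spectrum, ascending
    print(f"g = {g}: spectral gap E1 - E0 = {e[1] - e[0]:.6f}")
```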
In 2015, it was shown that the problem of determining the existence of a spectral gap is undecidable in two or more dimensions. The authors used an aperiodic tiling of quantum Turing machines and showed that this hypothetical material becomes gapped if and only if the machine halts. The one-dimensional case was also proven undecidable in 2020 by constructing a chain of interacting qudits divided into blocks that gain energy if and only if they represent a full computation by a Turing machine, and showing that this system becomes gapped if and only if the machine does not halt.
See also
List of undecidable problems
Spectral gap, in mathematics
References
Quantum mechanics
Physical quantities
Undecidable problems | Spectral gap (physics) | [
"Physics",
"Mathematics"
] | 269 | [
"Physical phenomena",
"Physical quantities",
"Quantity",
"Theoretical physics",
"Quantum mechanics",
"Computational problems",
"Undecidable problems",
"Mathematical problems",
"Physical properties"
] |
42,964,044 | https://en.wikipedia.org/wiki/OceanoScientific | The OceanoScientific Programme is a scientific process studying causes and consequences of climate change at the ocean - atmosphere interface.
Sailing dedicated to the Science
In November 2006, Yvan Griboval created the OceanoScientific Programme for SailingOne. The OceanoScientific Programme is the set of activities designed to enable the international scientific community and the IPCC to enrich their knowledge of the causes and consequences of climate change through the repeated collection of quality data at the ocean - atmosphere interface (oceanographic and atmospheric), especially on sea routes subject to little or no scientific exploration, aboard all kinds of vessels but especially sailing ships, guided by JCOMMOPS, the support centre of the Joint WMO - IOC Technical Commission for Oceanography and Marine Meteorology of the United Nations.
IPCC, WMO and IOC are specialized agencies of the United Nations (UN).
The two main contractual partners of the OceanoScientific Programme from the start have been the French research institutes IFREMER and Météo-France. The industrial partner of the OceanoScientific Programme is the German company SubCtech, created and headed by Stefan Marx, which developed a pCO2 sensor. This sensor was compared to others on board the research vessel Polarstern.
Yvan Griboval - Its creator
A self-made man educated on a scholarship in Rouen (France), Yvan Griboval combined his passions for sailing and the media early on, working as a professional sportsman from 1975 and as a journalist-reporter from 1979 for French media outlets: L'Équipe, Agence France Presse, Voiles & Voiliers, France Télévision.
Yvan Griboval was part of the winning crew of L'Esprit d'Équipe, skippered by Lionel Péan, in the 1985-86 Whitbread Round the World Race (now the Volvo Ocean Race), the crewed race around the world.
From 1987 to 1988 he then put his experience and know-how at the service of companies, guiding them in event-driven communication built on yachting and sailing in general, their competitions and their champions.
Founder and president of the company SailingOne since its inception (December 1994), he created the Trophée of Sailing Champions in 1990. From its first edition, the French version of this event was named the Trophée Clairefontaine, thanks to the partnership established in spring 1990 with the eponymous papermaker group (Groupe des Papeteries de Clairefontaine). In November 2006, Yvan Griboval created the OceanoScientific Programme.
A unique system (collecting oceanographic and meteorological data)
In early 2007, Yvan Griboval created the OceanoScientific System, which is a component of the OceanoScientific Programme and the tool for OceanoScientific Campaigns. This is "Plug & Play" equipment for the automatic acquisition and transmission by satellite of at least twelve scientific (oceanographic and atmospheric) parameters, formatted according to the standards of UN agencies. The OSC System was a technological development without equivalent anywhere in the world when it first operated, on 14 October 2009. It gives scientists access to a new fleet of vessels of opportunity: sailing boats specially dedicated to these scientific missions. A scientific publication followed the first test-expedition, in the French Revue de l'Electricité et de l'Electronique (REE), in November 2010.
Data transmission to the International Scientific Community
Yvan Griboval then initiated the OceanoScientific Campaigns, which put the OceanoScientific System into practice. They involve the repeated collection of quality data at the ocean - atmosphere interface and its transmission to the oceanography and meteorology platforms of the specialized agencies of the United Nations (UN). Data from the OceanoScientific Campaigns are transmitted free of charge to the international scientific community according to its own criteria. They are integrated into the Global Ocean Observing System (GOOS), for example via WMO's Global Telecommunication System (GTS). These observations contribute to improving meteorological and climatological forecasts in ocean areas that have seen little or no scientific exploration.
The first two campaigns carried out were the ARCTIC MISSION 2012, on the schooner La Louise in Baffin Bay between 65° and 70° North, and the 96-day ANTARCTIC MISSION 2013, on the three-master Bark EUROPA (Netherlands) between 50° and 63° South, especially in the Drake Passage.
OceanoScientific Campaign 2013 - 2014
Started from Brest (France) on 28 November 2013, the OceanoScientific Campaign - ATLANTIC MISSION 2013 - 2014, carried out on the 16-metre Navire A Voile d'Observation Scientifique de l'Environnement (NAVOSE) bearing the colours of the MEROCEANS foundation, came to an end in Monaco on 26 April 2014. In addition to testing Version 3.0 of the OceanoScientific System (OSC System), a unique installation collecting data for ten scientific parameters at the ocean - atmosphere interface, this 10,000 nautical mile (18,520 km) expedition enabled the deployment of drifting scientific instruments in compliance with the requirements of JCOMMOPS, a UNESCO agency. In the meantime, the OSC System won the 2013 French-German Prize of Economy in the Environment category. Since then, the scientific data collected by the OSC System have been transmitted to the CORIOLIS network. The campaign was carried out in energy self-sufficiency, without any emissions, thanks to the hydrogenerators installed on board.
References of the OceanoScientific Programme
Earthzine - Fostering Earth Observation & Global Awareness
ICCIP - International Climate Change Information Programme
Mariners Weather Log - Published by the National Weather Service of the NOAA - April 2014
References
External links
Official blog of the OceanoScientific Programme
Oceanography | OceanoScientific | [
"Physics",
"Environmental_science"
] | 1,255 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
42,966,106 | https://en.wikipedia.org/wiki/Instrumentation%20and%20control%20engineering | Instrumentation and control engineering (ICE) is a branch of engineering that studies the measurement and control of process variables, and the design and implementation of systems that incorporate them. Process variables include pressure, temperature, humidity, flow, pH, force and speed.
ICE combines two branches of engineering. Instrumentation engineering is the science of the measurement and control of process variables within a production or manufacturing area. Meanwhile, control engineering, also called control systems engineering, is the engineering discipline that applies control theory to design systems with desired behaviors.
Control engineers are responsible for the research, design, and development of control devices and systems, typically in manufacturing facilities and process plants. Control methods employ sensors to measure the output variable of the device and provide feedback to the controller so that it can make corrections toward desired performance. Automatic control manages a device without the need of human inputs for correction, such as cruise control for regulating a car's speed.
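The feedback principle described above can be illustrated with a minimal simulation. The sketch below (Python; the plant model, gains, and limits are invented for illustration, not taken from any standard) implements a PI cruise controller acting on a first-order longitudinal car model:

```python
# Toy cruise control: PI feedback on a first-order car model, m dv/dt = F - c v.
m, c, dt = 1200.0, 50.0, 0.1          # mass [kg], drag [N s/m], time step [s]
kp, ki, f_max = 800.0, 120.0, 4000.0  # PI gains and actuator limit (hand-tuned)
setpoint = 25.0                        # desired speed [m/s]

v, integral = 0.0, 0.0
for step in range(600):                # 60 s of simulated driving
    error = setpoint - v               # feedback: measured speed vs. target
    u = kp * error + ki * integral     # PI control law
    force = min(max(u, 0.0), f_max)    # actuator saturation
    if force == u:                     # naive anti-windup: freeze integral when saturated
        integral += error * dt
    v += dt * (force - c * v) / m      # plant update (explicit Euler)
    if step % 150 == 0:
        print(f"t = {step * dt:5.1f} s  v = {v:5.2f} m/s")
print(f"final speed: {v:.2f} m/s (setpoint {setpoint})")
```

The integral term removes the steady-state error that a purely proportional controller would leave against the drag force; the anti-windup guard keeps the integrator from accumulating while the actuator is saturated.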
Control systems engineering activities are multi-disciplinary in nature. They focus on the implementation of control systems, mainly derived by mathematical modeling. Because instrumentation and control play a significant role in gathering information from a system and changing its parameters, they are a key part of control loops.
As profession
High demand for engineering professionals is found in fields associated with process automation. Specializations include industrial instrumentation, system dynamics, process control, and control systems. Additionally, technological knowledge, particularly in computer systems, is essential to the job of an instrumentation and control engineer; important technology-related topics include human–computer interaction, programmable logic controllers, and SCADA. The tasks center around designing, developing, maintaining and managing control systems.
The goals of the work of an instrumentation and control engineer are to maximize:
Productivity
Optimization
Stability
Reliability
Safety
Continuity
As academic discipline
Instrumentation and control engineering is a vital field of study offered at many universities worldwide at both the graduate and postgraduate levels. This discipline integrates principles from various branches of engineering, providing a comprehensive understanding of the design, analysis, and management of automated systems.
Typical coursework for this discipline includes, but is not limited to, subjects such as control system design, instrumentation fundamentals, process control, sensors and signal processing, automation, robotics, and industrial data communications. Advanced courses may delve into topics like intelligent control systems, digital signal processing, and embedded systems design.
Students often have the opportunity to engage in hands-on laboratory work and industry-relevant projects, which foster practical skills alongside theoretical knowledge. These experiences are crucial in preparing graduates for careers in diverse sectors including manufacturing, power generation, oil and gas, and healthcare, where they may design and maintain systems that automate processes, improve efficiency, and enhance safety.
Interdisciplinary by nature, the field is accessible to students from various engineering backgrounds. Most commonly, students with a foundation in Electrical Engineering and Mechanical Engineering are drawn to this field due to their strong base in control systems, system dynamics, electro-mechanical machines and devices, and electric circuits (course work). However, with the growing complexity and integration of systems, students from fields like computer engineering, chemical engineering, and even biomedical engineering are increasingly contributing to and benefiting from studies in instrumentation and control engineering.
Furthermore, the rapid advancement of technology in areas like the Internet of Things (IoT), artificial intelligence (AI), and machine learning is continuously shaping the curriculum of this discipline, making it an ever-evolving and dynamic field of study.
See also
Industrial system
Instrumentation in petrochemical industries
List of sensors
Metrology
Measurement
Programmable logic controller
International Society of Automation
References
External links
Industrial Instrumentation and Controls Technology Alliance
Process engineering
Sensors
Measuring instruments | Instrumentation and control engineering | [
"Technology",
"Engineering"
] | 722 | [
"Process engineering",
"Sensors",
"Mechanical engineering by discipline",
"Measuring instruments"
] |
42,967,409 | https://en.wikipedia.org/wiki/Eigenstrain | In continuum mechanics an eigenstrain is any mechanical deformation in a material that is not caused by an external mechanical stress, with thermal expansion often given as a familiar example. The term was coined in the 1970s by Toshio Mura, who worked extensively on generalizing their mathematical treatment. A non-uniform distribution of eigenstrains in a material (e.g., in a composite material) leads to corresponding eigenstresses, which affect the mechanical properties of the material.
Overview
Many distinct physical causes for eigenstrains exist, such as crystallographic defects, thermal expansion, the inclusion of additional phases in a material, and previous plastic strains. All of these result from internal material characteristics, not from the application of an external mechanical load. As such, eigenstrains have also been referred to as “stress-free strains” and “inherent strains”. When one region of material experiences a different eigenstrain than its surroundings, the restraining effect of the surroundings leads to a stress state on both regions. Analyzing the distribution of this residual stress for a known eigenstrain distribution, and inferring the total eigenstrain distribution from a partial data set, are the two broad goals of eigenstrain theory.
Analysis of eigenstrains and eigenstresses
Eigenstrain analysis usually relies on the assumption of linear elasticity, such that different contributions to the total strain are additive. In this case, the total strain $\varepsilon_{ij}$ of a material is divided into the elastic strain $e_{ij}$ and the inelastic eigenstrain $\varepsilon^*_{ij}$:
$$\varepsilon_{ij} = e_{ij} + \varepsilon^*_{ij},$$
where $i$ and $j$ indicate the directional components in 3 dimensions in Einstein notation.
Another assumption of linear elasticity is that the stress $\sigma_{ij}$ can be linearly related to the elastic strain $e_{kl}$ and the stiffness $C_{ijkl}$ by Hooke’s Law:
$$\sigma_{ij} = C_{ijkl}\, e_{kl}.$$
In this form, the eigenstrain is not in the equation for stress, hence the term "stress-free strain". However, a non-uniform distribution of eigenstrain alone will cause elastic strains to form in response, and therefore a corresponding elastic stress. When performing these calculations, closed-form expressions for $e_{ij}$ (and thus, the total stress and strain fields) can only be found for specific geometries of the distribution of $\varepsilon^*_{ij}$.
Ellipsoidal inclusion in an infinite medium
One of the earliest examples providing such a closed-form solution analyzed an ellipsoidal inclusion $\Omega$ of material with a uniform eigenstrain, constrained by an infinite medium $D$ with the same elastic properties. This can be imagined with the figure on the right. The inner ellipse represents the region $\Omega$. The outer region represents the extent of $\Omega$ if it had fully expanded to the eigenstrain without being constrained by the surrounding medium $D$. Because the total strain, shown by the solid outlined ellipse, is the sum of the elastic and eigenstrains, it follows that in this example the elastic strain in the region $\Omega$ is negative, corresponding to a compression by $D$ on the region $\Omega$.
The solutions for the total stress and strain within $\Omega$ are given by:
$$\varepsilon_{ij} = S_{ijkl}\, \varepsilon^*_{kl}, \qquad \sigma_{ij} = C_{ijkl}\left(S_{klmn} - I_{klmn}\right)\varepsilon^*_{mn},$$
where $S_{ijkl}$ is the Eshelby tensor and $I_{klmn}$ is the fourth-order identity tensor. The value of each component of $S_{ijkl}$ is determined by the geometry of the ellipsoid (together with the Poisson ratio of the material). The solution demonstrates that the total strain and stress state within the inclusion are uniform. Outside of $\Omega$, the stress decays towards zero with increasing distance away from the inclusion. In the general case, the resulting stresses and strains may be asymmetric, and due to the asymmetry of $S_{ijkl}$, the eigenstrain may not be coaxial with the total strain.
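For the special case of a spherical inclusion in an isotropic medium, the components of the Eshelby tensor reduce to well-known closed-form expressions in the Poisson ratio. The sketch below (Python with NumPy; the elastic constants and the eigenstrain magnitude are arbitrary illustrative values of ours) assembles the tensor in Voigt notation and evaluates the uniform strain and stress inside the inclusion for a purely dilatational eigenstrain:

```python
import numpy as np

nu, E = 0.3, 200e9                        # Poisson ratio and Young's modulus (assumed)

# Eshelby tensor of a spherical inclusion in an isotropic medium (Voigt 6x6).
s1 = (7 - 5 * nu) / (15 * (1 - nu))       # S_1111
s2 = (5 * nu - 1) / (15 * (1 - nu))       # S_1122
s3 = (4 - 5 * nu) / (15 * (1 - nu))       # S_1212
S = np.zeros((6, 6))
S[:3, :3] = s2
np.fill_diagonal(S[:3, :3], s1)
S[3:, 3:] = np.diag([2 * s3] * 3)         # engineering shear strains carry a factor 2

# Isotropic stiffness C (Voigt) relating stress to the elastic strain.
lam = E * nu / ((1 + nu) * (1 - 2 * nu))
mu = E / (2 * (1 + nu))
C = np.zeros((6, 6))
C[:3, :3] = lam
np.fill_diagonal(C[:3, :3], lam + 2 * mu)
C[3:, 3:] = np.diag([mu] * 3)

eig = np.array([1e-3, 1e-3, 1e-3, 0, 0, 0])   # purely dilatational eigenstrain
total = S @ eig                                # total strain inside the inclusion
stress = C @ (total - eig)                     # uniform interior stress
print("strain:", total[:3], " stress [MPa]:", stress[:3] / 1e6)
```

For a dilatational eigenstrain the interior strain works out to $\frac{1+\nu}{3(1-\nu)}\,\varepsilon^*$, so the inclusion is only partially able to expand and ends up in uniform compression, matching the qualitative picture above.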
Inverse problem
Eigenstrains and the residual stresses that accompany them are difficult to measure (see: Residual stress). Engineers can usually only acquire partial information about the eigenstrain distribution in a material. Methods to fully map out the eigenstrain, called the inverse problem of eigenstrain, are an active area of research. Understanding the total residual stress state, based on knowledge of the eigenstrains, informs the design process in many fields.
Applications
Structural engineering
Residual stresses, e.g. introduced by manufacturing processes or by welding of structural members, reflect the eigenstrain state of the material. This can be unintentional or by design, e.g. shot peening. In either case, the final stress state can affect the fatigue, wear, and corrosion behavior of structural components. Eigenstrain analysis is one way to model these residual stresses.
Composite materials
Since composite materials have large variations in the thermal and mechanical properties of their components, eigenstrains are particularly relevant to their study. Local stresses and strains can cause decohesion between composite phases or cracking in the matrix. These may be driven by changes in temperature, moisture content, piezoelectric effects, or phase transformations. Particular solutions and approximations to the stress fields taking into account the periodic or statistical character of the composite material's eigenstrain have been developed.
Strain engineering
Lattice misfit strains are also a class of eigenstrains, caused by growing a crystal of one lattice parameter on top of a crystal with a different lattice parameter. Controlling these strains can improve the electronic properties of an epitaxially grown semiconductor. See: strain engineering.
See also
Residual stress
Strain (mechanics)
References
Continuum mechanics | Eigenstrain | [
"Physics"
] | 1,061 | [
"Classical mechanics",
"Continuum mechanics"
] |
42,968,488 | https://en.wikipedia.org/wiki/BMVA%20Summer%20School | BMVA Summer School is an annual summer school on computer vision, organised by the British Machine Vision Association and Society for Pattern Recognition (BMVA). The course is residential, usually held over five days, and consists of lectures and practicals on topics in image processing, computer vision, and pattern recognition. The course is intended to complement and extend the material in existing technical courses that many students and researchers encounter early in their postgraduate training or careers. It aims to broaden awareness of knowledge and techniques in vision, image computing and pattern recognition, to develop appropriate research skills, and to give students the chance to interact with their peers and make contacts among those who will be the active researchers of their own generation. It is open to students from both UK and non-UK universities. The registration fees vary based on time of registration and are in general slightly higher for non-UK students. The summer school has been hosted locally by various UK universities that carry out computer vision research, e.g., Kingston University, the University of Manchester, Swansea University and the University of Lincoln.
It has run since the mid-1990s, and content is updated every year. Speakers at the Summer School are active academic researchers or experienced practitioners from industry, mainly in the UK. It has received financial support from EPSRC from 2009 to 2012.
Delegates of the summer school are usually encouraged to bring posters to summer school to present their work to peers and lecturers. A best poster is selected by the summer school lecturers.
References
External links
26th BMVA Summer School 2023, University of East Anglia
25th BMVA Summer School 2022, University of East Anglia
2017 BMVA summer school, University of Lincoln
2015-16 BMVA summer school, Swansea University
2014 BMVA summer school, Swansea University
2013 BMVA summer school, Manchester University
2012 BMVA summer school, Manchester University
2011 BMVA summer school, Manchester University
Annual events in the United Kingdom
Computer science education in the United Kingdom
Computer vision research infrastructure
Engineering and Physical Sciences Research Council
Information technology organisations based in the United Kingdom
Machine vision
Summer schools | BMVA Summer School | [
"Engineering"
] | 429 | [
"Machine vision",
"Robotics engineering"
] |
42,970,651 | https://en.wikipedia.org/wiki/Key%20Performance%20Parameters | Key Performance Parameters (KPPs) specify what the critical performance goals are in a United States Department of Defense (DoD) acquisition under the JCIDS process.
The JCIDS intent for KPPs is to state a few measures such that the acquisition product either meets each stated performance measure or else the program is considered a failure, per instruction CJCSI 3170.01H – Joint Capabilities Integration and Development System. The mandates require that 3 to 8 KPPs be specified for a United States Department of Defense major acquisition, known as Acquisition Category 1 or ACAT-I.
The term is defined as "Performance attributes of a system considered critical to the development of an effective military capability. A KPP normally has a threshold representing the minimum acceptable value achievable at low-to-moderate risk, and an objective, representing the desired operational goal but at higher risk in cost, schedule, and performance. KPPs are contained in the Capability Development Document (CDD) and the Capability Production Document (CPD) and are included verbatim in the Acquisition Program Baseline (APB). KPPs are considered Measures of Performance (MOPs) by the operational test community."
Commentary notes that metrics must be chosen carefully, and that they are hard to define and apply throughout a project's life cycle. It is also desirable that the KPPs of a program avoid repetition and measure something applicable across different programs, such as fuel efficiency. Higher numbers of KPPs are associated with program and schedule instability.
See also
Analysis of Alternatives
Requirement (example mention of Net-Ready KPP, a mandated KPP)
References
External links
CJCSI 3170.01H at DAU collection
CJCSM 3170.01C at everyspec.com
United States Department of Defense
United States defense procurement
Systems engineering | Key Performance Parameters | [
"Engineering"
] | 366 | [
"Systems engineering"
] |
42,971,530 | https://en.wikipedia.org/wiki/Armillaria%20altimontana | Armillaria altimontana is a species of agaric fungus in the family Physalacriaceae. The species, found in the Pacific Northwest region of North America, was officially described as new to science in 2012. It was previously known as North American biological species (NABS) X. It grows in high-elevation mesic habitats in dry coniferous forests. This species has been found on hardwoods and conifers and is associated most commonly with fir-dominated forest types in southern British Columbia, Washington, Oregon, Idaho and northern California.
A. altimontana competes directly with A. solidipes, and evidence suggests it is beneficial and can increase tree survival.
See also
List of Armillaria species
References
External links
altimontana
Fungi of North America
Fungal tree pathogens and diseases
Pacific Northwest
Fungi described in 2012
Fungus species | Armillaria altimontana | [
"Biology"
] | 176 | [
"Fungi",
"Fungus species"
] |
42,972,744 | https://en.wikipedia.org/wiki/Anhembi%20orthobunyavirus | Anhembi orthobunyavirus, also called Anhembi virus (AMBV), is a species of virus. It was initially considered a strain of Wyeomyia virus, belonging serologically to the Bunyamwera serogroup of bunyaviruses. In 2018 it was made its own species. It was isolated from a rodent (Proechimys iheringi) and a mosquito (Phoniomyia pilicauda) in São Paulo, Brazil.
As of 2001, this virus had not been reported to cause disease in humans.
References
Orthobunyaviruses | Anhembi orthobunyavirus | [
"Biology"
] | 127 | [
"Virus stubs",
"Viruses"
] |
42,972,915 | https://en.wikipedia.org/wiki/Regulation%20of%20fracking | Countries using or considering the use of fracking have implemented different regulations, including developing federal and regional legislation, and local zoning limitations. In 2011, after public pressure, France became the first nation to ban hydraulic fracturing, based on the precautionary principle as well as the principle of preventive and corrective action on environmental hazards. The ban was upheld by an October 2013 ruling of the Constitutional Council. Some other countries have placed a temporary moratorium on the practice. Countries like the United Kingdom and South Africa have lifted their bans, choosing to focus on regulation instead of outright prohibition. Germany has announced draft regulations that would allow the use of hydraulic fracturing for the exploitation of shale gas deposits, with the exception of wetland areas.
The European Union has adopted a recommendation for minimum principles for using high-volume hydraulic fracturing. Its regulatory regime requires full disclosure of all additives. In the United States, the Ground Water Protection Council launched FracFocus.org, an online voluntary disclosure database for hydraulic fracturing fluids funded by oil and gas trade groups and the U.S. Department of Energy. Hydraulic fracturing is excluded from the Safe Drinking Water Act's underground injection control regulation, except when diesel fuel is used. The EPA oversees the issuance of drilling permits when diesel fuel is employed.
On 17 December 2014, New York state issued a statewide ban on hydraulic fracturing, becoming the second state in the United States to issue such a ban after Vermont.
Approaches
Risk-based approach
The main tool used by this approach is risk assessment: risk is assessed ex post, by experimenting once the technology is in place. In the context of hydraulic fracturing, this means that drilling permits are issued and exploitation conducted before the potential risks to the environment and human health are known. The risk-based approach mainly relies on a discourse that treats technological innovation as an intrinsic good, and the analysis of innovations such as hydraulic fracturing is made within a pure cost-benefit framework, which does not allow prevention or ex-ante debates on the use of the technology. This is also referred to as "learning-by-doing". A risk assessment method has, for instance, led to the regulations that exist for hydraulic fracturing in the United States (the EPA was to release its study on the effect of hydraulic fracturing on groundwater in 2014, though hydraulic fracturing had by then been used for more than 60 years). Commissions implemented in the US to regulate the use of hydraulic fracturing were created after hydraulic fracturing had started in their area of regulation. This is for instance the case in the Marcellus shale area, where three regulatory committees were implemented ex post.
Academic scholars who have studied the perception of hydraulic fracturing in the North of England have raised two main critiques of this approach. Firstly, it takes scientific issues out of the public debate, since there is no debate on the use of a technology but only on its effects. Secondly, it does not prevent environmental harm from happening, since risks are taken and then assessed instead of evaluated and then taken, as would be the case with a precautionary approach to scientific debates. The relevance and reliability of risk assessments in hydraulic fracturing communities has also been debated amongst environmental groups, health scientists, and industry leaders. A study epitomizes this point: a majority of the participants in the regulatory committees of the Marcellus shale raised concerns about public health, although nobody in these regulatory committees had expertise in public health. That highlights a possible underestimation of the public health risks of hydraulic fracturing. Moreover, more than a quarter of the participants raised concerns about the neutrality of the regulatory committees, given the considerable weight of the hydraulic fracturing industry. To some, like the participants of the Marcellus Shale regulatory committees, the risks are overplayed and current research is insufficient to show a link between hydraulic fracturing and adverse health effects; to others, like local environmental groups, the risks are obvious and risk assessment is underfunded.
Precaution-based approach
The second approach relies on the precautionary principle and the principle of preventive and corrective action on environmental hazards, using the best available techniques at an acceptable economic cost to ensure the protection, valuation, restoration and management of spaces, resources and natural environments, of animal and plant species, and of ecological diversity and equilibria. The precautionary approach has led to regulations such as those implemented in France and in Vermont, banning hydraulic fracturing.
Such an approach is called for by the social sciences and by the public, as studies in the North of England and Australia have shown. In Australia, the anthropologist who studied the use of hydraulic fracturing concluded that the risk-based approach was closing down the debate on the ethics of the practice, thereby avoiding questions about concerns broader than merely the risks implied by hydraulic fracturing. In the North of England, the levels of concern registered in the deliberative focus groups studied were highest regarding the framing of the debate, that is, the fact that people did not have a voice in the energy choices that were made, including the use of hydraulic fracturing. Concerns about the risks of seismicity and health issues were also important to the public, but less so. One reason is that being denied the right to participate in decision-making triggered opposition among both supporters and opponents of hydraulic fracturing.
The points made to defend such an approach often relate to climate change and the impact on the immediate environment, linked for instance to public concerns about the rural landscape in the UK. Energy choices indeed affect climate change, since greenhouse gas emissions from the extraction of fossil fuels such as shale gas and oil contribute to it. People in the UK have therefore raised concerns about the exploitation of these resources as such, not just about hydraulic fracturing as a method. They would hence prefer a precaution-based approach to decide whether or not, in view of climate change, they want to exploit shale gas and oil.
Framing of the debate
There are two main areas of interest regarding how debates on hydraulic fracturing for the exploitation of unconventional oil and gas have been conducted.
"Learning-by-doing" and the displacement of ethics
A risk-based approach is often referred to as "learning-by-doing" in the social sciences, which have raised two main critiques of it. Firstly, it takes scientific issues out of the public debate, since there is no debate on the use of a technology but only on its impacts. Secondly, it does not prevent environmental harm from happening, since risks are taken and then assessed instead of evaluated and then taken. Public concerns are shown to be closely linked to these issues of scientific approach. Indeed, the public in the North of England, for instance, fears "the denial of the deliberation of the values embedded in the development and application of that technology, as well as the future it is working towards" more than the risks themselves. The legitimacy of the method is only questioned after its implementation, not before. This vision separates risks and effects from the values embodied by a technology. For instance, hydraulic fracturing represents a transitional fuel for its supporters, whereas for its opponents it represents a fossil fuel exacerbating the greenhouse effect and global warming. Not asking these questions leads to seeing only the mere economic cost-benefit analysis.
This is linked to a pattern of preventing non-experts from taking part in scientific-technological debates, including their ethical issues. One answer to that problem is increased public participation, so that the public decides which issues to address and what political and ethical norms to adopt as a society. Another public concern with the "learning-by-doing" approach is that the speed of innovation may exceed the speed of regulation; since innovation is seen as serving private interests, potentially at the expense of the social good, this is a matter of public concern. Science and Technology Studies have theorized "slowing down" and the precautionary principle as answers. The claim is that even the possibility of a problem is a legitimate consideration and should be taken into account before any action is taken.
Variations in risk-assessment of environmental effects of hydraulic fracturing
Issues also exist regarding the way risk assessment is conducted and whether it reflects some interests more than others. Firstly, there is the question whether risk assessment authorities are able to judge the impact of hydraulic fracturing on public health. A study conducted on the advisory committees of the Marcellus Shale gas area showed that not a single member of these committees had public health expertise, and that some concern existed about whether the commissions were biased in their composition. Indeed, among 51 members of the committees, there is no evidence that a single one had any expertise in environmental public health, even after enlarging the category of experts to "include medical and health professionals who could be presumed to have some health background related to environmental health, however minimal". This cannot be explained by the purpose of the committees, since all three executive orders of the different committees mentioned issues related to environmental public health. Another finding of the authors is that a quarter of the opposed comments mentioned the possibility of bias in favor of the gas industry in the composition of the committees. The authors conclude that political leaders may not want to raise public health concerns so as not to handicap further economic development based on hydraulic fracturing.
Secondly, the conditions to allow hydraulic fracturing are being increasingly strengthened due to the move from governmental agencies' authority over the issue to elected officials' authority over it. The Shale Gas Drilling Safety Review Act of 2014 issued in Maryland forbids the issuance of drilling permits until a high standard "risk assessment of public health and environmental hazards relating to hydraulic fracturing activities" is conducted for at least 18 months based on the Governor's executive order.
Institutional discourse and the public
A qualitative study using deliberative focus groups has been conducted in the North of England, where the Bowland-Hodder shale, a large shale gas reservoir, is exploited by hydraulic fracturing. These group discussions reflect many concerns about the use of unconventional oil and unconventional gas. There is a concern about trust, linked to doubts about the ability or will of public authorities to work for the greater social good, since private interests and the profits of industrial companies are seen as corrupting forces. Alienation is also a concern, since the feeling of a game rigged against the public grows out of "decision making being made on your behalf without being given the possibility to voice an opinion". Exploitation also arises, since an economic rationality seen as favoring short-termism is accused of seducing policy-makers and industry. Risk is accentuated both by what hydraulic fracturing is and by what is at stake, and "blind spots" in current knowledge as well as in risk assessment analysis are accused of increasing the potential for negative outcomes. Uncertainty and ignorance are seen as playing too large a role in the issue of hydraulic fracturing, and decisions are therefore perceived as rushed, which is why participants favored some form of precautionary approach. There is a major fear of a possible disconnection between the public's and the authorities' visions of what is a good choice made for good reasons.
It also appears that media coverage and institutional responses largely fail to address public concerns. Indeed, institutional responses to public concerns are mostly inadequate, since they focus on risk assessment and on giving information to a public assumed to be anxious because it is ignorant. But public concerns are much wider, and it appears that public knowledge of hydraulic fracturing is rather good.
The hydraulic fracturing industry has lobbied for permissive regulation in Europe, the US federal government, and US states. On 20 March 2015 the rules for disclosing the chemicals used were tightened by the Obama administration. The new rules give companies involved 30 days from the beginning of an operation on federal land to disclose those chemicals.
See also
Regulation of hydraulic fracturing in the United States
Hydraulic fracturing by country
References
External links
The British Columbia (Canada) Oil and Gas Commission mandatory disclosure of hydraulic fracturing fluids
Hydraulic Fracturing: Selected Legal Issues Congressional Research Service
Environmental law
Hydraulic fracturing
Energy law
Mining law and governance
Regulation of technologies | Regulation of fracking | [
"Chemistry"
] | 2,440 | [
"Petroleum technology",
"Natural gas technology",
"Hydraulic fracturing"
] |
68,694,975 | https://en.wikipedia.org/wiki/Yttrium%28II%29%20oxide | Yttrium(II) oxide or yttrium monoxide is a chemical compound with the formula YO. This chemical compound was first created in its solid form by pulsed laser deposition, using yttrium(III) oxide as the target at 350 °C. The film was deposited on calcium fluoride using a krypton monofluoride laser. This resulted in a 200 nm film of yttrium monoxide.
References
Yttrium compounds
Oxides | Yttrium(II) oxide | [
"Chemistry"
] | 96 | [
"Oxides",
"Salts"
] |